Speeding Up Your Site with Google App Engine / Tutorial
Your site's hosting service sits somewhere in the world, say in Israel. That's fine for Israeli visitors, but somewhat less so for visitors from abroad.
Suppose John from the US requests a page from your site. His request has to travel all the way to Israel, and then the data from your site has to leave the country and make its way back to the US, to John's computer. That's quite a long trip, and it takes time.
Wouldn't it be nice if you could open "branches" of your site around the world? Israeli visitors would get the content from the Israeli "branch" of your site, and Americans from your American "branch". Ideally, you would open branches at key locations around the globe.
That's exactly what we'll do: open branches of our site all over the world.
What is a CDN?
The idea described above is, more or less, the idea behind a CDN.
CDN stands for Content Delivery Network: a network of computers that holds copies of certain content at different points on the network. A consumer of the content is served from the copy closest to them.
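As a toy illustration of that "closest copy" idea (my own sketch, with made-up latency numbers, not real CDN code), a CDN effectively serves each visitor from the replica with the lowest latency:

```python
# Toy illustration: given measured latencies from one visitor to each
# replica, pick the replica with the lowest latency. The locations and
# numbers below are invented for the example.
def nearest_replica(latencies_ms):
    """latencies_ms: mapping of replica location -> latency in milliseconds."""
    return min(latencies_ms, key=latencies_ms.get)

john = {"Israel": 180, "US East": 20, "Europe": 90}
print(nearest_replica(john))  # prints: US East
```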
We will use Google's free services as our CDN.
Is Google's free service really as fast as a paid CDN?
No. Google's services won't do as good a job as companies that provide dedicated CDN solutions. But with Google it's free, and you will still get a significant speedup for your site.
What is Google App Engine?
Google App Engine, which from now on we'll simply call GAE, is a platform for running web applications on Google's infrastructure.
We can take advantage of one feature of this infrastructure to distribute copies of our site around the world. Actually, we won't distribute copies of the entire site, only of our static assets:
Images
JavaScript scripts
Static HTML pages
CSS files
Any other static content
Ready to get started?
The tutorial, step by step
The tutorial isn't complicated, but it requires basic computer skills, basic HTML knowledge, and so on. Read through all the steps and decide whether you're up for it.
1. Installing Python
Python is the programming language that GAE uses. Download Python from the official site and install it.
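Looking ahead: once the static files live on GAE, pages on the main site need to reference them from the GAE domain. The helper below is a hypothetical sketch (the appspot host name and the /static/ path convention are my assumptions, not part of the tutorial) of how such a rewrite could be done:

```python
import re

# Assumed placeholder host; replace with your actual App Engine domain.
GAE_HOST = "http://your-app.appspot.com"

def point_statics_at_gae(html, host=GAE_HOST):
    """Prefix local /static/ references in src/href attributes with the GAE host."""
    return re.sub(
        r'(src|href)="(/static/[^"]+)"',
        lambda m: f'{m.group(1)}="{host}{m.group(2)}"',
        html,
    )
```

Non-static links (like a plain `href="/about"`) are left untouched, so only the assets you uploaded to GAE are redirected.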
Read more: NewsGeek

Charity Raising Money To Buy Used Satellite
Posted by jasper22 at 12:03
For those of us who live in the developed world, internet access has become pretty much a given. It's become so ubiquitous that we almost expect to have it at all times and in all places, but even in this 'Information Age,' the majority of the world's population lacks access to the internet – either because service isn't available where they are, or they can't afford it. Kosta Grammatis has a plan, however. Through his charity group ahumanright.org, Grammatis aims to set up a network of satellites that will provide free internet access to everyone in the world. He's starting by attempting to buy a single used satellite that's already in orbit and moving it to a location above a developing country. Read more: Slashdot
Windows Server 2008 R2 and Windows 7 SP1 Releases to Manufacturing Today
Posted by jasper22 at 11:19
Hi I’m Michael Kleef, Senior Technical Product Manager with the Windows Server and Cloud division. Today, on behalf of the team, I’m pleased to announce the Release to Manufacturing (RTM) of Windows Server 2008 R2 Service Pack 1 (SP1), along with Windows 7 SP1. SP1 will be made generally available for download on February 22. Two new features in Windows Server SP1, Dynamic Memory and RemoteFX, enable sophisticated desktop virtualization capabilities. These features build on the comprehensive virtualization functionality already included in the Windows Server operating system. Our first new feature, Dynamic Memory, takes Windows Server’s Hyper-V feature to a whole new level. Dynamic Memory lets you increase virtual machine density with the resources you already have—without sacrificing performance or scalability. In our lab testing, with Windows 7 SP1 as the guest operating system in a Virtual Desktop Infrastructure (VDI) scenario, we have seen a 40% increase in density from Windows Server 2008 R2 RTM to SP1. We achieved this increase simply by enabling Dynamic Memory. This increased density does not come at the expense of security, as is the case with other offerings in the industry. Dynamic Memory preserves Windows 7 security without compromising density. My colleague Jeff Woolsey goes into detail in a recent post on this topic at the virtualization blog. In addition, you get immediate benefit from the moment you turn on the virtual machine. There’s no waiting for memory management algorithms to work. Nor do you have to tweak the hypervisor with custom settings for specific workloads to maximize density. It’s an awesome out-of-box experience for all your virtualization workloads. The second new feature, RemoteFX, is a first-to-market technology that we have demonstrated at multiple events. In fact, I was honored to be the first to show it publically at the Desktop Virtualization Hour last March.
The RTM build version is 7601.17514.101119-1850.
45 Most Useful And Inspiring Blogger templates: Enjoy Freebies
Posted by jasper22 at 11:12
Blogger templates can be used by any individual or group to set up a website. Once a template is downloaded, the user replaces all the general information included in the web template with their own personal, organizational, or product information. Blogger templates are used for many different purposes, such as personal blogs about daily activities, selling products online, presenting a business or organization, photo galleries, graphic and web designers' portfolios, and more.
Today I’ve brought together the 45 best-designed templates for bloggers from B-templates, a wonderful site with hundreds of blog templates in all categories. I’ve spent a lot of time searching out the most useful and inspiring templates from a huge list, making it easy for you to choose the right one. Have a look and enjoy the list below…
Instapaper Releases A Full API — With A Brilliant, Unique Twi$t
Posted by jasper22 at 10:14
I love Instapaper. Blah blah blah — you all know that by now. But today developer Marco Arment has released something significant that could alter the way the service is used: a full API. And perhaps even more interesting is how he’s released it. In his blog post on the matter, Arment dives into his tough decision-making process when it comes to the API. The main problem is that unlike a lot of startups, Instapaper has taken no funding, so they have to be profitable each month or they’ll go out of business. That makes an API hard because the core idea behind it is usually to allow people to use your service (including your servers) without actually visiting your ad-driven site and/or paying for your app. So Arment had to come up with a solution. The obvious choice would have been to either limit the API or charge for it. But neither of those are very appealing options to Arment. Limiting the API leads to half-baked, “fragile” apps, as he calls them. Charging developers for API access or taking a cut of their app sales is tricky from a financial perspective, he notes. Instead, Arment came up with a smart third way of doing things. Thanks to the subscription test feature Instapaper launched in October of last year, Arment has a way to get paid for API usage — sort of. You see, the only way users of any service can get access to the Instapaper API is if they’re paying the $1-a-month subscription fee. In other words, Arment has just ensured that he gets directly paid for people hitting his API. And he has just given plenty of users a reason to subscribe. Smart. Read more: TechCrunch
4 Ways to Make LinkedIn Your Company's Best Friend
Posted by jasper22 at 10:14
Visitors to Websites tailored more toward business professionals than consumers are increasingly choosing to log in using their existing LinkedIn identities, said social tool provider Gigya on Tuesday. In fact, whereas only three percent of users of such sites chose to sign in using their LinkedIn identities in a Gigya study last July, that number had increased all the way up to 20 percent by January, the company reported this week in a blog post on the topic. Gigya helps integrate online businesses with social networks such as Facebook, Twitter and LinkedIn, including providing the technology that enables what it calls "social sign-on," or the ability to sign in using an existing identity from a social network. Gigya technology is used by more than 280 million users each month across more than 500,000 sites, it says. While it's still more common to see sites that allow visitors to sign in using their existing Facebook and Twitter identities, LinkedIn began providing similar functionality about a year ago. Now, it looks like users of business-oriented Websites are taking full advantage of that capability. In January, LinkedIn accounted for 27 percent of the social sign-ons at stock market news site SeekingAlpha, Gigya notes, while the Harvard Business Review saw 20 percent of its social sign-ons come through LinkedIn. The Internet Advertising Bureau, meanwhile, saw 14 percent come in that way, Gigya reported.

Companies Large and Small

Gigya also reports some descriptive data about the users who sign in to business-focused sites via LinkedIn. Of particular note, finance, high-tech, and medical are the industries most often represented, while sales was the most frequently seen job function. A full 30 percent of the people signing in via LinkedIn came from companies with 1000 or fewer employees, whereas the greatest proportion -- 41 percent -- came from firms with more than 10000. Read more: Business Center
Twitter as Tech Bubble Barometer
Posted by jasper22 at 10:10
As Internet valuations climb and bankers and would-be buyers circle Silicon Valley in an increasingly frothy tech market, many eyes are on one particularly desirable, if still enigmatic, target: Twitter. Discussions with at least some potential suitors have produced an estimated valuation of $8 billion to $10 billion. Executives at both Facebook Inc. and Google Inc., among other companies, have held low-level talks with those at Twitter Inc. in recent months to explore the prospect of an acquisition of the messaging service, according to people familiar with the matter. The talks have so far gone nowhere, these people say. But what's remarkable is the money that people familiar with the matter say frames the discussions with at least some potential suitors: an estimated valuation in the neighborhood of $8 billion to $10 billion. This for a company that, people familiar with the matter said, had 2010 revenue of $45 million—but lost money as it spent on hiring and data centers—and estimates its revenue this year at between $100 million and $110 million. Read more: Wall Street Journal
Silverlight 4 Property Triggers
Posted by jasper22 at 10:07
I spent a little time this week messing around with the newly added Triggers and TriggerActions available through the new Expression Blend 4 SDK. Triggers and Behaviors are really just ways to attach functionality to an existing element, and the base classes that are included in the newer version of Silverlight 4 really make the job easier. I’m going to walk through adding a trigger that fires when one of the properties on my ViewModel changes to true. Now allegedly there is an existing trigger (DataStoreChangedTrigger) that will fire actions based on when a bound property changes, but I want to only fire my actions when my bound property becomes a specific value.

Our Goal

<ItemsControl
Margin="0 130 0 0"
HorizontalAlignment="Left"
VerticalAlignment="Top"
Opacity="0.0"
ItemsSource="{Binding Items}">
<i:Interaction.Triggers>
<local:BooleanPropertyTrigger
Binding="{Binding FinishedLoading}"
TriggerValue="True">
<local:StoryboardAction>
<Storyboard>
<DoubleAnimation
To="1.0"
Duration="00:00:0.7"
Storyboard.TargetProperty="Opacity" />
</Storyboard>
</local:StoryboardAction>
</local:BooleanPropertyTrigger>
</i:Interaction.Triggers>
</ItemsControl>
The Codez

[Download the PropertyTrigger Example Source Project and play along at home]

To start out with, I create a base PropertyChangedTrigger class that will do most of the heavy lifting for us. Essentially, we want to inherit from the TriggerBase<…> generic base class and specify that we want our Trigger to attach to a FrameworkElement (I suppose you could use another type of control class, but FrameworkElement will encompass just about any element with a DataContext, which I find useful). Our PropertyChangedTrigger will expose a Binding property that will allow us to attach an event handler when our bound property changes so we can invoke our TriggerActions.

/// <summary>
/// A base property changed trigger that
/// fires whenever the bound property changes.
/// </summary>
public class PropertyChangedTrigger : TriggerBase<FrameworkElement>
{
    /// <summary>
    /// The <see cref="Binding" /> dependency property's name.
    /// </summary>
    public const string BindingPropertyName = "Binding";

    /// <summary>
    /// Gets or sets the value of the <see cref="Binding" />
    /// property. This is a dependency property.
    /// </summary>
    public object Binding
    {
        get
        {
            return (object)GetValue(BindingProperty);
        }
        set
        {
            SetValue(BindingProperty, value);
        }
    }

Read more: ClarityConsulting blog
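Outside of Silverlight, the trigger idea itself is small. Here is a rough Python analog of my own (not the Blend SDK API): registered actions fire only when an observed value changes *to* a specific trigger value, not on every change:

```python
# Sketch of the BooleanPropertyTrigger idea: hold a target value, a list of
# actions (like TriggerActions), and fire the actions only on a transition
# to the target value.
class PropertyTrigger:
    def __init__(self, trigger_value):
        self.trigger_value = trigger_value
        self.actions = []      # callables to invoke when triggered
        self._value = None     # last observed value

    def add_action(self, action):
        self.actions.append(action)

    def set(self, value):
        if value == self._value:       # ignore non-changes
            return
        self._value = value
        if value == self.trigger_value:
            for action in self.actions:
                action()
```

Setting the value to True twice in a row fires the actions once, mirroring how the XAML trigger above starts its fade-in storyboard only when FinishedLoading becomes True.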
Video Training – Windows Phone 7 Development series for Android Developers
Posted by jasper22 at 10:06
MSDev has released a new video series, Windows Phone 7 for Android Developers, which helps Android developers kick-start Windows Phone 7 development.
The series includes 8 videos and is presented by Nancy Strickland and Bill Lodin:
- Windows Phone 7 for Android Developers: The User Interface
- Windows Phone 7 for Android Developers: Data
- Windows Phone 7 for Android Developers: Graphics
- Windows Phone 7 for Android Developers: Performance
- Why Windows Phone 7 for the iPhone & Android Developer
- How Do I: Migrate an Android Application to a Windows Phone 7 Application?
Read more: Windows Phone 7
Enums and inheritance in .Net
Posted by jasper22 at 10:05
In one of my current projects I had the following code (I simplified it a bit):

public string ConnectionString
{
    get
    {
        switch (this.Importer)
        {
            case Importer.SqlServer:
                return "Server=localhost;Database=Northwind";
            case Importer.SqlServerOleDb:
                return "Provider=SQLOLEDB;Data Source=localhost;Initial Catalog=Northwind";
            default:
                throw new NotSupportedException(
                    string.Format("Importer {0} is not supported yet.", this.Importer));
        }
    }
}

After running the code coverage tool (dotCover from JetBrains) I received the following picture (green: covered by tests, red: not covered).

First idea

So, my code was clear and understandable, but obviously not fully tested. I asked myself how I could test the rest of the method. My idea was to extend the enum:

[TestFixture]
public class ConfigurationTest
{
    #region Test methods
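For comparison, the same switch-on-enum pattern can be written as a lookup table. This is my own Python analog, not the article's C#; the unknown-importer branch mirrors the NotSupportedException default case:

```python
# Map each importer kind to its connection string; unknown kinds raise,
# mirroring the default branch of the C# switch above.
CONNECTION_STRINGS = {
    "SqlServer": "Server=localhost;Database=Northwind",
    "SqlServerOleDb": ("Provider=SQLOLEDB;Data Source=localhost;"
                       "Initial Catalog=Northwind"),
}

def connection_string(importer):
    try:
        return CONNECTION_STRINGS[importer]
    except KeyError:
        raise NotImplementedError(f"Importer {importer} is not supported yet.")
```

A table like this is also easy to cover in tests: one assertion per entry plus one for the failure path, which is exactly the coverage gap the article is chasing.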
How to Disable right click popup menu in a MVVM silverlight 4.0 application
Posted by jasper22 at 10:04
The business problem

The Silverlight configuration dialog has a lot of useful functionality, including the ability to quickly uninstall the application. Most of the time, however, you will want to prevent business users from seeing the implementation details. There is also the possibility that business users could inadvertently change a few settings and stop the application from working, or worse, uninstall the application without knowing how to install it back again. The image below shows a test solution with the right-click menu that appears by default.

Solution

One possible approach to this problem would be to use JavaScript to disable the right click at the plugin level. This approach, however, would disable the right-click event for the entire application and will not work in out-of-browser mode. The solution presented in this article uses the right-click event handler exposed in Silverlight 4 and will not work in previous versions of Silverlight. The solution is to add an event handler for the mouse right-button-down event in the application startup method. In the event handler we set the Handled property to true, which prevents the event from bubbling up all the way to the Silverlight plugin. The source code is shown below.

private void Application_Startup(object sender, StartupEventArgs e)
{
    Application.Current.RootVisual.MouseRightButtonDown += new System.Windows.Input.MouseButtonEventHandler(RootVisual_MouseRightButtonDown);
}

void RootVisual_MouseRightButtonDown(object sender, System.Windows.Input.MouseButtonEventArgs e)
{
    e.Handled = true;
}

private void Application_Exit(object sender, EventArgs e)
{
    Application.Current.RootVisual.MouseRightButtonDown -= new System.Windows.Input.MouseButtonEventHandler(RootVisual_MouseRightButtonDown);
}

It appears to be an easy workaround; however, we still have a problem. When the datepicker popup window is open, the user can right-click into the popup control and still get the configuration dialog. Read more: Codeproject
Capturing via tcpdump to view in Wireshark
Posted by jasper22 at 10:02
$ sudo tcpdump -i en1 -s0 -w captured.pcap

-i  Listening interface
-s  Snarf snaplen bytes of data from each packet rather than the default of 64K bytes. Packets truncated because of a limited snapshot are indicated in the output with ``[|proto]'', where proto is the name of the protocol level at which the truncation has occurred. Note that taking larger snapshots both increases the amount of time it takes to process packets and, effectively, decreases the amount of packet buffering. This may cause packets to be lost. You should limit snaplen to the smallest number that will capture the protocol information you're interested in. Setting snaplen to 0 means use the required length to catch whole packets.

Read more: F A C I L E L O G I N
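The file tcpdump writes begins with a fixed 24-byte global header that records, among other fields, the snaplen discussed above. As a self-contained illustration (pure stdlib, building the header in memory rather than reading a real capture), here is how that header can be constructed and parsed:

```python
import struct

# Layout of the classic little-endian pcap global header:
# magic, version major, version minor, timezone offset, timestamp accuracy,
# snaplen, link-layer type.
PCAP_GLOBAL_HDR = struct.Struct("<IHHiIII")
PCAP_MAGIC = 0xA1B2C3D4

def make_header(snaplen=0x40000, linktype=1):
    """Build a 24-byte pcap global header (linktype 1 = Ethernet)."""
    return PCAP_GLOBAL_HDR.pack(PCAP_MAGIC, 2, 4, 0, 0, snaplen, linktype)

def read_snaplen(header_bytes):
    """Parse the snaplen field back out of a pcap global header."""
    magic, _maj, _min, _tz, _sf, snaplen, _lt = PCAP_GLOBAL_HDR.unpack(header_bytes[:24])
    assert magic == PCAP_MAGIC, "not a little-endian pcap file"
    return snaplen
```

Running `read_snaplen(open("captured.pcap", "rb").read(24))` against a file produced by the command above would show the effective snapshot length tcpdump used.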
Apache Axis2™
Posted by jasper22 at 10:02
Apache Axis2™ is a Web Services / SOAP / WSDL engine, the successor to the widely used Apache Axis SOAP stack. There are two implementations of the Apache Axis2 Web services engine: Apache Axis2/Java and Apache Axis2/C. While you will find all the information on Apache Axis2/Java here, you can visit the Apache Axis2/C Web site for Axis2/C implementation information. Apache Axis2, Axis2, Apache, the Apache feather logo, and the Apache Axis2 project logo are trademarks of The Apache Software Foundation. Read more: Apache

Setting up Apache Axis2 on Windows
Download Apache Axis2 and extract the downloaded zip file to a desired location. Create two new environment variables: set the first one's name to AXIS2_HOME and its value to the path of the Axis2 home folder (in my case C:\Users\thilini\Desktop\axis2-1.5.4), and set the second one's name to JAVA_HOME and its value to the path of the Java home folder (in my case C:\Program Files\Java\jdk1.6.0_16). Go to the bin folder inside the Axis2 home folder and double-click the axis2server.bat file to start the Axis2 service. Now it's time to test whether the installation is successful. Fire up a web browser and paste the following address into the address bar: http://localhost:8080/axis2/services/. If the installation is successful, it will list a service called "Version". Read more: EVIAC
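The same sanity check can be scripted instead of done in a browser. This is a sketch of my own, not part of the Axis2 documentation; it simply fetches the services listing and looks for the built-in "Version" service:

```python
from urllib.request import urlopen

def axis2_running(url="http://localhost:8080/axis2/services/"):
    """Return True if the Axis2 services page loads and lists 'Version'."""
    try:
        page = urlopen(url, timeout=5).read().decode("utf-8", "replace")
    except OSError:   # connection refused, timeout, DNS failure, ...
        return False
    return "Version" in page
```

If the server hasn't been started with axis2server.bat, the connection fails and the helper returns False rather than raising.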
Back to the basics : Exception Management design guideline for N-tier Asp.net applications
Posted by jasper22 at 09:59
Introduction

"How do you define good exception management for an N-tier ASP.NET application?" A pretty simple question, but not that simple to answer. We are good at making things, but maybe we are not equally good at designing systems that handle errors gracefully, give the user a polite message about the error without leaving him or her at a dead end, and internally notify the developers with enough detail that they don't feel they need to learn rocket science to fix those errors. So, if you ask me the same question, I would say your system has good exception management if:
It doesn't show unnecessary technical error descriptions when an error occurs; rather, it apologizes to the user with a screen saying something went wrong and lets him or her go back into the system.
When an error occurs, it immediately notifies technical teams with detailed information for troubleshooting, along with logging the error details.
Exception management is done in a central and manageable manner, without unnecessary try...catch...throw spread across the code base.
So, if we want to ensure good exception management in our ASP.NET application, we need to meet these three high-level objectives.

The bare minimum you should do

If you are the laziest developer in the world (like I was a few years ago), you should at least take advantage of what ASP.NET offers to handle exceptions gracefully. All you need is to perform the following two simple steps. Enable customErrors in web.config:

<customErrors defaultRedirect="Error.aspx" mode="On">
</customErrors>

As you might know already, this little configuration instructs the ASP.NET runtime to redirect to Error.aspx whenever an error occurs in your ASP.NET application. Setting mode="On" instructs it to redirect always, which may not be a good choice while you are developing your system. Setting mode="RemoteOnly" should be the perfect choice, as it redirects to the error page only when the page is browsed from a remote machine. Read more: Codeproject
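The three objectives above are not ASP.NET-specific. As a hedged illustration of my own (a WSGI-style middleware in Python, not the article's code), central handling can log full details for developers while showing users only a polite message:

```python
import logging
import traceback

def exception_middleware(app, logger=logging.getLogger("errors")):
    """Wrap a WSGI app so all unhandled exceptions are handled in one place."""
    def wrapped(environ, start_response):
        try:
            return app(environ, start_response)
        except Exception:
            # Log the full traceback for the developers...
            logger.error(traceback.format_exc())
            # ...but show the user only a generic, polite message.
            start_response("500 Internal Server Error",
                           [("Content-Type", "text/plain")])
            return [b"Something went wrong. Please go back and try again."]
    return wrapped
```

Because the handling lives in one wrapper, individual handlers need no scattered try/catch blocks, which is the third objective above.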
Ela, functional language
Posted by jasper22 at 09:57
Project Description

Ela is dynamically (and strongly) typed and comes with a rich and extensible type system out of the box. It provides extensive support for the functional programming paradigm, including but not limited to first-class functions, first-class currying and composition, list/array comprehensions, pattern matching, polymorphic variants, thunks, etc. It also provides some imperative programming features. Ela supports both strict and non-strict evaluation but is strict by default. The current language implementation is a light-weight and efficient interpreter written fully in C#. The interpreter was designed to be embeddable and has a clear and straightforward API. The language comes with a command line utility (Ela Console) that supports interactive mode.

Code samples in Ela

Quick sort:

let quickSort x::xs = quickSort [ y @ y <- xs | y < x ]
                   ++ [x] ++ quickSort [ y @ y <- xs | y >= x];
              [] = []

Fibonacci:

let fib = fib' 0 1
    where fib' a b 0 = a;
          a b n = fib' b (a + b) (n - 1)
    end

Read more: Codeplex
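For readers more comfortable with Python, here are rough transliterations (my own sketches, not from the project) of the two Ela samples above:

```python
def quick_sort(xs):
    """Pattern-match style quicksort: empty list -> empty list,
    otherwise partition the tail around the head, as in the Ela sample."""
    if not xs:
        return []
    x, rest = xs[0], xs[1:]
    return (quick_sort([y for y in rest if y < x])
            + [x]
            + quick_sort([y for y in rest if y >= x]))

def fib(n):
    """Accumulator-style Fibonacci, mirroring fib' a b n in the Ela sample."""
    a, b = 0, 1
    for _ in range(n):     # each step: fib' b (a + b) (n - 1)
        a, b = b, a + b
    return a
```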
SharpPcap - A Packet Capture Framework for .NET
Posted by jasper22 at 09:56
Introduction

Packet capturing (or packet sniffing) is the process of collecting all packets of data that pass through a given network interface. Capturing network packets in our applications is a powerful capability which lets us write network monitoring, packet analyzers and security tools. The libpcap library for UNIX based systems and WinPcap for Windows are the most widely used packet capture drivers that provide an API for low-level network monitoring. Among the applications that use libpcap/WinPcap as their packet capture subsystem are the famous tcpdump and Wireshark. In this article, we will introduce the SharpPcap .NET assembly (library) for interfacing with libpcap or winpcap from your .NET application and will give you a detailed programming tutorial on how to use it.

Background

Tamir Gal started the SharpPcap project around 2004. He wanted to use WinPcap in a .NET application while working on his final project for university. The project involved analyzing and decoding VoIP traffic, and he wanted to keep coding simple with C#, which has time-saving features like garbage collection. Accessing the WinPcap API from .NET seemed to be quite a popular requirement, and he found some useful projects on CodeProject's website that let you do just that:

Packet Capture and Analyzer
Raw Socket Capturing Using C#
Packet sniffing with winpcap functions ported to a .NET library
The first project is a great Ethereal .NET clone that lets you capture and analyze numerous types of protocol packets. However, a few issues with this project make it almost impossible to share among other .NET applications. Firstly, the author did not provide any generic API for capturing packets that can be used by other .NET applications; he didn't separate his UI code from his analyzing and capturing code, making his capturing code depend on GUI classes such as ListView to operate. Secondly, for some reason the author chose to re-implement some of WinPcap's functions in C# by himself rather than just wrapping them. This means that his application can't take advantage of new WinPcap versions, since he hard-coded a certain version of WinPcap into his application. The second and third articles are nice starts for wrapper projects for WinPcap; however, they didn't provide some important WinPcap features such as handling offline pcap files and applying kernel-level packet filters, and most importantly they provide no parser classes for analyzing protocol packets. Both projects didn't post their library source code together with the article to let other people extend their work and add new features and new packet parser classes. And so, Tamir decided to start his own library for the task. Several versions in the 1.x series were released. Development slowed towards mid-2007, when the last version in the 1.x series was released, SharpPcap 1.6.2. Chris Morgan took over development of SharpPcap in November of 2008. Since then SharpPcap has had major internal rewrites and API improvements. In late February 2010, SharpPcap v3.0 was released. This release represents a rewrite of SharpPcap's packet parsers. Packet parsing functionality was broken out into a new library, Packet.Net. SharpPcap takes care of interfacing with libpcap/winpcap and Packet.Net takes care of packet dissection and creation. The details of Packet.Net's architecture will be discussed later in the tutorial. SharpPcap v3.5 was released February 1st, 2011. The 3.5 release contains significant API changes as well as WinPcap remote capture and AirPcap support.

About SharpPcap

The purpose of SharpPcap is to provide a framework for capturing, injecting and analyzing network packets for .NET applications. SharpPcap is openly and actively developed, with its source code and file releases hosted on SourceForge. Source code patches to improve or fix issues are welcome via the sharppcap developers mailing list. Bug reports, feature requests and other queries are actively answered on the support forums and issue trackers there, so if you have any trouble with the library please feel free to ask. SharpPcap is a fully managed cross-platform library. The same assembly runs under Microsoft .NET as well as Mono on both 32 and 64-bit platforms. Read more: Codeproject
Raw Socket Capturing Using C#
Packet sniffing with winpcap functions ported to a .NET library
Assembler: Structured Exception Handling (SEH)
Posted by
jasper22
at
09:56
|
Windows 95 and Windows NT support a form of exception handling called Structured Exception Handling (SEH), which is handled by the operating system but also has direct support at the programming-language level. An "exception" is an event that is unexpected or that interrupts the normal flow of a process. Exceptions can be raised both by hardware and by software. Using Structured Exception Handling you can write more robust code: you can guarantee that resources such as memory blocks and files are properly closed if your program terminates unexpectedly. A distinctive feature of Structured Exception Handling is that once a handler is installed, the exception reaches it no matter how many other functions have been called in between: function A can handle an exception even if it is raised inside a function that A calls. The following macros make it easy to include exception handling in your programs. Each assembler statement carries a comment describing what it does.

SEH Macros

@TRY_BEGIN MACRO Handler
    pushad                       ; save the current register state
    mov  esi, offset Handler     ; address of the new exception handler
    push esi                     ; push the handler address
    push dword ptr fs:[0]        ; push the previous handler, linking the chain
    mov  dword ptr fs:[0], esp   ; install the new registration record
ENDM

@TRY_EXCEPT MACRO Handler
    jmp NoException&Handler      ; no exception occurred: skip the handler
Handler:
    mov  esp, [esp + 8]          ; an exception occurred: recover the old ESP value
    pop  dword ptr fs:[0]        ; restore the previous exception handler
    add  esp, 4                  ; ESP as it was before SEH was installed
    popad                        ; restore the saved register state
ENDM

@TRY_END MACRO Handler
    jmp ExceptionHandled&Handler ; the exception has already been handled
NoException&Handler:             ; no exception occurred
    pop  dword ptr fs:[0]        ; restore the previous exception handler
    add  esp, 32 + 4             ; ESP as before SEH was installed:
                                 ; 32 for pushad, 4 for the handler address
                                 ; (the register state is not restored)
ExceptionHandled&Handler:
    ; the exception was handled, or none occurred at all
ENDM
Using the SEH Macros
The macros above are used like this:
@TRY_BEGIN HandlerName
    ; Code placed here is guarded against exceptions.
@TRY_EXCEPT HandlerName
    ; Code placed here runs if an exception occurs.
@TRY_END HandlerName
    ; Normal execution continues.
Sample Program
Structured Exception Handling in Assembler
;
; To build this program you need the 32-bit Turbo Assembler:
;
; TASM32 /ml SEH
; TLINK32 SEH, SEH, , IMPORT32.LIB
.386p
.model flat, stdcall
EXTRN ExitProcess:PROC
EXTRN MessageBoxA:PROC
Read more: Virtual Reality Online
Enabling HTTP Strict Transport Security on debian servers
Posted by
jasper22
at
09:54
|
I just enabled HTTP Strict Transport Security (HSTS) markers on a bunch of web servers that offer HTTPS. It's an easy step to take, and it means that users of HSTS-compliant browsers (such as Chromium and the upcoming Firefox 4) or browsers with HSTS-compliant extensions (like Firefox's NoScript or HTTPS-Everywhere) will no longer be vulnerable to attacks like sslstrip once they have made one successful connection to the HSTS-enabled HTTPS web site. It's not a perfect solution, but it is far better than the current situation, and it's easy to implement for websites that already use HTTPS. For sites using Apache, just enable mod_headers (on Debian, that's: a2enmod headers) and add the following line to your HTTPS vhost stanza:
Header add Strict-Transport-Security: "max-age=15768000"
Depending on your setup, you may want to add the semicolon-delimited argument includeSubdomains, like this:
Header add Strict-Transport-Security: "max-age=15768000;includeSubdomains"
(Note that the number of seconds above is roughly 6 months -- this is the duration for which compliant clients will retain the protection.) Read more: Debian Administration
How to customize the context menus of a WebBrowser control via the IDocHostUIHandler interface
Posted by
jasper22
at
09:54
|
Important note
Unfortunately, the customization approach that I originally used in this article is not suitable for MFC applications (see the "Important Notice" message thread for more), and you should ignore sections 4 and 6 of my article. On the contrary, section 5 is still 100% valid and can still be quite useful in the customization of the WebBrowser control context menus. The revised sample projects use a new, much better customization approach that is going to be comprehensively discussed in the next update of this article, which will hopefully be ready in a couple of weeks. I am publishing this semi-documented and not fully tested code because I have indications that some developers may need it much sooner than the day of my next update. For each revised sample there is also a Readme.htm file that briefly describes how the sample works.

Table of contents
1. Why do we need to customize the context menus of the WebBrowser control (WBC)?
2. The basic customization techniques that can be used.
2.1. Overriding the CWinApp::PreTranslateMessage() method.
2.2. Implementing the IDocHostUIHandler interface.
3. The objective of this article.
4. Implementation of the preliminary stuff.
4.1. The CWebCtrlInterFace class declaration.
4.2. Implementation of the IUnknown interface methods.
4.3. Implementing the "neutral" behavior in the IOleClientSite and IDocHostUIHandler methods.
5. Implementation of the IDocHostUIHandler::ShowContextMenu() method.
5.1. Vital technical information about the context menus of the WBC.
5.2. Introducing the CWebCtrlInterFace customization modes.
5.3. Implementation of the kDefaultMenuSupport and kNoContextMenu modes.
5.4. Implementation of the kTextSelectionOnly mode.
5.5. Implementation of the kCustomMenuSupport mode.
5.6. Implementation of the kAllowAllButViewSource mode.
6. The sample application.
7. Code listings.
8. References.

1. Why do we need to customize the context menus of the WebBrowser control (WBC)?
The WebBrowser control (WBC) is a very powerful ActiveX control equipped with many useful capabilities, such as HTML, XML and text data viewing, web browsing, downloading and document viewing (it can display PDF, Word, Excel, PowerPoint and other documents). A description of its capabilities and its usefulness already exists on the MSDN site [1,2,3,4] and is out of the scope of this article. This article deals only with the context menus that the WebBrowser control provides when it displays HTML, XML or text data, and in particular with the customization of these context menus. When the WebBrowser control displays HTML, XML or text data, it provides by default a powerful set of context menus that can be used to manipulate its content. Hence, significant control over its content is automatically granted to the end user. For instance, the end user will be able to navigate forward and back whenever he chooses, to change the current encoding, to print/export/reload the content, and he can also view the source data. By default the WebBrowser control does not need any assistance or approval from the application to do any of these. In fact, the application might not even know that these actions ever occur! Obviously, the fact that the end user has so much control over the content of our WebBrowser control is not always desirable and can introduce severe difficulties in our application design. If this is the case, then we have to customize the default context menus of the WebBrowser control and accommodate them to the specific needs of our application. Read more: Codeproject
SQL SERVER – PAGEIOLATCH_DT, PAGEIOLATCH_EX, PAGEIOLATCH_KP, PAGEIOLATCH_SH, PAGEIOLATCH_UP – Wait Type – Day 9 of 28
Posted by
jasper22
at
09:52
|
It is very easy to say that you should replace your hardware because it is not up to the mark. In reality, it is very difficult to implement: it is really hard to convince an infrastructure team to change any hardware just because it is not performing at its best. I had a nightmare related to this issue in a deal with an infrastructure team when I suggested that they replace their faulty hardware, because they initially would not accept that their hardware was at fault. It is really easy to say "Trust me, I am correct", but it is equally important to put some logical reasoning behind that statement. PAGEIOLATCH_XX is one of those wait stats that we would like to blame directly on the underlying subsystem. Of course, most of the time that is correct -- the underlying subsystem usually is the problem. From Books Online:
PAGEIOLATCH_DT
Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Destroy mode. Long waits may indicate problems with the disk subsystem.

PAGEIOLATCH_EX
Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Exclusive mode. Long waits may indicate problems with the disk subsystem.

PAGEIOLATCH_KP
Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Keep mode. Long waits may indicate problems with the disk subsystem.

PAGEIOLATCH_SH
Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Shared mode. Long waits may indicate problems with the disk subsystem.

PAGEIOLATCH_UP
Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Update mode. Long waits may indicate problems with the disk subsystem.

PAGEIOLATCH_XX Explanation:
Simply put, this particular wait type occurs when any of the tasks is waiting for data from the disk to move to the buffer cache. Read more: Journey to SQL Authority with Pinal Dave
AlphaFS v1.5 Released (think "The 'Long Path' IO support the BCL doesn't yet have..." or "Don't 'W' [Wide/Unicode API/etc] P/Invoke your Path API's when AlphaFS has done it already for you..."
Posted by
jasper22
at
09:51
|
Project Description
AlphaFS is a .NET library providing more complete Win32 file system functionality to the .NET platform than the standard System.IO classes. Feature highlights: creating hardlinks, accessing hidden volumes, enumeration of volumes, transactional file operations and much more.

News - AlphaFS 1.5 Stable Released
AlphaFS has now reached a stable state. Many improvements and bugfixes have been made since the beta, and it is now considered stable enough for production use.

Introduction
The file system support in .NET is pretty good for most uses. However, there are a few shortcomings, which this library tries to alleviate. The most notable deficiency of the standard .NET file system support was discovered in our attempts to work with the Windows Volume Shadow Copy Service (VSS) (see http://www.codeplex.com/alphavss). VSS creates snapshots of volumes, but on Windows XP it does not allow exposing a snapshot as a standard drive letter. There is a hack using the DefineDosDevice Win32 API function, but the solution is not very elegant. Exposing drive letters works on Windows Vista and later, but it may not be desirable, e.g. for a backup application, to suddenly have a number of new drive letters turn up in Explorer. The paths through which the shadow copies are available are extended-length paths, e.g. "\\?\Volume{12345678-aac3-31de-3321-3124565341ed}\Program Files" instead of simply "C:\Program Files". However, these paths cannot be accessed using the file system functions exposed by System.IO.

What does AlphaFS provide?
AlphaFS provides a namespace (Alphaleonis.Win32.Filesystem) containing a number of classes. Most notable are replications of System.IO.File, System.IO.Directory and System.IO.Path, all with support for the extended-length paths discussed above. They also contain extensions to these classes, and there are more options for several functions. Another thing AlphaFS brings to the table is support for Transactional NTFS (TxF). Almost every method in these classes exists in two versions: one normal, and one that can work with transactions, more specifically with the kernel transaction manager (KTM). This means that file operations can be performed using the simple, lightweight KTM on NTFS file systems, through .NET, using the interface of the standard classes we are all used to.
AlphaFS also contains a little security-related functionality (in Alphaleonis.Win32.Security), providing the ability to enable token privileges for a user, which may be necessary, e.g., for changing ownership of a file.
...The library comes with full API documentation in CHM and Windows Help 2.x format. The library is Open Source, licensed under the MIT license. Read more: Greg's Cool [Insert Clever Name] of the Day
Read more: AlphaFS (Codeplex)
Deploying Web Applications on a remote IIS server using MSBuild
I have written down some important points that should be taken care of while deploying applications. Hope it helps others trying to do the same.

- MSBuild arguments should be set correctly:
  /p:DeployOnBuild=True /p:DeployTarget=MsDeployPublish /p:MSDeployServiceURL=<web server name>:8172/msdeploy.axd /p:DeployIISAppPath="<Website Name>/<Application Name>" /p:CreatePackageOnPublish=True /p:AllowUntrustedCertificate=True /p:UserName=<Domain name>\<User Name> /p:Password=<User Password>
- Local Service should have Full Control on the website directory, i.e. "C:\inetpub\wwwroot".
- The following providers must be added to the Management Service delegation rule: setAcl, createApp, contentPath, iisApp. We can use a blank rule and add these providers, or the following three rules from the templates:
  1. Deploy Applications with Content
  2. Mark Folders as Applications
  3. Set Permissions for Applications
- Identity Type should be "ProcessIdentity".
- Path should be {userScope}.

Read more: Getting started with IIS 7 - Amol Mehrotra
BitNami
Posted by
jasper22
at
16:48
|
After several months in private beta, we're excited to announce that BitNami Cloud Hosting is now available! BitNami Cloud Hosting simplifies the process of deploying and managing open source and other applications in the cloud.

1-Click Application Deployment: Launch servers with one or more applications or development stacks from the BitNami library in one click. Currently available applications include SugarCRM, Alfresco, JasperServer, Drupal, WordPress and many others. You can view a list at: http://bitnami.org/cloud/apps

Automatic Backups and One-Click Server Restores: BitNami Cloud Hosting offers automated backups that can be scheduled to occur hourly, daily or weekly. Backups cover not just the application data but the whole machine, so configuration and other data are also preserved. This allows entire servers to be restored in one click. Backups are incremental, which keeps costs low while ensuring that servers can be restored from any point in time.

Server Scaling: Change the size of a server as needed with the click of a button.

Server Scheduling: Set a schedule for servers to automatically start and stop, reducing computing costs.

Automatic Monitoring: BitNami Cloud Hosting provides automatic monitoring out of the box.

Multi-Account Management: The BitNami Cloud Hosting control panel enables you to manage applications deployed across several cloud accounts.

Read more: BitNami
Mollom Architecture - Killing Over 373 Million Spams At 100 Requests Per Second
Posted by
jasper22
at
16:46
|
Mollom is one of those cool SaaS companies every developer dreams of creating when they wrack their brains looking for a viable software-as-a-service startup. Mollom profitably runs a useful service -- spam filtering -- with a small group of geographically distributed developers. Mollom helps protect nearly 40,000 websites from spam, including one of mine, which is where I first learned about Mollom. In a desperate attempt to stop spam on a Drupal site, where every other form of CAPTCHA had failed miserably, I installed Mollom in about 10 minutes and it immediately started working. That's the out-of-the-box experience I was looking for. From the time Mollom opened its digital inspection system it has rejected over 373 million spams, and in the process it has learned that a stunning 90% of all messages are spam. This spam torrent is handled by only two geographically distributed machines, each handling 100 requests/second and running a Java application server and Cassandra. So few resources are necessary because they've created a very efficient machine learning system. Isn't that cool? So, how do they do it? To find out I interviewed Benjamin Schrauwen, cofounder of Mollom, and Johan Vos, GlassFish and Java enterprise expert. Proving software knows no national boundaries, Mollom HQ is located in Belgium (other good things from Belgium: Hercule Poirot, chocolate, waffles).
Statistics
- Serving 40,000 active websites, many of which are very large customers like Sony Music, Warner Brothers, Fox News, and The Economist. A lot of big brands, with big websites, and a lot of comments.
- Finds half a million spam messages each day.
- Handles 100 API calls/second.
- A spam check is low latency, taking between 30-50msecs. The slowest connection would be 500msec. The 95th percentile of latency is 250msecs. It's really optimized for speed.
- Spam classification efficiency is at 99.95%. This means that only 5 in 10,000 spam messages were not caught by Mollom.
- Netlog, which is a social networking site in Europe, has their own Mollom setup in their own datacenter. Netlog handles about 4 million messages a day on custom classifiers that are trained on their data.
Platform
- Two production servers run in two different datacenters for failover.
- One server is on the East coast and one is on the West coast.
- Each server is an Intel Xeon Quad core, 2.8GHz, 16GB RAM, 4 disks of 300 GB, RAID 10.
- SoftLayer - the machines are hosted by SoftLayer.
- Cassandra - a NoSQL database selected for its write performance and ability to operate across multiple datacenters.
Read more: High Scalability
Read more: Mollom
COM in plain C
Posted by
jasper22
at
16:44
|
Content
Introduction
There are numerous examples that demonstrate how to use and create COM/OLE/ActiveX components. But these examples typically use Microsoft Foundation Classes (MFC), .NET, C#, WTL, or at least ATL, because those frameworks have pre-fabricated "wrappers" to give you some boilerplate code. Unfortunately, these frameworks tend to hide all of the low-level details from a programmer, so you never really learn how to use COM components per se; rather, you learn how to use a particular framework riding on top of COM. If you're trying to use plain C, without MFC, WTL, .NET, ATL, C#, or even any C++ code at all, then there is a dearth of examples and information on how to deal with COM objects. This is the first in a series of articles that will examine how to utilize COM in plain C, without any frameworks.

With standard Win32 controls such as a Static, Edit, Listbox, Combobox, etc., you obtain a handle to the control (i.e., an HWND) and pass messages (via SendMessage) to it in order to manipulate it. The control also passes messages back to you (i.e., by putting them in your own message queue, from which you fetch them with GetMessage) when it wants to inform you of something or give you some data. Not so with an OLE/COM object. You don't pass messages back and forth. Instead, the COM object gives you pointers to certain functions that you can call to manipulate the object. For example, one of Internet Explorer's objects will give you a pointer to a function you can call to cause the browser to load and display a web page in one of your windows. One of Office's objects will give you a pointer to a function you can call to load a document. And if the COM object needs to notify you of something or pass data to you, then you will be required to write certain functions in your program and provide (to the COM object) pointers to those functions, so the object can call them when needed. In other words, you need to create your own COM object(s) inside your program.

Most of the real hassle in C will involve defining your own COM object. To do this, you'll need to know the minute details of a COM object -- stuff that most of the pre-fabricated frameworks hide from you, but which we'll examine in this series. In conclusion: you call functions in the COM object to manipulate it, and it calls functions in your program to notify you of things, pass you data, or otherwise interact with your program. This scheme is analogous to calling functions in a DLL, but as if the DLL were also able to call functions inside your C program -- sort of like a "callback". But unlike with a DLL, you don't use LoadLibrary() and GetProcAddress() to obtain the pointers to the COM object's functions. As we'll soon discover, you instead use a different operating system function to get a pointer to an object, and then use that object to obtain pointers to its functions. Read more: Codeproject
Project Kipling Real-Time Data Journalism Tools
Posted by
jasper22
at
16:43
|
Journalists today work in a world dominated by two trends:
1. People break and discuss the news in real time via many-to-many communications platforms such as Twitter, and
2. Complex issues – both locally and globally – in politics, economics and the environment give rise to large sets of data and a demand for the stories those datasets have to tell.
To address these trends, Project Kipling provides real-time Twitter data collection and management, and an extensive suite of proven analytical tools for
- Natural language processing,
- Text mining,
- Social network analysis,
- Geospatial analysis,
- Machine learning,
- Data mining,
- Predictive analytics and modeling,
- Economic and financial analysis, and
- Visualization, exploratory data analysis and data presentation.
- A complete office / productivity suite,
- Browsers, email, address book and calendaring,
- Voice and instant messaging communications, and
- Digital media creation / editing software.
Read more: Project Kipling
Confession: There's an iPhone App For That
Posted by
jasper22
at
12:20
|
Pope Benedict XVI has recently encouraged priests to blog and promoted Christian Netiquette. Now apparently the Roman Catholic church has sanctioned a 'Confession App,' available through iTunes for $1.99. Apparently it doesn't replace 'traditional,' in-person confession, but walks one through the process, even suggesting sins you may wish to confess. Read more: Slashdot
That pesky Hibernate
Posted by
jasper22
at
12:04
|
Yesterday a colleague of mine asked if I could build some functionality to physically delete records in our database. Since deleting things is always a good idea, I immediately started to work on his request. As we are using an ORM (Hibernate) to manage our relational persistence, this would just be a matter of adding a single line of code:
session.delete(someEntity);
And all the Hibernate magic combined with some cascading would take care of the deletion. So I concluded my assignment fast, and got the well-known warm cosy feeling: 'job well done'.
However, suddenly I realised that the combination "hibernate, magic, job well done, warm cosy feeling" has put me into problems before. In fact, I remember this since the last time I was in that position I ended up feeling exactly like this man did. Just to be sure, I went back to the code and configured my logging to print out what Hibernate was doing. As it turned out, something bad was going on. It seems that Hibernate, too, has difficulties deleting relationships (that's a line to think about). The problem is that each child is deleted one by one.
Let's say you have two entities: Person and Product. There is a one-to-many relationship between Person and Product. The relationship is uni-directional from Person to Product, mapped as a Set, managed by Person (as there is only one side) and the cascading is set to all-delete-orphan.
<hibernate-mapping default-access="field" >
<class name="entities.Person" table="person">
<id name="id" column="id" access="property">
<generator class="native"/>
</id>
<set name="products" cascade="all-delete-orphan">
<key column="person_id" not-null="true" update="false" foreign-key="person_fk"/>
<one-to-many class="entities.Product"/>
</set>
</class>
</hibernate-mapping>
<hibernate-mapping default-access="field" >
<class name="entities.Product" table="product">
<id name="id" column="id" access="property">
<generator class="native"/>
</id>
<property name="name" column="name"/>
</class>
</hibernate-mapping>
If you delete a Person object which has 5 products, you will see that pesky Hibernate doing this:
31398 [main] DEBUG org.hibernate.SQL - delete from product where id=?
31399 [main] DEBUG org.hibernate.SQL - delete from product where id=?
31399 [main] DEBUG org.hibernate.SQL - delete from product where id=?
Read more: Warp 10 Mr. Crusher. Engage!
Java Floating Point Bug Can Lock Up Servers
Posted by
jasper22
at
12:02
|
Here we go again: Just like the recently-reported PHP Floating Point Bug causes servers to go into infinite loops when parsing certain double-precision floating-point numbers, Sun/Oracle's JVM does it, too. It gets better: you can lock up a thread on most servers just by sending a particular header value. Sun/Oracle has known about the bug for something like 10 years, but it's still not fixed. Java Servlet containers are patching to avoid the problem, but application code will still be vulnerable to user input. Read more: Slashdot
Special 48-Hour Offer: Free ASP.NET MVC 3 Video Training
Posted by
jasper22
at
11:10
|
The Virtual ASP.NET MVC Conference (MVCConf) happened earlier today. Several thousand developers attended the event online, and had the opportunity to watch 27 great talks presented by the community. All of the live presentations were recorded, and videos of them will be posted shortly so that everyone can watch them (for free). I’ll do a blog post with links to them once they are available.
Special Pluralsight Training Available for Next 48 Hours
In my MVCConf keynote this morning, I also mentioned a special offer from Pluralsight (a great .NET training partner): the opportunity to watch their excellent ASP.NET MVC 3 Fundamentals course free of charge for the next 48 hours. This training is 3 hours and 17 minutes long and covers the new features introduced with ASP.NET MVC 3 including: Razor, Unobtrusive JavaScript, Richer Validation, ViewBag, Output Caching, Global Action Filters, NuGet, Dependency Injection, and much more. Scott Allen is the presenter, and the format, video player, and cadence of the course are really great. It provides an excellent way to quickly come up to speed with all of the new features introduced with the new ASP.NET MVC 3 release. Click here to watch the Pluralsight training - available free of charge for the next 48 hours (until Thursday at 9pm PST).
Read more: ScottGu's Blog
Read more: Pluralsight
Apache Camel
Posted by
jasper22
at
11:03
|
Apache Camel is an open-source framework to exchange, route and transform data using various protocols. It is prepackaged with components for dealing with various backend systems and powerful routing and filter capabilities. Web Site: http://camel.apache.org
Version discussed: Apache Camel 2.4.0
License & Pricing: Open Source with commercial support and packaging by FUSE
Support: User mailing list, developer mailing list, Internet Relay Chat, Fuse forums. Only the book "Camel in Action" by Manning is currently available.
1. Introduction
The need to exchange data between different applications and environments is as old as software development. Every application is built with a specific idea in mind and has its respective data model and data format most suited for the task. The task of getting data exchanged among systems is always present when dealing with more than two or three applications. Often, third-party software offers export and import interfaces, such as comma-separated values (CSV), but these are most of the time not sufficient. A different data structure, the incorporation of other pieces of information, or the filtering of certain parts are typical requirements. In the past, this was solved either by in-house development efforts, resulting in dedicated glue code, or by using special, proprietary application suites. Nowadays, this can be solved with open source frameworks, bringing systems integration to a commodity level. This article introduces you to Apache Camel, one of those integration frameworks.
1.1 Overview of Enterprise Application Integration (EAI)
During the evolution of systems and their integration, methodologies and patterns for typical challenges have been documented.
1.1.1 Challenges
When integrating two systems, you typically face several quite common issues:
You must access and interact with the system. This is about technical access to the system by means of an API, file access or database connectivity.
You must transform incoming data into what is understood by the external system. This is about converting data from one format to another; not only data, but also file formats, protocols and the like.
You may need to distribute data or process only specific sets of data. This deals with routing and filter capabilities.
These issues can be solved in an application-specific way, although this limits reusability and slows down the process considerably by reinventing the wheel over and over again. Therefore, best practices were extracted, discussed and documented.
1.1.2 Typical tasks
To solve connectivity issues, the typical solution or framework comes with a set of pre-packaged components for accessing often-used resources, such as web services, file systems, databases and HTTP URLs. Proprietary solutions typically also extend this to other vendor products, such as SAP or Siebel, but open source frameworks mostly limit this to open-standards resources.
Read more: methodsandtools
Embedding Code Coverage Collection into CruiseControl.NET
Posted by
jasper22
at
11:02
|
Lately, the Test-Driven Development practice has been actively promoted. It is undoubtedly very useful, but not everyone applies it, and not always. As a result, part of the code is covered by unit tests and part remains uncovered. Manually checking every project for whether its tests are written properly is a practically impossible task.
Recently I wondered how to automate collecting a metric that shows the percentage of code covered by tests. I decided to build its collection into CruiseControl.NET. Naturally, 100% coverage does not guarantee the absence of bugs, but it at least shows the developers' attitude toward writing tests. In this article I will not dwell on configuring CruiseControl to build projects and run unit tests; I will describe the steps that let you collect the coverage information and display it on the tests page. For writing unit tests our company uses Microsoft's framework, MSTest, and this article describes the results of working with it. Note that a prerequisite for embedding code coverage is that CruiseControl is already configured to run the tests. As far as I know, CruiseControl has built-in support for displaying data collected with NCover, but since we have no way to buy that library, we use the means we have. Everything below applies to projects written in Visual Studio 2010; the differences from the 2008 version are small, but they do exist.
Creating the data.coverage file on the build server
Visual Studio lets you configure test execution so that, after all tests have run, a file is created that stores information about the covered code sections. This file is called data.coverage (it is stored in the In folder with the test run results).
This setting is stored in your project's .testrunconfig file. Through the UI it can be set as follows:
- Open your .testrunconfig file
- Select the Data and Diagnostics item
- Check the box next to Code Coverage and click Configure
- Select the assembly for which coverage should be collected
- Uncheck the Instrument assemblies in place flag
After saving the file, it will contain the following text:
<?xml version="1.0" encoding="UTF-8"?>
<TestSettings name="Local Test Run" id="de0d45b4-4fed-4acb-a663-2cfdf0ce4fd7" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
<Description>This is a default test run configuration for a local test run.</Description>
<Deployment enabled="false" />
<Execution>
<Timeouts testTimeout="300000" />
<TestTypeSpecific>
<UnitTestRunConfig testTypeId="13cdc9d9-ddb5-4fa4-a97d-d965ccfc6d4b">
<AssemblyResolution>
<TestDirectory useLoadContext="true" />
</AssemblyResolution>
</UnitTestRunConfig>
</TestTypeSpecific>
<AgentRule name="LocalMachineDefaultRole">
Read more: Habrahabr
Important .NET Framework 4.0 Command Line Tools You Must Know
Posted by
jasper22
at
11:00
|
The .NET Framework 4.0 contains a plethora of command line tools ranging from build, deployment, debugging and security to Interop tools, and so on. Here’s a list of important command line tools in the .NET Framework 4.0 which can be run using the Visual Studio Command Prompt. The description of these tools has been taken from the MSDN documentation.
Assembly, Build, Deployment and Configuration Tools
Al.exe (Assembly Linker)
The Assembly Linker generates a file that has an assembly manifest from one or more files that are either modules or resource files.
Gacutil.exe (Global Assembly Cache Tool)
The Global Assembly Cache tool allows you to view and manipulate the contents of the global assembly cache and download cache.
Ilasm.exe (MSIL Assembler)
The MSIL Assembler generates a portable executable (PE) file from Microsoft intermediate language (MSIL).
Ildasm.exe (MSIL Disassembler)
The MSIL Disassembler is a companion tool to the MSIL Assembler (Ilasm.exe). Ildasm.exe takes a portable executable (PE) file that contains Microsoft intermediate language (MSIL) code and creates a text file suitable as input to Ilasm.exe.
Installutil.exe (Installer Tool)
The Installer tool is a command-line utility that allows you to install and uninstall server resources by executing the installer components in specified assemblies.
Mage.exe (Manifest Generation and Editing Tool) and MageUI.exe
The Manifest Generation and Editing Tool (Mage.exe) is a command-line tool that supports the creation and editing of application and deployment manifests.
Ngen.exe (Native Image Generator)
The Native Image Generator (Ngen.exe) is a tool that improves the performance of managed applications. Ngen.exe creates native images, which are files containing compiled processor-specific machine code, and installs them into the native image cache on the local computer.
Regasm.exe (Assembly Registration Tool)
The Assembly Registration tool reads the metadata within an assembly and adds the necessary entries to the registry, which allows COM clients to create .NET Framework classes transparently.
Regsvcs.exe (.NET Services Installation Tool)
The .NET Services Installation tool loads and registers an assembly, generates, registers, and installs a type library into a specified COM+ application and configures services that you have added programmatically to your class. Read more: devCurry
Install Windows Updates using C# & WUAPI
Posted by
jasper22
at
10:57
|
Overview
This document explains how to get the Windows updates available for the current machine and selectively install them, using C#.
Requirement
Perform customized installation of Windows Updates using C# and the Windows Update API Library. This provides centralized command over customized installation of updates, patches and KBs for Windows. For example, if particular updates are not to be installed on some machines, we can configure our custom update application to ignore those updates, using this approach.
Benefits
1. Exclusive command over the updates installed over a network of systems.
2. Easy to develop and support, using .NET.
3. Simple API approach.
References Required
Add a reference to the c:\WINDOWS\system32\wuapi.dll file. This is the API library for the Windows Update system. Write the below import statements:
using WUApiLib;
using System.Management;
Coding
Now, we need to search the Microsoft updates for this machine:
UpdateSessionClass uSession = new UpdateSessionClass();
IUpdateSearcher uSearcher = uSession.CreateUpdateSearcher();
ISearchResult uResult = uSearcher.Search("IsInstalled=0 and Type='Software'");
All the updates found will now be populated into the uResult collection object, which can be accessed using the below foreach loop:
foreach (IUpdate update in uResult.Updates)
{
    Console.WriteLine(update.Title);
}
Note that we have only found the updates available for this machine, but haven’t downloaded them yet. You can iterate the available updates to select only the required ones and add them to an UpdateCollection object, which can be assigned to the UpdateDownloader class below so as to download them. Now we create an UpdateDownloader object to download the updates:
UpdateDownloader downloader = uSession.CreateUpdateDownloader();
downloader.Updates = uResult.Updates;
downloader.Download();
Read more: EggCafe
February 2011 Security Release ISO Image
Posted by
jasper22
at
10:54
|
This DVD5 ISO image file contains the security updates for Windows released on Windows Update on February 8th, 2011.Read more: MS Download
Visual Studio is Hiring
Posted by
jasper22
at
10:24
|
Do you want to work on a product used by millions of developers around the world? I do! Come join me to deliver Visual Studio, the set of developer tools used across Microsoft and around the world. We have open positions available across Test, Dev and PM at varying levels on many projects across Visual Studio Professional. We’re looking for the most talented folks around to help us deliver the core parts of Visual Studio, from the Shell & IDE, core languages (VB/C#/F#), packaging & setup, to our future investments in C# & VB. Here is a list of our open positions. We love referrals; let your friends know they can work on Visual Studio too! To get more information, please submit your resume. Read more: C# Frequently Asked Questions
How to Access/Manipulate HTML Elements/Javascript in Silverlight
Posted by
jasper22
at
10:23
|
Introduction: Interacting between HTML and ASP.NET has been one of the most common coding scenarios we have come across, i.e., accessing HTML elements and calling JavaScript from code-behind. I recently came across a situation where I wanted to add Silverlight content to an existing page and allow the HTML and Silverlight portions of the page to interact. When we need this interaction: So let us first see the probable scenarios where we need to interact with / access HTML in a Silverlight page.
- To make compatible: When we want to use the latest and greatest user interfaces with Silverlight, this requires compatibility with HTML. Situations could be like including a Silverlight content region to show non-essential extras alongside the critical HTML content.
- Legacy web pages: If we have an existing web page that does exactly what we want, it may make more sense to extend it with a bit of Silverlight pizzazz than to replace it outright. So the solution is to create a page that includes both HTML and Silverlight content.
- Server-side features: We know that Silverlight is a poor fit for tasks that need to access server resources or require high security, which is why it makes far more sense to build a secure checkout process with a server-side programming framework like ASP.NET. But you can still use Silverlight to display advertisements, video content, product visualizations, and other value-added features in the same pages.
So, let's try to BRIDGE this gap between Silverlight and the ordinary world of HTML. What is this BRIDGE: This BRIDGE is built using a set of managed Silverlight classes (commonly called helper classes) that replicate the HTML DOM (document object model); we can access these classes through the System.Windows.Browser namespace.
Google2Piwik – exporting Google Analytics to Piwik
Posted by
jasper22
at
10:20
|
Description
Google2Piwik is a script written in Python that exports statistics from Google Analytics to its open source alternative, Piwik.
Requirements
Access to Piwik Installation.
Google Analytics Account with read or admin rights.
Google Analytics API currently does not support Google Apps for your Domain accounts.
That's why you can't export data from account@yourdomain.com even if you have access via the web interface.
However, you can grant privileges to your Gmail account and use it to perform the export.
Python 2.6 with components:
gdata-python-client (Google Python API)
http://code.google.com/p/gdata-python-client/
MySQLdb
Read more: ClearCode
OpenMFC
Posted by
jasper22
at
10:17
|
Project Description
OpenMFC is an open-source version of MFC for use with a C/C++ compiler without MFC. Read more: Codeplex
Parsing dates from text
Posted by
jasper22
at
10:16
|
In the past I wrote that dates should be written in a standard form; for example, today's date should be written as '20110208', so there is no doubt that it refers to the eighth of February and not the second of August.
If, on the other hand, we write '2011-02-08', it will be understood differently under different languages: the relevant language is first and foremost the server's default language; above it, the default language of the Login (if it differs from the server's, it takes precedence); and above both, the language set for the Session we opened (usually all of them match the server's, and there is no problem).
That said, there are cases where dates arrive in a non-standard form, and SQL Server shows great flexibility in its willingness to translate them into a canonical form; this is useful mainly when data is received from external systems.
Select * From sys.sysLanguages;
The sys.sysLanguages table shows the definitions of the various languages, which are used to recognize the abbreviated and full names of days and months.
The default language in our parts is usually us_english, but suppose we received a text file written in French:
Set Language Francais;
Select Cast(N'fevrier 8 2011' As DateTime),
Cast(N'fevrier 2011 8' As DateTime),
Cast(N'8 fevrier 2011' As DateTime),
Cast(N'8 2011 fevrier' As DateTime),
Cast(N'2011 8 fevrier' As DateTime),
Cast(N'2011 fevrier 8' As DateTime),
Cast(N'fevr 8 2011' As DateTime),
Cast(N'fevr 2011 8' As DateTime),
Cast(N'8 fevr 2011' As DateTime),
Cast(N'8 2011 fevr' As DateTime),
Cast(N'2011 8 fevr' As DateTime),
Cast(N'2011 fevr 8' As DateTime),
Cast(N'2011-fevr-8' As DateTime);
Note that the system will manage to translate every one of these strings correctly to the eighth of February,
and where needed you can also specify parts of the day (hours, minutes, AM/PM, and so on).
Read more: Geri Reshef
LateBindingApi
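The same ambiguity exists outside SQL Server. As an illustration (not from the original post), a short Python sketch shows why the compact '20110208' form is unambiguous while a separator-based form depends on the convention you assume:

```python
from datetime import datetime

# The compact ISO form has exactly one reading: year, month, day.
d = datetime.strptime("20110208", "%Y%m%d")
print(d.month, d.day)  # -> 2 8

# A slash-separated string needs an explicit format; the same text
# yields different dates depending on the assumed convention.
us = datetime.strptime("02/08/2011", "%m/%d/%Y")  # February 8 (US reading)
eu = datetime.strptime("02/08/2011", "%d/%m/%Y")  # August 2 (European reading)
print(us.month, eu.month)  # -> 2 8
```

This is exactly the February-8th-versus-August-2nd confusion the post opens with, reproduced in two lines.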
Posted by jasper22 at 10:14
Project Description
Creates .Net proxy components from COM Type Libraries.
The components will be created as C# or VB.Net Source Code in a generated Visual Studio Solution.
The classes in the generated solution access the COM server with a late binding reflection technique.
- You can generate a COM Type Library in multiple versions into a single .Net proxy component.
- The generated classes, methods and properties are marked with attributes that indicate which COM Type Library version supports the entities.
- An easy mechanism to handle COM references and unknown variant types is integrated.
- Events are also supported.
Read more: Codeplex
Simple Error Reporting on WP7
Posted by jasper22 at 09:38
"It's not a bug - it's an undocumented feature." Unfortunately, the sad fact is that we all make mistakes! How can we make our Windows Phone 7 application send us error reports? Why not just email them to yourself? Here is a simple code snippet to get you started:

private void RootFrame_NavigationFailed(object sender, NavigationFailedEventArgs e)
{
    if (System.Diagnostics.Debugger.IsAttached)
    {
        System.Diagnostics.Debugger.Break();
    }

    e.Handled = true;

    Microsoft.Phone.Tasks.EmailComposeTask task = new Microsoft.Phone.Tasks.EmailComposeTask();
    task.Body = e.Exception.Message;
    task.Subject = "Error Report";
    task.To = "support@myCoolWP7App.com";
    task.Show();
}
Read more: Rudi Grobler in the Cloud
.NET Interview FAQs - 1
What are the different terms that are related to the life cycle of a Remoting object?
The terms related to the life cycle of a Remoting object are LeaseTime, SponsorshipTime, RenewOnCallTime, and LeaseManagerPollTime.
LeaseTime: The LeaseTime property protects the object from the garbage collector. Every object created has a default lease time for which it remains activated. Once the lease time expires, the object becomes eligible for garbage collection and is eventually destroyed. The default value is 5 minutes.
SponsorshipTime: Even though the lease time of an object has expired, there may still be clients who need the remoting object on the server. In such cases the lease manager keeps track of such clients and asks them whether they need the object and are ready to sponsor it, extending its existence. This is done through the SponsorshipTime property.
RenewOnCallTime: The RenewOnCallTime property defines the duration by which a remoting object's lease is extended on each call if a sponsor is found. The default value is 2 minutes.
LeaseManagerPollTime: The LeaseManager class has a PollTime property, which defines the frequency at which the LeaseManager polls the leases. The default is 10 seconds.
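These lifetime values can be set declaratively in the application's remoting configuration. As a rough sketch (attribute names and time formats per the .NET Remoting <lifetime> configuration element; verify against your framework version before relying on it):

```xml
<configuration>
  <system.runtime.remoting>
    <application>
      <!-- M = minutes, S = seconds; these echo the defaults described above -->
      <lifetime leaseTime="5M"
                sponsorshipTimeout="2M"
                renewOnCallTime="2M"
                leaseManagerPollTime="10S" />
    </application>
  </system.runtime.remoting>
</configuration>
```

The same values can also be set in code by overriding MarshalByRefObject.InitializeLifetimeService.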
Explain the two different types of remote object creation mode in .NET?
What is the difference between URI and URL?
What is Fragment caching in asp.net?
What are Partial classes in .net?
Read more: Your Learning Blog