This is pretty crazy stuff. Previously, it was thought that only single-celled life could exist in such inhospitable places, but this proves otherwise.
Read more: Gizmode
Read more: BMC Biology
Read more: HostWisely
Here is a quick definition of the subject from Books Online:
A Database Snapshot is a read-only, static view of a database (the source database). Multiple snapshots can exist on a source database and always reside on the same server instance as the database. Each database snapshot is consistent, in terms of transactions, with the source database as of the moment of the snapshot’s creation. A snapshot persists until it is explicitly dropped by the database owner.
If you do not know how snapshot databases work, here is a quick note on the subject; however, please refer to the official description in Books Online for accuracy. A snapshot database is a read-only database created from an original database called the “source database”. It operates at the page level. When a snapshot database is created, it is built on sparse files, so initially it occupies no space (or very little space) in the operating system. When a data page is modified in the source database, that page is first copied to the snapshot database, causing the sparse file to grow. When an unmodified data page is read in the snapshot database, the read actually goes to the pages of the original database. In other words, changes made to the source database after the snapshot was created are not reflected in the snapshot database, which keeps showing the data as of the moment of creation.
-- Create Snapshot Database
CREATE DATABASE SnapshotDB ON
(Name ='RegularDB',
FileName='c:\SSDB.ss1')
AS SNAPSHOT OF RegularDB;
GO
-- Select from Regular and Snapshot Database
SELECT * FROM RegularDB.dbo.FirstTable;
SELECT * FROM SnapshotDB.dbo.FirstTable;
GO
Read more: Journey to SQL Authority with Pinal Dave
SQL is, by concept, a language for manipulating sets of data; the Microsoft SQL Server 2005 database system therefore uses T-SQL (Transact-SQL) for writing structured code to control data flow. Prior to Microsoft SQL Server 2005, the only way to write procedures and functions was by using T-SQL, but Microsoft SQL Server 2005 now provides integration with the Common Language Runtime (CLR), and consequently procedures and functions can be written using managed code in any .NET language such as C#. This article is intended to illustrate how to implement a stored procedure with managed code.
The business scenario
For illustration purposes, we're going to develop a stored procedure that returns a list of products by subcategory, using the AdventureWorks database and the Production.Product table shipped with the installation of Microsoft SQL Server 2005 (Listing 1). The main steps are to create a class and the underlying business logic to get a list of products, build this class into an assembly, register the assembly in the SQL Server engine, and then create a stored procedure in the database that acts as an interface to the corresponding method in the class hosted in the assembly.
select *
from Production.Product
where ProductSubcategoryID=@ProductSubcategoryID;
Listing 1
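To give a sense of where those steps lead, here is a rough sketch of such a managed class (illustrative only; the method and parameter names are my assumptions, not the article's exact code):
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public class StoredProcedures
{
    // Sketch of a CLR stored procedure wrapping the query in Listing 1.
    [SqlProcedure]
    public static void GetProductsBySubcategory(int productSubcategoryID)
    {
        // "context connection=true" runs the query on the caller's own connection.
        using (SqlConnection connection = new SqlConnection("context connection=true"))
        {
            connection.Open();
            SqlCommand command = new SqlCommand(
                "select * from Production.Product where ProductSubcategoryID = @ProductSubcategoryID",
                connection);
            command.Parameters.AddWithValue("@ProductSubcategoryID", productSubcategoryID);
            // Stream the result set straight back to the client.
            SqlContext.Pipe.ExecuteAndSend(command);
        }
    }
}
After building such a class into an assembly, it would be registered with CREATE ASSEMBLY and exposed with CREATE PROCEDURE ... AS EXTERNAL NAME, which is what the rest of the article walks through.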
Developing the solution
The first step is to create a SQL Server project by opening Visual Studio .NET 2005 and selecting File | New | Project
Read more: C# Corner
A commenter to Scott’s blog post asked,
Will it be possible to extend this so that it uses libraries like AntiXSS instead? See: http://antixss.codeplex.com/
The answer is yes!
ASP.NET 4 includes a new extensibility point which allows you to replace the default encoding logic with your own anywhere ASP.NET does encoding.
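As a rough illustration of that extensibility point (my sketch, not the post's actual sample): you derive from System.Web.Util.HttpEncoder, override the encoding methods, and point the httpRuntime element's encoderType attribute at your type. The AntiXSS call below assumes the Microsoft.Security.Application namespace from the AntiXSS library:
using System.IO;
using System.Web.Util;

// Hypothetical custom encoder that delegates HTML encoding to AntiXSS.
// Registered in web.config with:
//   <httpRuntime encoderType="MyApp.AntiXssHttpEncoder, MyApp" />
// (type and assembly names here are made up for the example)
public class AntiXssHttpEncoder : HttpEncoder
{
    protected override void HtmlEncode(string value, TextWriter output)
    {
        // AntiXss.HtmlEncode comes from the AntiXSS library mentioned above;
        // the exact entry point can vary between AntiXSS versions.
        output.Write(Microsoft.Security.Application.AntiXss.HtmlEncode(value));
    }
}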
Read more: haacked
Read more: Nicholas Allen's Indigo Blog
Official site: DelegConfig v2 beta (Delegation / Kerberos Configuration Tool)
That’s right. You now need to download a 620MB ISO, find a tool to open it up with (since Windows still lacks native support for opening ISO files directly) to extract a 17MB installer for WinDBG. From the WinDBG download page:
This current version of Debugging Tools for Windows is available as part of the Windows Driver Kit (WDK) release 7.1.0. To download the WDK and manually install Debugging Tools for Windows:
1. Download the WDK and burn the ISO to a DVD or other installation media. (The WDK ISO file is approximately 620 MB to download.)
2. Open the ISO from the installation media and open the Debuggers directory.
3. In the Debuggers directory, run the appropriate MSI or setup EXE for x86 and follow the instructions in the Setup Wizard.
4. After installation is complete, you can find the debugger shortcuts by clicking Start, pointing to All Programs, and then pointing to Debugging Tools for Windows (x86).
Read more: Winterdom
I have been thinking about this a lot recently. Initially I thought that Blend could provide an attribute that I could apply to a static method, and it would then invoke that decorated method before loading any Views. That seemed like the best solution (and still does, in my opinion). However, since Blend has no such attribute, I figured it was a moot point…until I realized that I could create my own attribute for the same purpose!
If you decorate an assembly with an attribute, that attribute must be instantiated when the assembly is inspected via reflection (which Blend most certainly does). So, I simply created a custom attribute and applied it to my assembly. In that attribute’s constructor, I check to see if it was loaded into design-time. If it was, I then perform my initialization logic. How simple!
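A minimal sketch of that idea, with names I made up for illustration (Josh's actual attribute will differ):
using System;
using System.ComponentModel;
using System.Windows;

// Hypothetical assembly-level attribute. Blend reflects over the assembly's
// attributes, which causes this constructor to run at design time.
[AttributeUsage(AttributeTargets.Assembly)]
public sealed class DesignTimeBootstrapperAttribute : Attribute
{
    public DesignTimeBootstrapperAttribute()
    {
        // Only run initialization when loaded into a designer such as Blend.
        if (DesignerProperties.GetIsInDesignMode(new DependencyObject()))
        {
            // Design-time initialization logic goes here.
        }
    }
}

// Applied once in AssemblyInfo.cs:
// [assembly: DesignTimeBootstrapper]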
Read more: Josh Smith on WPF
I'm so nervous! Yep! In my ten-plus years here at Microsoft I've been through forty-some informational interviews and close to ten full interview loops, and I still get nervous, and am sure I am doing horribly, and I am talking really fast, and . . .
Take a breath. Slow down. Be yourself. Yes, you may not have any idea how to solve the problem your interviewer just asked you. It doesn't matter. Unless you have a bad interviewer, what they are most interested in is how you approach the problem, not whether you come up with the best solution. More than once I've gotten stuck on "I know there's a better way to do this!", and so I'm standing there doing nothing rather than working towards a solution - *not* helpful in getting hired!
They keep asking me to write these algorithms I've never heard of! Tell them! I never studied computer science and so run into this all the time. Tell your interviewer that you aren't familiar with the algorithm, or concept, or whatever, and that you'll work through it as best you can. Remember, your interviewer wants to see how you solve problems. They can teach the particular programming language they use or the problem domain they are in as long as you can work through a problem you've never seen before.
Read more: Test Guide
* Introduction
* Disclaimer
* Using the code
* Glossary
* Background
* The Problem
o Key Exchange
o Data Transfer
o Block Encryption and Decryption
* Interop classes
o CLR, Crypto++ and the C++ Standard Library
o Setting pointers to the new and delete operators
* Implementation Details
o class RSAES_PKCS15 (Managed C++)
o class CryptoPP_RSAES_PKCS15 (Native C++)
* Tester Applications
o C# Console Application Tester
* Things to do
* Other useful things
o BER and DER encoding/decoding of Integers
* Known Issues
o LoaderLock Exception
o Compiling Crypto++ as Unicode
o Base64 Encoding/Decoding with Crypto++
* In Closing
* Acknowledgements
* References
* History
Introduction
The purpose of this article is to show the interaction between Crypto++ and .NET 2.0 Cryptography services using RSA PKCS#1 encryption, and to show how to wrap a Crypto++ class as a managed class. Often a client and a server use different cryptographic services and need to interact correctly, e.g. the public portions of the encryption key need to be exchanged between them, and data from the client needs to be decrypted by the server and vice versa. Most of the articles I could find were for older versions of .NET, and the documentation on how to use the Cryptography services was a bit sparse, hence the need for this article.
In order to simplify things, I have stripped away the usual communication links between the client and the server. Byte arrays will be passed between them instead. No existing standards will be used to exchange keys either; the public modulus and exponent integers will be sent as byte arrays as well. Putting these components together is a plumbing job; explaining it would be very specific to my problem and would confuse an already complicated article even more. Below is a short description of what will be achieved by this article:
I want to use Crypto++ from C#, so I wrap it using a managed C++ class as follows:
1. Compile Crypto++ 5.5.1 either as a static lib or as a DLL using Visual Studio 2005 with dynamically linked standard multithreaded libraries (/MD or /MDd)
2. Create a native wrapper class which provides a simple interface to Crypto++ and encrypts or decrypts a byte array of any size
3. Create a managed C++ class which encapsulates the native wrapper class and converts .NET managed types to native types (and vice versa) and then calls the Crypto++ wrapper class methods
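To give a feel for what step 3 enables from the C# side, here is a hedged usage sketch. The RSAES_PKCS15 class name comes from the article's table of contents, but every member name and signature below is an assumption on my part; the point is only that byte arrays carry both the public key parts and the data:
// Hypothetical usage of the managed C++ wrapper from C#.
// Method names (GetPublicModulus, GetPublicExponent, Decrypt) are illustrative only.
byte[] plaintext = System.Text.Encoding.UTF8.GetBytes("Hello from the client");

// "Server" side: Crypto++ wrapped in managed C++, holding the private key.
RSAES_PKCS15 server = new RSAES_PKCS15();
byte[] modulus = server.GetPublicModulus();    // public portion exchanged as raw bytes
byte[] exponent = server.GetPublicExponent();

// "Client" side: .NET's RSACryptoServiceProvider fed the same public key.
var rsaParams = new System.Security.Cryptography.RSAParameters
{
    Modulus = modulus,
    Exponent = exponent
};
using (var client = new System.Security.Cryptography.RSACryptoServiceProvider())
{
    client.ImportParameters(rsaParams);
    byte[] ciphertext = client.Encrypt(plaintext, false); // false = PKCS#1 v1.5 padding

    // Back on the "server", Crypto++ decrypts what .NET encrypted.
    byte[] roundTripped = server.Decrypt(ciphertext);
}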
Read more: Codeproject
The advantage of the XmlSerializer class is that you can read and/or write XML with very little code. Most of the code required is simply the definition of the data. In other words, if our data is a list of Links consisting of a HREF or URL, a title and a category, then that data could be defined in the following manner:
public class LinkObject
{
    string ThisCategory;
    string ThisHRef;
    string ThisTitle;

    public string Category
    {
        get { return ThisCategory; }
        set { ThisCategory = value; }
    }

    public string HRef
    {
        get { return ThisHRef; }
        set { ThisHRef = value; }
    }

    public string Title
    {
        get { return ThisTitle; }
        set { ThisTitle = value; }
    }
}
Using the XmlSerializer class, we use XmlSerializer.Deserialize to read the data and XmlSerializer.Serialize to write the data. An instance of the XmlSerializer class could be created using:
XmlSerializer Serializer = new XmlSerializer(typeof(LinkObjectsList));
Then the data could be written using:
TextWriter Writer = new StreamWriter(Filename);
Serializer.Serialize(Writer, LinksList);
Writer.Close();
Data could be read using:
TextReader Reader = new StreamReader(Filename);
LinksList = (LinkObjectsList)Serializer.Deserialize(Reader);
Reader.Close();
It is nearly that easy. Note that when the data is as simple as the data above, it is possible to read and write it using a DataTable. If however the data is more complicated than what a single DataTable is capable of, then the XmlSerializer class can be easier (see below).
Note that the LinkObject class above represents one link. We are writing and reading a list of links, where list could be called an array or a collection or a table or something else. We can create a list of links using:
List<LinkObject> LinksList = new List<LinkObject>();
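One thing this excerpt never shows is the LinkObjectsList type the serializer was constructed for. A plausible shape for it (an assumption on my part, not necessarily the article's definition) is simply a named collection of LinkObject, which XmlSerializer handles out of the box:
using System.Collections.Generic;
using System.Xml.Serialization;

// Hypothetical definition: a serializable list of links. Deriving from
// List<LinkObject> gives the serializer a concrete root type to work with.
[XmlRoot("Links")]
public class LinkObjectsList : List<LinkObject>
{
}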
Read more: C# Corner
For the purpose of this posting we will assume the following scenario.
You are a software vendor that has written an automated banking machine application. Several times a day the kiosk is restarted by the customer because the application has crashed. In an effort to identify the cause of the crash, which happens when you are not there, you have used ADPLUS to collect a User Mode memory dump. The memory dump has been copied onto your machine and you are ready to start debugging.
Open the dump file by selecting the “Open Crash Dump…” option found under the “File” menu within WinDBG. Browse to the appropriate memory dump file and click the “Open” button. After a few moments WinDBG will return control to you and a prompt should be seen that is similar to “0:000>” (ProcessId:ThreadId>) as seen in the bottom centre of the image below.
Read more: Practical Development
Project Status
A seventh version has been released which implements ISO, FAT and NTFS file systems. VHD, XVA, VMDK and VDI disk formats are implemented, as well as read/write Registry support. The library also includes a simple iSCSI initiator, for accessing disks via iSCSI.
It is now possible to format, read and modify NTFS volumes.
How to use the Library
Here are a few really simple examples.
How to create a new ISO:
CDBuilder builder = new CDBuilder();
builder.UseJoliet = true;
builder.VolumeIdentifier = "A_SAMPLE_DISK";
builder.AddFile(@"Folder\Hello.txt", Encoding.ASCII.GetBytes("Hello World!"));
builder.Build(@"C:\temp\sample.iso");
You can add files as byte arrays (shown above), as files from the Windows filesystem, or as a Stream. By using a different form of Build, you can get a Stream to the ISO file, rather than writing it to the Windows filesystem.
Read more: Codeplex
If you are familiar with SOS.dll, the managed-debugging extension that ships with the .NET Framework, Psscor2.dll provides a superset of that functionality. Most of the added functionality helps you identify issues in ASP.NET.
For example, Psscor2 provides the ability to view:
* managed call stacks (with source mappings)
* managed exception information
* what types are in the managed heap and their reference chain
* which ASP.NET pages are running on which thread
* the contents of the ASP.NET cache
* and much more.
Read more: Jin's WebLog original, translated
Download: MS Download
<add key="MyKey" value="MyValue"/>
<add key="TheKey" value="TheValue"/>
</appSettings>
appSettings has two interesting attributes: one called file and the other called configSource. Both let you move the appSettings out to a separate file (see the example after the list below).
The difference between file and configSource is:
1. configSource requires you to move all of the appSettings entries to the external file, whereas file lets you keep some of the values in the original file.
2. configSource behaves just like the original config file, and any change to it causes an application restart, whereas file lets you change the external file without a restart.
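For example, a minimal sketch of the two approaches (the file name here is made up):
<!-- file attribute: inline keys are kept and the external keys are merged in -->
<appSettings file="ExternalSettings.config">
  <add key="MyKey" value="MyValue"/>
</appSettings>
<!-- configSource: the element must be empty here, and ALL keys move to the external file -->
<appSettings configSource="ExternalSettings.config"/>
<!-- ExternalSettings.config (hypothetical external file) -->
<appSettings>
  <add key="TheKey" value="TheValue"/>
</appSettings>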
Internals Viewer for SQL Server - Internals Viewer is a tool for looking into the SQL Server storage engine and seeing how data is physically allocated, organised and stored.
DataScripter - This Addin for SQL Server Management Studio 2008 allows you to generate INSERT statements for all values of a table easily. Just select Script Data as from the context menu of the table and choose one of the options
Fulltext Management for SQL Server - This Addin for SQL Server Management Studio allows you to manage your fulltext catalogs easily. It even works for SQL Server Express editions, so now you can use a nice GUI instead of unhandy SQL commands.
SQL Compact data and schema script utility - This console app and SQL Server 2008 Management Studio add-in helps you better manage your SQL Compact development efforts. It allows you to script schema and data to a .sql file, which can be used in any context. It also scripts DML for use in SSMS scripts
Read more: SQL Server curry
Port forwarding has a security function as well. For example, if you want to hide your web server from your public network, you can tunnel its HTTP port from your secured machine to your public network machine.
The following examples describe how to forward a remote or a local port via SSH. By default, SSH binds forwarded ports to localhost (127.0.0.1) only. To change this you have to set the GatewayPorts parameter to yes (/etc/ssh/sshd_config).
Remote Port Forwarding example:
#you want to forward a local port to a remote machine
ssh -v -g -R remoteport:localhost:localport root@remotehost
#e.g. forwarding my local webserver on port 8080 to http://developers-blog.org:80
ssh -v -g -R 80:localhost:8080 root@developers-blog.org
#to bypass the ClientAliveInterval you can append a while loop to keep the SSH connection alive
ssh -v -g -R 80:localhost:8080 root@developers-blog.org "while [ 1 ]; do sleep 10; echo 'loop step'; done"
Local Port Forwarding example:
#you want to forward a remote port to your local machine
ssh -v -g -L localport:remotehost:remoteport root@remotehost
#e.g. I want to see the remote webserver on my local port 8080
ssh -v -g -L 8080:developers-blog.org:80 root@developers-blog.org
#to bypass the ClientAliveInterval you can append a while loop as well
ssh -v -g -L 8080:developers-blog.org:80 root@developers-blog.org "while [ 1 ]; do sleep 10; echo 'loop step'; done"
Read more: Developers Blog - Programming Languages, Technologies and Visions
When you are completely done developing your project, the migration to the live server is seamless. Here are the simple steps to install a local server on your PC to easily develop websites.
This article applies to the installation on Windows 98, NT, 2000, 2003, XP and Vista, of Apache, MySQL, PHP + PEAR, Perl, mod_php, mod_perl, mod_ssl, OpenSSL, phpMyAdmin, Webalizer, Mercury Mail Transport System for Win32 and NetWare Systems v3.32, Ming, JpGraph, FileZilla FTP Server, mcrypt, eAccelerator, SQLite, and WEB-DAV + mod_auth_mysql.
Installing XAMPP on your computer
1. First, download XAMPP for Windows Installer
2. Then run the installer on your computer and make sure that your Windows firewall unblocks Apache.
3. Run the Apache administrator.
4. Open your browser and go to http://127.0.0.1 – If all went well, a screen will appear where you can choose your language.
5. Go to http://127.0.0.1/security/xamppsecurity.php and set up a password (it will be used for your databases), then click on “Password Changing”.
Congratulations! You’re done! Now put your website’s files in a new directory under C:\xampp\htdocs\ (if you installed XAMPP in C:\xampp). For example: C:\xampp\htdocs\myproject\. Then set up your databases using phpMyAdmin, located at http://127.0.0.1/phpmyadmin/.
Configuring Mod Rewrite
Read more: Richard Castera
In this blog post, we will describe a couple of options to get diagnostics for your projects using MSBuild, and then take a deeper dive into the compiler and the linker.
Using MSBuild
Using the IDE, you can enable Timing Logging by setting “Tools/Options/Projects and Solutions/VC++ Project Settings/Build Timings” = “Yes” or raise the verbosity of the build to “Diagnostics” from “Tools/Options/Project and Solutions/Build and Run/MSBuild project build output verbosity”.
Using these options, you can get performance summaries per project and also get details on where time is spent on targets and tasks. This sort of information is useful, say, when you are trying to figure out how long that copy task is taking to copy your files across folders.
1>------ Rebuild All started: Project: mfc-app, Configuration: Debug Win32 ------
1>Build started 1/12/2010 5:31:58 PM.
1>_PrepareForClean:
1> Deleting file "Debug\mfc-app.lastbuildstate".
1>InitializeBuildStatus:
1> Creating "Debug\mfc-app.unsuccessfulbuild" because "AlwaysCreate" was specified.
1>ClCompile:
1> stdafx.cpp
...............
1> ChildFrm.cpp
1> Generating Code...
1>Manifest:
1> Deleting file "Debug\mfc-app.exe.embed.manifest".
1>LinkEmbedManifest:
1> mfc-app.vcxproj -> C:\Users\user\documents\visual studio 2010\Projects\mfc-app\Debug\mfc-app.exe
1>FinalizeBuildStatus:
1> Deleting file "Debug\mfc-app.unsuccessfulbuild".
1> Touching "Debug\mfc-app.lastbuildstate".
1>
Read more: Visual C++ Team Blog
Hopefully you are not installing new x86 boxes; 64-bit handles memory so much better. If you have legacy x86 boxes you can easily do a backup or detach from that old system and restore or attach on the new x64 instance. You can also reverse the process and copy data from x64 back to x86. The same logic applies to the other technologies listed above.
Per BOL (I used the SQL 2008 R2 version):
· The SQL Server on-disk storage format is the same in the 64-bit and 32-bit environments. Therefore, a database mirroring session can combine server instances that run in a 32-bit environment and server instances that run in a 64-bit environment.
· Because the SQL Server on-disk storage format is the same in the 64-bit and 32-bit environments, a replication topology can combine server instances that run in a 32-bit environment and server instances that run in a 64-bit environment.
· The SQL Server on-disk storage format is the same in the 64-bit and 32-bit environments. Therefore, a log shipping configuration can combine server instances that run in a 32-bit environment and server instances that run in a 64-bit environment.
If you're doing SAN level replication you'll need to talk to your SAN vendor about their support across platforms.
Read more: Cindy Gross - Troubleshooting, tips, and general advice about SQL Server
RAID Level 1
SQL Server 2008 Books Online says, "This level is also known as disk mirroring because it uses a disk file system called a mirror set. Disk mirroring provides a redundant, identical copy of a selected disk. All data written to the primary disk is written to the mirror disk. RAID 1 provides fault tolerance and generally improves read performance but may degrade write performance" ("RAID Levels and SQL Server," SQL Server 2008 Books Online, MSDN).
Level 1 is one of our favorite ways to set up SQL Server. It is fast and provides data protection like a superhero straddling the speed and safety worlds. All superheroes have weaknesses, and RAID level 1 is no exception—it uses two hard disks of identical sizes, and this can cause several drawbacks. The first drawback is that the size of your logical disk is the same as the size of one of the physical disks. In other words, you pay for two disks, and you only get to use the storage size of one.
The other, more important, drawback is that you can only store files of up to the size of one of the disks on the logical disk. For example, if you have two 185 gigabyte (GB) drives in the RAID level 1 configuration, you end up with a single logical drive of 185 GB. In this example, the maximum file you could hold on the disk would be less than 185 GB in size. Since your largest database files are the .mdf file (which holds the data) and the .ldf file (which holds the transaction logs), you need to make sure that they do not exceed the size of the level 1 logical drive.
One way to prevent the data from exceeding the size of the level 1 drive is to create a secondary database file (.ndf) of an equal size on another level 1 logical drive set. This divides the data in the database between the two files.
You can keep your transaction log file (.ldf) from getting too big by frequently backing up the transaction log, which allows it to be stored on a RAID level 1 array.
As we continue this series of posts about the disk subsystem, we will talk more about which database files go on which kinds of logical drives and about the arrangement of physical disks.
RAID Level 5
SQL Server 2008 Books Online says that level 5 is also known as striping with parity. "Data redundancy is provided by the parity information. The data and parity information are arranged on the disk array so that the two types of information are always on different disks" ("RAID Levels and SQL Server," SQL Server 2008 Books Online, MSDN).
A RAID level 5 configuration lets you have more than two physical drives in the RAID configuration. In fact, there is really no benefit until you have three or more drives. The data is written to all three drives at the same time. However, the data is read from any of the drives without involving the other physical drives in the array.
Read more: Understanding RAID for SQL Server Part 1, Part 2
[WebMethod]
public double GetFactorial(int x)
{
    double factorial = 0;
    if (x >= 0)
    {
        factorial = CalcFactorial(x);
    }
    return factorial;
}

private static double CalcFactorial(int x)
{
    // base case
    if (x <= 1)
    {
        return 1;
    }
    return x * CalcFactorial(x - 1);
}
Now, press F5 and you will get the following.
Read more: Sajid's TechnoTips
The following code snippet gives you an idea of how to use Cassia (extracted from the Cassia Project Home):
ITerminalServicesManager manager = new TerminalServicesManager();
using (ITerminalServer server = manager.GetRemoteServer("your-server-name"))
{
    server.Open();
    foreach (ITerminalServicesSession session in server.GetSessions())
    {
        Console.WriteLine("Session ID: " + session.SessionId);
        Console.WriteLine("User: " + session.UserAccount);
        Console.WriteLine("State: " + session.ConnectionState);
        Console.WriteLine("Logon Time: " + session.LoginTime);
    }
}
Read more: DevX.com
Official site: Cassia Project Home
When we announced that the Visual Studio 2010 Release Candidate Is Now Available For Download, a reader, Igor, asked us to provide a table summarizing which C++0x Core Language features are implemented in VC10. So, here it is! It's derived from GCC's tables, with slight modifications. For example, I added "Rvalue references v2".
Proposal | VC9 | VC10
Rvalue references | No | v2
Rvalue references v2 | No | v2
Rvalue references for *this | No | No
Initialization of class objects by rvalues | Yes | Yes
static_assert | No | Yes
auto | No | Yes
Multi-declarator auto | No | Yes
Removing old auto | No | Yes
Trailing return types | No | Yes
Lambdas | No | v1.0
decltype | No | Yes
Right angle brackets | Yes | Yes
Extern templates | Yes | Yes
nullptr | No | Yes
Strongly typed enums | Partial | Partial
Forward declared enums | Partial | Partial
Extended friend declarations | Partial | Partial
Local and unnamed types as template arguments | Yes | Yes
C++0x Core Language Features: Concurrency
exception_ptr | No | Yes
Thread-local storage | Partial | Partial
Read more: Visual C++ Team Home
· When you try to create a login using the command "Create login [<Domain>\<Machine account>$] from windows" you might get the following error
o Msg 15025, Level 16, State 2, Line 1
o The server principal '<DOMAIN NAME>\<MACHINE NAME>$' already exists.
· The following error might be returned when you try to create the SQL Server login in Management Studio
o Create failed for Login '<DOMAIN NAME>\<MACHINE NAME>$'. (Microsoft.SqlServer.Smo)
o The server principal ‘<DOMAIN NAME>\<MACHINE NAME>$’ already exists. (Microsoft SQL Server, Error: 15025)
· The error message says that the server principal already exists. However, if you look for any such principal under Security in Management studio you will not find the login <DOMAIN NAME>\<MACHINE NAME>$
· Further, when you run the following query in a new query window, you will not find the login <DOMAIN NAME>\<MACHINE NAME>$
o Select LOGINNAME from sys.SYSLOGINS
CAUSE
=====
This problem occurs if there is already a login which is registered under the same SID as that of the Login which you are trying to add.
RESOLUTION
=========
· To determine whether the SID already exists for a different login, please follow these steps:
· In the new query window, run the following command:
· Technically, it is not possible to have more than one login with the same SID unless these logins have been manually created.
Read more: Microsoft SQL Server Tips & Tricks
public Dolar(int value)
{
Value = value;
}
}
We can also define an explicit operator in cases where we are worried about losing data:
Dolar d2 = (Dolar)l;
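For the cast above to compile, the Dolar class needs an explicit conversion operator along these lines (a sketch; I am assuming the variable l is a long, since the excerpt does not show its declaration):
public class Dolar
{
    public int Value { get; private set; }

    public Dolar(int value)
    {
        Value = value;
    }

    // Explicit operator: the caller must cast, acknowledging that narrowing
    // from long to int can lose data.
    public static explicit operator Dolar(long amount)
    {
        return new Dolar((int)amount);
    }
}

// Usage: long l = 100; Dolar d2 = (Dolar)l;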
Cassandra is an open source distributed database management system. It has been an Apache Software Foundation top-level project since February 17, 2010, and is designed to handle very large amounts of data spread across many commodity servers while providing a highly available service with no single point of failure. It is a NoSQL solution that was initially developed by Facebook and powers their Inbox Search feature. Jeff Hammerbacher, who led the Facebook Data team at the time, has described Cassandra as a BigTable data model running on an Amazon Dynamo-like infrastructure.
I bet you have used data that has been served by Cassandra and not even realized it, here are some prominent users of Cassandra:
* Facebook
* Digg
* Twitter
* Reddit
Sounds interesting, or at least worth a look, right? Well I thought so; however, during my journey of getting the database set up I have come to realize there is almost no documentation on installation for Linux, and even less for Windows. So I am going to provide you with a jump start to installing Cassandra on your machine. I am doing this so you don’t have to spend days jumping around the web, going down false paths, and pulling your hair out like I did, all so you can get on to what you really care about … development.
First Things First
The first thing you need to understand about Cassandra is that it is developed in Java, so you can run it on any machine that supports Java 6 or better. So before you go any further, make sure your Java JRE is updated to the latest version.
The next thing you need is a copy of Cassandra, which can be found here. My setup is going to be based on the latest stable release.
Running From Windows
As I said before, you can run Cassandra on any operating system that Java has a runtime for. So the first, and probably most obvious, option for a Windows developer is running Cassandra on Windows.
Read more: Nick Berardi's Coder Journal
At that time I had a great mentor and I was sure I got the whole “Red-Green-Refactor” routine. In fact, I knew it so well that I allowed myself to “speed development” by writing the code before the actual test.
One day while happily coding with a fellow developer we came across an interesting problem: we needed to create a value generator – a class that will return a unique value each time a method (GetNextValue) is called.
Of course, being two bright and talented developers, we started by debating how this class should be implemented so that it would support every conceivable type. Needless to say, after a few minutes we were still “designing”, and every design we had was flawed: each had a bunch of corner cases that forced us to search for yet another better-stronger-faster design.
Luckily for us we had someone in the same room that saw our plight and decided to put a stop to it. What he did is remind us how TDD should be done – one test at a time.
“Write a test that checks two unique integers” – he said.
“But it won’t work for strings or even doubles” – we said.
“Do it anyway” - And we did:
[TestMethod]
public void ValueGenerator_GenerateValuesForInt_TwoDifferentValuesReturned()
{
    var val1 = ValueGenerator.GetNextValue(typeof(int));
    var val2 = ValueGenerator.GetNextValue(typeof(int));
    Assert.AreNotEqual(val1, val2);
}
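In keeping with the story, the simplest implementation that would make this first test pass looks something like the sketch below (my guess, not the post's actual code); support for strings, doubles and the rest would be driven out by the next tests:
using System;

// Minimal ValueGenerator: just enough to satisfy the int test above.
public static class ValueGenerator
{
    private static int _nextInt;

    public static object GetNextValue(Type type)
    {
        if (type == typeof(int))
        {
            return _nextInt++;
        }
        throw new NotSupportedException("Only int is supported so far - write a test first!");
    }
}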
Read more: Helper Code
You can create two different kinds of extensions using Feature Builder. A standard Feature Extension can contain tools, code, and a simple map - it will run on the Visual Studio Premium and Visual Studio Professional editions (in the final version of this tool). A more advanced extension, called an Ultimate Feature Extension, can contain everything a feature extension can contain, as well as rich modeling and visualization tools that can take advantage of the modeling platform inside the Visual Studio 2010 Ultimate edition (required). These tools can be used to provide a logical view of your target solution, and to visualize your existing code. This is the preferred type of extension to use if you intend to provide architectural guidance or share specific refactoring or pattern knowledge.
This preview requires Windows 7 or Windows Server 2008 R2, Visual Studio 2010 Ultimate Edition, and the installation of the Visual Studio SDK (RC1 Version) to build Feature Extensions. The Feature Extensions you create have the same requirements except for the SDK. The RTM version of this tool will require Visual Studio 2010 Ultimate Edition to create Feature Extensions, but will allow you to create Feature Extensions which do not require the Ultimate Edition to run.
Read more: Visual Studio, VSIP Partners and more ......
1. The problem
We all use Dynamic Link Libraries (DLLs). They have excellent properties. First, such a library is loaded into physical memory only once for all processes. Second, you can expand the functionality of a program by loading an additional library that provides it, without restarting the program. Updating is also solved: it is possible to define a standard interface for the DLL and influence the functionality and quality of the base program simply by changing the version of the library. Such approaches to code reuse came to be called “plug-in architecture”. But let’s move on.
Of course, not every dynamic link library relies only on itself, that is, only on the computational power of the processor and on memory. Libraries use other libraries, or at least the standard libraries. For example, programs written in C/C++ use the standard C/C++ libraries, which are themselves organized as dynamic link libraries (libc.so and libstdc++.so) and stored in files of a specific format. My research was done on Linux, where the main format for dynamic link libraries is ELF (Executable and Linkable Format).
Recently I needed to intercept function calls from one library into another in order to process them. This is called call redirection.
Read more: Codeproject
So what does this have to do with Office 2010? In Excel 2010 we made it truly easy to connect to a SQL Azure database and pull down data. Here I explain how to do it.
By following these steps you will be able to:
1. Create an Excel data connection to a SQL Azure database
2. Select the data to import into Excel
3. Perform the data import
All mistakes herein, if any, are my own. Please alert me to potential errors.
Import SQL Azure Data Into Excel
You need to be running Excel 2010 (post-Beta 2 builds) for these steps to work properly.
Read more: John R. Durant's WebLog
Spend Less on Hardware
You can save money on hardware by spending less on expensive processors and memory. The Lockless memory allocator can speed up your software, giving you a less expensive way to meet your performance targets.
Fully Utilize Modern Multicore Machines.
The Lockless memory allocator is designed for 64bit multicore machines whilst still supporting 32bit applications. Allocations are 16 byte aligned to optimize SSE2 usage. 64 byte allocations are cache-line aligned to prevent speed loss from cache-line bouncing in multithreaded applications.
Multithread Optimized
The Lockless memory allocator uses lock-free techniques to minimize latency and memory contention. This provides optimal scalability as the number of threads in your application increases. Per-thread data is used to reduce bus communication overhead. This results in thread-local allocations and frees not requiring any synchronization overhead in most cases.
Read more: Lockless
Stolen documents recovered in a year-long investigation show the hackers have breached the servers of dozens of countries and organizations, taking everything from top-secret files on missile systems in India to confidential visa applications, including those of Canadians travelling abroad.
The findings, which are part of a report that will be made public today in Toronto, will expose one of the biggest online spy rings ever cracked. Written by researchers at the University of Toronto’s Munk Centre for International Studies, the Ottawa-based security firm SecDev Group and a U.S. cyber sleuthing organization known as the Shadowserver Foundation, the report is expected to be controversial.
The researchers have found a global network of “botnets,” computers controlled remotely and made to report to servers in China. Along with those servers, the investigators located where the hackers stashed their stolen files, allowing a glimpse into what the spy ring is looking for.
“Essentially we went behind the backs of the attackers and picked their pockets,” said Ron Deibert, director of the Citizen Lab at the Munk School of Global Affairs, which investigated the spy ring.
Read more: The globe and mail
Configuration cfg = null;
IFormatter serializer = new BinaryFormatter();

// first time: build the configuration and cache it to disk
cfg = new Configuration().Configure();
using (Stream stream = File.OpenWrite("Configuration.serialized"))
{
    serializer.Serialize(stream, cfg);
}

// other times: load the cached configuration instead of rebuilding it
using (Stream stream = File.OpenRead("Configuration.serialized"))
{
    cfg = serializer.Deserialize(stream) as Configuration;
}
Check it out for yourselves.
Read more: Development With A Dot
We can interpret quite a few things from it which can help us in further debugging. Here’s how
For example (for 32 bit app)
0:027> !address -summary
-------------------- Usage SUMMARY --------------------------
TotSize ( KB) Pct(Tots) Pct(Busy) Usage
29b32000 ( 683208) : 32.58% 41.98% : RegionUsageIsVAD
1cab1000 ( 469700) : 22.40% 00.00% : RegionUsageFree
d3b4000 ( 216784) : 10.34% 13.32% : RegionUsageImage
3bfc000 ( 61424) : 02.93% 03.77% : RegionUsageStack
f0000 ( 960) : 00.05% 00.06% : RegionUsageTeb
2896a000 ( 665000) : 31.71% 40.86% : RegionUsageHeap
0 ( 0) : 00.00% 00.00% : RegionUsagePageHeap
1000 ( 4) : 00.00% 00.00% : RegionUsagePeb
1000 ( 4) : 00.00% 00.00% : RegionUsageProcessParametrs
1000 ( 4) : 00.00% 00.00% : RegionUsageEnvironmentBlock
Tot: 7fff0000 (2097088 KB) Busy: 6353f000 (1627388 KB)
-------------------- Type SUMMARY --------------------------
TotSize ( KB) Pct(Tots) Usage
1cab1000 ( 469700) : 22.40% : <free>
119a8000 ( 288416) : 13.75% : MEM_IMAGE
10b5000 ( 17108) : 00.82% : MEM_MAPPED
50ae2000 ( 1321864) : 63.03% : MEM_PRIVATE
------------------- State SUMMARY --------------------------
TotSize ( KB) Pct(Tots) Usage
3152f000 ( 808124) : 38.54% : MEM_COMMIT
1cab1000 ( 469700) : 22.40% : MEM_FREE
32010000 ( 819264) : 39.07% : MEM_RESERVE
Largest free region: Base 6b0b2000 - Size 0203e000 (33016 KB)
Read more: WebTopics