This is a mirror of official site: http://jasper-net.blogspot.com/

Tutorial: The best tips & tricks for bash, explained

| Thursday, November 22, 2012
Running a command from your history
Sometimes you know that you ran a command a while ago and you want to run it again. You know a bit of the command, but you don't exactly know all the options, or when you executed it. Of course, you could just keep pressing the Up Arrow until you encounter the command again, but there is a better way: you can search the bash history interactively by pressing Ctrl + r. This puts bash in history search mode, allowing you to type a part of the command you're looking for. As you type, it shows the most recent command containing the string you've typed so far. If it is showing a command that is too recent, you can go further back in history by pressing Ctrl + r again and again. Once you've found the command you were looking for, press Enter to run it. If you can't find what you're looking for and you want to try again, or if you want to get out of history mode for another reason, just press Ctrl + c. By the way, Ctrl + c can be used in many other cases to cancel the current operation and/or start with a fresh new line.
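Ctrl + r only works at an interactive prompt, but the same kind of lookup can be scripted by grepping the history file directly. A minimal sketch, using a scratch file rather than your real ~/.bash_history:

```shell
# Simulate a saved history file and search it non-interactively
histfile=$(mktemp)
printf '%s\n' 'ls -la' 'mkdir /tmp/demo' 'echo done' > "$histfile"
grep 'mkdir' "$histfile"   # prints: mkdir /tmp/demo
rm -f "$histfile"
```

In a real session you would grep ~/.bash_history (or whatever $HISTFILE points at) instead of a scratch file.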

Repeating an argument
You can repeat the last argument of the previous command in multiple ways. Have a look at this example:

[rechosen@localhost ~]$ mkdir /path/to/exampledir
[rechosen@localhost ~]$ cd !$

The second command might look a little strange, but it will just cd to /path/to/exampledir. The “!$” syntax repeats the last argument of the previous command. You can also insert the last argument of the previous command on the fly, which enables you to edit it before executing the command. The keyboard shortcut for this functionality is Esc + . (a period). You can also repeatedly press these keys to get the last argument of commands before the previous one.
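The !$ expansion only fires at an interactive prompt; in scripts, the closely related special parameter $_ (which holds the last argument of the previous command) can stand in for it. A small sketch, using a /tmp path rather than the article's example directory:

```shell
# $_ expands to the last argument of the previous simple command
mkdir -p /tmp/exampledir
cd "$_"        # same effect as 'cd !$' at an interactive prompt
pwd            # prints /tmp/exampledir
```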

Some keyboard shortcuts for editing
There are some pretty useful keyboard shortcuts for editing in bash. They might appear familiar to Emacs users:

  • Ctrl + a => Return to the start of the command you’re typing
  • Ctrl + e => Go to the end of the command you’re typing
  • Ctrl + u => Cut everything before the cursor to a special clipboard
  • Ctrl + k => Cut everything after the cursor to a special clipboard
  • Ctrl + y => Paste from the special clipboard that Ctrl + u and Ctrl + k save their data to
  • Ctrl + t => Swap the two characters before the cursor (you can actually use this to transport a character from the left to the right, try it!)
  • Ctrl + w => Delete the word / argument left of the cursor
  • Ctrl + l => Clear the screen

Dealing with jobs
If you've just started a huge process (like backing up a lot of files) using an ssh terminal and you suddenly remember that you need to do something else on the same server, you might want to move the huge process to the background. You can do this by pressing Ctrl + z, which will suspend the process, and then executing the bg command:

[rechosen@localhost ~]$ bg
[1]+ hugeprocess &
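If you know up front that the process will take a while, you can skip the Ctrl + z / bg dance entirely and start it in the background with &. A sketch, with sleep standing in for the hypothetical huge process (set -m is only needed because scripts have job control off by default):

```shell
set -m             # enable job control (interactive shells have it on already)
sleep 2 &          # start the long-running process in the background
jobs               # shows something like: [1]+  Running  sleep 2 &
wait %1            # block until job 1 finishes
echo "job finished"
```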

Read more: Pinehead.tv

Posted via email from Jasper-net

Microsoft has failed

Microsoft is largely irrelevant to computing of late; the only markets they still play in are evaporating with stunning rapidity. Their long history of circling the wagons tighter and tighter works decently as long as there is no credible alternative, and that strategy has been the entirety of the Microsoft playbook for so long that there is nothing else now. It works, and as the walls grow higher, customer enmity builds while the value of an alternative grows. This cycle repeats as long as there is no alternative. If there is one, everything unravels with frightening rapidity.

A company that plays this game for too long becomes set in its ways, and any chance of real change becomes impossible. Microsoft is there, and has been for a long, long time. Their product lines have stagnated, creating customer lock-in is prioritized over creating customer value, and the supply chain is controlled by an iron-fisted monopoly. Any attempt at innovation with a Windows PC has been shut out for over a decade; woe betide anyone who tried to buck that trend. The history books are littered with the corpses of companies that tried to change the ‘Windows experience’. Microsoft’s displeasure is swift and fatal to those that try. Or at least it was.
In the end, Windows advanced only to the point of undercutting any competition, and even then to the minimum extent possible. The rule in Redmond was, “Do not change anything unless it is to crush someone doing something innovative”. They didn’t unless they did, and it worked. And the market stagnated. Ask yourself: when was the last time Microsoft did something innovative? Did it come from internal impetus, or was it a clone of the competition?

Sooner or later, someone will come along and do a better job than the treacle that Microsoft offers. Actually, that happens all the time. So make it: sooner or later, someone will come along and do a better job than the treacle that Microsoft offers, and for some reason Microsoft won’t be able to crush them like a bug. Then the circled wagons have an alternative. Then the decades of built-up enmity have an outlet. Then Microsoft is in trouble.

In such a situation, a company has two choices, both of which are quite stark. They can radically change their ways or they can wither and die. Before you point to Windows 8 and say, “But they are changing and innovating”, hold off a moment, it isn’t what you think.

Microsoft has three product lines that underpin everything: Windows, Windows Server, and Windows Mobile/Phone/WART/whatevertheynameitthisweek. On those, the other moneymakers, Office and Exchange, run exclusively. The apps use protocols that are locked down with dubious methods and will not run on any competition. The competition is likewise excluded from doing what Microsoft can, either directly, like Novell, or by raising the cost to the point where it is not profitable. This is how the wagons are circled: with every iteration, the cost of competing goes up, and the value of alternatives goes up too.

Read more: SemiAccurate


How do I forward an exported function to an ordinal in another DLL?

The syntax for specifying that requests to import a function from your DLL should be forwarded to another DLL is

; A.DEF
EXPORTS
 Dial = B.Call

This says that if somebody tries to call Dial() from A.DLL, they are really calling Call() in B.DLL. This forwarding is done in the loader. Normally, when a client links to the function A!Dial, the loader says, "Okay, let me get the address of the Dial function in A.DLL and store it into the __imp__Dial variable." It's the logical equivalent of

client::__imp__Dial = GetProcAddress(hinstA, "Dial");

When you use a forwarder, the loader sees the forwarder entry and says, "Whoa, I'm not actually supposed to get the function from A.DLL at all! I'm supposed to get the function Call from B.DLL!" So it loads B.DLL and gets the function Call from it.

hinstB = LoadLibrary("B.DLL");
client::__imp__Dial = GetProcAddress(hinstB, "Call");

(Of course, the loader doesn't actually do it this way, but this is a good way of thinking about it.)

But what if the function Call was exported by ordinal? How do you tell the linker, "Please create a forwarder entry for Dial that forwards to function 42 in B.DLL?"

I didn't know, but I was able to guess.

Back in the days of 16-bit Windows, there were two ways to obtain the address of a function exported by ordinal. The first way is the way most people are familiar with:

FARPROC fp = GetProcAddress(hinst, MAKEINTRESOURCE(42));

The second way uses an alternate formulation, passing the desired ordinal as a string prefixed with the number-sign:

FARPROC fp = GetProcAddress(hinst, "#42");
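Combining the two ideas suggests what the forwarder entry itself would look like: use the same number-sign notation on the right-hand side of the module definition file. A sketch (ordinal 42 is the hypothetical ordinal from the question above, and this form is a guess in the spirit of the article, not a confirmed quote from it):

```
; A.DEF (sketch: forward Dial to ordinal 42 in B.DLL)
EXPORTS
 Dial = B.#42
```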

Read more: The old new thing


Elementary OS

Hello, Luna Beta 1

Developers and testers, today we are happy to announce the first beta release of elementary OS Luna. We've been working hard the past year and a half to create the next generation of elementary, and it begins with this beta.

What’s New

Luna is our greatest undertaking yet, and along with it have come many new apps, features and development libraries. By integrating and innovating with the latest technologies, we’re building a platform that free software developers can be excited about.

Pantheon


Underlying Technologies

Luna is powered by newer technologies, developed both by elementary and by other open source projects, bringing a wide range of improvements to the OS.
Below the UI, elementary OS uses the Linux kernel. The kernel has seen significant improvements, including wider hardware compatibility, better wireless drivers, improved graphics drivers, and many low-level advancements.

Throughout the OS we’ve focused on pushing GTK3 as much as possible. Every default user-oriented app uses GTK3, meaning they take advantage of our sleek new theme.

During the Luna cycle, we’ve built out our user interface technology, Granite. Granite is an extension of GTK, providing a select number of improved, useful, well-designed, and consistent widgets for apps. Granite now includes a welcome screen, thin panes, popovers, mode buttons, a static notebook, a dynamic notebook, a decorated window, a source list, and the about dialog. Each of these widgets is available to app developers and is used throughout the OS, bringing beautiful design and both visual and behavioral consistency between apps.

Read more: Elementary OS


Adventures in Microsoft UEFI Signing

As I explained in my previous post, we have the code for the Linux Foundation pre-bootloader in place.  However, there was a delay while we got access to the Microsoft signing system.

The first thing you have to do is pay your $99 to Verisign (now Symantec) and get a Verified by Verisign key.  We did this for the Linux Foundation, and all they want to do is call the head office to verify. The key comes back in a URL that installs it in your browser, but the standard Linux SSL tools can be used to extract it and create a usual PEM certificate and key.  This has nothing to do with UEFI signing, but it’s used to prove to the Microsoft sysdev system that you are who you say you are.  Before you can even create a sysdev account, you have to prove this by signing an executable they give you and uploading it.  They make strict requirements that you sign it on a specific Windows platform, but sbsign worked just as well, and bingo, our account was created.

Once the account is created, you still can’t upload UEFI binaries for signature without first signing a paper contract.  The agreements are pretty onerous and include a ton of excluded licences (including all GPL ones for drivers, but not bootloaders).  The most onerous part is that the agreements seem to reach beyond the actual UEFI objects you sign.  The Linux Foundation lawyers concluded it is mostly harmless to the LF because we don’t ship any products, but it could be nasty for other companies.  According to Matthew Garrett, Microsoft is willing to negotiate special agreements with distributions to mitigate some of these problems.

Once the agreements are signed, the real technical fun begins.  You don’t just upload a UEFI binary and have it signed.  First of all, you have to wrap the binary in a Microsoft Cabinet file.  Fortunately, there is one open source project that can create cabinet files, called lcab. Next you have to sign the cabinet file with your Verisign key.  Again, there is one open source project that can do this: osslsigncode. For anyone else needing these tools, they’re now available in my openSUSE Build Service UEFI repository. The final problem is that the file upload requires Silverlight.  Unfortunately, Moonlight doesn’t seem to cut it, and even with the version 4 preview the upload box shows up blank, so it was time to fire up Windows 7 under KVM. When you get to this stage, you also have to certify that the binary “to be signed must not be licensed under GPLv3 or similar open source licenses”.  I assume the fear here is key disclosure, but it’s not at all clear (or indeed what “similar open source licences” actually are).

Once the upload is done, the cabinet file goes through seven stages.  Unfortunately, the first test upload got stuck in stage 6 (signing the files).  After about six days, I sent a support email in to Microsoft asking what was going on.  The response: “The error code thrown by our signing process is that your file is not a valid Win32 application? Is it valid Win32 application?”.  Reply: obviously not, it’s a valid 64-bit UEFI binary.  No further response …


Using Reactive Extensions with Mono

I first learned about Reactive Extensions (Rx) earlier this month, when it was open sourced by Microsoft. Although I found a few scattered references on the internet on how to get Rx working with Mono, I had to jump through quite a few hoops. This blog post is a detailed account and will hopefully save you a couple of hours.

Getting Reactive Extensions
When you are using Windows, this is pretty straightforward. But then again, in that case you are probably using .NET and not reading this blog post at all. However, when you are using Linux or OS X it gets a bit more complicated. In that case your only option is to use NuGet.

Getting NuGet

I didn’t download the recommended version (NuGet.exe Bootstrapper 2.0) but used the NuGet.exe Command Line instead. This didn’t work out of the box. According to this excellent blog post, you first have to import some root certificates so that Mono will trust NuGet:

$ mozroots --import --sync

Next you type:

$ mono NuGet.exe

This will result in output similar to:

NuGet bootstrapper 1.0.0.0
Found NuGet.exe version 2.1.2.
Downloading…
Update complete.

You now have NuGet running. To get help type:

$ mono NuGet.exe help

Getting Rx-Main

OK, so let’s finally get Rx. I started with the latest and greatest (Rx-Main 2.0.21114 at the moment of writing) but I didn’t get that working. However, version Rx-Main 1.0.11226 does seem to work with Mono. To see all available versions, enter:

$ mono NuGet.exe list Rx-Main -AllVersions

To install the latest Rx 1.0 enter:

$ mono NuGet.exe install Rx-Main -Version 1.0.11226

This will download Rx-Main into your current working directory. You can find the DLL you need at ./Rx-Main.1.0.11226/lib/Net4/System.Reactive.dll


Step-By-Step Guide to Controlling Device Installation Using Group Policy

| Wednesday, November 21, 2012
Summary: In the Windows Server 2008 and Windows Vista operating systems, administrators can determine which devices can be installed on the computers they manage. The guide summarizes the device installation process and demonstrates several techniques for controlling device installation. (34 printed pages.)

Contents

Introduction
   Who Should Use This Guide?
   Benefits of Controlling Device Installation Using Group Policy
Scenario Overview
Technology Review
   Device Installation in Windows
   Group Policy Settings for Device Installation
   Group Policy settings for Removable Storage Access
Requirements for completing the scenarios
   Prerequisite Procedures
Prevent installation of all devices
   Prerequisites for preventing installation of all devices
   Steps for preventing installation of all devices
Allow users to install only authorized devices
   Prerequisites for allowing users to install only authorized devices
   Steps for allowing users to install only authorized devices
Prevent installation of prohibited devices
   Prerequisites for preventing installation of prohibited devices
   Steps for preventing installation of prohibited devices
Control read and write permissions on removable media
   Prerequisites for controlling read and write permissions on removable media
   Steps for controlling read and write permissions on removable media
Conclusion
Additional resources
Logging bugs and feedback


Introduction


This step-by-step guide describes how you can control device installation on the computers that you manage, including designating which devices users can and cannot install. Specifically, in Windows Server 2008 and Windows Vista you can apply computer policy to:


  • Prevent users from installing any device.
  • Allow users to install only devices that are on an "approved" list. If a device is not on the list, then the user cannot install it.
  • Prevent users from installing devices that are on a "prohibited" list. If a device is not on the list, then the user can install it.
  • Deny read or write access to users for devices that are themselves removable, or that use removable media, such as CD and DVD burners, floppy disk drives, external hard drives, and portable devices such as media players, smart phones, or Pocket PC devices.

This guide describes the device installation process and introduces the identification strings that Windows uses to match a device with the device driver packages available on a computer. The guide also illustrates three methods of controlling device installation. Each scenario shows, step by step, one method you can use to allow or prevent the installation of a specific device or a class of devices. The fourth scenario shows how to deny read or write access to users for devices that are removable or that use removable media.


Read more: MSDN



Using Caller Info Attributes in .NET 4.5

Introduction

When developing complex .NET applications sometimes you need to find out the details about the caller of a method. .NET Framework 4.5 introduces what is known as Caller Info Attributes, a set of attributes that give you the details about a method caller. Caller info attributes can come in handy for tracing, debugging and diagnostic tools or utilities. This article examines what Caller Info Attributes are and how to use them in a .NET application.  

Overview of Caller Info Attributes

Caller Info Attributes are attributes provided by the .NET Framework (System.Runtime.CompilerServices) that give details about the caller of a method. The caller info attributes are applied to a method with the help of optional parameters. These parameters don't take part in the method signature, as far as calling the method is concerned. They simply pass caller information to the code contained inside the method. Caller info attributes are available to C# as well as Visual Basic and are listed below:

  • CallerMemberName => Gives you the name of the caller as a string. For methods, the respective method name is returned, whereas for constructors and finalizers the strings ".ctor" and "Finalizer" are returned.
  • CallerFilePath => Gives you the path and file name of the source file that contains the caller.
  • CallerLineNumber => Gives you the line number in the source file at which the method is called.

A common use of these attributes will involve logging the information returned by them to some log file or trace.

Using Caller Info Attributes

Now that you know what Caller Info Attributes are, let's create a simple application that shows how they can be used. Consider the Windows Forms application shown below:

Read more: Codeguru


MediaSuite.NET

Overview:

MediaSuite.NET is the independent multimedia framework for Microsoft .NET, providing unmatched performance and flexibility for all your multimedia needs. Self-contained and independent of other frameworks such as DirectShow, Media Foundation or FFmpeg, MediaSuite.NET offers all the functionality usually only provided through native frameworks and sold by multiple vendors. With MediaSuite.NET, everything is in the box, ready for commercial use.

Sample Code:

RTP Reception, H.263 decoding and display

  • CamCapture.NET, WaveInput & WaveOutput
  • Audio Encoder/Decoder setup and feedback loop
  • Using AVIWriter.NET
  • Using MP4Writer.NET
  • Using MP4Toolkit
  • Using Resampler.NET
  • Encoding and sending G.711 over RTP
  • Receiving G.711 over RTP, decoding & Playout
  • Generating DTMF Tones
  • How to extract information from a FLV (Flash Video) file
  • RTP Send & Receive symmetric using RTPSession with one participant

Summary:

I hope this post will be useful for people who work with video and audio in Microsoft.NET environment.


Step by step guide to setting up Xen and XAPI (XenAPI) on Ubuntu 12.04 and managing it with Citrix XenCenter

| Monday, November 19, 2012
XCP (Xen Cloud Platform) is the open source platform similar to Citrix XenServer that uses the Xen hypervisor. It is currently distributed as an ISO installer, also called the XCP appliance. XCP uses XAPI, or XenAPI, to manage Xen hosts. XCP is based on CentOS 5.5.

Project Kronos is an initiative to port the XAPI tool stack to Debian and Ubuntu. It is a management stack implemented in OCaml that configures and controls Xen hosts, attached storage, networking and virtual machine life cycle. It exposes an HTTP API and provides a command line interface (xe) for resource management.

XenCenter is a Windows desktop application by Citrix that is distributed with XenServer for managing servers running XenServer. It uses XAPI to talk to Xen resource pools. Since we are setting up XAPI, we can use XenCenter to manage the server.

Why use XCP-XAPI on Debian/Ubuntu when the XCP appliance exists?

  • Manage dom0 using a configuration management framework (Puppet, Chef)
  • Apply security updates to dom0 root file system
  • Run Xen version 4.1. XCP appliance runs version <to be filed>
  • Ubuntu 12.04 is a LTS release that is supported for 5 years
Prerequisites

  • A fresh installation of Ubuntu 12.04 on the server
  • Small root file system partition – I usually have a 10GB partition for the root fs (/) and the rest of the space is set up as a physical volume for setting up LVM later. This LVM partition will be used for VM storage and snapshots later. You can choose any partition layout that you are comfortable with; just remember to keep the root partition small and have a large space dedicated to an LVM volume
  • root access to the host
Installing and configuring Xen Hypervisor

  1. Install the Xen hypervisor:
     $ sudo apt-get install xen-hypervisor
  2. Set up GRUB to boot the Xen hypervisor:
     $ sudo sed -i 's/GRUB_DEFAULT=.*\+/GRUB_DEFAULT="Xen 4.1-amd64"/' /etc/default/grub
  3. Disable AppArmor at boot:
     $ sudo sed -i 's/GRUB_CMDLINE_LINUX=.*\+/GRUB_CMDLINE_LINUX="apparmor=0"/' /etc/default/grub
  4. Restrict dom0 to 2GB of memory and 2 vcpus:
     $ sudo vi /etc/default/grub
     After GRUB_CMDLINE_LINUX="apparmor=0", add the line GRUB_CMDLINE_XEN="dom0_mem=2G,max:2G dom0_max_vcpus=2"
  5. Update GRUB with the config changes we just made:
     $ sudo update-grub
  6. Reboot the server so that Xen boots:
     $ sudo reboot
  7. Once the server is back online, ensure that Xen is running: cat /proc/xen/capabilities should display "control_d"
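Before touching the real /etc/default/grub, you can rehearse the sed substitutions on a scratch copy and inspect the result. A sketch, assuming a minimal grub defaults file (the two variables shown are the only ones the edits touch):

```shell
# Rehearse the GRUB edits on a scratch file before editing /etc/default/grub
grub=$(mktemp)
cat > "$grub" <<'EOF'
GRUB_DEFAULT=0
GRUB_CMDLINE_LINUX=""
EOF
sed -i 's/GRUB_DEFAULT=.*/GRUB_DEFAULT="Xen 4.1-amd64"/' "$grub"
sed -i 's/GRUB_CMDLINE_LINUX=.*/GRUB_CMDLINE_LINUX="apparmor=0"/' "$grub"
echo 'GRUB_CMDLINE_XEN="dom0_mem=2G,max:2G dom0_max_vcpus=2"' >> "$grub"
cat "$grub"    # inspect the result before applying for real
rm -f "$grub"
```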
Installing and configuring XAPI (XenAPI)

  1. Install XCP-XAPI
