Code for this entry can be found on Github at: https://github.com/NeilRobbins/CudaHack/tree/master/1

This weekend saw me ditch my normal routine and head into Belgium to visit @neilrobbins for a couple of days of coding against an Amazon EC2 Cluster GPU Compute instance – this is big stuff for the future, and it was nice to do something different (I've spent the past few weeks chasing up job leads and generally not writing much code outside of work). Rather than do our knockabout in a "cool" language like Ruby, our daily-life language (C#), or something else, we opted to stay old school and do our work in C, to get a better feel for what is actually going on when talking to the GPU – I have previously written shader code against DirectX (SM1 and SM2) and wanted to see how different this is.

NVCC and GCC

CUDA C is a bit different from ordinary C, in that you use a different compiler (NVCC) which takes the CUDA variant of C and compiles it into standard object files that can be linked ordinarily against your typical GCC-created object files. In our experiments we kept a single file (main.cu), which we compiled and linked using NVCC, and decided to write our standard C and CUDA C next to each other and get on with things.

A Simple C Routine

Rather than parallelise anything to begin with, we opted to see what happens when writing a simple bit of code to execute on the GPU, and how you pass data to it. Consider the following:

int addTwoNumbers(int x, int y)
{
return x + y;
}
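To actually run that addition on the GPU, the function would become a kernel and the result would travel through device memory. A minimal sketch of what that looks like – the kernel name matches the function above, but the launch configuration and memory handling here are our own illustration, not code from the post:

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

// __global__ marks this as a kernel: callable from the host, run on the device.
// The result comes back via a pointer into device memory.
__global__ void addTwoNumbers(int x, int y, int *result)
{
    *result = x + y;   // executes on the GPU
}

int main(void)
{
    int h_result;
    int *d_result;
    cudaMalloc((void **)&d_result, sizeof(int));   // allocate device memory

    addTwoNumbers<<<1, 1>>>(2, 3, d_result);       // one block, one thread

    // copy the result back to the host
    cudaMemcpy(&h_result, d_result, sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(d_result);

    printf("%d\n", h_result);
    return 0;
}
```

The `<<<1, 1>>>` launch deliberately wastes the GPU – a single thread in a single block – which is the point of the exercise: see the moving parts (allocate, launch, copy back) before parallelising anything.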
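The build flow described above – NVCC emitting standard object files that link happily against GCC output – might look something like this; `main.cu` is from the post, but `helpers.c` and the output names are assumptions for illustration:

```
# nvcc compiles CUDA C into an ordinary object file
nvcc -c main.cu -o main.o

# plain C compiled with gcc as usual (helpers.c is hypothetical)
gcc -c helpers.c -o helpers.o

# nvcc drives the host linker, so the two kinds of object file link together
nvcc main.o helpers.o -o app
```

Keeping everything in one `main.cu` compiled by NVCC, as we did, just collapses this to the single `nvcc main.cu -o app` step.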