coding beacon

[programming & visualization]


How to start using OpenCL ASAP

The following is written assuming the computer has an Intel CPU:

Supported Targets

3rd Generation Intel Core Processors
Intel “Bay Trail” platforms with Intel HD Graphics
4th Generation Intel Core Processors (currently requires a kernel patch; see the “Known Issues” section)
5th Generation Intel Core Processors (“Broadwell”)

To start programming right away, do the following:

1. Get Beignet. https://wiki.freedesktop.org/www/Software/Beignet/

Beignet is an open-source implementation of the OpenCL specification, a generic compute-oriented API. The code base lets OpenCL programs run on Intel GPUs: it defines and implements the OpenCL host functions required to initialize the device, create the command queues, the kernels, and the programs, and run them on the GPU.

In terms of the OpenCL 1.2 spec, Beignet is quite complete now (at the time of writing, 28/03/2015).
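As a sketch of the host-side flow described above (assuming a working OpenCL runtime such as Beignet is installed, and linking with -lOpenCL; only standard Khronos API calls are used), a minimal platform and device query looks like this:

```cpp
// Minimal OpenCL host-side sketch: enumerate platforms and devices.
// Assumes an installed OpenCL runtime (e.g. Beignet for Intel GPUs).
// Build with: g++ list_devices.cpp -lOpenCL
#include <CL/cl.h>
#include <cstdio>

int main() {
    // Ask how many OpenCL platforms (driver stacks) are available.
    cl_uint nplatforms = 0;
    if (clGetPlatformIDs(0, nullptr, &nplatforms) != CL_SUCCESS || nplatforms == 0) {
        std::fprintf(stderr, "no OpenCL platforms found\n");
        return 1;
    }

    // Take the first platform and print its name.
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, nullptr);
    char name[256] = {0};
    clGetPlatformInfo(platform, CL_PLATFORM_NAME, sizeof(name), name, nullptr);
    std::printf("platform: %s\n", name);

    // Count the devices (CPU, GPU, accelerator) the platform exposes.
    cl_uint ndevices = 0;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 0, nullptr, &ndevices);
    std::printf("devices: %u\n", ndevices);
    return 0;
}
```

From here the usual sequence is clCreateContext, clCreateCommandQueue, clCreateProgramWithSource, clBuildProgram, and clEnqueueNDRangeKernel, exactly the host responsibilities the paragraph above lists.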

2. Get OpenCL Studio http://opencldev.com/


The OpenCL Programming Book

http://www.fixstars.com/en/opencl/book/OpenCLProgrammingBook/contents/

Eclipse: prepare for OpenCL programming

http://stackoverflow.com/questions/21318112/how-to-prepare-eclipse-for-opencl-programming-intel-opencl-sdk-installed-in-li

http://marketplace.eclipse.org/content/opencl-development-tool


High Performance Computing Libraries (to be updated)

1. IRC channel #opencl on the freenode network

2. https://blog.ajguillon.com/

3. Reference on installing pre-requisites (hardware drivers)

http://cran.r-project.org/web/packages/OpenCL/INSTALL

CUDA vs OpenCL

https://www.wikivs.com/wiki/CUDA_vs_OpenCL

http://streamcomputing.eu/blog/2010-01-28/opencl-the-battle-part-i/

http://programmers.stackexchange.com/questions/53410/cuda-vs-opencl-opinions#53699

http://blog.accelereyes.com/blog/2012/02/17/opencl_vs_cuda_webinar_recap/

ArrayFire (open source) http://arrayfire.com/

“ArrayFire supports both CUDA-capable NVIDIA GPUs and most OpenCL devices, including AMD GPUs/APUs and Intel Xeon Phi co-processors. It also supports mobile OpenCL devices from ARM, Qualcomm, and others. We want your code to run as fast as possible, regardless of the hardware.”

“ArrayFire is a blazing fast software library for GPU computing. Its easy-to-use API and array-based function set make GPU programming simple. A few lines of code in ArrayFire can replace dozens of lines of raw GPU code, saving you valuable time and lowering development costs.”
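As an illustration of the “few lines of code” claim above (assuming ArrayFire is installed and linked; af::randu, af::matmul, and af::sum are standard ArrayFire calls, but the build line is environment-specific):

```cpp
// ArrayFire sketch: a dense matrix multiply plus a full reduction,
// running on whichever backend (CUDA, OpenCL, or CPU) the unified
// library selects at runtime.
// Assumes ArrayFire is installed; build with: g++ demo.cpp -laf
#include <arrayfire.h>
#include <cstdio>

int main() {
    af::info();                          // print the selected device/backend
    af::array a = af::randu(512, 512);   // random matrix, allocated on the device
    af::array b = af::randu(512, 512);
    af::array c = af::matmul(a, b);      // device-side matrix multiply
    float total = af::sum<float>(c);     // reduce to a scalar, copied to host
    std::printf("sum of all entries: %f\n", total);
    return 0;
}
```

The same five lines would take dozens of lines of raw OpenCL or CUDA host code (buffer creation, kernel compilation, launches, and copies back to the host).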

Getting Started

(written before ArrayFire was open-sourced): http://blog.accelereyes.com/blog/2013/03/04/arrayfire-examples-part-1-of-8-getting-started/

Overview by NVidia: http://devblogs.nvidia.com/parallelforall/arrayfire-portable-open-source-accelerated-computing-library/

* Download here

http://arrayfire.com/download/ (install, then append “%AF_PATH%/lib;” to your PATH environment variable)

* Sources

https://github.com/arrayfire/arrayfire

* Files required to use ArrayFire from R (prerequisites: source files above)

https://github.com/arrayfire/arrayfire_r

HPC Hardware & Software

Parallella: http://www.kickstarter.com/projects/adapteva/parallella-a-supercomputer-for-everyone

The Parallella project will make parallel computing accessible to everyone.

Software: http://forums.parallella.org/

The Producer: http://www.adapteva.com/

Starting at a mere $99…

NVidia: https://en.wikipedia.org/wiki/Nvidia_Tesla#Specifications_and_configurations

CoreMark: http://www.eembc.org/coremark/

A widely available, generic benchmark targeted specifically at the processor core. Developed by EEMBC, CoreMark is a simple yet sophisticated benchmark designed to test the functionality of a processor core. Running CoreMark produces a single-number score, allowing users to make quick comparisons between processors.

HPC tools and services

Distributed Computing in C++
http://stackoverflow.com/questions/2258332/distributed-computing-in-c
- Combine MPI and OpenMP: MPI to communicate across the cluster, OpenMP to parallelise across the cores on each node. If you have graphics cards, throw CUDA etc. into the mix too. That’s what our distributed clusters do at work.
- Check out http://www.zircomp.com. zNet is a C++ framework intended for multi-core and distributed-core programming; it supports streaming of built-in and custom types without any inheritance, transparently supports auto-discovery and load balancing, and is specifically oriented towards making applications scalable on any hardware.
- CloudIQ Engine from Appistry allows you to distribute your C++ algorithms across any number of servers for processing, and also provides process-flow management for tasks. Failover is included as part of the framework: if a task dies midstream (say someone pulls the plug on a machine), that task is automatically restarted on another node. And if that happens as part of a process flow, only the latest task has to be restarted, not the whole flow. The framework automatically checkpoints your work at each step.

- OpenMPI and/or OpenMP combinations work best. We use OpenMPI on our supercomputing cluster to process large scientific jobs that require weeks of computing time. As an additional note, MPI has C++ bindings via Boost.MPI, which supports lovely stuff like serialization of STL types (valarray, vectors, strings, etc.) to make message-passing easier on your part.
Buying Cluster/Grid/Cloud Time?
http://stackoverflow.com/questions/409311/buying-cluster-grid-cloud-time?rq=1
- You might want to check out Amazon’s EC2 service:
http://aws.amazon.com/ec2/
Some people have already done some work in regards to clustering with EC2:
http://www.google.com/search?q=cluster+computing+amazon+ec2&rls=com.microsoft:*&ie=UTF-8&oe=UTF-8&startIndex=&startPage=1
Additionally, Microsoft offers Windows Azure, which has native hooks for .NET but really lets you run anything (Java, PHP), given that you can load a runtime and code from storage (or deploy them with your app, though that has its own set of pros and cons).
- Amazon’s Elastic Compute Cloud is very interesting: you pay for what you use (memory, CPU, persisted storage), with many OS options.

- There is a new service called Amazon Elastic MapReduce which runs on top of an EC2 cluster. It has APIs in many programming languages, including Ruby and PHP. Also, if you need a more established service, check out Greenplum.