I'm looking for someone who can write a program for a GPU that performs some string operations, so we can compare its speed against the CPU.
Specifically: given a 14-megabyte buffer, we write incrementing hex data into the buffer (as a string). I need to know how many times per minute this data can be rewritten on the GPU, per thread, and whether the GPU throttles memory writes when we run this across multiple threads. The program should emit its metrics to the console. Then we run the same operation on the CPU, and you tell me whether the CPU is faster for this exact operation.
Basically, a benchmark program for both the CPU and the GPU.
I am trying to prove that this string operation is faster on the CPU because of the large amount of data being written, and that even if one were to add more cores, the GPU should bottleneck because it does not have enough RAM to run this across many cores.
NOTE: Please do not reply unless you are capable of GPU programming, can program in C# or C++, and have at least 5 years of programming experience. I need to be confident that you understand how to create a benchmark, write an optimized routine in C# that takes N time, and write the same routine for the GPU.