GPU OS Hosting

#1
Here's a thought I've had rolling around. If you follow the GPU market, you probably know the hype about computing in parallel on a GPU. Parallel execution fascinates me, but so far it's only used for applications. Here's the thought:
Host a microkernel (like GNU Hurd) on the GPU through µC/OS-II. That is, µC/OS-II boots, loads the Hurd kernel into the GPU's memory, and then starts its execution (probably through fake drivers in µC/OS-II tricking Hurd into thinking it's executing on an older processor). Hurd's input/output could be routed through µC/OS-II for interaction (so you can actually use the Hurd :) ). Thus, the kernel could run in parallel, with each of the "servers" in the microkernel on a separate core. I probably don't have my facts exact, but you get the idea.
Thoughts?
#2
The GPU market is resorting to gimmicks now?
My will be done.
#3
So, if I understand correctly, you want to run the kernel on the GPU? If that's the case, I'm not sure it's worth it. From what I know (just a little), kernels execute general-purpose instructions, not compute-intensive ones, so the GPU would not speed up kernel execution very much.

Have a look at OpenCL. It looks like it's going to be the future of scientific computing on GPUs.
#4
An apparent trend is that the GPU and CPU are converging, in the x86 world at least. In the past, graphics chips were little more than fixed-function, fixed-pipeline rasterisation accelerators, and CPUs were single-core general-purpose processors. Now, with each generation, CPUs gain more cores and instruction set extensions specifically for parallel workloads (MMX, SSE, 3DNow!, etc.), while GPUs are becoming less graphics-specific and more programmable. Intel and AMD both have products in their roadmaps (e.g. AMD "Fusion") that integrate GPU cores on the CPU die. Where will the kernel run on those? Will it be similar to Cell chips, where the GPU cores act as the SPEs?
#5
http://www.nvidia.com/object/what_is_cuda_new.html ?
I used Google Translate to write this message, so please excuse my broken English.
#6
@Aleator:
Nvidia's move to CUDA was massive. They now have an L1 cache per cluster of cores and a shared L2. At this point, there is very little to prevent something more ambitious from running on the GPU.

@All:
Here's the best method of implementation:
The BIOS cannot boot to the GPU, and no bootloader can push a kernel image onto the GPU and execute it, so you need a middleman. µC/OS-II is small, secure, and quick. It would boot and load the driver for the GPU, then load the kernel image into the GPU's RAM. Then it would "fake" execute the kernel code, streaming the threads to the 200 cores present on the GPU. The kernel would think it is running on a 200-core CPU, while µC/OS-II actually sends all the instructions to the GPU's various cores. At this stage GNU Hurd has very few servers; as the kernel advances, more servers would be added. Unlike a 12.8 GHz (combined) quad core, the 271 GHz (combined) processing power of a GTX 470 would be able to run millions of applications instantly with no slowdown. Imagine running a cloud server on a GPU: 'twould be a 607 MHz computer for each of the 448 users. With the current set of cloud tasks, that would be a cakewalk.
#7
The Linux kernel is never going to run on a GPU (at least not within 3 years).
Simply because the kernel executes general-purpose instructions. And even if you managed to do it, it would be VERY slow compared to running the kernel on the CPU.
The GPU has many cores because it's built to do something simple over and over again (like ray intersections or brute-forcing passwords), not for complex, branch-heavy general-purpose work.
#8
(04-05-2010, 03:52 AM)O.I.B. Wrote: The Linux kernel is never going to run on a GPU (at least not within 3 years).
Simply because the kernel executes general-purpose instructions. And even if you managed to do it, it would be VERY slow compared to running the kernel on the CPU.
The GPU has many cores because it's built to do something simple over and over again (like ray intersections or brute-forcing passwords), not for complex, branch-heavy general-purpose work.

My thoughts exactly.

@master[mind]: I wish I could agree with you, but it just doesn't seem feasible. Again, I know only a little about kernels, but as I see it, the kernel is a bunch of modules loaded to support the execution of programs, drivers, services, etc. If those modules depend on each other, how many threads can you actually separate out and run on the GPU? 200? I don't think so. I doubt it can be broken into more than 10-15 threads at most, and that's why running the kernel on a GPU isn't efficient. Plus, the GPU does operations on 128 bits while your variables are at most 64 bits, so again, not that efficient. And we haven't even started talking about porting a kernel to run on a graphics card. I've studied OpenCL a bit, which is built on top of CUDA (for NVIDIA) and ATI Stream (for ATI), and I'm telling you, it would take a lot of modifications to make a kernel run on a graphics card, if it can be done at all (I'm not sure it can). On top of that, you have to write code that runs on the CPU and controls the execution of the code running on the GPU; the GPU cannot run by itself.
So you see, it's complicated.


kinda' off-topic: But while we're on this subject, what I would really like to see in a computer is an FPGA that can be programmed with kernel functions. That would make the kernel run really fast. The FPGA would be programmed when the operating system is installed.
#9
Whoa whoa whoa... not Linux, GNU Hurd. It's a microkernel, so its process structure is already divided up; it just needs enough separate processing cores to handle it. Linux is a monolithic kernel: a single executing image with the driver modules merged in. It boots and runs; it cannot be split. GNU Hurd is a collection of "servers": a server for your network driver, a server for the file system, a server for a running program. In fact, you could feasibly split the interpreter and the running program into two separate threads for, say, Python. This is perfect for a widely parallel processor such as a GPU. BTW, this would have to be OpenCL.
#10
I see. Still, look into OpenCL a little and you'll see that the porting is not simple. I work with OpenCL right now, and I can assure you the port would take considerable effort.

In an OpenCL program there are two pieces of code: one that is executed on the CPU and one or more that are executed on the GPU. The CPU code controls the execution of the GPU code and is compiled with a standard compiler (gcc). The GPU code is a particularization of the C99 standard and is compiled with a special compiler that is accessible only from the code running on the CPU. The compiler for the GPU has restrictions such as limited pointer usage and no support for standard header files. For more information, read http://www.khronos.org/opencl/sdk/1.0/docs/man/xhtml/ under OpenCL Compiler -> Restrictions.

Trust me, it's hard. It took me a day to realize that I cannot include a header file like in a normal C program. I managed to work around the problem, but only because I wasn't doing anything complicated.

Hope you understand that I'm not against the idea; I'm just saying it's hard, and I'm not sure it will pay off in the end.
#11
Just out of curiosity, what do you want to achieve with it?
It sounds like too much work to do "just for fun".
#12
Heh... no, I don't plan to write this; it was just a thought that occurred to me. I wanted to see what y'all thought. I'll read that, Aleator, when I get the chance.

