Hi Steve,
Maybe my question is stupid (very), but do you think there is a way to
make yafray work under Apple's Xgrid cluster app, using your
fork/thread modification + some coding? I mean splitting a single
render process into several independent threads or forks that Xgrid can
effectively understand and that can be sent to every node in Apple's
grid (and maybe as a result get several outputs (images) that can be
batch-joined together later?). I have just tested Xgrid using one
unique unix application over the network with different parameters for
each node, resulting in independent data results that can be joined
together later (a math simulation), but it doesn't look like Xgrid can
make use of a threaded multiprocessor app and distribute each job to a
different node. I asked Jandro about this on the web forum, but he said
it would take a lot of coding. (By the way, Xgrid doesn't use MPI.)
I hope it's possible; anyway, I will try to compile your yafray with
your patch on my G4 and give it a test.
Thanks a lot
Diego Pino N
Post by Steve Smith
Post by Alejandro Conty Estevez
If the patch works ok, or at least compiles ok, I'll merge it, no prob.
Will check as soon as possible.
Cool, thanks.
Post by Alejandro Conty Estevez
For the MPI version maybe it is better to keep a separate tree, since it
would pollute too much of the code, right? We can set up a CVS
repository for that, and info on the main site if you like.
Actually it should only be a case of adding a couple of files and
tweaking the makefile and configure scripts. The way I've done the
mono/fork/thread stuff is to use inheritance to keep a clear separation
of the functionality. With the latest patch the inheritance tree looks
like this:

        scene_t (mono)
              |
       ---------------
       |             |
  forkscene_t  threadscene_t

and with the MPI work it will become:

             scene_t (mono)
                   |
        ------------------------
        |                      |
  multiproc_t (abstract)  threadscene_t
        |
  ---------------
  |             |
forkscene_t  mpiscene_t

(Hopefully these diagrams work.)
multiproc_t contains the common functionality (the actual child-worker
code), while the descendants override the communication functions
(i.e. what is currently in ipc.cc will go into forkscene_t, and the
MPI-specific stuff will go into mpiscene_t).
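To make the idea concrete, here is a minimal C++ sketch of that
separation. The class names scene_t, multiproc_t, forkscene_t and
mpiscene_t come from the diagrams above, but the method names
(spawn/send/collect) and the string log are purely illustrative
assumptions, not the actual yafray API: the abstract base owns the
common worker loop, and each descendant supplies only its transport.

```cpp
#include <cassert>
#include <string>
#include <vector>

class scene_t {                       // mono: renders in-process
public:
    virtual ~scene_t() {}
    virtual void render() {}
};

// Common child-worker logic lives here; only the communication
// primitives are left to the descendants.
class multiproc_t : public scene_t {
public:
    std::vector<std::string> log;     // records the steps, for illustration
    void render() override {
        log.push_back(spawn());       // start workers
        log.push_back(send());        // hand out work
        log.push_back(collect());     // gather results
    }
protected:
    virtual std::string spawn()   = 0;
    virtual std::string send()    = 0;
    virtual std::string collect() = 0;
};

class forkscene_t : public multiproc_t {   // fork/pipe transport (ipc.cc)
protected:
    std::string spawn()   override { return "fork() child workers"; }
    std::string send()    override { return "write tiles to pipe"; }
    std::string collect() override { return "read pixels from pipe"; }
};

class mpiscene_t : public multiproc_t {    // MPI transport
protected:
    std::string spawn()   override { return "start MPI ranks"; }
    std::string send()    override { return "MPI_Send tiles"; }
    std::string collect() override { return "MPI_Recv pixels"; }
};
```

The point of the design is that render() is written once in
multiproc_t, so adding an MPI version really is just one new leaf
class plus build-system tweaks.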
Anyway, I'll work on the principle that the existing patch will go in,
produce a proof of concept of the MPI stuff, and see how you feel
about including it.
Cheers,
Steve/Tarka
_______________________________________________
Yafray-devel mailing list
http://mail.nongnu.org/mailman/listinfo/yafray-devel