you can attach a camera to the renderer and then zoom out.
or you can go the other way round and scale the model down, with "Scale (Transform)" or "UniformScale (Transform)" for example.
also make sure that the depthbuffer of the renderer is activated.
to do this: select the "Renderer (EX9)" node, then hit ctrl+i to open the inspektor. there you can enable the depthbuffer both for windowed and fullscreen mode.
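you don't need any code for this in vvvv of course, but just to illustrate what those two tips correspond to on the DirectX 9 side, here is a rough sketch. the device pointer and the 0.01 scale factor are just placeholders, not anything vvvv actually exposes:

```cpp
// Sketch only: what scaling down and an enabled depth buffer roughly
// correspond to underneath an EX9 renderer, in plain Direct3D 9.
// Assumes an already created IDirect3DDevice9* (creation not shown).
#include <d3d9.h>
#include <d3dx9.h>

void depthAndScaleSketch(IDirect3DDevice9* device)
{
    // what "enable depthbuffer" amounts to at device creation time:
    // D3DPRESENT_PARAMETERS pp = {};
    // pp.EnableAutoDepthStencil = TRUE;
    // pp.AutoDepthStencilFormat = D3DFMT_D16;   // or D3DFMT_D24X8

    // make sure z-testing is on and the z-buffer is cleared
    // (the clear normally happens once per frame before drawing)
    device->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);
    device->Clear(0, NULL,
                  D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
                  D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);

    // what a uniform scale of 0.01 does to the world matrix
    // (0.01 is just an example factor, pick whatever fits your model)
    D3DXMATRIX scale;
    D3DXMatrixScaling(&scale, 0.01f, 0.01f, 0.01f);
    device->SetTransform(D3DTS_WORLD, &scale);
}
```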
Thanks Elektromeier, I have done this already using the softimage control type camera and by transforming the model, but I still seem to be inside it? I can only really see it when I use a wireframe type fill node.
Will try the depthbuffer now though.
Right, it appears to be my GFX card, it's a crap onboard one on the work machine, and it has just crashed the display. Which depthbuffer mode should I use? The inspektor shows two, something like db16 and db32 (presumably bit depth, yeah?). I can't check now as I've half crashed my display!
(I knew I should have brought my lappy in today)
Oh, and… presumably the normals from 3ds to .x file come out single-sided? Or would I need to flip them in 3ds?
don't know the export process with 3ds exactly, because i only know how to do it with blender.
mh, there is a node called "Normals". it calculates new normals, but i don't know exactly how it does that. and if you use that Normals node, be aware that it destroys any subsets in the mesh.
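i can't say what the node does internally, but the usual way to recalculate vertex normals is to sum the face normals of all triangles touching a vertex and then normalize. a minimal sketch of that idea in plain C++ (the struct and function names are made up):

```cpp
// Minimal sketch of a typical vertex-normal recalculation:
// accumulate face normals per vertex, then normalize the sums.
// This is only a guess at what a "Normals"-style node might do internally.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) {
    return { a.x - b.x, a.y - b.y, a.z - b.z };
}

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

std::vector<Vec3> recalcNormals(const std::vector<Vec3>& pos,
                                const std::vector<unsigned>& tris)
{
    std::vector<Vec3> n(pos.size(), { 0.0f, 0.0f, 0.0f });
    for (size_t i = 0; i + 2 < tris.size(); i += 3) {
        const Vec3& a = pos[tris[i]];
        const Vec3& b = pos[tris[i + 1]];
        const Vec3& c = pos[tris[i + 2]];
        Vec3 fn = cross(sub(b, a), sub(c, a));   // face normal (area weighted)
        for (int k = 0; k < 3; ++k) {
            n[tris[i + k]].x += fn.x;
            n[tris[i + k]].y += fn.y;
            n[tris[i + k]].z += fn.z;
        }
    }
    for (Vec3& v : n) {                          // normalize the sums
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        if (len > 0.0f) { v.x /= len; v.y /= len; v.z /= len; }
    }
    return n;
}
```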
I think the problem is that your mesh is very big. If you can see something in your renderer, that means it is just a scaling problem and not a graphics card problem.
Use a Scale node connected to the shader and change the scaling value to something very small, or do it directly in 3dsmax.
For the normals you have to export your mesh with double-sided polygons. The best is to do it in 3dsmax; i think it's in the material properties, just check double sided.
For the depthbuffer you have to set the depthbuffer format in the inspektor of the renderer to D16 or D24X8.
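For reference, on the Direct3D 9 side "double sided" basically means rendering without backface culling, and the two depthbuffer formats are D3DFMT_D16 and D3DFMT_D24X8. A rough sketch of the equivalent device settings (the device pointer is an assumption, nothing you would patch in vvvv directly):

```cpp
// Sketch: the D3D9 render state / formats behind these two tips.
// Assumes an already created IDirect3DDevice9* device.
#include <d3d9.h>

void doubleSidedHint(IDirect3DDevice9* device)
{
    // instead of exporting double-sided geometry you can also just
    // switch off backface culling at render time
    device->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);
}

// depthbuffer format choices mentioned above, set at device creation:
//   D3DFMT_D16    -> 16 bit depth
//   D3DFMT_D24X8  -> 24 bit depth, 8 bits unused (finer depth resolution)
```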
What is the better way to go: one big .x file (100MB+) or several small ones?
I tested the big one and vvvv seems to have no problem with it. Performance isn't a problem either. EDIT: Well, it wasn't on my PC at home. This one here isn't that fast and stutters.
But are there other reasons that could make one prefer many small files?
the following is not based on any hard facts, but rather a guess: if one big file (probably with many subsets coming in as spreads) is convenient for you to handle, then use it. i can't think of anything that would speak against one big file except difficulties with handling it in the patch.
I discovered the following:
With meshes alone vvvv has not much trouble with relatively big files. But as I have to assign several textures to the objects, performance drops dramatically when using one big file. When I load every object from a separate file and display them all at the same time, vvvv has no problems with that.
So for me it turned out that using multiple files is much better than using one big file, at least when looking at performance.
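Just to illustrate what the one-big-file case means underneath: a multi-subset .x file gets one texture switch and one draw call per subset each frame. The sketch below shows that in plain D3D9; the file name, the texture array and the device pointer are placeholders, and it says nothing about why one layout ends up faster in a given patch.

```cpp
// Sketch: drawing one multi-subset .x file with one texture per subset.
// File name, texture array and device pointer are placeholders; real code
// would load the mesh and textures once, not inside the draw routine.
#include <d3d9.h>
#include <d3dx9.h>
#include <vector>

void drawBigXFile(IDirect3DDevice9* device)
{
    ID3DXMesh* mesh = NULL;
    ID3DXBuffer* materials = NULL;
    DWORD numSubsets = 0;

    D3DXLoadMeshFromX("big_scene.x", D3DXMESH_MANAGED, device,
                      NULL, &materials, NULL, &numSubsets, &mesh);

    // one texture per subset, loaded elsewhere (placeholder)
    std::vector<IDirect3DTexture9*> textures(numSubsets, NULL);

    for (DWORD i = 0; i < numSubsets; ++i) {
        device->SetTexture(0, textures[i]);   // texture switch per subset
        mesh->DrawSubset(i);                  // draw call per subset
    }

    if (materials) materials->Release();
    if (mesh) mesh->Release();
}
```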