[Missing image file: Recursive_raytrace_of_a_sphere.png]
Raytracing shoots primary rays to determine which objects in the scene are visible and what their properties are (diffuse color, etc.).
From the intersection points of those primary rays with surfaces, it shoots secondary rays to calculate reflections, refractions, GI, etc.
But the primary rays also handle antialiasing. And when the primary rays are shot at the beginning of the process, the secondary effects aren't there yet. The image is incomplete. So how the fuck does that work? Does the raytracer shoot primary rays again at the end to calculate antialiasing?
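The confusion resolves once you see the order of operations: antialiasing is done by supersampling primary rays, and each jittered primary ray is traced to completion (spawning its own secondary rays) before its color is averaged into the pixel, so nothing needs to be shot "again at the end". A minimal Python sketch of that idea, assuming a made-up single-sphere scene and mirror-only secondary rays (not any real renderer's code):

```python
import random

# Hypothetical one-sphere scene, purely for illustration.
SPHERE = {"center": (0.0, 0.0, -3.0), "radius": 1.0,
          "color": (1.0, 0.2, 0.2), "reflectivity": 0.3}
BACKGROUND = (0.1, 0.1, 0.1)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def scale(a, s):
    return tuple(x * s for x in a)

def hit_sphere(origin, direction):
    """Return ray distance t to the sphere, or None on a miss."""
    oc = sub(origin, SPHERE["center"])
    a = dot(direction, direction)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - SPHERE["radius"] ** 2
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None
    t = (-b - disc ** 0.5) / (2.0 * a)
    return t if t > 1e-4 else None  # epsilon avoids self-intersection

def trace(origin, direction, depth=0):
    """Fully shade one ray: secondary (reflection) rays are spawned
    HERE, recursively, so the color returned for a primary ray
    already includes the secondary effects."""
    t = hit_sphere(origin, direction)
    if t is None or depth > 2:
        return BACKGROUND
    point = add(origin, scale(direction, t))
    normal = scale(sub(point, SPHERE["center"]), 1.0 / SPHERE["radius"])
    # Mirror reflection d - 2(d.n)n -> the secondary ray.
    refl = sub(direction, scale(normal, 2.0 * dot(direction, normal)))
    reflected = trace(point, refl, depth + 1)
    k = SPHERE["reflectivity"]
    return tuple((1 - k) * c + k * r
                 for c, r in zip(SPHERE["color"], reflected))

def render(width=8, height=8, samples=4):
    """Supersampled render: several jittered primary rays per pixel,
    averaged only AFTER each has been fully shaded by trace()."""
    rng = random.Random(0)
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            accum = (0.0, 0.0, 0.0)
            for _ in range(samples):
                # Jitter the primary ray within the pixel footprint.
                u = (x + rng.random()) / width * 2 - 1
                v = (y + rng.random()) / height * 2 - 1
                accum = add(accum, trace((0.0, 0.0, 0.0), (u, v, -1.0)))
            row.append(scale(accum, 1.0 / samples))
        image.append(row)
    return image
```

Because the averaging happens last, the antialiased edges automatically blend the *final* shaded colors, reflections and all; a separate antialiasing pass at the end would indeed need to re-shoot rays, which is why renderers do it this way instead.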
Low poly inquiry
[Missing image file: f.jpg]
So /3/, I need to ask your opinion on this:
We're basically creating a game with a top-down view of your units/characters. There would be lots of instances of these "characters" (think RTS, MMORPG) in a given playthrough, and our target platform is the web (Unity3D web player). To top it off, we have to meet a file size limit for the whole game (all assets included).
So, my questions are:
1.) What is the minimum number of polys (yeah, sounds stupid, forgive me) needed to create a chibi biped that can at least move its arms and legs?
2.) To those who have dabbled with Unity: what is the safe number of instances of these models that can exist in a given playthrough without hurting performance (making the game lag or hang)?
Please bear with me. Thanks.
you rock 4chan!