Friday, 26 April 2013

Friday Thoughts

Large Levels

I have run some DBP object tests to see how memory is created and deleted, and it is clear to me that, whatever scheme I choose, the slow fragmentation of the heap is a critical limitation of the system I am currently using to build segment levels.

Even if I use instancing and shared parent geometry, the size of the memory required for the vertex and index buffers, combined with the gradual loss of available memory over time, means there will be a problem sooner or later when it comes to creating 'very large levels'.
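To give a flavour of the kind of fix that avoids fragmentation entirely, here is a minimal fixed-block pool sketched in C++ (illustrative only, not DBP's actual allocator): every segment mesh is carved out of one block reserved up front, so creating and deleting segments never touches the system heap and cannot fragment it.

```cpp
// Illustrative fixed-block pool (not DBP's actual allocator). All segment
// meshes are carved from one block reserved up front, so creating and
// deleting segments never calls into the system heap.
#include <cstddef>
#include <vector>

class FixedPool {
public:
    FixedPool(std::size_t blockSize, std::size_t blockCount)
        : storage(blockSize * blockCount) {
        for (std::size_t i = 0; i < blockCount; ++i)
            freeBlocks.push_back(storage.data() + i * blockSize);
    }
    void* Alloc() {                          // O(1), no heap traffic
        if (freeBlocks.empty()) return nullptr; // pool exhausted
        void* p = freeBlocks.back();
        freeBlocks.pop_back();
        return p;
    }
    void Free(void* p) { freeBlocks.push_back(static_cast<char*>(p)); }
private:
    std::vector<char> storage;               // the single up-front reservation
    std::vector<char*> freeBlocks;
};
```

The trade-off, of course, is that the pool's size has to be decided up front, which is exactly the 'finite world' problem again, just in a tidier shape.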

I am pretty sure I can get larger levels with the system as it is, but a small voice is telling me that squeezing a little more from the current system might not be the best plan. I don't have a solution yet, but I have been sketching out ideas for possible implementations that would allow 'extremely large levels' to be constructed.

Even as I contemplate the possibilities, I am acutely aware that such a detour will affect the project timetable. Even though the existing system will allow you to create 200x200x20 terrain levels with lots of buildings and objects, at some point it will 'run out', and when it does it will most likely crash. Last night I tried four or five different ways to monitor and track the memory usage of the current system, but in each case I had to reject the solution due to either inaccuracy or the memory being monitored out of context.
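For the technically curious, one rough way to take such measurements on Windows (a sketch only, and not necessarily one of the approaches I tried) is to combine GetProcessMemoryInfo with a VirtualQuery walk that finds the largest free region left in the address space. If that largest block shrinks while total usage stays flat, you are watching fragmentation happen.

```cpp
// Sketch only: one rough way to watch both overall usage and fragmentation
// in a 32-bit Windows process. GetProcessMemoryInfo reports total usage,
// and a VirtualQuery walk finds the largest free region remaining in the
// address space.
#include <windows.h>
#include <psapi.h>
#include <cstdio>
#pragma comment(lib, "psapi.lib")

void ReportMemory() {
    PROCESS_MEMORY_COUNTERS pmc = { sizeof(pmc) };
    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));

    MEMORY_BASIC_INFORMATION mbi;
    SIZE_T largestFree = 0;
    char* p = nullptr;
    while (VirtualQuery(p, &mbi, sizeof(mbi)) == sizeof(mbi)) {
        if (mbi.State == MEM_FREE && mbi.RegionSize > largestFree)
            largestFree = mbi.RegionSize;
        p = static_cast<char*>(mbi.BaseAddress) + mbi.RegionSize;
    }
    printf("working set: %zu MB, largest free block: %zu MB\n",
           (size_t)pmc.WorkingSetSize >> 20, (size_t)largestFree >> 20);
}
```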

I also gained great insight into all the different elements within the engine that reserve and free memory, and when they do so. It was clear that operating within this finite world of just 1.8GB of usable address space to store everything is quite restricting.


The New Approach

If I decide to take a detour, the new system will have a single aim: serving data to the DirectX buffers so you can see what you are looking at. Everything else will be about generating that data only when it is needed, and avoiding the creation of new pockets of memory during the editing and gameplay processes.  The ideal mechanism would occupy memory only when the camera needs to render a view, and use LOD (level of detail) to reduce polygon counts for wide-angle or high views.
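As a flavour of the LOD side, the selection step itself can be very simple. The sketch below picks a detail tier from camera distance; the thresholds are invented numbers for illustration, not Reloaded's actual values.

```cpp
// Illustrative distance-based LOD pick; thresholds are invented numbers.
#include <cmath>

struct Vec3 { float x, y, z; };

// Returns 0 for full detail, larger numbers for coarser meshes.
int ChooseLod(const Vec3& camera, const Vec3& chunkCentre) {
    const float dx = camera.x - chunkCentre.x;
    const float dy = camera.y - chunkCentre.y;
    const float dz = camera.z - chunkCentre.z;
    const float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    if (dist < 50.0f)  return 0;  // close up: full polygon count
    if (dist < 150.0f) return 1;  // mid range: reduced mesh
    if (dist < 400.0f) return 2;  // far away: very coarse mesh
    return 3;                     // beyond that: impostor or cull
}
```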

I don't have the whole picture in my mind right now, but the upshot of such a system would be that you no longer have the traditional problem of 'stopping editing when the memory runs out'. Instead, memory is used as a workspace: things are brought in while you are using them and removed when you no longer require their services. The limitation is then no longer the memory size but the local storage system, which I think we can all agree is plentiful.
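To illustrate the 'workspace' idea, here is a minimal sketch of a least-recently-used chunk cache (the loader and the byte budget are hypothetical stand-ins): chunks come in from disk the first time they are touched, and the oldest ones are thrown out once the budget is exceeded, so the editor never runs out of memory, it just swaps.

```cpp
// Sketch of an LRU chunk cache. LoadChunkFromDisk and the byte budget are
// hypothetical stand-ins; the point is the shape of the mechanism.
#include <cstdint>
#include <list>
#include <unordered_map>
#include <utility>
#include <vector>

using ChunkId = std::uint64_t;
struct Chunk { std::vector<char> data; };

// Stub loader: pretend every chunk is 1 MB read from local storage.
Chunk LoadChunkFromDisk(ChunkId) { return Chunk{ std::vector<char>(1 << 20) }; }

class ChunkCache {
public:
    explicit ChunkCache(std::size_t budgetBytes) : budget(budgetBytes) {}

    Chunk& Get(ChunkId id) {
        auto it = index.find(id);
        if (it != index.end()) {                      // already resident:
            lru.splice(lru.begin(), lru, it->second); // move to front
            return it->second->second;
        }
        lru.emplace_front(id, LoadChunkFromDisk(id)); // pull in from disk
        index[id] = lru.begin();
        used += lru.front().second.data.size();
        Evict();
        return lru.front().second;
    }

private:
    void Evict() {        // drop least-recently-used chunks over budget
        while (used > budget && lru.size() > 1) {     // keep newest alive
            used -= lru.back().second.data.size();
            index.erase(lru.back().first);
            lru.pop_back();
        }
    }
    std::list<std::pair<ChunkId, Chunk>> lru;         // front = most recent
    std::unordered_map<ChunkId,
        std::list<std::pair<ChunkId, Chunk>>::iterator> index;
    std::size_t budget;
    std::size_t used = 0;
};
```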

I am curious what the voting might be on this possible new approach. Keep with the current plan and be ready for October 2013, or implement this more far-reaching technology and perhaps add two months to the project?

Signing Off

As you can see, big thoughts on Friday, and some coding to back up my concern that memory management in its current form is a little too wasteful. There are also a few solutions that sit between these two approaches, and I need to finish thinking about the many assets required to populate a typical level before I know how big the change might be.  I do know that when users buy and start using Reloaded, they should be able to paint segments for as long as they want and as far as they want. It's not everything, but I think it is one of the important things to get right, and I don't mind sacrificing a deadline if it means turning a 'good' result into a 'great' result.
