Thursday, May 30, 2013

Thursday Rotate, Hide & Delete Em

Almost Deferred

I was all set to start some deferred rendering code when I realized that my editor still had a few missing ingredients. It could not control the limb visibility of the new instance stamp objects, it did not respect the rotation of the object, and you could not delete the object from the edited scene. Essential features, you will agree.

Hide, Rotate and Delete

It took some time to ensure the base functionality was in place, and then it was a quick step to re-compile the editor, first making sure the limb visibility was set on the instance object before making it into an instance stamp, and then sitting back and watching it happen.

It was quite a buzz seeing the logic that once served a different object handling system work nicely with the new one, as internally it could not be more different to what we had before.

Hiding limbs and rotating objects, key needs within the editor, were to some degree anticipated and presented no problems. Delete, however, was a minefield.

So much of a minefield that it's not entirely perfect (or finished) yet, as this process involves the pain of locating the mesh data within the buffer, deleting it and then shuffling the remaining valid mesh data (vertex and index data). There are also multiple instances in the editor that add and remove the objects so I need to catch each type and ensure the right data is produced.
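To make the shuffling step concrete, here is a small sketch (in Python, with hypothetical names, not the actual engine code) of removing one mesh's range from a combined vertex/index buffer and rebasing what remains:

```python
# Illustrative sketch of deleting one mesh from a merged buffer.
# meshes maps mesh_id -> (vstart, vcount, istart, icount).

def delete_mesh(vertices, indices, meshes, mesh_id):
    vstart, vcount, istart, icount = meshes.pop(mesh_id)

    # Shuffle the remaining vertex and index data down over the hole.
    del vertices[vstart:vstart + vcount]
    del indices[istart:istart + icount]

    # Indices that referenced vertices above the hole must be rebased.
    for k in range(len(indices)):
        if indices[k] >= vstart + vcount:
            indices[k] -= vcount

    # Ranges recorded for meshes stored after the hole slide down too.
    for mid, (vs, vc, is_, ic) in meshes.items():
        if vs > vstart:
            vs -= vcount
        if is_ > istart:
            is_ -= icount
        meshes[mid] = (vs, vc, is_, ic)
    return vertices, indices, meshes
```

The fiddly part is exactly what the post describes: every surviving index and every recorded mesh range has to be corrected, and missing a single case corrupts the buffer.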

Further, as I expand the capabilities of the instance stamp system, my ancient reliance on the MAP() array inside the current FPSC code is also starting to appear quite redundant. No sense storing this reference data twice, once in DBP arrays and once in the DBP object engine. My thinking is to detect and remove all reliance on the DBP array side, effectively deleting all references to this map array so that when I choose to increase the level size, whatever that becomes, I don't need to worry about fixed-size arrays in the actual DBP engine code. It's a big job though, so I am going to tackle it when other more identifiable tasks have been put to bed.

Deferred Sample Selected

Many coders learn more from a simple cut and paste example than any amount of books and documentation on the subject, and I am one of them. I have now selected my uber simple demo and it looks something like this:


That's right, the old teapot makes a comeback once again.  This little fellow taught me DirectX in the early years :)  What this new demo does is produce a very simple four render target example of deferred rendering, and does so almost entirely within a single shader. I have copied the code below so you can see how simple it really is, and I hope the author (credited in the header) does not mind:

//-------------------------------------------------------------------------------
//           Name: deferred.fx
//         Author: Nikita Kindt (nk47@bk.ru)
//  Last Modified: 17/06/06
//-------------------------------------------------------------------------------

float4x3 c_mWorld;
float4x3 c_mView;
float4x3 c_mViewInverse;
float4x4 c_mProjection;
float4x4 c_mViewProjection;

// light parameters
float3 c_vLightDir;
float4 c_vLightPos;
float4 c_vLightDiffuse;
float4 c_vLightSpecular;

// material parameters
float4 c_vMaterialDiffuse;
float4 c_vMaterialSpecular;
float c_fSpecularPower;

// textures
texture2D c_tDiffuseMap;
texture2D c_tSceneMaterialMap;
texture2D c_tSceneNormalMap;
texture2D c_tScenePositionXYMap;
texture2D c_tScenePositionZMap;


sampler DiffuseSampler = sampler_state
{
    Texture   = <c_tDiffuseMap>;

    AddressU  = Wrap;
    AddressV  = Wrap;

    MagFilter = Linear;
    MinFilter = Linear;
    MipFilter = Linear;
};

// rendertargets
sampler SceneMaterialSampler = sampler_state
{
    Texture   = <c_tSceneMaterialMap>;
    MagFilter = Point;
    MinFilter = Point;
};

sampler SceneNormalSampler = sampler_state
{
    Texture   = <c_tSceneNormalMap>;
    MagFilter = Point;
    MinFilter = Point;
};

sampler ScenePositionXYSampler = sampler_state
{
    Texture   = <c_tScenePositionXYMap>;
    MagFilter = Point;
    MinFilter = Point;
};

sampler ScenePositionZSampler = sampler_state
{
    Texture   = <c_tScenePositionZMap>;
    MagFilter = Point;
    MinFilter = Point;
};



struct VS_INPUT_BUILD
{
    float3  vPos            : POSITION0;
    float2  vTex0           : TEXCOORD0;
    float3  vNrm            : NORMAL0;
    float3  vTan            : TANGENT0;
    float3  vBin            : BINORMAL0;
};

struct VS_OUTPUT_BUILD
{
    float4  vPos            : POSITION0;
    float2  vTex0           : TEXCOORD0;
    float3  vWorldPos       : TEXCOORD1;
    float3  vWorldNrm       : TEXCOORD2;
};

struct PS_OUTPUT_BUILD
{
    float4  vMaterial       : COLOR0;
    float4  vWorldNrm       : COLOR1;
    float4  vWorldPosXY     : COLOR2;
    float4  vWorldPosZ      : COLOR3;
};

VS_OUTPUT_BUILD vsBuild(VS_INPUT_BUILD i)
{
    VS_OUTPUT_BUILD o;

    o.vWorldPos = mul(float4(i.vPos, 1), c_mWorld);
    o.vPos = mul(float4(o.vWorldPos, 1), c_mViewProjection);
    o.vTex0 = i.vTex0;
    o.vWorldNrm = normalize(mul(float4(i.vNrm, 0), c_mWorld));

    return o;
}

// psBuild()
// put geometry data into render targets
PS_OUTPUT_BUILD psBuild(VS_OUTPUT_BUILD i)
{
    PS_OUTPUT_BUILD o;

    // material
    float4 vDiffuseMaterial = tex2D(DiffuseSampler, i.vTex0);

    o.vMaterial.rgb = vDiffuseMaterial.rgb;
    o.vMaterial.a = 1.0;

    // convert normal to texture space [-1;+1] -> [0;1]
    o.vWorldNrm.xyz = i.vWorldNrm * 0.5 + 0.5;
    o.vWorldNrm.w = 0.0;

    // position
    o.vWorldPosXY = float4(i.vWorldPos.xy, 0, 0);
    o.vWorldPosZ = float4(i.vWorldPos.z, 0, 0, 0);

    return o;
}



struct PS_INPUT_LIGHT
{
    float2  vTex0           : TEXCOORD0;
};

// psLighting()
// uses data from textures (previous render targets)
float4 psLighting(PS_INPUT_LIGHT i) : COLOR0
{
    float3 vDiffuseMaterial  = tex2D(SceneMaterialSampler, i.vTex0).rgb;
    float  fSpecularMaterial = tex2D(SceneMaterialSampler, i.vTex0).a;

    // normals are stored in texture space [0,1] -> convert them back to [-1,+1] range
    float3 vWorldNrm = (tex2D(SceneNormalSampler, i.vTex0).xyz - 0.5) * 2;

    float3 vWorldPos;
    vWorldPos.xy = tex2D(ScenePositionXYSampler, i.vTex0).xy;
    vWorldPos.z  = tex2D(ScenePositionZSampler, i.vTex0).x;

    float3 vLightDir = normalize(c_vLightPos.xyz - vWorldPos);
    float3 vEyeVec   = normalize(c_mViewInverse[3].xyz - vWorldPos);
    float  fDiffuseIntensity  = saturate(dot(vLightDir, vWorldNrm));
    float  fSpecularIntensity = pow(max(0, dot(vEyeVec, reflect(-vLightDir, vWorldNrm))), c_fSpecularPower);

    float4 color;
    color.rgb = fDiffuseIntensity * c_vLightDiffuse.xyz * vDiffuseMaterial +
                fSpecularIntensity * c_vLightSpecular.xyz * fSpecularMaterial;
    color.a = 1.0;

    // here we add color to show how the lighting pass affects the scene
    color.rgb += i.vTex0.rgr * 0.5;

    return color;
}


technique buildPass
{
    pass p0
    {
        VertexShader = compile vs_1_1 vsBuild();
        PixelShader = compile ps_2_0 psBuild();

        CullMode = ccw;
        FillMode = solid;
        ZEnable = true;
        ZWriteEnable = true;
        ZFunc = less;
        StencilEnable = false;
        AlphaBlendEnable = false;
        AlphaTestEnable = false;
        ColorWriteEnable = red | green | blue;
    }
}

technique lightPass
{
    pass p0
    {
        VertexShader = NULL;
        PixelShader = compile ps_2_0 psLighting();

        CullMode = none;
        FillMode = solid;
        ZEnable = false;
        StencilEnable = false;
        AlphaBlendEnable = true;
        SrcBlend = One;
        DestBlend = One;
        AlphaTestEnable = false;
        ColorWriteEnable = red | green | blue;
    }
}

Well, if you are not 100% schooled in reading shader code it might seem a little daunting, but anyone who has written a shader will start to ask what all the fuss was about.

I am not keen on using FOUR render targets, and the above example is really just a skeleton for what the final shader will end up looking like, but I really like the fact we have a VERY simple template to start from. It is much better to start from a simple shader that works than from a complex shader that does not, or worse, an empty shader you have to write from scratch. If anyone knows of a good shader authoring IDE that detects compile errors in real time and works with DirectX 9.0c shader code (i.e. VS1/PS1/PS2/PS3), send me a few links!
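To see concretely what those four render targets carry, here is a pixel-level simulation of the two passes (a Python sketch for illustration only, not part of the shader): the build pass packs material, remapped normal and split world position, and the lighting pass unpacks them.

```python
# Simulates one pixel of the deferred G-buffer round trip from the
# shader above. Dictionary keys stand in for the four render targets.

def build_pass(albedo, world_nrm, world_pos):
    return {
        'material': (albedo[0], albedo[1], albedo[2], 1.0),
        # normals in [-1,+1] are remapped into texture range [0,1]
        'normal': tuple(n * 0.5 + 0.5 for n in world_nrm),
        # world position is split over two targets, XY and Z
        'pos_xy': (world_pos[0], world_pos[1]),
        'pos_z': (world_pos[2],),
    }

def light_pass(gbuf):
    albedo = gbuf['material'][:3]
    # undo the [0,1] -> [-1,+1] remap
    nrm = tuple((n - 0.5) * 2.0 for n in gbuf['normal'])
    pos = (gbuf['pos_xy'][0], gbuf['pos_xy'][1], gbuf['pos_z'][0])
    return albedo, nrm, pos
```

The lighting pass never sees the original geometry; everything it needs has to survive this pack/unpack, which is why the position gets two whole targets to itself.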

Signing Off

Hopefully I am going to get an early night today so I can be up bright and early for my drive to Crewe and train down to London. Friday I meet the secret FPSC investor, the guy you will all owe a beer to when the Reloaded project is complete. When the Kickstarter campaign failed, this amazing fellow swooped in and backed the whole project personally. The least I can do is buy him a meal in our country's capital (if he lets me pay). I will be staying in London until Saturday so there will be no blog tomorrow, but I will provide a special report on my secret meeting over the weekend (or Monday), though given it's a secret there will be precious few facts divulged. I will see if I can take some photos to lighten the blog and provide some relief from the sea of code I am drowning you in.

Helping passwords better protect you

Knowing how to stay safe and secure online is important, which is why we created our Good to Know site with advice and tips for safe and savvy Internet use. Starting today, we'll also be posting regularly with privacy and security tips. We hope this information helps you understand the choices and control that you have over your online information. -Ed.

It could be your Gmail, your photos or your documents—whatever you have in your Google Account, we work hard to make sure it’s protected from would-be identity thieves, other bad guys, or any illegitimate attempts to access your information.

But you can also help keep your information safe. Think of how upset you would be if someone else got access to your Google Account without your permission, and then take five minutes to follow the steps below and help make it more secure. Let’s start with the key to unlocking your account—your password:

1. Use a different password for each important service
Make sure you have a different password for every important online account you have. Bad guys will steal your username and password from one site, and then use them to try to log into lots of other sites where you might have an account. Even large, reputable sites sometimes have their password databases stolen. If you use the same password across many different sites, there’s a greater chance it might end up on a list of stolen passwords. And the more accounts you have that use that password, the more data you might lose if that password is stolen.

Giving an account its own, strong password helps protect you and your information in that account. Start today by making sure your Google Account has a unique password.

2. Make your password hard to guess
“password.” “123456.” “My name is Inigo Montoya. You killed my father. Prepare to die!” These examples are terrible passwords because everyone knows them—including potential attackers. Making your passwords longer or more complicated makes them harder to guess for both bad guys and people who know you. We know it’s hard: the average password is shorter than 8 characters, and many just contain letters. In a database of 32 million real passwords that were made public in 2009, analysis showed (PDF) only 54 percent included numbers, and only 3.7 percent had special characters like & or $.

One way to build a strong password is to think of a phrase or sentence that other people wouldn’t know and then use that to build your password. For example, for your email you could think of a personal message like “I want to get better at responding to emails quickly and concisely” and then build your password from numbers, symbols, and the first letters of each word—“iw2gb@r2eq&c”. Don’t use popular phrases or lyrics to build your password—research suggests that people gravitate to the same phrases, and you want your password to be something only you know.
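The recipe above can be automated. This toy Python sketch (for illustration only; a real password manager is a better tool) takes the first letter of each word and substitutes short filler words with digits and symbols, reproducing the example password:

```python
# Toy passphrase-to-password builder following the recipe in the post:
# filler words become digits/symbols, other words contribute their
# first letter. Not a security tool.

SUBSTITUTIONS = {'to': '2', 'too': '2', 'at': '@', 'and': '&', 'for': '4'}

def phrase_to_password(phrase):
    out = []
    for word in phrase.lower().split():
        out.append(SUBSTITUTIONS.get(word, word[0]))
    return ''.join(out)
```

Running it on the example phrase yields a password like the one shown, though the real advice stands: the phrase itself must be something only you would think of.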

Google doesn’t restrict password length, so go wild!

3. Keep your password somewhere safe
Research shows (PDF) that worrying about remembering too many passwords is the chief reason people reuse certain passwords across multiple services. But don’t worry—if you’ve created so many passwords that it’s hard to remember them, it’s OK to make a list and write them down. Just make sure you keep your list in a safe place, where you won’t lose it and others won’t be able to find it. If you’d prefer to manage your passwords digitally, a trusted password manager might be a good option. Chrome and many web browsers have free password managers built into them, and there are many independent options as well—take a few minutes to read through reviews and see what would be best for your needs.

4. Set a recovery option

Have you ever forgotten your password? Has one of your friends ever been locked out of their account? Setting a recovery option, like an alternate email address or a telephone number, helps give the service provider another way to contact you if you are ever locked out of your account. Having an up-to-date recovery phone or email address is the best thing you can do to make sure you can get back into your account fast if there is ever a problem.

If you haven’t set a recovery option for your Google Account, add one now. If you have, just take a second to make sure it’s up to date.

We have more tips on how to pick a good password on our Help Center, and in the video below:

Your online safety and privacy is important to you, and it’s important to us, too. We’ve made a huge amount of progress to help protect your Google Account from people who want to break into it, but for the time being, creating a unique, strong password is still an important way to protect your online accounts. Please take five minutes today to reset your important passwords using the tips above, and stay tuned for more security tips throughout the summer.

Investing in a South African solar project

As we search for investments that can help speed up the adoption of renewable energy, we’ve been looking beyond the U.S. and Europe to parts of the world where our investments can have an even greater impact. We’ve just closed our first investment in Africa: a $12 million (103 million rand) investment in the Jasper Power Project, a 96 megawatt solar photovoltaic plant in the Northern Cape province of South Africa. Upon completion, Jasper will be one of the largest solar installations on the continent, capable of generating enough electricity to power 30,000 South African homes. The project, developed and funded by SolarReserve, Intikon Energy and the Kensani Group, is also backed by Rand Merchant Bank, the Public Investment Corporation, Development Bank of South Africa and the PEACE Humansrus Trust.


The Jasper Power Project is located in the Northern Cape Province of South Africa, near Postmasburg


When we consider investing in a renewable energy project, we focus on two key factors. First, we only pursue investments that we believe make financial sense. South Africa’s strong resources and supportive policies for renewable energy make it an attractive place to invest—which is why it had the highest growth in clean energy investment in the world last year. Second, we look for projects that have transformative potential—that is, projects that will bolster the growth of the renewable energy industry and move the world closer to a clean energy future. The Jasper Power Project is one of those transformative opportunities. To explain why, perhaps some background would be helpful.

Back in 2008, South Africa experienced a severe energy shortage, which resulted in blackouts throughout the country and slowed down economic growth. Since then the South African government has been actively supporting the growth of new sources of electricity to power the nation. While today South Africa is primarily dependent on fossil fuels, there’s lots of potential for renewable energy—it’s a country blessed with abundant wind and solar resources—and the government has set an ambitious goal of generating 18 gigawatts (GW) of renewable energy by 2030 (as a comparison, the entire South African grid is currently 44 GW).

To meet this goal, the South African government has established the Renewable Energy Independent Power Producer Procurement Program (REIPPPP). Through the program, renewable energy projects compete on the basis of cost and contribution to the local economy to be awarded a contract with Eskom, South Africa’s state-owned energy utility. Jasper and the other projects being developed through the REIPPPP have the potential to transform the South African energy grid. And given South Africa’s position as an economic powerhouse in Africa, a greener grid in South Africa can set an example for the whole continent.

Once constructed, the project will use solar panels like these.

Just as compelling are the economic and social benefits that the project will bring to the local community. Jasper will create approximately 300 construction and 50 permanent jobs in a region experiencing high rates of unemployment, as well as providing rural development and education programs and setting aside a portion of total project revenues—amounting to approximately $26 million over the life of the project—for enterprise and socio-economic development. We appreciate how forward-thinking the South African government has been in designing the REIPPPP to encourage these kinds of local economic benefits.

Google has committed more than $1 billion to renewable energy investments and we continue to search for new opportunities. Our search has brought us from the U.S. to Europe and now to Africa. We’re excited to see where else it might lead.

Wednesday, May 29, 2013

Wednesday Was Animated

A Nice New Video

I think you will like today's video, direct from the art hub we call Mark, and a massive reveal about just how capable the Reloaded enemy characters are going to be compared to the classic character.  Rather than twitter on, the video does a great job of giving you a thorough overview:


As you can see, the animation is amazing, and when coupled with some really intense AI control you will be hard pressed to even find these guys, let alone shoot one.  As I say, I can't wait to get started, but priorities place me firmly in level engine land.

Level Engine Land

I am warming to the idea of deferred rendering (and cascade shadows for the outside), which of course leaves the issue of shadows on the inside. One of the links provided to me in the comments section, called DeepDeferred, made use of what I suspect was geometry shader based shadow volumes to cast shadows from a single point light in the scene. It was quite dramatic and reminded me of the updated Doom 3 when it first came out.

The advantage of baked lights is that you can have fifty shadows cast and crossing all over the place to create a really rich, detailed scene. As much as lights contribute to a scene, it's really the shadows that make everything pop, and as some users quite rightly pointed out, deferred rendering does not mean shadow rendering, just surface lighting. Shadows are a whole new thought process, so we will leave that particular item alone for a while and let it brew!

Signing Off

I am now at the stage of adding my first deferred rendering attempt to the instance stamp engine prototype, and although I have a grasp of the theory, it's not until you code and run that you get a sense for what it is.  I will warn the world in advance that the assets used will be legacy artwork, and not the final Reloaded segment and entity art.  It sounds like the majority of you want to see deferred rendering go in, and who am I to fly in the face of what the customer wants, so deferred it is.  Now you can argue among yourselves exactly how we handle those fifty real-time shadows ;)

A new inbox that puts you back in control

We get a lot of different types of email: messages from friends, social notifications, deals and offers, confirmations and receipts, and more. All of these emails can compete for our attention and make it harder to focus on the things we need to get done. Sometimes it feels like our inboxes are controlling us, rather than the other way around.

But it doesn’t have to be that way. Today, Gmail is getting a brand new inbox on desktop and mobile that puts you back in control using simple, easy organization.

On the desktop, the new inbox groups your mail into categories which appear as different tabs. You simply choose which categories you want and voilà! Your inbox is organized in a way that lets you see what’s new at a glance and decide which emails you want to read when.


You can easily customize the new inbox—select the tabs you want from all five to none, drag-and-drop to move messages between tabs, set certain senders to always appear in a particular tab and star messages so that they also appear in the Primary tab.

In the Gmail for Android 4.0+ and Gmail for iPhone and iPad apps, you'll see your Primary mail when you open the app and you can easily navigate to the other tabs.
 

If the new inbox isn't quite your style, you can simply switch off all optional tabs to go back to classic view, or switch to any of your other favorite inbox types.

The new inbox is rolling out gradually. The desktop, Android and iOS versions will become available within the next few weeks. If you'd like to try out the new inbox on Desktop sooner, keep an eye on the gear menu and select Configure inbox when it appears in the Settings options.

Tuesday, May 28, 2013

Monday and Tuesday Backlogging

Backlog Mountain

As I had a long weekend, thanks in part to a UK bank holiday and a birthday party, the PC was not touched for the best part of three days, and as such there were plenty of emails and letters to wade through on my return.

I have a few short days to squeeze some coding into, and then I am off jet-setting again to meet our mysterious Reloaded investor, who is in London for a few days.  All my travel arrangements are booked and printed out, and my car is due for a quick service Wednesday to ensure everything goes smoothly.  I am really looking forward to the meeting, and as I am in London, also looking forward to finding a nice Irish pub dispensing the very best Guinness.

So even though your friendly neighbourhood Lee has been barely active, the Reloaded universe continues to turn with the small team growing around the project.

Tracers Ahoy

The tracer prototype is coming along nicely, with blazing streaks of heat whizzing past the player's head and scorching the air. The final visuals from Mark and the sound contributions in the module really came together, and I am excited to add it to the engine when the time comes.  The system is also pretty configurable from the gunspec file, so modders out there with ambitions to create new Reloaded weaponry will have a great time!


Firstly, sorry for the classic art (not representative of Reloaded art), and secondly, check out the white line at the bottom of the screen. This is a tracer that fires from the barrel of specific guns to give you some idea how close that enemy bullet was. The scene will get pretty lively during the action, and tracers will be a valuable element of that visual splendour.

Character Ascent

Another wonderful surprise from the art lab as we get to see more of the versatility of our principal enemy protagonist. Here we see him climb a two segment high ladder, complete with commentary from the man himself:


Of course I am champing at the bit to get stuck in and add this to the engine, but I have to practise patience.  I have a level engine to put back together, a lighting model to decide on, and some basic physics and occlusion to add so we can run around a typical level and see some normality again.

Signing Off

My brain was not entirely idle over the weekend and it did turn once or twice to the request made on one of the comment boards that the level size should be 500x50x500 rather than 20 units high.  While it is true such a dimension would not be exploited by the masses, it suggests that at least one game designer wants to see this happen and we are not in the business of limiting the engine before we leave the starting gate.

I did have the notion of writing some quick speed prototypes to compare the extra performance hit of turning the fixed array access into a hash table approach.  What this means in real money is that instead of a fixed level limit of 500x50x500, you would have an effective level size of around 99999x99999x99999 and a lower memory footprint, as we would only expand the hash table as new reference elements were added to the map. It sounds great in theory, but where an array access can take less than 50 cycles to complete, a hash table interrogation could take thousands of cycles, and when you then multiply each write/read by the iterations demanded of the current system, which constantly creates the level as you go, it might start to impact the cycles available to the rest of the engine.
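The trade-off can be sketched in a few lines (Python here for illustration; the engine would do this in DBP/C++ with packed integer keys). The dense grid pays all its memory up front for constant-time indexed access; the sparse map grows only as cells are used but pays a hash lookup per access:

```python
# Dense fixed-size grid: memory allocated up front, access is one
# multiply-add index computation.
class DenseMap:
    def __init__(self, w, h, d):
        self.w, self.h, self.d = w, h, d
        self.cells = [None] * (w * h * d)
    def set(self, x, y, z, v):
        self.cells[(z * self.h + y) * self.w + x] = v
    def get(self, x, y, z):
        return self.cells[(z * self.h + y) * self.w + x]

# Sparse hash map: grows only as cells are written, so the nominal
# level size can be effectively unlimited, but every access is a
# (slower) hash table interrogation.
class SparseMap:
    def __init__(self):
        self.cells = {}
    def set(self, x, y, z, v):
        self.cells[(x, y, z)] = v
    def get(self, x, y, z):
        return self.cells.get((x, y, z))
```

Both answer the same queries; the question raised in the post is purely whether the per-access cost of the hash path, multiplied by the engine's constant level-rebuild iterations, eats too far into the frame budget.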

I am also warming to the benefits of dropping baked lights altogether and opting for a deferred renderer that only uses dynamic directional, spot and point lighting for the entire scene.  The biggest win is that the memory footprint will be smaller and the lighting results when editing and playing much better. The potential downside (or disaster) is that performance will be such that only dedicated graphics cards could run with all features switched on. Some good news about deferred lighting is that you can gain more performance (quite a bit more) by simply reducing the display resolution, as the lighting cost is based on how many pixels the screen space has to deal with, so you can get some great visuals on an average PC by simply shrinking your display. It also means we spend more time on forward-thinking technologies, and less time on legacy solutions, which ultimately future-proofs the engine for years to come.  The deep thoughts continue...
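The resolution argument is simple arithmetic: deferred lighting cost scales with screen pixel count, so shrinking the display buys back a predictable amount of lighting work. For example (illustrative resolutions, not official Reloaded targets):

```python
# Deferred lighting work is proportional to the number of pixels
# in the screen-space G-buffer that the lighting pass must shade.

def lit_pixels(width, height):
    return width * height

full = lit_pixels(1920, 1080)    # pixels lit per pass at 1080p
small = lit_pixels(1280, 720)    # pixels lit per pass at 720p
ratio = full / small             # 2.25x fewer pixels at the lower resolution
```

So dropping from 1080p to 720p cuts the lighting pass workload by more than half, which is why an average PC can claw back playable frame rates just by shrinking the display.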

Friday, May 24, 2013

Friday Map Editor Fun

Inside the Map Editor

A small triumph today as I manage to get the Instance Stamp system tentatively working inside the current FPS Creator Reloaded map editor.


What you are seeing is the system that only creates segment renders when the camera needs them, operating inside the map editor.  You can also see that there is no lighting being applied and that everything is using a single texture, but there are no nasty artifacts and you can paint segments as normal. You also cannot delete segments from the editor at the moment, though this functionality has now been added to the instance stamp system itself.

From this foundation I can start to craft in all the bits needed to get back to where the map editor was about six weeks ago.  It might seem like a giant step backwards, but given a few days the new system will be able to edit segments as before, except now those levels can be 500x20x500 in size and the memory footprint won't go above 1GB unless seriously taxed with polygons.

The Long Walk

Before turning my attention to editor integration I did a quick test and discovered it would take you just under 5 minutes to jog from one end of the new Reloaded level to the other, which is a long time when you consider that is a straight line along the shortest of the level edges.  Now consider how long it would take you to fill a space of 500x20x500 (or even 500x500) with meaningful, detailed content and you will realize that by coding the engine for the worst-case polygon scenario, actual usage patterns will hardly scratch the surface, protecting both memory and performance levels.
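As a sanity check on the "just under 5 minutes" figure: the post does not state the segment size or jog speed, so the numbers below are assumptions chosen purely to illustrate the scale of a 500-segment edge.

```python
# Rough back-of-envelope for jogging one edge of the level.
# SEGMENT_SIZE_M and JOG_SPEED_MPS are assumed values, not engine data.

SEGMENTS_PER_EDGE = 500
SEGMENT_SIZE_M = 1.5       # assumed size of one segment in metres
JOG_SPEED_MPS = 2.7        # assumed jogging speed in metres per second

edge_length_m = SEGMENTS_PER_EDGE * SEGMENT_SIZE_M   # 750 m
jog_time_s = edge_length_m / JOG_SPEED_MPS           # ~278 s, about 4.6 minutes
```

Any similar combination of segment size and speed lands in the same ballpark, which is the point: the playable area is enormous relative to what anyone will realistically fill.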

There are some unknowns still to be discovered, such as the extra memory footprint of the mip-mapped textures, the sound and music, the memory required to store and operate all the AI, the data banks that drive all the logic of a level, and who knows what else.

I don't think we're going to get infinite worlds here, but we should end up with a level too large to populate in a single sitting whilst remaining inside our maximum memory capacity of 1.8GB.

Light and AI

It's great to see comments and conversations spring up around these two issues, and as much as AI is an exciting feature to think about, the code will have to wait until we have cracked the fundamentals.

It looks like the voting scales are ever so slightly leaning towards the deferred approach to rendering, and I am happy to leave the subject bubbling a while longer until my brain leaps out of the bath and yells eureka.  Quite rightly, Mark warns of severe performance issues when rendering deferred, as you really do need to render the scene at least two times.  With the forward render layer trick for transparent and secondary shader elements, that count increases to three. When you add shadows through CSM, you are anywhere from four to six times. I don't need to tell you that all this spells massive slowdown if your graphics card does not have the video memory and GPU grunt to process it all. It could be that a quick and dirty prototype is the way to resolve this question, and then try it out on what we all agree would be the Reloaded minimum specification.

Signing Off

I am taking the weekend off to visit family and friends in my shiny new car (well, new to me) and probably going to have a beer or three before the weekend comes to a close. For those who asked, the car is a Honda Accord, from the manufacturer voted most reliable seven years on the trot. It's bank holiday Monday here in the UK I think, but I will be cracking on finishing off Instance Stamping in the editor, getting textures and shaders working there again and ensuring the same functionality exists as before. From there we can look at lighting prototypes and basic physics so we can run around the new instance-stamp universe and jump off crates!  If anyone can find some good deferred rendering demos that run on very basic PCs, send me the link so I can try them against my array of devices. See you Monday!

Thursday, May 23, 2013

Thursday Thmoothness

Happy Camper

A nice day indeed. I cleared my inbox in about 15 minutes and was able to dive right in today. Thanks to new improved LOW LOD meshes from Mark, I was able to get the prototype for Instance Stamping back to a nice smooth fluid 120fps experience, and also discovered the stutter was caused not by the iterations to process the vector transforms to get parent mesh data into the buffers, but by the constant creation and deletion of buffers that were found to be too small. By predicting the final size of the buffers and making them larger initially, such a release/recreate was avoided and everything smoothed out again. Hurray!  All these issues are pointers to watch out for when it comes time to integrate and refine within actual game and editing scenarios, but it was necessary to go through it.
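The fix described above can be sketched as follows (a Python illustration with hypothetical names, not the engine's buffer code): allocate with predicted headroom once, so typical growth never hits the expensive release/recreate path that caused the stutter.

```python
# Sketch of over-allocating a buffer up front so it rarely needs to be
# recreated. 'recreations' counts the expensive path we want to avoid.

class GrowOnceBuffer:
    def __init__(self, predicted_size, headroom=1.5):
        # allocate with headroom beyond the predicted final size
        self.capacity = int(predicted_size * headroom)
        self.data = [0] * self.capacity
        self.used = 0
        self.recreations = 0

    def append(self, items):
        if self.used + len(items) > self.capacity:
            # the stutter-inducing path: release and recreate larger
            self.capacity = (self.used + len(items)) * 2
            new_data = [0] * self.capacity
            new_data[:self.used] = self.data[:self.used]
            self.data = new_data
            self.recreations += 1
        self.data[self.used:self.used + len(items)] = items
        self.used += len(items)
```

The trade is a little wasted memory for a stable frame rate, which is exactly the bargain the prototype ended up making.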

Phone Interview

I also had the pleasure of being interviewed for a Case Study on the perils and potential of Perceptual Computing development. Had a great call and imparted lots of little coding gems, so hopefully you will get to read that soon. I will post the link when it goes live. It's being commissioned by Intel and professionally authored by people who do this for a living, so I am looking forward to reading it myself.

Surprise Email

As you may know Mark's primary mission in the project right now is the creation, rigging and animating of our central character for Reloaded. It's a lot of work, especially as our mandate is to push the envelope. An email dropped in this evening which got me quite excited and shows the potential of the AI system when I finally get around to coding it.



For some reason the Google Blogger is not showing me the YouTube videos associated with my account, so here is the link instead.  

As you can see, having these as part of the character behaviours will mean making a successful hit has become much more challenging, and if you don't know they are hiding, having them leap over an obstacle and start firing will come as a real shock.  Can't wait to get it into the action.

The Lighting Question

Thanks for the comments so far. I have done some brief research already and it looks like we don't need to choose between deferred and forward rendering as such. A technique called 'Cascade Shadow Mapping' will allow an entire scene to be shadowed from a single spot of light up in the sky. I say spot as that is essentially what the technique uses (a spot light), so it would not be ideal for a point light source in an interior scene, but perhaps baked lighting plus CSM for outdoor scenes would be the solution. The research and thoughts continue. I am curious what hardware my readers have in terms of graphics cards right now. A major factor in whether or not to 'defer' is how many users have the graphics cards required to run the technique. Deferred rendering relies on drawing a large portion of the scene three times over, as opposed to the forward renderer, which can do it in one.  For this you need monster graphics horsepower and plenty of video memory, so what do you guys have (and what does your end user have)?

Signing Off

The evening is not over for me as I want to put another hour into moving the Instance Stamp stuff over to the map editor to see where we are. It never pays to stray too far from your engine, and as much fun as prototypes are, you cannot sell them and they are by their very nature disposable. I might also spend an hour looking at the Bullet physics SDK to see what kind of demos and games have been created with the technology.

Capturing the beauty and wonder of the Galapagos on Google Maps

The Galapagos Islands are some of the most biologically unique ecosystems in the world. Explorers and scientists alike have long studied and marveled at these islands—made famous by Charles Darwin. The Ecuadorean Government, local conservation groups and scientists are working to protect the Galapagos from threats posed by invasive species, climate change and other human impacts.

It’s critical that we share images with the world of this place in order to continue to study and preserve the islands’ unique biodiversity. Today we’re honored to announce, in partnership with Charles Darwin Foundation (CDF) and the Galapagos National Parks Directorate (GNPD), that we’ve collected panoramic imagery of the islands with the Street View Trekker. These stunning images will be available on Google Maps later this year so people around the world can experience this remote archipelago.

Daniel Orellana of Charles Darwin Foundation crossing a field of ferns to reach Minas de Azufre (naturally-occurring sulfur mines) on the top of Sierra Negra, an active volcano on Isabela Island. The Google Maps team traveled for more than three hours, hiking and on horseback, to reach this remote location.

Images, like the one you see above, are also an important visual record that the CDF and GNPD will use to study and protect the islands by showing the world how these delicate environments have changed over time.

Daniel Orellana of the Charles Darwin Foundation climbs out of a lava tunnel where he was collecting imagery. The dramatic lava landscapes found on Isabela island tell the story of the formation of the Galapagos Islands.

Our 10-day adventure in the Galapagos was full of hiking, boating and diving around the islands (in hot and humid conditions) to capture 360-degree images of the unique wildlife and geological features of the islands with the Trekker. We captured imagery from 10 locations that were hand-selected by CDF and GNPD. We walked past giant tortoises and blue-footed boobies, navigated through steep trails and lava fields, and picked our way down the crater of an active volcano called Sierra Negra.


A Galapagos giant tortoise crawls along the path near Googler Karin Tuxen-Bettman while she collects imagery with the Street View Trekker in Galapaguera, a tortoise breeding center, which is managed by the Galapagos National Park Service.

Life underwater in the Galapagos is just as diverse as life on land. We knew our map of the islands wouldn’t be comprehensive without exploring the ocean that surrounds them. So for the second time we teamed up with the folks at the Catlin Seaview Survey to collect underwater panoramic imagery of areas being studied by CDF and GNPD. This imagery will be used by Catlin Seaview Survey to create a visual and scientific baseline record of the marine environment surrounding the islands, allowing for any future changes to be measured and evaluated by scientists around the world.

Christophe Bailhache navigates the SVII camera through a large group of Sea Lions at Champion Island in Galapagos. Image courtesy of the Catlin Seaview Survey.

We truly believe that in order to protect the Galapagos Islands, we must understand them. As they say, “a picture is worth a thousand words.” We hope this Street View imagery not only advances the important scientific research, but also inspires you to learn more about this special place. Stay tuned for updates on this collection—the first time we’ve captured imagery from both land and sea! We can’t wait to share this amazing imagery with you later this year.

Wednesday, May 22, 2013

Wednesday LOD and LIGHT

Plenty Code Crunched

A full day of coding, both last evening and this evening sees the low and high LOD system implemented. I also added in some test artwork from Mark for the segment visuals to see how the popping would look.


I have run into some logistical issues, however, as performance is very much tied to the size of a single segment, how many of those segments are in close proximity, and the cumulative processing required to fill the buffers as you traverse through the space. If I spread out the segments, all is smooth. When I bunch them up, I get stutter as I move through the level. Not ideal by any means, and something will have to give. Currently my high LOD test segment uses over 3,000 polygons, so one thing that might give is an upper limit on segment art.
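
One common cure for this kind of traversal stutter is to amortize the buffer fills: queue the pending segment builds and only spend a fixed time budget on them each frame, rather than building everything the moment it comes into range. The sketch below is a generic illustration of that idea; the queue and the notion of a "segment" are hypothetical, not the engine's actual API.

```python
# Amortized buffer filling: drain a build queue under a per-frame time
# budget so a cluster of nearby segments cannot stall a single frame.
import time
from collections import deque

class BuildQueue:
    def __init__(self, budget_ms=2.0):
        self.pending = deque()
        self.budget = budget_ms / 1000.0

    def push(self, segment):
        self.pending.append(segment)

    def pump(self, build_fn):
        """Build queued segments until the per-frame budget is spent."""
        start = time.perf_counter()
        built = 0
        while self.pending and time.perf_counter() - start < self.budget:
            build_fn(self.pending.popleft())
            built += 1
        return built
```

Called once per frame, this spreads the cost of bunched-up segments over several frames at the price of slightly later geometry arrival, which the LOD system can mask.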

Thursday will see these logistics put under scrutiny and some rules established so I can have 120fps smooth scrolling through the level with the LOD system in play.  I can report the 'LOD popping' is minimal with my test segment which is good, and I have also increased the size of the overall level from 200x20x200 to 500x20x500 as when I walked from one end to the other I felt I needed more distance, and it's only an extra 50MB (with further optimization in memory use possible).
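
A quick sanity check of the quoted numbers: growing the level grid from 200x20x200 to 500x20x500 for "only an extra 50MB" implies a per-cell storage cost of roughly a dozen bytes. The arithmetic below is just that back-of-envelope calculation, not anything taken from the engine source.

```python
# Implied per-cell memory cost of the level size increase.
old_cells = 200 * 20 * 200        # 800,000 cells
new_cells = 500 * 20 * 500        # 5,000,000 cells
extra = new_cells - old_cells     # 4,200,000 extra cells
bytes_per_cell = 50 * 1024 * 1024 / extra
print(f"~{bytes_per_cell:.1f} bytes per cell")
```

At around 12.5 bytes per cell there is clearly headroom for the further memory optimization mentioned above, for instance by packing flags or sharing data between empty cells.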

The Lighting Issue

A new situation has emerged that might affect deadlines, and just as the memory resource bar and massive levels were discussed in the blog, I felt it was a good idea to broach the new subject here as well.

I was about to add the real-time light mapping system to the instance stamp code I have been working on when I got an email from Mark suggesting that adding directional light to the level would not work in cases where a soldier or object was placed alongside a wall that should cast a shadow over the subject based on the angle of light.

For an outdoor scene, it is common in some FPS games for a single directional light representing the sun to allow all objects in the scene to cast a shadow. This shadow would render on the floor, walls, characters and objects of the scene. The common technique used is a variation of shadow mapping, which uses a second render from the light's position and some clever shaders to achieve the effect. It was suggested that before I embark on adding my real-time light mapper to the new static scene system, we look at the possibility of adding dynamic directional lighting to the scene, and perhaps even deferred rendering.
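
The core of that shadow mapping comparison is simple enough to show on paper: after the second render from the light's position has filled a depth map, each fragment's depth in light space is compared against the stored depth, with a small bias to avoid self-shadowing "acne". The toy function below illustrates just that comparison on the CPU with made-up depth values; the real test runs in a pixel shader.

```python
# Toy version of the shadow-map depth test: a fragment is in shadow if
# some other surface sits between it and the light, i.e. its light-space
# depth is greater than the depth recorded in the shadow map.

def in_shadow(fragment_depth, shadow_map_depth, bias=0.005):
    """True if the shadow map recorded a nearer occluder."""
    return fragment_depth - bias > shadow_map_depth

print(in_shadow(0.80, 0.50))  # occluder at 0.5 shadows a fragment at 0.8
print(in_shadow(0.50, 0.50))  # same surface stays lit, thanks to the bias
```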

It must be stressed that starting a series of prototypes to experiment with deferred rendering techniques, and implementing them to replace the reliance on baked light mapping, would add considerable time to the project deadline. Although I am familiar with the technique, I have not created one personally, and there are bound to be a hundred small details involved in creating a deferred rendering system for an entire game engine. It would also mean we basically throw away the real-time light mapping system developed so far, as a dynamic lighting system would not need secondary shadow baking and surface lighting.

Rick suggests we implement both: baked lights for interiors and dynamic lights for exteriors. This of course means all the work of a deferred renderer, plus implementing, perfecting and then combining the two techniques in the final engine, which produces the greatest amount of work.

One advantage of dropping baked light maps is that no memory would be required to store the UV data sets or the light map textures themselves. For a 500x20x500 level these would amount to quite a sea of light map textures, and would almost certainly demand some form of streaming system to bring the texture and UV data in as the level scene required it.
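
To see why streaming would be unavoidable, here is a back-of-envelope estimate. Every figure in it is an assumption for illustration only: say one 256x256 DXT1-compressed light map (half a byte per texel) per occupied segment, with 5% of the 500x20x500 grid occupied.

```python
# Rough light-map memory estimate for a large level (assumed figures).
texels = 256 * 256                      # one light map per segment
bytes_per_texel = 0.5                   # DXT1 compression
segments = 500 * 20 * 500               # full grid
occupied = int(segments * 0.05)         # assume 5% of cells hold geometry
total_mb = occupied * texels * bytes_per_texel / (1024 ** 2)
print(f"~{total_mb:,.0f} MB of light maps")
```

Even with those conservative assumptions the total runs to several gigabytes, far beyond what could be held in memory at once, so only the light maps near the player could ever be resident.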

Decisions Decisions

I am loath to drop code that I spent weeks theorizing, coding and perfecting, but at the end of the day it is about creating the best product, and if that means smashing the sculpture to pieces and starting again, then those are the hard knocks of game development.

I have decided to crack on with things that need coding and not debating such as getting the static LOD system smooth and predictable, adding it to the main map editor so I can edit with the new instance stamps, and then seeing what I can do about physics and player controls.

Physics and Player Controls

Just to highlight that game engine development is not all peaches and cream, another issue to tackle in my immediate future is the fact that as the geometry is constructed on the fly as you get near to it, so too must the physics geometry be.

In the current FPSC, all physics level geometry is loaded in at the start and stays solid until you go to the next level. Apart from placing pseudo-static boxes, I have never tried generating polygon data and adding it to a physics scene on the fly. My mind tells me such a thing could introduce instability and cause distant objects to start flying out of the scene, and at the very least it will add processing overhead as you advance to new parts of the level.
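
The shape of the prototype is fairly predictable: keep collision geometry registered only for chunks within some radius of the player, adding and removing static meshes from the physics scene as the player crosses chunk boundaries. The sketch below shows that bookkeeping in the abstract; the chunk keys and the add/remove callbacks are hypothetical stand-ins for whatever the physics SDK (Bullet or otherwise) actually requires.

```python
# Streaming physics chunks: keep only collision geometry near the
# player registered, diffing the wanted set against the active set.

class PhysicsStreamer:
    def __init__(self, radius=2):
        self.radius = radius
        self.active = set()   # chunk coords currently in the physics scene

    def update(self, player_chunk, add_fn, remove_fn):
        px, pz = player_chunk
        r = self.radius
        wanted = {(x, z)
                  for x in range(px - r, px + r + 1)
                  for z in range(pz - r, pz + r + 1)}
        for c in wanted - self.active:
            add_fn(c)        # e.g. build and insert a static mesh shape
        for c in self.active - wanted:
            remove_fn(c)     # drop far-away collision geometry
        self.active = wanted
```

The open question the prototype has to answer is whether removing and re-adding static shapes like this upsets resting dynamic bodies, which is exactly the instability worried about above.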

A prototype will need to be created to answer these questions before I can introduce physics to the map editor and allow the user to drop down and run around the instance stamp based level.

All these things can be worked on without answering the lighting question, so we have some time to debate the pros and cons of creating a whole new shader-based lighting system versus the traditional baked lighting system we have right now. It's worth noting that both COD MW2 and Black Ops are forward rendered using some baked light maps, and they look just fine. Deferred rendering you would see in games such as the latest Far Cry. I am not pinning my 'alternative flag' to deferred, simply opening up the possibility of developing a new lighting system that does not use baked lights and instead produces all its lighting from clever shaders and even cleverer techniques.

Signing Off

I am quite pleased with how the LOD work went, and with some tweaking of buffer sizes, asset sizes and load balancing so that not too much happens in the background, I should be onto the next thing soon. If anyone looks at these daily antics from outside the cult of coding, they should begin to realize just how fluid development can be. Were I to stick with a deadline and meet it, I would not be implementing instance stamps; I would probably already be working on physics and player control systems, running around a light-mapped level created under the umbrella and protection of a resource meter. The big question is whether the lighting question is one of those deal breakers, or whether we can stick with baked lighting and return to outdoor directional dynamic shadows and lighting in an update after Christmas. Feel free to post some comments on this!

“Coming Home” by Wisconsin student wins U.S. 2013 Doodle 4 Google competition

After 130,000 submissions and millions of votes cast, Sabrina Brady of Sparta, Wisc. has been named the 2013 U.S. Doodle 4 Google National Winner. Her doodle, “Coming Home,” will be featured on the Google homepage in the U.S. tomorrow, May 23.

Students across all 50 states amazed us with their creative interpretations of this year’s theme, “My Best Day Ever...” From scuba diving to dinosaurs to exploring outer space, we were wowed by the ways young artists brought their best days to life in their doodles.

Sabrina’s doodle stood out in the crowd; it tells the story of her reunion with her father as he returned from an 18-month deployment in Iraq. Her creative use of the Google letters to illustrate this heartfelt moment clearly resonated with voters across the country and all of us at Google.

In addition to seeing her artwork on the Google homepage, Sabrina—who is in 12th grade at Sparta High School—will receive a $30,000 college scholarship, a Chromebook computer and a $50,000 technology grant for her school. She will attend Minneapolis College of Art and Design this coming fall, where she will continue her artistic pursuits. Congratulations Sabrina!


In addition to the National Winner, voters across the country helped us determine the four National Finalists, who will each receive a $5,000 college scholarship:
  • Grades K-3: Reagan Gonsalves (Grade 1, Santan Elementary School, Chandler, Ariz.) for her doodle “My best day ever is learning about nature.” Reagan says, “My best day ever is to be around the pretty animals and plants in nature, because I love to know about what is around me. I love to watch hummingbirds drink nectar out of flowers. I love to read books on nature and how plants and animals grow.”
  • Grades 4-5: Audrey Zhang (Grade 4, Michael F. Stokes Elementary School, Levittown, N.Y.) for her doodle “...When I discover paradise!” Zhang says, “My best day ever will be when I discover paradise. In paradise, I could play with dragons, romp with leopards, and chat with fairies...It would be the best day ever when I could finally live in a mystical, dreamy realm.”
  • Grades 6-7: Maria Iannone (Grade 7, Chestnut Ridge Middle School, Sewell, N.J.) for her doodle “The best day ever.” Maria says, “Where I live, it's difficult to view the night sky very well. Having an interest in astronomy, a day where I can observe the things I study on my own time would satisfy me.”
  • Grades 8-9: Joseph Han (Grade 8, Falmouth Middle School, Falmouth, Maine) for his doodle “Late-afternoon bliss.” Joey says, “For me, ‘the best day ever’ doesn't consist of ambitious dreams, but rather the enjoyment of a day spent in carefree euphoria. Being in the woods is something that evokes such happiness in me. The lighthearted joy of rafting, fishing or catching fireflies is what I've attempted to capture.”

After the awards ceremony, all 50 of our State Winners will unveil a special exhibition of their artwork at the American Museum of Natural History in New York City, where their doodles will be displayed for the public to view from May 22 - July 14.

Thanks to all who voted and helped us select the 2013 Doodle 4 Google winners. Even more importantly, thank you to all of the students who submitted their artwork and the parents and teachers who continue to inspire and support their young artists. Until next year... happy doodling!

Top Charts in Google Trends—The most searched people, places and things

Ever wonder what the world is searching for? With Google Trends, you can see what's hot right now, and also explore the history and geography of a topic as it evolves. Today you'll find new charts of the most-searched people, places and things in more than 40 categories, from movies to sports teams to tourist attractions. You'll also find a new colorful visualization of real-time Hot Searches.

Top Charts—a new monthly "spirit of the times"
Top Charts are lists of real-world people, places and things ranked by search interest. They show information similar to our Year-End Zeitgeist, but updated monthly and going back to 2004. To check them out, go to Google Trends and click "Top Charts" on the left-hand side. For example, you can see the 10 most-searched cities, movies and scientists in April:

Top Charts includes more than 40 top 10 lists and more than 140 time periods. Hover on a chart for links to embed the chart in your own page or share on social media.

Top Charts is built on the Knowledge Graph, so the data shows interest in real-world things, not just keywords. When you look at a chart of sports teams and you see the Golden State Warriors, those rankings are based on many different related searches, like [gs warriors], [golden state bball] and [warriors basketball]. That way you see which topics are most popular on Google Search, however people search for them. Top Charts provide our most accurate search volume rankings, but no algorithm is perfect, so on rare occasion you may find anomalies in the data. You can learn more about Top Charts in our Help Center.

Hot Searches, now in hot colors
In addition to Top Charts, now there's a vibrant new way to visualize trending searches as they happen. On the Trends homepage in the left-hand panel, you'll find a new link to "Visualize Hot Searches in full-screen." You’ll see the latest trending topics appear in a colorful display:


You can customize the layout by clicking the icon in the upper-left corner and expanding it to see as many as 25 searches at a time. You can also pick any region currently supported by Hot Searches. Use fullscreen mode in your browser for the biggest, purest eye candy.

...and a few design updates
We’re also continuing to spruce up our site. Among other things, now the homepage shows you more interesting stuff up front, and the search box is always available at the top:

The new Trends homepage shows a list of today's Hot Searches. Enter search terms at the top to see search interest over time and by geography.

We hope you enjoy bringing new stories to life with Google Trends. We love feedback, so please feel free to let us know what you think by posting online or by clicking "Send Feedback" at the bottom of any page in Google Trends.