Friday, 30 August 2013

Friday Screenshot Finally

Ready To Let The Cat Loose

You've waited long enough for some visuals, so without further ado here are some shots for you.


Here we have three enemies, one of which I dispatched with my trusty sniper rifle, with the other two making their way to my position. I can make a beeline for the door, which would give me some concentrated fire, but I would also be trapped.  Not quite sure which assets in Reloaded will be sphere shaped, but the wireframe ball rolling through my battle will find its place soon enough.


A few seconds later, one of the enemies decided to rush me and start punching and kicking his way to victory.  Even though fire can now be exchanged both ways with working health, melee combat does not yet inflict any damage or make physical contact, so his efforts are in vain :)

Progress So Far

As you can see, I have integrated the characters and AI into the main terrain & physics demo and started populating it with some assets such as sandbags and buildings (well, one building to be exact).  I've given the characters a patrol route around the building, and I start some way off near the shore so I can sneak up on them and snipe them if I want.  The zoom feature is not in yet for the rifle, but that will come when I make a full integration of the latest weapon system from Classic, which is pretty good actually.

What's Next

The week is over and the weekend begins (when I next wake up). I have some more assets in the form of finished ladders, stairs and a door plus the need to connect it all together so I can place some enemies in the building and on the roof.  My hope is that the demo will be a sort of 'assault on the building' and make it quite hard for the player to win.  If I can create that, I will have put together the first game play glimpse for Reloaded.

Signing Off

As Rick logs in for the day, I log out for the day, so until Monday (or sooner if you are lucky) I will bid thee goodbye for now.  I hope you like what you see so far!

Thursday, 29 August 2013

Thursday Demo Time

Internal Demo That Is

I have been looking forward to creating the visual demo to show off the AI so far, and so today has been fun. Had to jump back to the core module about ten times to refine and clean more of the code and resource usages, but eventually the visits became less frequent and I could start playing inside the demo.

By demo, I mean a prototype with terrain, physics, the player's gun sticking out, assets scattered here and there, and the hero of the hour, our AI character. As I write this he is running about, doing his patrol, reloading his gun and using the newly associated physics capsule to control his progress through the scene. He runs over any hills, stops at obstructions, and of course the player cannot run through him and he cannot run through the player. It's all very civilized. That is not to say he is finished, not by a LONG chalk!  He acts rather scatty when standing still, he does not melee, he does not seem to see me half the time, his movement is generally erratic and he does not behave like he does in my AI prototype. Friday will be about calming him down and getting him looking as good in the demo as he does in the AI proto room.

The physics integration with the character went very smoothly, thanks to a bit of thinking before I started coding. I was originally going to attach the physics to the visual object and leave the AI entity object to act as the 'advance scout' reflecting the character's mind. On reflection, I realized I want his projected future position to have the physics awareness, so he does not project his final destination through recent rubble and other dynamic obstacles.  To do this, I simply decoupled the object from DarkAI, and then applied a velocity force to the newly created physics object using the AI entity coordinates. I grabbed a screenshot showing this trinity of 'entity position, physics object position and visual object position' but my PC ate it, so I am guessing fate wants you to wait for an even better shot.
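
For anyone curious how that decoupling might look, here is a minimal C++-style sketch with invented names (the real engine is DBP, so treat this purely as an illustration of the idea): the physics capsule is pushed toward the DarkAI entity position with a velocity, and the visual character follows the capsule's resolved position.

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float len(Vec3 v)          { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }
static Vec3  mul(Vec3 v, float s) { return { v.x*s, v.y*s, v.z*s }; }

// Push the physics capsule toward the DarkAI entity position with a velocity,
// so dynamic obstacles (rubble, the player) still stop the body even though
// the AI 'advance scout' has already moved past them.
Vec3 capsuleVelocityTowardAI(Vec3 aiEntityPos, Vec3 capsulePos, float maxSpeed)
{
    Vec3 toTarget = sub(aiEntityPos, capsulePos);
    float d = len(toTarget);
    if (d < 0.001f) return { 0.0f, 0.0f, 0.0f };   // already there
    float speed = (d > maxSpeed) ? maxSpeed : d;   // ease in as the capsule arrives
    return mul(toTarget, speed / d);
}

// Each frame: capsule->setLinearVelocity(capsuleVelocityTowardAI(aiPos, capsulePos, 6.0f));
// then the visual character is placed at the capsule's resolved position, never the raw AI point.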

I also spotted that my physics prototype and therefore most of the module was using hard coded resource values. I will need to deal with this as such code will totally trip up the final integration step.

The Next Bit

My main goal, aside from the bits and bobs above, is to add health, shooting and impact deaths to the demo prototype on Friday.  This way I can shoot the character, he can shoot me and we can have a good combat simulation.  Before I moved over to demo work, and as part of the clean-up of the code, I added two more characters to the AI proto and it was great to see them in action. Even though I had not coded it explicitly, the AI entities would flank me; some would fire, some would run up to me and kick me, others would retreat to reload, and as few as three characters kept my player pretty busy almost all of the time.  I have a good feeling that players of Reloaded games will get some decent high-octane game-play.

Signing Off

My artist has been plagued with power cuts and it's possible I won't get my combat building before the demo day which is Monday.  The good news for all concerned is that with some liberally scattered sand bags, more characters roaming the hills and a functioning player weapon, I should be able to demonstrate a variety of AI behavior in the actual terrain setting which was my goal three weeks ago.

Three weeks seem to have come and gone!  The more I build feature upon feature, and start to see the combined results, the more I am convinced I have no earthly business writing such an ambitious piece of software.  This is no longer an augmentation of an existing product, it's a bottom-up rewrite of almost every component, and considering the last one took almost two years I am amazed I have gotten this far.  Fear not readers, I am not feeling the strain or losing faith, I am simply gobsmacked that me and a few guys have cobbled this together in a relatively short time and it's starting to look seriously good.  More than that, the potential of this product is insane, and I would not be surprised if Lee splits in two at some future point just to keep up with the ideas that need coding.

I will see if I can prepare a nice screen shot or video for you Friday (probably posted Saturday early hours) as we're in demo country now and the visuals are much much nicer!

Wednesday, 28 August 2013

Wednesday Progresso

Pretty Good Day

Only about eight hours today but I got some nice things done. In a nutshell, I decided to add the ladder animations to the state engine database, which is now part of the main animation system, and then integrated it into the prototype, replacing the hacked version. It works much better and I am confident about its use in the wider scenarios too, so I am very happy to have put that to bed.

Our enemy can't hide forever...the little scamp!

I also added melee combat to the system so that if the player is within 150 units of the enemy, they will charge at you at high speed and swing for you! If they are REALLY close or you are ducking, they will give you a nice boot kick to let you know you're in danger.  Adding this relatively small function improved the combat system by 200%; it was so freaky cool to see!
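
For illustration, that close-range rule boils down to something like the sketch below; the 150-unit charge range is from the text above, while the kick distance and the names are my own placeholders.

enum class MeleeAction { None, ChargeAndSwing, Kick };

// The 150-unit charge range is from the post; the kick range is assumed.
MeleeAction chooseMelee(float distanceToPlayer, bool playerIsDucking)
{
    const float meleeRange = 150.0f;
    const float kickRange  = 40.0f;   // assumed 'REALLY close'
    if (distanceToPlayer > meleeRange) return MeleeAction::None;
    if (distanceToPlayer < kickRange || playerIsDucking) return MeleeAction::Kick;
    return MeleeAction::ChargeAndSwing;
}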

The final task was to clean up the whole module and get it so no rogue globals were being used. This was perilous work, given that I could break the code fifty different ways from Sunday, but with careful coding and constant testing it went well.

The absolutely final thing I did before this blog was to separate out the assets used in the prototype into what I call the PrototypeArt folder, which is a common folder in the main part of the software. I then recreated the AI prototype as a project within the main engine and transferred all the data structures to the main type file.  What all that means is that come Thursday I will be able to integrate the AI components into the current terrain + physics prototype (or create a new terrain + physics + AI prototype just in case I want to work on physics without loading in all the AI stuff).  There are some extra things I want to add to the AI system like spot effects (i.e. a character running from A to B while strafe-firing at the player's position, etc.) but I can add these once the main visual demo is up and running, and the lightweight AI prototype remains alongside for fine-grain AI work.

Signing Off

Mark is plumbing away on a new weapon and the actual 'combat super building' that will feature in my AI demo, and hopefully I can reveal some sneak peek shots of that art when I get it.  The plan is to move my placeholder art into the AI demo and then replace it all with final artwork as it comes in.  Fortunately I already have finished sand bags, boxes, concrete barriers and characters so we should have a mini-visual fest in a few days time.  The meeting and the deadline for the AI demo is Monday, and I plan to work the weekend to get as much done as possible so there might be a rare weekend blog if the visuals start coming together at that point.

Tuesday, 27 August 2013

Tuesday Tribulations

Every Flavor Bean

A common problem when writing AI is the increasing sophistication of the logic as you compound conditions on top of each other. Should a character duck before reloading, when should they run for cover and reload, do they have the time to reload while stood up without being shot, what if the player springs on them mid-reload?  As you can imagine there are multiple permutations for every single animation event the character can perform.

As such, my progress goes much slower than I would have liked, probably because I like to tinker and get things 'just so'. I am realizing all too clearly that such an attitude will keep me in AI land for months if I let it.

Story So Far

The current AI prototype is looking good now, with the character patrolling and running, ducking, running while ducking, strafing when stood, tracking down the player, finding cover before reloading and most recently, climbing the staircase up and down using proper animation which finds the steps perfectly. The animation needs improving but the mechanism has been added to the state engine to get the character to the foot of the stairs, play the animation, adjust the world position of the character and resume any previous state the character was in.

I can use this same mechanism for the ladders, and intend to do so to remove the last of the hack code present in the prototype.  I also plan to accelerate adding features for the character, and fiddle less, so I can get some serious progress made.

Right now the character can put up a decent fight, hiding behind cover, retreating to reload, and finding new paths to the player (thanks to a strange Search Area bug), which means that, combined with the character running event, your foe can run at you from seemingly random directions and spray you with bullets.

I am not 100% happy with the character so far, not least because you can still catch him doing silly things like reloading while a bit of him is still visible round the corner, or standing point blank in front of me shooting with very little strafing. I am considering adding a melee attack at this close range as it might solve the problem and improve the dynamic all in one go.  Fighting at more distant ranges with cover objects works very nicely.

Signing Off

I am not too miffed about today's progress despite wanting to have done more. It's still over 10 hours of graft and it moved the prototype forward in valuable ways.  I do feel I need to press down harder on the gas now, as we still need to see character physics added, player weapon functionality, health logic, grenades, doors, etc. All this will be needed for an 'ace' demo!

Monday, 26 August 2013

Monday Modules And Occlusions

Bank Holiday For Some

While some UK residents can enjoy a relaxed sunny weekend extending leisurely into an extra day thanks to the British Bank Holiday, some unfortunates must sacrifice all this in the pursuit of code.  Ladies and gentlemen, I must confess to being one of those sad creatures and reveal my meager offerings through this blog.

The Weekend

I had a mission this weekend which was to save TGC some cash and figure out if I could implement an occlusion system for Reloaded in a timely fashion. It was also an opportunity to study some middle-ware options to see if it made more sense to buy in a solution for quick integration.

After a few hours testing some third party solutions, it was apparent that I still had a substantial amount of work to do around such modules to complete the integration into the engine.  On closer investigation, segment and entity shapes, the vast amount of objects in the level and lack of fine-grain control all conspired to make the job of using a third party solution problematic.

I then spent some time researching the latest techniques on occlusion, and found that I had read most of the articles before when I explored occlusion a few months ago.  One thing that did leap out afresh was the idea of using the GPU to help me hide occluded objects in the scene, and one article in particular inspired me to run an experiment in this regard.

Some of Saturday and all of Sunday was spent creating this prototype, and to make sure I got the speed, I integrated directly into the DBP object engine. The essential technique is to count how many pixels are actually drawn to the screen for each object draw call. This measurement is essentially free and is available on any graphics card supporting DX9 and above; it's called an Occlusion Query. Once I knew an object no longer drew any screen pixels, I could flag it as hidden and no longer need to render it.
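
If you want to experiment with the same idea yourself, the raw DirectX 9 calls look roughly like this. A minimal sketch only, with no error handling; drawObject() is a stand-in for whatever issues the object's draw call, and a real engine would issue many queries and read the results a frame later instead of stalling as this does.

#include <d3d9.h>

// Count how many pixels an object's draw call actually put on screen.
// Returns 0 when the object is fully occluded.
DWORD pixelsDrawn(IDirect3DDevice9* device, void (*drawObject)())
{
    IDirect3DQuery9* query = nullptr;
    if (FAILED(device->CreateQuery(D3DQUERYTYPE_OCCLUSION, &query)))
        return 0xFFFFFFFF;                      // queries unsupported: treat as visible

    query->Issue(D3DISSUE_BEGIN);
    drawObject();                               // the normal (or depth-only) draw call
    query->Issue(D3DISSUE_END);

    DWORD pixels = 0;
    while (query->GetData(&pixels, sizeof(pixels), D3DGETDATA_FLUSH) == S_FALSE)
        ;                                       // busy-wait for the GPU (a real engine would not)
    query->Release();
    return pixels;                              // 0 pixels drawn == flag the object as hidden
}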

To make sure that hidden objects are revealed again at the right time, for any hidden object in consideration a very quick render using depth test only is performed to check if the object is suddenly visible again, and if so, the visibility state is restored. The downside to this singular technique is that your engine is making as many draw calls as if every object was rendered; even with the faster-than-light draw call, it's still not game-engine ready. The icing on the cake, and the technique I will be implementing after the AI is done, is to group the objects into a spatial hierarchy that has object sorting built in.  The idea is that you can make a single draw call for the purpose of an occlusion test, and if it fails (i.e. no pixels drawn) then ALL the objects covered by that test are instantly dismissed from further tests. Imagine a building with four floors, where each floor has five rooms and each room has thirty objects. When you stand in front of a wall, hiding the building, you have only one occlusion test draw call to make and if occluded, you can skip the 600 occlusion tests that would otherwise have been necessary, and of course you skip drawing 600 fully detailed objects into the scene thanks to the overall occlusion system.
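
A rough sketch of that grouping idea follows (all names are assumed, and the bounding-volume query is left as a stand-in): one cheap test per node, and a failed test dismisses everything underneath it, so the four-floor building above costs a single draw call when its wall hides it.

#include <vector>

// A node is a building, a floor, a room or a single object.
struct OcclusionNode {
    std::vector<OcclusionNode> children;   // floors within a building, rooms within a floor...
    std::vector<int> objectIds;            // leaf objects owned directly by this node
};

// boundsQueryPixels() is an assumed helper that renders the node's bounding box
// with an occlusion query (no colour writes) and returns the pixel count.
void collectVisible(const OcclusionNode& node,
                    unsigned (*boundsQueryPixels)(const OcclusionNode&),
                    std::vector<int>& visible)
{
    if (boundsQueryPixels(node) == 0)
        return;                            // one failed test dismisses the whole subtree
    for (int id : node.objectIds) visible.push_back(id);
    for (const OcclusionNode& child : node.children)
        collectVisible(child, boundsQueryPixels, visible);
}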

I also have a neat idea that instead of constantly sorting the objects into front-to-back order as you move around the scene, the hierarchy structure stores objects in some sort of spatial list (a multidimensional array grid) and the draw system then traverses the spatial list from whichever direction you are looking.  Think of it as a series of connected boxes, and you start to pour water into any box on the outer edge. Eventually the water spills over to the neighboring boxes and so on until all boxes are filled. Now imagine a super fast traversal system which mimics that flow of water, ticking off each box as it is encountered and drawing the objects inside that box.  My idea is not solid yet as I need to make some prototypes, and it might turn out a simple distance-based bubble sort would work just as well, but the more I think about 17,000 objects floating about the level, the more I want to keep them in a static list somewhere and only 'touch' the objects that are of immediate real-time relevance.  Sometimes a sorting algorithm can be slower than just rendering all the objects, and I want to make sure that's not the case for Reloaded.  Another benefit of front-to-back sorting of screen art is the massive reduction in overdraw, and when you are using intense shaders, the performance gain from this technique is substantial!
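
Since the idea is admittedly not solid yet, here is only a hedged sketch of the 'water filling the boxes' traversal: a breadth-first walk over a grid of cells starting at the camera's cell, which visits cells in roughly near-to-far order without ever sorting individual objects. Everything here is assumed structure, not engine code.

#include <queue>
#include <utility>
#include <vector>

// Cells form a flat W x H grid, each holding the ids of the objects inside it.
struct Grid {
    int w, h;
    std::vector<std::vector<int>> cellObjects;   // w*h entries
};

void drawNearToFar(const Grid& g, int camX, int camZ, void (*drawObject)(int))
{
    std::vector<bool> seen(g.w * g.h, false);
    std::queue<std::pair<int,int>> open;
    open.push({camX, camZ});
    seen[camZ * g.w + camX] = true;

    while (!open.empty()) {
        auto [x, z] = open.front(); open.pop();
        for (int id : g.cellObjects[z * g.w + x])
            drawObject(id);                      // cells reached earlier are nearer: less overdraw
        const int dx[] = { 1, -1, 0, 0 }, dz[] = { 0, 0, 1, -1 };
        for (int i = 0; i < 4; ++i) {            // the 'water' spills into neighbouring boxes
            int nx = x + dx[i], nz = z + dz[i];
            if (nx < 0 || nz < 0 || nx >= g.w || nz >= g.h || seen[nz * g.w + nx]) continue;
            seen[nz * g.w + nx] = true;
            open.push({nx, nz});
        }
    }
}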

The Extra Day

I was tempted to finish off the Occlusion stuff described above, but I have set myself a mission to create an 'ace AI demo' for a meeting I am having come September. For this we need smooth character animations, clever AI bot, player weapon functionality and a good looking scene to run around in.  I only have five days, which in real terms is about three days given the nature of development to constantly throw rocks in your path.

To that end, I have finished adding the staircase traversal state to the state engine database and then cleaned up all the source code and made it a separate module for later integration.

The next step (to be done this evening) is to clean up the AI bot from the main AI prototype ready for the above state engine to replace the hacked in animation code currently there. One dilemma is that the hard fought ladder code which is pretty much a hack ought to be part of the state engine like the stair climb, but the cleanliness of the code must come secondary to making a nice looking demo for the meeting.  To that end, I plan to implement the staircase code using the proper state engine but leave the ladder hack in for the demo, and replace it afterwards. It also turns out there is a scale issue when mapping positional information from the animation to the real world, and the staircase is a much simpler animation to work with than the ladder animation. Once the problem is solved, the ladder should be integrated relatively quickly into the 'fully tested and integrated' state engine.

Mark sent me a substantial and comprehensive document on all the new weapon features introduced by the AirMod enhancement to FPSC classic. Given the power of this part of the original engine, my plan is to create a module from it as I integrate the new weapons for Reloaded.  This will be one of the module tasks for this week, as the player will need to run around and fire back at the enemies that will populate the scene.  I cannot promise the weapon system will remain untouched but if it makes sense and blends well with the improvements, it will stay in.  Naturally, some of the features of the weapon system will not feature in the set of weapons we will start with for Reloaded, but the most I would do is comment out the code, to be re-activated when new Reloaded weapons that exploit those unique characteristics emerge.

Signing Off

For now, I have to get something to eat before my head falls off, and then the work of adding the new state engine database system to the AI prototype begins in earnest.  A week might seem a long time, but with the inevitable email distractions and daily development problems, it will fly by.

Joining a moment in history through the modern web

Nearly 50 years ago, Dr. Martin Luther King Jr. delivered a stirring speech on the steps of the Lincoln Memorial with the words “I have a dream.” Today, we’re sharing a new way to take part in this historic moment through a web experience developed by our friends at Organic and Unit9 for the National Park Foundation.

Called “March on Washington,” the experience invites you to relive that moment in time by listening to an original recording of Dr. King’s words accompanied by immersive photography from the event itself.
One of the most powerful abilities of the web is that it connects people from all over the world in new ways. In “March on Washington,” you can also virtually join this historic event by recording yourself reciting Dr. King’s words. Then, you can play back other participants’ recordings as a crowd-sourced narrative of voices, hearing the timeless message repeated back from people all over the world.

We’re excited to see the modern web enable experiences like “March on Washington” that bring together people and history in new, powerful ways. Head over to wearestillmarching.com on a laptop, phone or tablet to check it out.



(Cross-posted from the Chrome blog)

Friday, 23 August 2013

Friday Animations Galore

Less Blog More Code

I had planned a two-target strategy today: cracking the rest of the animations for the state engine and sorting out some occlusion experiments, as I need to make a decision on how we handle occlusion in the engine.

Our character volunteered to do the occlusion but he kinda missed the point...

I spent so much of Friday on animations, I simply ran out of time on the occlusion stuff :(  The good news is that aside from stairs and ladders, ALL the animations in the character are now assigned a place in the massive state engine for behaviors, which means everything from sitting in a chair all the way through to combat, crouch and swimming. Rather than wax lyrical, here is the finished set of animations assigned to their main states.

Crouch:
27=PISTOL-STAND-TO-CROUCH:1630:1663
28=PISTOL-CROUCH-IDLE1:1670:1819
32=PISTOL-CROUCH-STRAFE-FB:2075:2131
31=PISTOL-CROUCH-STRAFE-LR:2015:2072
29=PISTOL-CROUCH-IDLE2:1825:1914
30=PISTOL-CROUCH-RELOAD:1920:2010
33=PISTOL-CROUCH-RUN:2135:2153
47=PISTOL_CROUCH-WAVEFORWARD:2460:2510
48=PISTOL_CROUCH-GRENADETOSS:2520:2555
34=PISTOL_ROLL-TO-CROUCH:2160:2216
35=PISTOL-ROLL-TO-STAND:2225:2281
36=PISTOL-VAULT:2281:2334
62=IMPACT-CROUCH-FORWARD:5240:5277
63=IMPACT-CROUCH-BACK:5290:5339
64=IMPACT-CROUCH-RIGHT:5350:5395
65=IMPACT-CROUCH-LEFT:5409:5466

Patrol:
21=PISTOL_WAYPOINT_RELAXED:1290:1320
19=PISTOL_IDLE_RELAXED1:900:999
20=PISTOL_IDLE_RELAXED2:1000:1282

Combat:
3=PISTOL_IDLE_READY_1:100:205
4=PISTOL_IDLE_READY_2:210:318
5=PISTOL_IDLE_READY_3:325:431
6=PISTOL_IDLE_READY_4:440:511
22=PISTOL_WAYPOINT_VIGILANT:1325:1419
10=PISTOL_WALK_FORWARD:685:707
11=PISTOL_WALK_BACK:710:735
12=PISTOL_WALK_LEFT:740:762
13=PISTOL_WALK_RIGHT:765:789

Stood:
37=PISTOL-MELEE:2340:2382
66=DOOR-BREACH:5511:5553
23=PISTOL_STARTLED:1425:1465
7=PISTOL_RELOAD:515:605
24=PISTOL_WAVE-FORWARD:1470:1520
38=GEN_GRENADETOSS:2390:2444
14=PISTOL_RUN_FORWARD:795:811
15=PISTOL_RUN_LEFT:815:830
16=PISTOL_RUN_RIGHT:835:850
8=PISTOL_SIDESTEP_LEFT:610:640
9=PISTOL_SIDESTEP_RIGHT:645:676
17=PISTOL_STRAFE_LEFT:855:871
18=PISTOL_STRAFE_RIGHT:875:892

Unarmed:
39=GEN_IDLE1:3000:3100
40=GEN_IDLE_CONVERSATION:3110:3420
52=GEN_CONVERSATION_EXPLAIN:4260:4464
53=POINT_FORWARD:4470:4535
54=POINT_RIGHT:4540:4605
55=POINT_LEFT:4610:4675
56=POINT_BACK:4680:4745
41=GEN_IDLE2:3430:3697
43=GEN_WALK:3870:3900
44=GEN_RUN:3905:3925

Chair:
42=GEN_CHAIR_SITTING:3710:3862

Swim:
45=GEN_TREADSWIM:3930:4015
49=GEN_SWIM:4030:4072

Death:
57=GENERIC_DEATH:4800:4958
58=IMPACT-FORWARD:4971:5021
59=IMPACT-BACKWARD:5031:5090
60=IMPACT-RIGHT:5101:5160
61=IMPACT-LEFT:5171:5229

In Cover:
25=PISTOL_PEEK_RIGHT:1525:1573
26=PISTOL_PEEK_LEFT:1580:1623
50=PEEK_CORNER_GRENADE_RIGHT:2570:2668
51=PEEK_CORNER_GRENADE_LEFT:2680:2778

So these are the animations so far. In coding them up, I found a few missing ones like idles for cover positions and a few loops that don't mesh perfectly, but these can be caught in the fine-tuning stage.  For now, I am happy that our character is animated and ready for the gameplay ;)
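
For reference, the entries above appear to follow the pattern '<state id>=<NAME>:<first frame>:<last frame>'; a tiny parser for that assumed format might look like this.

#include <cstdio>
#include <string>

struct AnimEntry { int id; std::string name; int firstFrame; int lastFrame; };

// Parses one line such as "27=PISTOL-STAND-TO-CROUCH:1630:1663".
bool parseAnimEntry(const std::string& line, AnimEntry& out)
{
    char name[128] = { 0 };
    if (std::sscanf(line.c_str(), "%d=%127[^:]:%d:%d",
                    &out.id, name, &out.firstFrame, &out.lastFrame) != 4)
        return false;
    out.name = name;
    return true;
}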

Signing Off

The weekend looms, and I plan to continue coding through to get a little more done before Monday.  Now the character is linked to his state graph, it should be a very smooth process to link him to the AI bots from DarkAI and start to see some seriously realistic reactions as the guy runs around the scene.

I am also tempted to rip out the entire weapon system from FPSC Classic and make it a separate module so I can clean it up and start to improve it. Classic had some seriously cool mods added to the weapon systems, thanks largely to Errant AI and the FPSC community modders who demanded a vast array of features to complement the new weapons from later packs.  So many, in fact, that I seriously lost track of the full capabilities of the thing and now I need to re-acquaint myself.  Stuff like dual-wield, multiple weapon modes, ammo switching, iron-sights and who knows what else are buried in there, and I need to drag it all out into the light and formalize it.  We only have about eight weapons for the first release, but they will be bristling with features from the get go, using all the fancy stuff we have to date.  I don't just want Reloaded to look good, but to provide the level of flexibility that creates awesome game play, and the attributes of the player's arsenal will be key to this goal.

Tenth annual Code Jam competition wraps in London

Scientists at Google Labs have discovered it's not easy to be a great rock drummer. Your task is to devise an algorithm to audition drummers based on the imperfections of their rhythms.

This is the type of challenge that was posed to the finalists of our 10th annual Code Jam last week. Twenty-four of the world’s best coders were flown to London to compete for the title of Code Jam Champion 2013 and the accompanying $15,000.

Belarus's Ivan Miatselski, a.k.a. “mystic,” was crowned Champion, besting more than 45,000 other registrants and making it through four online qualifying rounds before winning the in-person finals. The final round was a four-hour, five-problem tour de code. Throughout Code Jam, contestants are allowed to use any programming language and any development environment they like, ranging from C++ to Taxi. And while the finalists are among the most talented, we’ve yet to see anyone earn a perfect score during the final round.

Successfully solving the problems involves more than just writing code: contestants in this year's finals needed mastery of computational geometry, dynamic programming, numerical algorithms and more. Though the competition is fierce, Code Jam is also a growing community of skilled problem-solvers, engineers and friends from all over the world. Code Jam will be back in 2014—join our community on Google+ and look for more news to come.


Thursday, 22 August 2013

Thursday Sounding

A Sound Blog

A bit of a mixed bag today: with a slight attack of emails and a certain amount of distraction, I managed to create a few things before my Thursday was exhausted.  Watch out for two new model packs to hit the TGC store, which brings our grand total to 76!  Now that's a lot of model packs!!

Sound Test Alpha

As the AI work continued, there was also the increasing importance of sound files, which will form a key part of the Reloaded revamp and have to be right. Before today we relied on mapping sound files over hastily produced videos to get an impression of the final product, but it really wasn't the same as playing the game and getting a feel.  The only true way was to create a prototype which could trigger off the actual weapon animations and stage the sound effects at the appropriate frame, to see what the actual in-game audio would sound like.

It only took a few hours but the result was pretty useful and I was able to compare the new sounds being produced side by side with the classic sounds from the original product and the sounds we inherited from continual modding efforts since the last classic update.

I must say the new 'fire' sound is first rate and really transforms the audio quality of the new product, and I have sent the sound test prototype to my fellow team members to get their reaction on where we are. I found a combination of classic sounds, new sounds and even newer unheard sounds worked a treat, though the goal is to entirely replace the classic sound bank so we're not quite there yet.  The strategy is to work on one weapon until it's triple-A, and once we're convinced it's unbeatable, we'll move on. The weapon we're testing against, you might have guessed, is the Remington Rifle, which packs a serious single-shot punch, especially when your target is appraised through the extremely handy zoom scope!  Other sound effects include putting the weapon away, retrieving it, zooming in and out, reloading and finally firing. I am undecided whether to add additional features such as splitting the reload into open, bullet-load and close, clip in, clip out, melee butt hit, dry fire and a few other ideas I had.  The current consensus is that these are surplus to requirements and a basic set of weapon sounds will suffice for V1, which is probably true enough.
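
The frame-staging idea can be pictured as a simple table of frame-to-sound cues checked as the animation plays; this is only a guess at the shape of it, with made-up frame numbers and file names.

#include <map>
#include <string>

// Sound cues staged against animation frames: frame number -> sound file.
using CueTable = std::map<int, std::string>;

// Fire every cue whose frame falls in (prevFrame, curFrame] as playback advances.
void fireCues(const CueTable& cues, int prevFrame, int curFrame,
              void (*playSound)(const std::string&))
{
    for (auto it = cues.upper_bound(prevFrame); it != cues.end() && it->first <= curFrame; ++it)
        playSound(it->second);
}

// Example table with made-up frames and file names:
// CueTable rifle = { {10, "bolt_back.wav"}, {24, "bolt_forward.wav"}, {30, "fire.wav"} };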

Signing Off

Not much AI happened today, which is a pity as I was also planning to look more closely at occlusion, an issue that needs attention before too long. LOD rendering will help, but when you have an entire universe to hide, you really need top-drawer occlusion technology to bring back your engine performance!

I'm happy to tick off one more prototype for today, but finishing the AI combat and solving the occlusion question will be a good result for Monday.  In the meantime, adding the rest of the character animation actions and grafting them into the main AI proto will be a champion ambition for Friday!

Wednesday, 21 August 2013

Wednesday State Power

I Made The Right Decision

Remember that fateful decision to hack in the ladders, and then my second decision to write an editor for the crouch animations? My day today was to finish that and add more animations to it from the main AI prototype. Well, it turns out the 'other' animations required were equally 'interesting', and the editor plus state engine sub-system thingies worked amazingly well, and probably saved me hours of frustration too!

Character Enjoying A Quick Roll

My prototype can make the character relax, stand at arms, crouch, move in different directions, roll to a crouch, roll back to his feet, leap over invisible sandbags, fidget in stood and crouched positions and lots more things I have forgotten for now. The bonus is that it all does this with perfect transitions from state to state, which means no glaring switches of animation ruining the smooth flow of the characters through the game.

Alas, I got carried away and it's now 4AM (again), but I found time to clean up the routines part way through my additions, which means it's now a separate module that can drop into the AI proto; that will happen tomorrow. This system will replace the hacked animation code currently in there, minus the ladders, which I dare not touch lest they suck me into four more days :)  I have pretty much decided to use my new sub-system on ladders and stairs as well, but I want to finish animated combat to get back on schedule somewhat.

The Trinity Of AI

As I walked the dog earlier today I got to thinking about blogging the nuance of the Reloaded AI system, and the trinity which will make it all realistic. The first level is Paul's AI bots, which in their robotic style move to locations, rotate to and track down targets, work out cover positions and firing patterns. Imagine this as the 'brain' of your enemy, thinking where they should go and what they should shoot at.

The second level is the new animation sub-system which handles how the 'real character' animates and moves, creating smooth transitions between each of these states.  The states are of course provided by the AI bot, but thanks to the smoothing techniques of the sub-system the 'real' object trails behind the AI bot by a fraction. This gives the impression the enemy knows what he is 'going' to do, and subtle visual clues are evident in the sub-system transitions (such as the head-turn you saw in a previous video).
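
A minimal sketch of that 'trails behind by a fraction' behaviour, assuming simple exponential smoothing; the real sub-system is more involved and the constants here are guesses.

#include <cmath>

struct Pose { float x, z, headingDeg; };

// Ease the visual character toward the AI entity each frame; the lag constants
// control how far behind he trails (higher = snappier).
Pose followAIEntity(Pose visual, Pose aiEntity, float dt)
{
    float posBlend  = 1.0f - std::exp(-6.0f * dt);
    float turnBlend = 1.0f - std::exp(-4.0f * dt);
    visual.x += (aiEntity.x - visual.x) * posBlend;
    visual.z += (aiEntity.z - visual.z) * posBlend;
    float dh = aiEntity.headingDeg - visual.headingDeg;   // take the shortest way round
    while (dh >  180.0f) dh -= 360.0f;
    while (dh < -180.0f) dh += 360.0f;
    visual.headingDeg += dh * turnBlend;
    // The head limb can be aimed at the raw AI position every frame while the body
    // uses the smoothed heading, which is what produces the 'looking ahead' effect.
    return visual;
}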

The third and final level is the wonderful details in the animation itself, thanks to Mark. Each animation brings out human traits like swatting a fly away from a face, looking around cautiously and shifting his weight when standing still for too long.

Combined together, they form a potent enemy, that would look real, move around the scene in a realistic manner and appear to think for real too.  You could say it's a perfect solution for our REAL-OADED product! [shudder -Ed]

Signing Off

Any video today would be a bit random and drawn out as there are a LOT of animations coded now but it really would not reveal any more of the tech background than you have already seen. I think the next good video is when all these animation states and transitions are tied to the AI bot and demonstrating a semblance of combat.

HandsOn Tech: Helping nonprofits fight poverty and strengthen communities with technology

With the right technology in place, nonprofits can do a lot to strengthen their communities and solve the world’s problems. They can help improve family literacy, support veterans as they transition out of homelessness, or help communities make information more accessible. Unfortunately, nonprofits often lack the resources to fully take advantage of technology to further their causes.

Two years ago we partnered with Points of Light to address this problem. With the support of the Corporation for National and Community Service, we created HandsOn Tech, an initiative with a mission to develop the nonprofit sector’s ability to use technology to increase community resources and improve outcomes for low-income communities and families.

Fueled by the passion and skills of AmeriCorps VISTA members, HandsOn Tech pairs U.S. nonprofits with individuals who are passionate about technology and looking to make a difference. Through this program VISTA members and skilled volunteers (including many Googlers) deliver training to poverty-focused nonprofits and work one-on-one with organizations to create individualized, comprehensive tech plans to help them achieve their social missions more efficiently and effectively.
Stephenie Lai, a VISTA with HandsOn Tech Atlanta, looks on as skills-based volunteer Jon Whitaker demonstrates the new mobile version of the Cherokee Family Violence Center’s website to Meg Rogers, Executive Director.

In the first two years of the program, VISTA members and skills-based volunteers have trained more than 7,700 nonprofit organizations in communities across the country. For example, VISTAs from a HandsOn Tech program in Pittsburgh worked with Naomi’s Place (an organization that provides transitional housing and supportive services to homeless women) to enhance their website, set up Google Apps, and develop a new Facebook page. The program also helps VISTA members and volunteers learn new skills. Alex Green, who served as a VISTA in Seattle, Wash., told us: "Apart from building awareness and ability in the community in terms of IT needs, I have also had the opportunity to grow myself as a person in all aspects of my life."
HandsOn Tech Silicon Valley recruited skill-based volunteer Elyse Tager from Constant Contact to lead a training on how nonprofits can use social media to communicate their mission.

HandsOn Tech begins its third year this fall with programs in Atlanta, Boston, Chicago, Pittsburgh, San Jose, Seattle, Denver, Los Angeles and New York City. Prior to beginning their service, VISTA members come together for a week of training at our campus in Mountain View, Calif., where they learn about cloud-based tools from a variety of technology companies including the Google for Nonprofits suite that provides nonprofits with access to premium versions of Google Apps, Google Earth and YouTube channels as well as free online advertising.

A few openings still exist for VISTA members to begin serving in November in select cities. If you’re passionate about technology and helping nonprofits fight poverty, you can apply to join our team by August 31.

Congratulations to America’s eCities

I love to discover local gems that help make a community unique. Sometimes those gems can be around the corner—like Cafe Borrone, my favorite for Sunday brunch with my family; the food is great and even the dog is welcome on their patio. Sometimes I find those great places when I am on the road—places like Frank in Austin, Texas with their epic sausages or Boulevard Coffee in Ashland, Ore. which has great coffee and friendly people. Some of the best things I discover without even leaving home—like Casa Kids, which designs original children’s furniture from their studio in Brooklyn, N.Y., or Sticks in West Des Moines, Iowa, which sells hand-carved and hand-painted treasures.

The web helps people everywhere discover the best of what’s around the world or around the corner. Ninety-seven percent of Americans who use the Internet—pretty much all of us—look online for local products and services. So it should come as no surprise that the impact on businesses of being online is huge. According to a McKinsey study, small businesses that make use of the web are growing twice as fast as those that aren’t on the web. Just as businesses that embrace the web grow faster, communities whose businesses embrace the web grow faster as well.

Given the importance of the web in growing our local businesses and our local economies, we worked with independent research firm IPSOS to analyze the online strength of local businesses in cities and towns across America. Businesses in these communities are embracing the power of the web to find new customers, connect with existing ones and fuel their local economies. Today we’re introducing the eCity Award winners for 2013—the strongest online business communities in each state.
We hope these “digital capitals” are an inspiration to cities and businesses everywhere of what can be accomplished by embracing the web to grow.

Congratulations to the 2013 eCities of America.

Tuesday, 20 August 2013

Tuesday Second Chances

Combat Day

Starting as I meant to proceed, I deliberately avoided taking the enemy character up the ladders or stairs (except to test the staircase fix from Paul, which worked a treat by the way) so I could dedicate today to combat.

I re-activated my direct fire code, blended it into the character states between Patrol and Aggressiveness, and then enjoyed the first glimpse of the enemy slowly and malevolently chasing me around the small test area. When I lost him, he would switch to a casual walk and go back on Patrol. All good.

I then added some ducking code so the enemy could take advantage of hiding behind sandbags, and when I introduced my code to play the animation I physically cringed!

How To Duck

This was a lesson you probably learned at school, and if you went to my school you learned it very quickly! It turns out just setting the character to 'crawl mode' does not cut it, mainly because crawl mode does not exist; I have to write it!  The crouch walk animations which I have in my prototype actually include both crouch step forward and crouch step backward, and they include their own movement shift too.  The result was somewhat laughable.

There I was faced with the same dilemma again, whether to hack in a nice duck transition with crouch logic and control the animation in code, or sit back and do something sensible like write an editor and sub-system which could play back what I needed from all things crouch-like.

Here is an idea of what you ACTUALLY need to do for a simple crouch mode:

1. Transition through interpolation from CURRENT animation to STAND posture
2. Use the STAND TO CROUCH animation as interpolating to a crouch is yuck
3. Transition to a CROUCH IDLE animation, and loop that animation
4. If MOVE, transition to the first frame of CROUCH MOVE
5. Play a single sequence of CROUCH MOVE and detect the end
6. Once at the end, move the actual character to a new position and reset
7. Repeat step [5] if character needs to move some more
8. If no more movement required, return to step [3] via transition
9. From CROUCH IDLE, if press to stand back up, transition to STAND UP
10. Play through CROUCH TO STOOD animation until detect end of animation
11. Return control back to main character system now out of crouch mode

Now that's just the simple version as crouch can do MANY more things eventually, and the transitions can get freaky when you are dealing with half animations, reverse animations and who knows what else.  You can see the appeal of a code hack!
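
To make the appeal of the data-driven alternative concrete, here is one guess at how the eleven steps above could be encoded as entries in a state machine database rather than hard-wired code; the opcodes are invented and the animation names are only illustrative.

#include <string>
#include <vector>

enum class Op { TransitionTo, PlayOnce, PlayLoop, ApplyRootMotion, ReturnControl };
struct Step { Op op; std::string anim; };

std::vector<Step> crouchSequence()
{
    return {
        { Op::TransitionTo,    "STAND" },                   // 1. interpolate back to a known posture
        { Op::PlayOnce,        "PISTOL-STAND-TO-CROUCH" },  // 2. authored stand-to-crouch, no interpolation
        { Op::PlayLoop,        "PISTOL-CROUCH-IDLE1" },     // 3. loop until a move or stand request arrives
        { Op::PlayOnce,        "PISTOL-CROUCH-RUN" },       // 4-5. one full crouch-move cycle, detect the end
        { Op::ApplyRootMotion, "" },                        // 6. shift the real character to the new position
        // 7-8. repeat the two steps above while movement is requested, else fall back to the idle loop
        { Op::PlayOnce,        "PISTOL-CROUCH-TO-STAND" },  // 9-10. authored crouch-to-stand (name assumed)
        { Op::ReturnControl,   "" }                         // 11. hand back to the main character system
    };
}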

From my very recent experience with ladders, I did not fancy doing a state machine hack this time so I decided to write a 'State Machine Database Editor and Previewer'.

State Machine Database Editor and Previewer

I won't show you the editor as it's bone ugly, but I will show you a preview from the 'player' which attaches to the editor to show me what I created today.


In the above shot you are looking at the steps I described, coded as a sequence of instructions in the dynamic state machine. Think of it as a programming language within two larger programming languages, ideal for tweaking and changing my mind about ordering without some serious rewriting.  Also think of it as generic enough to allow me to do other types of character activity that deal with animations, transitions, moving position in tandem with some complex sequence, etc.

The editor part allows me to play through the animation frames, move the object in relation to the floor grid, line up an animation loop exactly with the position advances built into the animation.  It's also a pretty small sub-prototype so I can add things quickly in the future should I decide to return to the ladder system and replace it with this.

What About Combat?

As I predicted, such an effort takes a good deal of time and it's now 2:30AM with no time left to return to combat coding.  What I do have however is the makings of a sub-system that can plug into the character animation system and pretty much handle my crouch behavior within an hour of integration.  

Rather than 'settle' for sliding about and taking a piggyback on excellent animation, this new system will transition in and out of crouch, cover the ground realistically, and provide a smooth and hopefully realistic feel. You may notice in the video a small glitch at the end of the 'move forward' animation, and these little disjoints are exactly what Mark is waiting for, and an editor like the one created will help test the fix on these very quickly as we can pin down the exact frame and how the code deals with it.

That said, the crouch is not complete. I need to formalize my new 'state engine database player' thing and make it a clean, isolated module of functions and arrays, and I need to add crouch move left and right, throw grenade, leap over obstacle, etc. When the time comes to connect to the DarkAI system, it should allow the prototype to silkily move the character around the scene without me having to hack in and endlessly tweak values for all combinations.

Signing Off

Crouch proto resumes Wednesday, and once it's solid, I'll move the relevant module and data over to the main AI prototype and try it out. The NEXT time a similar demand arrives (and there will be many), I should have a good generic system where I simply add more data and do not have to hack in anything other than additional 'methods and instructions', which can in time be re-used by subsequent behaviors and animation sequences.

I was going to blog about some lovely new art from Mark but I've just looked and this blog is big enough for one late night post, so I will save these visual candy snacks until later in the week :)

Monday, 19 August 2013

Monday More Ladders

And Some Stairs!

Actually it's pretty much the same ladder, but it's a little better now.  Our little guy can pretty much walk around after the player no matter where I try to hide now. He currently cannot dive through windows, leap off platforms, quick-slide down ladders, abseil, dive-attack, shimmy or do any of the other wild things you might imagine for getting from one level to another, but the mechanism for using a state machine to get from UP to DOWN to UP is now in.


An amazing amount of time spent on seemingly very little, as I had a character climbing and descending a ladder three days ago.  If I did this again, I would write an editor which choreographed a sequence, allowing an object's frame, rotation and position to be exactly controlled throughout a process of getting from A to B. At the moment the system uses a hard-coded state engine to get the job done. The benefit of an editor for the sequence is that once it's done, the main AI only has to get the object to A, hand over to the sequence player, then return at B to resume the AI.  Sure, it would also have taken three days, but right now I would have the tool to create staircase walking, riding lifts, grabbing a lamp-post, swinging around 90 degrees and diving through a window, and all manner of cool sequences.

That said, at least the ladder climbing code is smooth enough now and transitions well between the two test ladders I have. I don't have any stair climbing animation, so I could not do anything with the staircase transition state engine, even though it's ready and waiting, except re-use the walking animation and slide the guy up to the next floor.  Hopefully Mark will read this and add a nice 'walk up stairs' animation in the style of the ladder animation sequence.

Over The Weekend

As you can guess, I was also battling with this over half the weekend, tweaking the animation and motions and slowly realizing the process of tweak, run, get the game to the exact state, see if it works, go back to the editor, tweak, rinse and repeat was massively time consuming.  Not least was testing the 'walk down ladder', which of course involved coaxing the enemy to the roof before bringing him back down again.  Another lesson is that if you anticipate a lot of tweaks in speed, motion, rotation, animation or transition, then set up your principals at their ideal starting positions. Ten minutes spent here saves hours later on!

The Last Four Hours

Of all the frustrations, the latest one, which hit me around 10 PM, was the sudden switching of the entity AI from container one (first floor) to container zero (ground floor) for seemingly no good reason. The container logic on the roof and the outdoor platform was fine, but my interior staircase seemed to have a large hole, be it in DarkAI or my code.  I eventually had to create a game loop within a game loop at the exact moment the AI would commit this crime and start chopping out code to see what caused it. I eventually discovered none of my code caused it (as I had deleted it all by then), and it was quite true that DarkAI decided through automatic mode to switch the entity back to container zero (I guess because it was more interesting than container one).

Rather than waste any more time, the smart move was to send the whole shooting match to Paul to investigate this odd behavior while I resort to simpler implementations such as ducking, running, shooting. The easy stuff.

Signing Off

It's coming up to 3 AM now and although I have a need to move this prototype along quickly, I don't think I should do more coding with tonight's brain. Tuesday I will abandon containers for a day and focus on combat (now that our guy can chase the player).  It's fair to say I am now behind my own personal schedule, as I had hoped to have all the basic AI states in place by now, but a bit more focus and a few extra hours per day should see things right.
Hopefully I can bring you a nice video in a few days once I've got the enemy putting up a good fight.  Right now he just follows me around like a puppy!

Learn about the atomic bombings of Hiroshima and Nagasaki on the Google Cultural Institute

This August marks the 68th anniversary of the atomic bombings of Hiroshima and Nagasaki at the end of World War II. Working together with the Hiroshima Peace Memorial Museum and the Nagasaki Atomic Bomb Museum, we’ve launched seven new online exhibits on the Google Cultural Institute that help tell the story of the two cities and their tragic fate.

Explore four collections from the Hiroshima Peace Memorial Museum that illustrate the bombing from different perspectives: a pocketwatch stopped at the exact time of the detonation, diaries of young women cut off abruptly on August 6, and panoramic photos of the hauntingly barren city center days after. While most of the materials document the harrowing devastation of the bomb and its aftermath, the gallery “Recalling the Lost Neighborhoods” helps archive the old Hiroshima that vanished off the map.
Pocketwatch showing 8:15, the time of the atomic bomb drop (from the Hiroshima Peace Memorial Museum)

The Nagasaki Atomic Bomb Museum meanwhile curated photos, videos, and drawings in three exhibitions. One collection focuses on the famed Urakami Cathedral—the largest cathedral in East Asia where 15,000 Japanese Catholics once worshipped. The church completely collapsed after the bombing, but thanks to a post-war reconstruction effort, the Urakami Cathedral now stands triumphant as a symbol of the city’s rebirth.
Urakami Cathedral exhibition (from the Nagasaki Atomic Bomb Museum)

Speaking at an unveiling ceremony for the exhibits in Hiroshima today, Hiroshima Mayor Kazumi Matsui said, “Through the Google Cultural Institute exhibitions, we hope that people around the world would learn about the terrible experiences of the Hibakusha, or A-bomb survivors, and wish for peace.”

The Cultural Institute was created to help preserve the world's history and heritage. Given the average age of the Hibakusha is now past 78, we're honored that our digital exhibit can help keep the memories from both cities alive for the future.

Friday, 16 August 2013

Friday Ladders

Short Blog Today

After a long day of making what felt like one million very minor tweaks to line up the character ladder animation with the AI system, I have decided that although it was presentable at high speeds, it was not quite right.

The solution will be either to have Mark tweak the animation so 'movement' and 'animation hand and foot positions' line up perfectly, or to code some sort of animation database system whereby each frame can move the character model an arbitrary amount.  The second is better as it means characters will move around the world less 'smoothly' and create a very subtle sense of realism, which I think is what we are chasing here.
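
The second option amounts to storing a per-frame movement table alongside the animation; a minimal sketch of that idea, with all names and numbers assumed, looks like this.

#include <vector>

// The animation database stores how far the model should shift on each frame,
// so code can follow the feet exactly instead of sliding at a constant rate.
struct FrameOffset { float dx, dy, dz; };

// Accumulate the movement authored between two playback ticks of, say, a ladder climb.
FrameOffset movementBetween(const std::vector<FrameOffset>& table, int fromFrame, int toFrame)
{
    FrameOffset total = { 0.0f, 0.0f, 0.0f };
    for (int f = fromFrame; f < toFrame && f < (int)table.size(); ++f) {
        total.dx += table[f].dx;
        total.dy += table[f].dy;
        total.dz += table[f].dz;
    }
    return total;                // applied to the character's real-world position
}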

I will think on it more over the weekend, and move my current implementation from a test hack to a real engine feature I can use for all ladders across multiple enemies and allies.

Signing Off

I was planning to present a video, but it would not create the impression Mark had in mind when he created the ladder animation.  Rather than rush one out, I will put some more care into the ladder stuff, and the next video opportunity will show a really nice, hopefully realistic, climb action.

I must say it is pretty cool to see the character swing onto the ladder and bolt up it giving chase, with the head limb relatively independent and looking around!  It also occurred to me that the same system could be used for staircases, so instead of the usual 'enemy sliding diagonally' up the ramp stairs, the character actually places one foot perfectly on each step and ascends properly.  Exciting visuals ahead! I had hoped to be at the point of some small combat functionality, with the player taking pot shots, by Friday, but it's probably the right move to aim for the best realism before moving on (as it's not rocket science).

Thursday, 15 August 2013

Thursday Visual Teaser

Visual Fun

Aside from the usual regular stuff like cleaning up source code and structuring everything so it slots nicely into the main engine (boring, zzzzz), I had the rather fun task of introducing the animated character model to the AI system for the first time.  I kept it pretty light and free-form and ended up with a neat concept whereby the character you see trails the AI entity object by a split second, allowing the real character to look ahead where he is 'going to be' and it struck me that this is what we do in real life.

We actually look in the direction where we are 'going to go' rather than where we are going at that split second, and when you see it portrayed visually you see what I mean. There is no way to describe it in words, or even screenshot it, so I loaded up with some community disclaimers (thanks guys) and made a video for you:


As you can see, as the AI entity advances ahead of the visible character, it looks like the guy is anticipating the action he is about to commence, which in this case is turning a corner. It got me thinking that the same system could be employed across many other actions and create this wonderful sense of character, whereby the head and upper body seem to be thinking ahead before the rest of the body catches up.  I think this is an interesting ingredient and perhaps something that can only be created in real-time, subject to circumstance.

I am sure this artifact won't work in every case, especially when the action is fast and the character must obey split second directions, but for someone casually walking about it looks pretty cool!

Signing Off

A pretty busy day in the mill of code today, so not much more to reveal past the antics of getting more and more character animations tied to AI activities. The challenge for Friday will be getting the character to 'realistically' climb the ladder, dismount, continue his search on the roof, find nothing and come back down to continue his patrol.  Sounds simple enough, but getting all the separate animations to blend together and look 'natural' will be a bit of a headache.

And once we have that looking good, the next job is to tame the code so that it's modular and flexible enough to handle multiple characters, be they enemies, allies or neutrals.  I find the excitement of visuals is short-lived, and quite quickly I am brought low again by the reality that I have to make all this fancy stuff work in a practical way in a larger context.  Granted, not the worst job in the world ;)

Join the Hangout Comedy Club where your laugh can help change a life

The Edinburgh Fringe festival is one of the largest arts festivals in the world, where thousands of people descend upon Edinburgh to see hundreds of up-and-coming comedians and artists perform in bars, basements and venues around the city.

For those of you who have always wanted to go to the Fringe, now you can enjoy great new talent, laugh ‘til your belly hurts and even heckle a comedian from the comfort of your own home. This summer, you can be in the front row of a live comedy show wherever you are thanks to Google+ Hangouts and YouTube. While you’re at it, your laughs can raise money for a good cause.

We’re partnering with the U.K. charity Comic Relief to bring you the first online comedy club—the “Hangout Comedy Club.” We’ve created a clever gizmo called the “Laughometer,” which will measure how much you enjoy the show and turn your lol’s into an optional donation to Comic Relief. They use the funds they raise to tackle the root causes of poverty and social injustice.

To be part of the Hangout Comedy Club, simply join a Google+ Hangout hosted by one of our famous comedians, including Katherine Ryan, Sanderson Jones and Joey Page. Just like a real comedy club, if you’re brave enough, you can join the front row with up to eight others. If you’d rather sit out of sight, join a Hangout, add your friends, and watch from the safety of the back row.

The next Hangout is today, Thursday, August 15, 10pm BST with Sanderson Jones. Find out who else is starring and get your free seat at www.youtube.com/user/rednoseday/hangoutcomedyclub.

So even if you can’t make it to the Fringe, we’ll bring the best new talent directly to you—and your laugh could help change a life.

Wednesday, 14 August 2013

Wednesday More AI Cleverness

The Growth of AI

Amazing to watch a brand new AI behavior grow and unfold, it really is. Like watching a baby grow up super super fast, with the bonus of being able to program out all the annoying crying :)  I think I prefer computer babies!

The AI Story So Far

The AI prototype now has a few more cool features, like the ability to open and close off doors and windows in real-time, forcing the AI path finders to choose another route. Very important!

Aside from the many small features, the one that has kept me up to almost 3AM (against my decision to stop at 12 midnight) was the container path system. It turns out DarkAI has a rudimentary container connection system but it requires a few more bells and whistles before it can compete with what I have been building in DBP code.

Right now, my AI entity can start on the terrain circling a building on patrol, spot me, then chase me into a room and up some stairs, I can dive through the window but he does not have 'leap' yet so I lose him, but he can still see me through the window. The AI entity realizing he cannot reach me that way, recalculates and figures out a slightly longer route to the ledge I am now standing on. It appears a second ladder drops from the roof of the building to this ledge, and there is a ground floor ladder which reaches the roof. Off the AI entity trots, back down the stairs, and by now it's been a long while since he saw me escape, which means I could be somewhere other than the ledge, so to keep me on my toes he reverts to a Patrol mode, walks out the door and continues his route. One point in his patrol route allows him to spot me still standing on the ledge platform.

Now here comes the smart bit. The AI entity was instructed to 'remember' that the stairs inside the building to the window and the ledge did not work out too well, 'a failure' in fact. He then thinks about another route that does not include the staircase that led to failure (at the moment this is all he is programmed to remember). He then plots the alternative route, up the long ladder, across the roof and down the shorter ladder to the ledge, which he does right away. On descending the second ladder, I am once again in his line of sight and he registers to fire.
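
A sketch of how that 'remember the failure' logic could be kept, assuming a route is just a link between two DarkAI containers; none of this is the actual DarkAI API.

#include <set>
#include <utility>

// A 'link' is a connection between two DarkAI containers (e.g. ground floor -> staircase).
struct RouteMemory {
    std::set<std::pair<int,int>> failedLinks;

    void markFailed(int fromContainer, int toContainer) {
        failedLinks.insert({ fromContainer, toContainer });
    }
    bool allowed(int fromContainer, int toContainer) const {
        return failedLinks.count({ fromContainer, toContainer }) == 0;
    }
};

// The path planner then simply skips any container link for which allowed() is false,
// which is what forces the longer ladder-over-the-roof route to the ledge.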

All the above currently works in the AI system and with the addition of more containers, more paths, more obstacles and more enemies and allies, the complexity only increases to present a very dynamic combat area to navigate. Naturally there will be tweaks on the way, but the bulk of the 'path finding' is in place and I am happy with the behavior exhibited.

Character Animation

At present, all the AI described above is using debug models and artwork, ideal for testing behaviors and values, etc.  I do feel though that leaving the character model out for too long will create problems come the 'union' of the AI entities, which move in a kludgy manner, and the high quality character art that needs to walk, run, dodge and climb in a smooth, liquid performance.

These two elements can remain separate but there are key hooks in both that need to be coded carefully. I am tempted to start this work before the week is out, probably while I am waiting for AI inspiration.  This is when the general public will see the AI prototype and in one voice go 'Oooooo'.

Signing Off

I am keeping today's blog short as it's 3AM and I was supposed to be resetting my body clock for the weekend - Grr.  Still, I quite enjoyed the AI tinkering and look forward to the next steps on Thursday.  The consensus is that my art skills are painful at best, so I am inviting a regular reader with a flair for art to produce for me a banner which I can slap on pre-release screenshots that reads the slogan "Disclaimer: Visuals/gameplay/sounds are not representative of final product".  With this, I can start posting visuals again ;)

Just ask Google for your flights, reservations, package delivery info and more

Ever had trouble checking your flight’s status on the go because it meant digging through your email for the flight number? Or wanted to just quickly see whether your package would arrive on time, without having to look up the tracking info first? You’ve told us it would be much easier if you could skip the fuss and just ask Google.

Soon you’ll be able to find this info instantly in Google Search if it’s in your Gmail, Google Calendar or Google+. For example, just ask or type, “What’s my flight status?” or “When will my package arrive?”
Over the next several days, we’ll be rolling this out to all U.S., English-speaking users on desktop, tablet and smartphone, with voice search (so you don’t have to type). Here are just a few of the time savers that will be available this week, with more to come:

  • Flights: Ask Google “Is my flight on time?” to get info on your upcoming flights and live status on your current flights.
  • Reservations: Ask for “my reservations” to see your dining plans or “my hotel” to get your hotel name and address. With one tap, you can get driving or public transit directions straight there, saving you lots of steps.
  • Purchases: Ask for “my purchases,” and you’ll get the status of your current orders, so you know whether your mom’s birthday present will arrive on time.
  • Plans: Ask Google “What are my plans for tomorrow?” to see a summary of upcoming flights, hotels, restaurant reservations and events—very useful when you’re traveling.
  • Photos: Say “Show me my photos from Thailand” to see the photos you uploaded to Google+. You can also ask for “my photos of sunsets” if you want to show off the shots you’ve taken over the year; Google will try to automatically recognize the type of photo you’re asking for.

We’ve been offering this kind of info—flights, reservations, appointments and more—for more than a year in Google Now. We’ve gotten great feedback on how convenient it is, especially when you’re on the go. Now that it’s in Google Search, you can get it anytime you need it.

This information is just for you—secure, via encrypted connection, and visible only to you when you're signed in to Google. Likewise, you can also control whether you want the service on or off. Whenever you don’t want to see it, simply click the globe icon at the top of the search results page to turn it off for that search session. To turn it off permanently, visit the "Private results" section in search settings.

We hope this makes your day a little easier. Visit our Inside Search site to learn more about the questions you can ask.