Friday, February 28, 2014

Back From MWC 2014 - Phew!

Play

As the trip was not strictly Reloaded work, I had to book this one as a holiday so I am listing it as a 'Play' report :)  Did some networking and showing AGK to all and sundry, plus met a lot of very cool dudes and dudettes.  Saw some great tech and had some very techie conversations. Also managed to get up on stage a few times and talk my spiel.  My warmest thanks to Intel for inviting me there and making me feel at home, and it was great to meet my friends from across the waters again, both old and new.


Also visited my countryman at the Wales stand at MWC too, which was great to see!  Got a free glass of wine for my trouble ;)  I also really enjoyed the hotel, despite its distance from Barcelona. If you want a place that gives you REALLY big measures, this is the place!


In those quiet moments on the plane, or in the airport, or in the 40 minute taxi ride to the MWC event every day, I had time to reflect on WHY OH WHY do I only get a handful of content in the level before it bombs. It did not take long to realize a 5000 polygon building with three LOD levels should not cost 15MB of system memory.  The most the vertices should cost is 800K and the textures 'should' be in video memory, so what on earth could eat SO MUCH system memory when it is clearly not required? This was the burning question I came back with, and I will be writing a very mean prototype this weekend to trace a series of LOAD OBJECT processes with some of the offenders from the FPSC Reloaded Asset Library such as the character and the large buildings.
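The back-of-envelope arithmetic behind that "800K at most" figure can be sketched as below. The 36-byte vertex (position, normal, UV as floats) and the halving LOD chain are illustrative assumptions, not the engine's actual vertex format:

```cpp
#include <cassert>
#include <cstddef>

// Rough cost of a mesh's vertex data in bytes, assuming an unindexed
// 36-byte vertex (3 floats position, 3 normal, 3 UV/padding). These sizes
// are assumptions for illustration, not the real engine format.
std::size_t vertexBytes(std::size_t polygons, std::size_t bytesPerVertex = 36)
{
    return polygons * 3 * bytesPerVertex;
}

// Total for a base mesh plus a LOD chain where each level halves the
// polygon count (again an assumption; real LOD ratios will differ).
std::size_t lodChainBytes(std::size_t basePolygons, int lodLevels)
{
    std::size_t total = 0;
    std::size_t polys = basePolygons;
    for (int i = 0; i < lodLevels; ++i)
    {
        total += vertexBytes(polys);
        polys /= 2;
    }
    return total;
}
```

Even with three LOD levels, a 5000 polygon building comes out under 1MB of vertex data, which is why 15MB of system memory per building looks so suspicious.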

I now have two solid weeks of Reloaded coding ahead of me, and my mission to reduce memory and increase performance is approaching make or break time. After that, I am off to GDC for a few days to present it to 'die hard FPS players and top game developers' and I am presenting on an Ultrabook (mobile PC) so the challenge is definitely on. To that end, I have given my mid-range graphics card to Simon so he can work on higher quality visuals in the Construction Kit and I have installed a GeForce 9600 GT:

http://www.videocardbenchmark.net/gpu.php?gpu=GeForce+9600+GT

Essentially the card is 5 times slower than the one I have been using, which will give me some first-hand experience of the problem. It also simulates the speed of a mobile PC, so I can craft the best GDC demo in the time available. I plan to put out a few versions to our internal testers during this process and if the results are good, we will look to release something to the Alpha cadence guys to get a taste of the improvements.  If the version is bad, our internal testers will protect you from it :)

Anyway, a LONG day almost done. Now my emails and blogging duties are complete, I think I will sneak off to bed and get a head start on the weekend. I am pleased to see the roof still on, so hopefully we've seen the last of the UK storms for the time being. 

Thursday, February 27, 2014

Wandering in the footsteps of the polar bear with Google Maps

This guest post is from Krista Wright, the executive director of Polar Bears International. We’ve partnered with PBI to share a fascinating look at polar bears in the wild using Google Maps. -Ed.

In Inuit poetry, the polar bear is known as Pihoqahiak, the ever-wandering one. Some of the most majestic and elusive creatures in the world, polar bears travel hundreds of miles every year, wandering the tundra and Arctic sea ice in search of food and mates. Today, with the help of Street View, we’re celebrating International Polar Bear Day by sharing an intimate look at polar bears in their natural habitat.
The Street View Trekker, mounted on a Tundra Buggy, captures images of Churchill’s polar bears

We’ve joined forces with Google Maps to collect Street View imagery from a remote corner of Canada’s tundra: Churchill, Manitoba, home to one of the largest polar bear populations on the planet. With the help of outfitters Frontiers North, the Google Maps team mounted the Street View Trekker onto a specially designed “Tundra Buggy,” allowing us to travel across this fragile landscape without interfering with the polar bears or other native species. Through October and November we collected Street View imagery from the shores of Hudson Bay as the polar bears waited for the sea ice to freeze over.

One of Churchill, Manitoba’s Polar Bears on Street View

Modern cartography and polar bear conservation
There’s more to this effort than images of cuddly bears, though. PBI has been working in this region for more than 20 years, and we’ve witnessed firsthand the profound impact of warmer temperatures and melting sea ice on the polar bear’s environment. Understanding global warming, and its impact on polar bear populations, requires both global and regional benchmarks. Bringing Street View to Canada's tundra establishes a baseline record of imagery associated with specific geospatial data—information that’s critical if we’re to understand and communicate the impact of climate change on their sensitive ecosystem. As we work to safeguard their habitat, PBI can add Street View imagery to the essential tools we use to assess and respond to the biggest threat facing polar bears today.
Polar Bear International’s Bear Tracker

We also use the Google Maps API to support our Bear Tracker, which illustrates the frozen odyssey these bears embark on every year. As winter approaches and the sea ice freezes over, polar bears head out onto Hudson Bay to hunt for seals. Bear Tracker uses satellite monitors and an interactive Google Map to display their migration for a global audience.


Mapping the communities of Canada’s Arctic
Google’s trip north builds on work they’ve done in the Arctic communities of Cambridge Bay and Iqaluit. In the town of Churchill, the Google Maps team conducted a community MapUp, which let participants use Map Maker to edit and add to the Google Map. From the Town Centre Complex, which includes the local school, rink and movie theatre, to the bear holding facility used to keep polar bears who have wandered into town until their release can be planned, the citizens of Churchill made sure Google Maps reflects the community that they know.

But building an accurate and comprehensive map of Canada’s north also means heading out of town to explore this country’s expansive tundra. And thanks to this collaboration with Google Maps, people around the world now have the opportunity to virtually experience Canada’s spectacular landscape—and maybe take a few moments to wander in the footsteps of the polar bear.

Friday, February 21, 2014

Internal Tweaks Day

Work

I have spent most of Friday running around finding little jobs to make my mental desk a little cleaner before the weekend. I found time to add some nice things to Reloaded such as character fading when they die, switching characters to instance objects when they are far away to save performance, fixing reflections which disappeared when the new occlusion system went in, adjusting the metrics readout so that polygon and draw call counts don't instantly fill up the bar (making it more representative of a 'maximum' state), tweaking collision properties for some trees and generally bashing the engine with a variety of performance tests to locate the worst offenders.

I have isolated two performance jobs which move to the top of my plate, and both of them hook into the new occlusion system very nicely.

The first is to replace distant objects with quads. I did this before for the instance stamp system but it proved to be a huge memory hog and hurt performance when a large group of high polygon objects entered the rendering zone.  The great news is that I can re-use the quad system I created from the instance stamp mechanism but drop the 'dynamic VB filler part' which was the troublesome bit.  I will be using the feedback from the occlusion system to work out whether quads should be rendered (as they are in a single draw call so need vertex shader magic to hide/show them individually). By passing in a quad distance value to the bound sphere submissions of the occluder I can control quad visibility entirely on the GPU :)  Naturally, I can get the CPU to skip draw calls on the higher polygon objects by simply hiding them when they enter the assigned quad range (which will be a value in the FPE so you can change it per entity).
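The CPU side of the plan above boils down to a per-entity distance check; this sketch uses invented names (the FPE-style `quadRange` value is from the post, the rest is illustrative), and it does not model the GPU-side bound-sphere trick that hides and shows the quads themselves:

```cpp
#include <cassert>

// Hypothetical per-entity decision: beyond 'quadRange' the full mesh is
// skipped on the CPU and only the impostor quad is submitted; the quad's
// own visibility is then resolved on the GPU via the occluder feedback,
// which is not modelled here.
struct DrawDecision
{
    bool drawFullMesh;
    bool submitQuad;
};

DrawDecision classifyEntity(float distToCamera, float quadRange)
{
    DrawDecision d;
    d.drawFullMesh = distToCamera <  quadRange; // near: real geometry
    d.submitQuad   = distToCamera >= quadRange; // far: single-draw-call quad
    return d;
}
```

Because `quadRange` would live in the FPE file, each entity type could tune its own switchover distance without code changes.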

The second boost will be to move the vegetation generator, which currently kicks in hard when you run really fast through lots of grass (and can take a good 60 fps level down to 42 fps). I will move it from the DBP code which creates and destroys meshes en masse into a space between the start and end of the HZB query calls (that's right, exactly where the stall is located). My hope is that the vegetation generation becomes completely free as the CPU would otherwise be sitting around waiting for the GPU to come back with my juicy visibility values for the occlusion.
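The reordering described above can be sketched as a simple frame trace; the function is a stand-in with invented step names, not the engine's real API, but it captures the intent of filling the stall window with useful CPU work:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of the reordered frame: vegetation generation is moved into the
// window between issuing the HZB occlusion query and reading its results,
// so the CPU does useful work while it would otherwise be stalled waiting
// on the GPU. Step names are illustrative.
std::vector<std::string> runFrame()
{
    std::vector<std::string> trace;
    trace.push_back("issue HZB query");      // GPU begins visibility pass
    trace.push_back("generate vegetation");  // CPU fills the stall window
    trace.push_back("read HZB results");     // the former stall point
    return trace;
}
```

If the GPU takes longer than the vegetation pass, the generation is effectively free; if not, the readback simply waits less.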

Put these two optimizations together and you should get a reduction of draw calls amounting to half the scene, and a substantial drop in polygons too. I've run measurements with other speed-up ideas, and these two will help increase frame rates significantly AND ensure the rate does not drop when you run quickly through dense foliage.

Play

If you've been wondering why I've been pulling 12-14 hour days for the last five days, it's because I am off on a short busman's holiday from Sunday. I will be swapping my overworked programmer hat for my CEO and Marketing hat as I represent TGC at this year's MWC.  Specifically, I will be presiding over the huge hackathon there and helping fellow developers as they attempt to code a game in just three days.  It's a chance to chill, unwind a little and talk shop with 3000 developers, and trade horror stories to find out if there are any crazy cool optimization tricks that I have missed.  If you're heading there yourself, I will be tweeting my location throughout the event in case you want to say hello and swap travelling tales.

Alas I have left myself just one day to prepare all my demos and test my equipment, but I have one more day of Reloaded performance work during Saturday, and hopefully the internal build I am preparing will be received well by our internal team testers.  I made a video showing the new occlusion in action, but I have been warned repeatedly about showing 'not great' videos so you will have to imagine what it looked like:

"Picture it....I start far from a set of 15 buildings standing on a super flat terrain, each building has 3 barrels placed outside, the draw call count is 62. I run into the nearest building and make sure I can see through no open windows, and the draw call drops to just 12. Apart from the floor, sky and a few other quads the only thing being rendered is the building I am in. Frame rate on my machine stayed well above 60 fps, the target I aim for during my tests. Once I add in quads for singular objects, I expect the initial draw call of 62 to drop to more like 32. All to play for in Reloaded land!"

Thursday, February 20, 2014

Brain Approaching Burn Out

Work

Another crammed day, ending past 4AM.  I had planned some other small jobs around the office, emails and admin, but I spent pretty much the whole time continuing the saga of performance improvements.  I seem to be addicted to saving frames!

I'll give you the highlights (as I am wiped out). I've replaced the terrain physics system from a height map to a trimesh, extracted from the actual visual terrain geometry at LOD1. What this means is that your dynamic objects will not sink into the floor or float above it in strange ways.  My initial code was to use LOD0, but that dragged me from 170 fps to 110 fps, so by using LOD1 there was no speed loss and I could continue.  No extra speed but much more accurate floor collisions, and the back-end meshes I needed for the next achievement.

Terrain occlusion for objects was possible thanks to the LOD1 meshes I generated as part of the physics creation. Converting them down to just flat vertices and passing them to the occlusion system means that hills and high ridges now occlude any objects that sit behind them.  A BIG occlusion win. I have not run extensive tests on the performance gain (too busy on related wins) but it did not cause any slowdown, which we can thank the GPU stall for: it exacts a single one-off cost for the occlusion, so the extra submissions to the system were free.

I also improved the occlusion system to use a dynamic vertex buffer instead of a fixed static one. This means I can calculate and render the best 60K polygons worth of occluders instead of rendering over 2 million vertices (the entire scene) through a static draw call. It is still one draw call, but it only renders the polygons immediately surrounding the player. There is also room for further optimizations here which is a boon as this one DID show a performance boost from 101 fps to 166 fps :)
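A minimal sketch of that "best 60K polygons" idea, assuming a greedy nearest-first selection (the actual packing into a dynamic vertex buffer, and whatever heuristic the engine really uses to rank occluders, are not modelled here):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Illustrative occluder selection for the dynamic vertex buffer: sort
// candidates by distance to the player and take the nearest ones until a
// polygon budget (60K in the post) is exhausted.
struct Occluder
{
    float dist;  // distance from the player
    int   polys; // polygon count of this occluder mesh
};

int selectOccluders(std::vector<Occluder> occs, int polyBudget)
{
    std::sort(occs.begin(), occs.end(),
              [](const Occluder& a, const Occluder& b) { return a.dist < b.dist; });

    int used = 0, count = 0;
    for (const Occluder& o : occs)
    {
        if (used + o.polys > polyBudget)
            break;               // budget spent: distant occluders dropped
        used += o.polys;
        ++count;
    }
    return count; // number of occluders that made it into the buffer
}
```

The point of the budget is that the depth render stays a single bounded draw call regardless of scene size, which matches the "cost won't grow with the scene" claim elsewhere in the post.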

The final task which I did not quite finish due to running out of brain juice was to work out which terrain sectors (small patchwork blobs of terrain) are hidden by the occluder depth render.  I hacked into the BlitzTerrain module and had it skip the sector render if the associated object had been hidden by the occlusion system (if you recall, the object I used to make the new terrain physics floor is the one I left to pick up the occlusion info for this technique). The problem is that there are many LOD levels, and a relational scatter of sectors per LOD level, and I am only associating DBP objects with LOD Level One.  My early tests show it working, but it needs to work A LOT BETTER before it's ready for the public (as only a small part of the hidden terrain is actually skipping a render).  Alas, when I ran an aggressive test with some hack guesswork on neighboring sectors, wiping out most of the distant terrain, the frame rate did not get much past 180 fps.  I have left in a conservative implementation which just acts on LOD1, which will ensure you don't get 'missing terrain squares' when playing your levels.


Notice the empty terrain sector?  I set terrain to wire frame and put a hill between me and the distant plane. I am not saving huge amounts here, but the theory is sound and with more work we should be able to throw out a LOT of polygons when such occluded sectors are well hidden. I have emailed the author of BlitzTerrain in the hopes of gaining more insight and inspiration into the relationship between LOD levels and Sector Objects.
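The conservative sector-skipping rule described above can be sketched as a predicate; the `Sector` struct and field names are invented for illustration and are not BlitzTerrain's real types:

```cpp
#include <cassert>

// Conservative sector culling: only LOD1 sectors with an associated DBP
// object, and only when the occlusion system has flagged that object as
// hidden, skip their render. Everything else always renders, which is what
// prevents 'missing terrain squares' in a player's level.
struct Sector
{
    int  lodLevel;            // terrain LOD this sector belongs to
    bool hasAssociatedObject; // is a DBP object bound to this sector?
    bool objectOccluded;      // did the occlusion system hide that object?
};

bool shouldRenderSector(const Sector& s)
{
    if (s.lodLevel == 1 && s.hasAssociatedObject && s.objectOccluded)
        return false; // safely skippable: occluder says it is hidden
    return true;      // conservative default: render it
}
```

Extending the skip to the other LOD levels would need the sector-to-object mapping the post says is still missing, which is why only LOD1 is trusted here.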

The fact I don't get much higher than 180 fps even when I obliterate the terrain rendering suggests that the bottleneck I must chase next is the GPU stall, and as described in a previous blog post I have a fiendish plan to solve it.  Alas I probably will not get to that until the pile of little issues that my plate is starting to collect have been vanquished.  All in all though, some nice progress and the occlusion system continues to pay dividends!

Wednesday, February 19, 2014

Putting The Wires Away

Work

Today was about cleaning up my workbench and putting the wires back into the box, making sure that the Reloaded engine is no worse for all the little changes I have inflicted on it these past five days.  In order to create a sense of urgency I also produced an installer for internal testing:


The good news is that most of it went back without a fuss, but there was enough to keep me going as I tested the version back and forth. I also found time to add a few more core tweaks in there too such as the ability to save which terrain, sky and vegetation your level used when you save the level file. I also aligned the sky scrolling system to match that of the legacy classic system, removing the cloud portal idea for the moment.

There is a chance my Internet will be down Thursday, so I have uploaded my emergency internal test installer and will be starting the Memory Management part of my week which will attempt to analyse where all my system memory is going, and why the engine suddenly decides not to create relatively small contiguous chunks of memory when I need it.  Dave Ravey has done a grand job creating the memory manager subsystem reporting tool, now I need to integrate it and see what comes out the other side.

So far it looks like I won't be restoring my sleeping pattern this week, so hopefully the lack of Internet bringing new emails on Thursday will allow me to finish my regular coding quota early so I can rein in these 4AM finishes!

Thank you, and welcome to the new Google Maps

Over the coming weeks the new Google Maps will make its way onto desktops around the world. Many of you have been previewing it since its debut last May, and thanks to your helpful feedback we’re ready to make the new Maps even more widely available.

It’s now even easier to plan your next trip, check live traffic conditions, discover what’s happening around town, and learn about a new area—with Pegman’s help if needed. Here’s a quick refresher on what to expect in the new Google Maps:

  • Make smarter decisions. Simply search for “coffee” in your neighborhood, and you’ll be able to see results and snippets right on the map. When you click on a cafe, the map will suggest related results that you may not have known about.
  • Get where you're going, faster. Car? Bike? Train? Find the most efficient route for you, with your best options laid out on the map, including the time and distance for each route. And with the new real-time traffic reports and Street View previews, you’ll become a commuting ninja.
  • See the world from every angle. Rich imagery takes you to notable landmarks, sends you flying above mountains in 3D, and gives you a sneak peek of businesses you plan to visit. The new “carousel” at the bottom of the map makes all this imagery easy to access, so you can explore the world with a click.
With any product redesign, there may be bumps along the road. We're hoping that you're as excited as we are to navigate uncharted territory in pursuit of the perfect map. As always, we want to hear what you think as we work to improve the new Maps over time.

Here’s to many more years of mapping together!

Exploring new cities for Google Fiber

Over the last few years, gigabit Internet has moved from idea to reality, with dozens of communities (PDF) working hard to build networks with speeds 100 times faster than what most of us live with today. People are hungrier than ever for faster Internet, and as a result, cities across America are making speed a priority. Hundreds of mayors from across the U.S. have stated (PDF) that abundant high-speed Internet access is essential for sparking innovation, driving economic growth and improving education. Portland, Nashville (PDF) and dozens of others have made high-speed broadband a pillar of their economic development plans. And Julian Castro, the mayor of San Antonio, declared in June that every school should have access to gigabit speeds by 2020.

We've long believed that the Internet’s next chapter will be built on gigabit speeds, so it’s fantastic to see this momentum. And now that we’ve learned a lot from our Google Fiber projects in Kansas City, Austin and Provo, we want to help build more ultra-fast networks. So we’ve invited cities in nine metro areas around the U.S.—34 cities altogether—to work with us to explore what it would take to bring them Google Fiber.
We aim to provide updates by the end of the year about which cities will be getting Google Fiber. Between now and then, we’ll work closely with each city’s leaders on a joint planning process that will not only map out a Google Fiber network in detail, but also assess what unique local challenges we might face. These are such big jobs that advance planning goes a long way toward helping us stick to schedules and minimize disruption for residents.

We’re going to work on a detailed study of local factors that could affect construction, like topography (e.g., hills, flood zones), housing density and the condition of local infrastructure. Meanwhile, cities will complete a checklist of items that will help them get ready for a project of this scale and speed. For example, they’ll provide us with maps of existing conduit, water, gas and electricity lines so that we can plan where to place fiber. They’ll also help us find ways to access existing infrastructure—like utility poles—so we don’t unnecessarily dig up streets or have to put up a new pole next to an existing one.

While we do want to bring Fiber to every one of these cities, it might not work out for everyone. But cities who go through this process with us will be more prepared for us or any provider who wants to build a fiber network. In fact, we want to give everyone a boost in their thinking about how to bring fiber to their communities; we plan to share what we learn in these 34 cities, and in the meantime you can check out some tips in a recent guest post on the Google Fiber blog by industry expert Joanne Hovis. Stay tuned for updates, and we hope this news inspires more communities across America to take steps to get to a gig.

Google Capital: investing in growth-stage companies

Ever since our founders began working out of a garage in Menlo Park, we’ve thought about what it takes for entrepreneurs to build the companies they dream of. Sometimes this means bringing great startups to Google—but other times, it means we go to them. Today, we’re launching Google Capital, a new growth equity fund backed by Google and led by partners David Lawee, Scott Tierney and Gene Frantz.

Like our colleagues at Google Ventures, our goal is to invest in the most promising companies of tomorrow, with one important difference. While Google Ventures focuses mainly on early-stage investments, we’ll be looking to invest in companies solely as they hit their growth phase. That means finding companies that have already built a solid foundation and are really ready to expand their business in big ways. We’ll look across a range of industries for companies with new technologies and proven track records in their fields. Our investments to date include SurveyMonkey, Lending Club and Renaissance Learning—with many more to come.

But it’s not just a monetary investment for us. The most important—and distinctive—feature of Google Capital is how we work with our portfolio companies. Over the past 15 years, Google has built a strong business, and that’s mostly thanks to the great people who work here. Our portfolio companies have abundant access to the talent, passion and strategic expertise of some of Google’s technology and product leaders. While many investors may contribute money and advice to the companies they support, Google Capital is going beyond that and tapping into our greatest assets: our people. They help us succeed, and we believe they can help our portfolio companies do the same.

It’s still very early, and investing is a long road. We’re excited about what we’re doing today—but even more excited to see what happens in the years to come.

Tuesday, February 18, 2014

Winning On Tuesday

Work

Having discovered Microsoft PIX (six years too late), I found it would crash out when I gave it my full engine. It turns out PIX is not the most stable creature at the best of times, but it took me seven hours of coding to find that out. In the process of 'making my code work with PIX' I cleaned up a huge number of small DirectX errors which were there since October 2013 but silently failed in the driver. With debug modes set to MAX I could see all the dirty laundry of the app. It's now much cleaner.

I still cannot use PIX for the engine, but I can use it for all my smaller prototypes so it will still be a valuable tool when writing new techniques. I am also holding out for a version of NSIGHT that actually works too, but that will probably be down the road (or never) as I am not sure they care much for DX9 support these days.

On realizing I was not going to see my scene through PIX, I created my own depth scene debug view so I could see the hidden depth information from my occluders, and would you believe, I discovered the 'why'..


Those cyan colored shapes in the top right are 'supposed' to be my building occluders. Yes that's right, the ones that are rotated all over the place with scaling gone screwy.  Seems I have found the reason why some of my levels work and some show no occlusion activity.  You cannot hide behind a building that just rotated underground ;)

It's 3AM again so no time to go in and fix it, but the great news is that the fix should be very trivial and then I will have objects that don't suddenly disappear, a constant occlusion rate no matter how many objects in the scene and an excuse to move onto adding terrain to the depth map and solving the polygons for the physics terrain floor (related work).

Between compiles, I also managed to integrate a new installer script for the beta, which means your BIN and DBO files will be removed when you uninstall, temp files removed, and your precious levels and content retained.  Just as the installer should be.

I also fixed a bug which caused levels with more than 100 unique entities to crash out, and added physics code to prevent objects from penetrating any surface which should please the placer of small keys :)

A special thanks goes out to DVADER (and R4D5) for helping fix the Reloaded scaling issue!  R4D5 ran off into the scene before I could thank him, he's hiding somewhere...


Also received my new UPS today which should provide 10 minutes of back-up power in case of total power failure (allowing me to save my work). Turns out it needs C14 cables to connect my devices, so not quite ready to set that up. With the new one in place, I can move the smaller UPS to the main router at the other end of the house which should provide power and surge protection for my telephone and broadband too, even when grid power goes. Neat!

Monday, February 17, 2014

This Is Not A PR Blog Jim, It's a Dev Blog - Look Away

Work

If you want good news, best not to read any more of this blog and wait for later in the week :) If you are brave and want to learn about the real world of software development, read on..

I integrated my new occluder into the Reloaded engine (you remember, the one that did amazingly well in my simple one-occluder prototype). Only took a few minutes really, and the results were rather poor. My un-occluded scene of 50 buildings rendered flat out at 194 fps. This was my benchmark. I then added the code to submit all buildings as both occluders and occludees and ran the occlusion system on the exact same scene view. My frame rate dropped to 101 fps and my polygon count jumped by 90,000 polygons. Oh woot!

I did some investigative tests and it seems when I removed the 'GetRenderTargetData and System Surface Lock' commands, it jumped back up to 186 fps. My conclusion was that the 'slight' GPU stall I was anticipating is in fact a huge stall when you are running at frame rates in the hundreds.  

When I put it back in and reduced the depth buffer rendering to one draw call, it was slightly better than the worst score at 129 fps. This means the GPU lock is the biggest spender and the fifty depth-scene render calls are the next cost to bear.  As I could not avoid the big stall, I worked to combine all 50 building geometries into one large vertex buffer for a single draw call, but this only yielded a marginally better frame rate of 105 fps.

The good news is that this overhead will not get any bigger as the scene grows in size (with a few optimizations I have in mind). The cost is in the depth scene render and the GPU stall, and those won't get any bigger which means I can throw thousands of objects at it and the cost will be the same (or near as dammit) as ten objects.  I cannot rely on that assumption until I have field tested this new occlusion system with some other machines (and other users).


I then created a second level which had 25 barrels hiding behind a tent, and the system correctly occluded all the barrels when I stood behind it, and the draw call count dropped respectively.

The good news ended pretty soon though, as the barrels exhibited visual popping because they were trying to occlude themselves (not a good idea to have an object that is both occluder and occludee), and the last thing I wanted to see was popping (the whole reason this new occlusion system was created).

More Work To Come

If you read this far, I will now treat you to the very good news.  I half anticipated all the issues above, and despite the laundry list of woes I am quite pleased with how the HZB is able to work out occlusion and distribute that through the engine.  My next plan is to create a 'preferred occluder' system which only selects near 'large' objects as occluders, which will stop that annoying popping and speed up the depth render stage.  I can also speed up the management of the returned results by allocating some fixed memory instead of creating and deleting the allocation every cycle, and then there is perhaps my MOST AMBITIOUS plan of all: to eliminate the GPU stall.
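The 'preferred occluder' filter can be sketched as a simple predicate; the thresholds below are invented for illustration, and the real system would presumably derive them from the scene or the FPE data:

```cpp
#include <cassert>

// Sketch of the planned 'preferred occluder' filter: an object is submitted
// to the depth render only if it is both close to the camera and physically
// large. Small props (like the barrels) never become occluders, so they
// cannot occlude themselves and pop. Thresholds are illustrative only.
bool isPreferredOccluder(float distance, float boundRadius,
                         float maxDist = 100.0f, float minRadius = 5.0f)
{
    return distance < maxDist && boundRadius > minRadius;
}
```

Filtering this way also shrinks the depth render, since only a handful of nearby large meshes are ever drawn into the occlusion buffer.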

I've hunted around but could not find any clever white paper which solves this issue; most prefer to put you onto DX10 and DX11, which solve it with the much friendlier stream-out operation.  It seems DX9 coders are left to fend for themselves with this problem, and my idea is utterly radical and perhaps just as slow as the current GPU lock.  You are welcome to stop me if you think the idea is mad..

Instead of doing a 'GetRenderTargetData' command to get the visibility results back into CPU memory so I can switch objects on and off, I redirect the occlusion visibility texture (which contains little 1's and 0's for each object represented in the scene) and pass it to my entity shader as a new texture. I then use what is called a vertex texture fetch to grab the visibility state from the texture produced by the occlusion system. If the value is 'not visible', I simply adjust the vertex position to 'behind the camera', which forces the whole object to skip sending its polygons to the fragment shader. The draw call for the object would still be made, but the shader would quickly reject all its polygons and move on.  I am not sure if a draw call that renders no polygons is a freebie or still a performance drain, but that plus the vertex texture fetch are the two problem areas I anticipate.  If my fears are unjustified and the performance hit is negligible, I will have created an entirely GPU-only occlusion method in DX9.  The reason I am confident is that the current method produces a GPU stall that effectively halves my frame rate, so the benefit of a non-stalling occlusion pipeline will be apparent the moment I finish coding it and run a test (fingers crossed).
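A CPU-side simulation of the trick makes the logic concrete; the real implementation would be an HLSL vertex texture fetch in the entity's vertex shader, but the decision it makes per vertex is just this (types and names invented for illustration):

```cpp
#include <cassert>

// Simulation of the proposed GPU-only rejection: look up the object's bit
// in the visibility texture and, if it is 0, push the vertex far behind the
// camera so the rasterizer clips the whole object away while the draw call
// still fires. The byte array stands in for the 1's-and-0's texture.
struct Vec3
{
    float x, y, z;
};

Vec3 applyVisibility(Vec3 v, const unsigned char* visibilityTexture, int objectId)
{
    if (visibilityTexture[objectId] == 0)
        return Vec3{0.0f, 0.0f, -10000.0f}; // behind the camera: clipped away
    return v;                               // visible: vertex passes unchanged
}
```

The open questions from the post map directly onto this sketch: whether a draw call whose vertices are all clipped is nearly free, and what the vertex texture fetch itself costs on DX9-class hardware.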

It's another 3:30AM finish, and no sight of my normal 9-5 day so far, but if we can crack this occlusion question and have it perform splendidly in the main Reloaded engine, I can draw a clean line under it and move on with confidence.  No sense moving on until then (unless it starts to gobble up weeks!).  As a fallback plan, I emailed a middleware company that provides one of the advanced occlusion methods used by Unity (apparently) to ask them the price for including their tech. No reply yet, but from experience the answer is usually 'you cannot use it in a game maker' or a number with many zeros on the end.

Still, the occlusion system seems to be holding up very well, and if the frame rate never drops below 80fps, even with the current stalling system, it is still a benefit over an engine that would otherwise slow down as you start hiding objects around your scene.  Plenty more occlusion news to come, watch this space!

Sunday, February 16, 2014

Occlusion Prototype Done

Play

Wanted to go to bed three hours ago, but could not let go of this damnable algorithm. It's now 3:15AM, my eyes and head are throbbing, but I finally bashed the HZB occlusion system into a working state and my little box prototype runs with full 'super fast' occlusion.

I created a world with 400 boxes on the floor, and one large central box which I set as an occluder. Without occlusion my frame rate was 160 fps, due to the full textured shader I applied to all of them.  When I switched on automatic occlusion, it jumped up to 350 fps with no change in visuals.  Of course when I ran up to the wall to force full occlusion, it jumped to 1000 fps. Thanks muchly Mr HZB!

I have yet to see how the system handles more objects and more occluders, but as the system is predominantly a GPU operation, scaling up should be almost free on higher-end graphics cards. Time will tell if this approach to occlusion works on the lower-end cards.

I've probably binned half of Monday with my 'burnout' weekend, but I think it was worth it, and now that I have the theory running well in a prototype, the next step will be to transfer the few occlusion commands to the main engine and see what kind of 'before' and 'after' frame rates I get.  Fingers crossed.

NOTE: I also dealt with a few emails over the weekend, including one that helped me fix the scaling bug that caused entities to lose their SCALE setting when you load them in or save an executable.  A five minute fix, but testing with the R4D5 model was lots of fun!!

Saturday, 15 February 2014

HOQ say hello to HZB

Play

For my leisurely weekend, I decided to tackle the problem of occlusion once and for all, and to that end spent the first six hours reading about every technique ever used in computer games, from CPU software rendering through to insanely ambitious prediction systems for reducing hardware query checks. The reason I am not satisfied with the current hardware query system in Reloaded is that the object 'popping' you see when you run around corners is not just 'occasional' but pretty much in your face!  No-one has come up with a decent solution to the occlusion query popping, as it is an inevitable result of having the data for the occlusion cull one frame behind the rest of the visual rendering. Only coherence systems attempt to solve this, and it only amounts to guesswork. I even pioneered some non-white-paper thoughts about creating multiple queries as a product of the player running along several dead-reckoning vectors, but I quickly realized the cost of those queries would be insane, and there are a lot of articles out there which feverishly attempt to reduce query count, so why would I deliberately make multiples of them!  Here are my brainstorming notes from the day:


During my research, I stumbled onto a technique called Hierarchical Z-Buffers (used in several top end games, you know) which basically renders a smaller version of the scene into a depth buffer and squashes it down into a real-time mipmap. This is then used in concert with any object whose occlusion state you wish to test, using some pretty wicked GPU-based tests. The wicked part is how they get the DX9 test data back into system memory ;)
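The 'squashing down' step can be sketched in a few lines. This is a hypothetical CPU-side Python model of the idea, not the engine's actual GPU code: each mip level halves the resolution and keeps the farthest (maximum) depth of each 2x2 block, so a later occlusion test can consult a single coarse texel conservatively.

```python
# Hypothetical sketch of building a Hierarchical Z-Buffer mip chain:
# each level halves the resolution and keeps the FARTHEST (max) depth
# of each 2x2 block of the previous level.

def build_hzb(depth, size):
    """depth: flat list of size*size depth values (0 = near, 1 = far).
    Returns the list of mip levels, finest first."""
    levels = [depth]
    while size > 1:
        half = size // 2
        prev = levels[-1]
        nxt = []
        for y in range(half):
            for x in range(half):
                block = [prev[(2 * y + dy) * size + (2 * x + dx)]
                         for dy in (0, 1) for dx in (0, 1)]
                nxt.append(max(block))  # farthest depth survives
        levels.append(nxt)
        size = half
    return levels

# A 2x2 depth buffer collapses to its farthest value
mips = build_hzb([0.2, 0.4, 0.9, 0.1], 2)
print(mips[-1])  # [0.9]
```

Taking the max rather than the average is what keeps the test conservative: an object is only ever culled when it is behind the farthest occluder depth covering it.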

I did not wish to go this route originally as adding another pre-render to the scene seemed like spending more GPU time than saving it.  It's now nearly 3AM and I want to eat and watch a movie before I sleep, and I am currently at the point where most of the C++ code is in a new set of DBP commands and I just need to establish some occluder candidates, create an object bounds database and then set the object visibility flags.  These last three steps should allow me to choose key occluders from any Reloaded scene, then sit back and watch as the GPU performs full occlusion testing with a single draw call and instantly rejects thousands of arbitrary objects from the scene :)  I will know more come Monday afternoon, but the early results are exciting!
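Those last three steps boil down to a per-object depth comparison against the HZB. Here is a rough Python illustration of that test under assumed names (the real version would run on the GPU against the mip chain, and `object_visible` and its parameters are inventions for this sketch):

```python
# Illustrative per-object visibility test against one HZB mip level:
# compare the object's nearest depth with the farthest occluder depth
# stored in the texels covering its screen-space bounds.

def object_visible(object_near_depth, hzb_level, size, bounds):
    """bounds = (x0, y0, x1, y1), a texel rectangle on this mip level.
    Returns True unless the object is certainly behind all occluders."""
    x0, y0, x1, y1 = bounds
    farthest_occluder = max(
        hzb_level[y * size + x]
        for y in range(y0, y1 + 1)
        for x in range(x0, x1 + 1)
    )
    # Visible if the object's nearest point is in front of the farthest
    # stored depth; otherwise every pixel of it is occluded.
    return object_near_depth < farthest_occluder

# 2x2 mip level with occluders at depth 0.5 everywhere
level = [0.5, 0.5, 0.5, 0.5]
print(object_visible(0.3, level, 2, (0, 0, 1, 1)))  # True  (in front)
print(object_visible(0.8, level, 2, (0, 0, 1, 1)))  # False (behind)
```

With an object bounds database in place, one pass of this test per object is what would finally set the visibility flags for the whole scene.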

I have recently discovered how to use Microsoft PIX properly (now replaced with the VS graphics debugger) and have been having a whale of a time watching the scene build up one draw call at a time. I plan to use this tool A LOT now that I know where all the buttons are. For those who just fell over in shock, I do admit that I was one of those coders who mostly 'guessed' where the rendering problems were in my engine. The upshot of course is that these days my guesses are pretty accurate!

Friday, 14 February 2014

Power Comes On - Power Goes Off

Work

Sorry for the intermittent and less than relevant blogging; it's pretty disconcerting to know your workplace can disappear at any moment :)  Made some progress speeding up the AI system, fixed some erroneous crashing on larger character levels and managed to trace a memory leak directly connected to the DarkAI module.  The Memory Management Module (MMM) is currently underway thanks to Dave Ravey, which means the smart money is on me switching to QUAD rendering of distant instance objects to get some serious performance back. I can then return to memory issues on Monday with my new 'rough and ready' memory reporting tool and see where the leak is, plug it, and more importantly find out where 700MB is going for a blank level :)  Progress is also being made with the 400 Reloaded-ready models that form part of your Gold Pledge, so watch this space for news on that, and there is some nice progress in Construction Kit country (thanks to Simon), so despite the lack of electrons, we move ever forward. As a final note, I have also made contact with a few users who have been experiencing extreme problems with their larger levels, in an attempt to identify, isolate and correct the root cause of their pain. The next update WILL be good, watch this space!

Thursday, 13 February 2014

Storms = Power Cuts = No Coding

Work

Wednesday afternoon saw fallen trees and power cuts. Thursday looked like full recovery, but come 7PM, another power cut. It's back up now, so I thought I would post a quick blog before it went again.  Got some good work done on speeding up the AI part of the engine, firmly focused on making sure some of the more aggressive FPM levels run at a cool 60fps on my machine. This should produce some nice performance improvements for everyone else and ensure we hit the mark when it comes to making user levels playable and fun.  I will report more when I can be sure my blog post does not get wiped out mid-type ;)

Wednesday, 12 February 2014

Kicking off the 2014 Google Science Fair: It’s your turn to change the world

What if you could turn one of your passions into something that could change the world? That's just what thousands of teens have done since the first Google Science Fair in 2011. These students have tackled some of today’s greatest challenges, like an anti-flu medicine, more effective ways to beat cancer, an exoskeletal glove, a battery-free flashlight, banana bioplastics and more efficient ways of farming.

Now it’s time to do it again: we're calling for students ages 13-18 to submit their brilliant ideas for the fourth annual Google Science Fair, in partnership with Virgin Galactic, Scientific American, LEGO Education and National Geographic. All you need to participate is curiosity and an Internet connection. Project submissions are due May 12, and the winners will be announced at the finalist event at Google headquarters in Mountain View, Calif., on September 22.

In addition to satisfying your curious mind, your project can also win you some pretty cool prizes. This year’s grand prize winner will receive the chance to join the Virgin Galactic team at Spaceport America in New Mexico as they prepare for space flight and be among the first to welcome the astronauts back to Earth, a 10-day trip to the Galapagos Islands aboard the National Geographic Endeavour, and a full year’s digital access to Scientific American magazine for their school. Age category winners will have a choice between going behind the scenes at the LEGO factory in Billund, Denmark or an amazing experience at either a Google office or National Geographic.

For the 2014 competition, we’ll also give two new awards to celebrate even more talented young scientists:
  • The Computer Science Award will be given to a project that champions innovation and excellence in the field of computer science.
  • Local Award Winners—students whose projects have attempted to address an issue relevant to their community—will be honored in select locations globally.
And the Scientific American Science In Action award will once again honor a project that addresses a health, resource or environmental challenge. The winner will receive a year’s mentoring from Scientific American and a $50,000 grant toward their project.

Stay updated throughout the competition on our Google+ page, get inspired by participating in virtual field trips and ask esteemed scientists questions in our Hangout on Air series. If you need help jump-starting your project, try out the Idea Springboard for inspiration.

What do you love? What are you good at? What problem have you always dreamed of solving? Get started with your project today—it’s your turn to change the world.

Tuesday, 11 February 2014

What Lee's List Looks Like

Work

Here is a snapshot of some of the items on my list that I am working through for you. If you feel I have missed anything critical out, please let me know!

A FEW ITEMS ON MY LIST FOR THE NEXT UPDATE:

Frame rate. Better frame rates if possible.
DAVE to write MEMORY MANAGER and integrate into DBP/Compiler source
Physics. Player physics seem to be all over the place on different PCs.
Shooting. Enemy shooting has to be sorted once and for all.
Full compliance with Win 7 and 8 (UAC)
Item limit. Placing an item increases the memory bar, but deleting it does not release the memory
Map blink/screen blink. Happens occasionally; waiting until the blue screen appears when loading the editor seems to fix it
Jumping/flying. Jumping and gravity are spot on; the only issue is that when the game lags the player seems to be flying
Lag/low FPS when going into buildings
Enemies walk into water. They still seem to fall between the land and the water, then shoot from under the water
AI won't die. Likely an FPS issue, as once a few AI are taken out the rest behave correctly
FPS. It shouldn't be limited to the screen refresh rate; that seems to cause more problems
Selecting entities on top of other entities not working (cannot select key sitting on table)
Ability to range select within the editor to move/delete many entities at once
Ability to turn off skyboxes in SETUP.INI
Ability to specify terrain size in SETUP.INI and have terrain created to suit that size
Dynamic entities fall through the terrain
In most cases the AI has no idea I am there (see HOOD1 FPM demo)
You cannot place a character on top of a building
Introduce new QUAD Reducer (load time only - no modifications afterwards)
Script commands for (only if time moving, visibility, sound loop), animate, spawn
Reduce the amount of memory used in core engine and as levels are added to
Raycast terrain for enemies
Shoot raycast use geometry, not bullet polys
Shoot through transparent objects
Stop enemies from going NEAR water boundary (can be a sharp fall off and enemy drops)
Stop enemies from climbing steep hills
Blood splats on enemies sometimes show as dust
Stop enemies from running on the spot / strafing too much
Enemies to patrol if they hear a sound from a distance
In combat, remove slow inertia
Collect dropped weapons from dead enemies
Enemies to look around when hearing a shot from distance or close friend killed
Enemy to react a little when shot but not dead
Player water splashes
Big entity splashes
Slider for sky scrolling speed
Reset objects in test mode
Cloud overlap issue raised by Rolfy (email sent to Lee)
Old levels did not run due to old .dbo and .bin files
DVADER Issues - placing items on table / in buildings
Shooting enemies does not always hurt/kill them
Play loop sound for atmospheric start - and add script to player start to trigger it
Switch which activates(opens) a door
Stand on pad which activates a stone block to slide to one side (move entity command)


Monday, 10 February 2014

Solve for X 2014: Celebrating and accelerating moonshot pioneers

Last week, Solve for X gathered 60 entrepreneurs and scientists from around the world to discuss 18 moonshot proposals—world-changing projects that work to address a huge problem, suggest a radical solution and use some form of breakthrough technology to make it work.
Solve for X attendee Sara Menker shares ideas and critique from her group’s brainstorming session.

Ira Glass opened the summit with a talk on climate change entitled “Ira Glass tries to boss you into a moonshot.” Ira mixed data, devastating personal experiences, potential technical solutions and insightful ways to think about the issue and made an excellent case that generalists should consider shifting focus to climate change.

Following Ira’s talk, we heard proposals on a wide variety of topics, including: Leslie Dewan’s proposal for generating power from nuclear waste, building on technology ideas abandoned in the 1950s; Lonnie Johnson’s JTEC invention, which would allow us to convert heat directly into electricity; Howard Shapiro’s global collaboration that uses some of the newest and oldest technologies in agriculture to end stunting for the rural poor; Julia Greer’s exploration of the relationship between a material's strength and its weight through 3D architected nanomaterials; Yael Hanein’s artificial solar retina, which has the potential to cure blindness; Erez Livneh’s virus decoys, which could slow and eliminate disease; and Asel Sartbeava’s proposal for thermally stable vaccines that remove the need for a refrigeration cold chain during transport.
Ido Bachelet explains how certain surgical interventions could be accomplished through nanorobots.

During a “show and tell” session, participants from previous Solve for X events shared updates on their moonshots. Omri Amirav-Drory showed us plants that glow when activated; Dr. Keith Black brought delicious Dr. Black’s Brain Bars; Karen Gleason brought solar cells printed on paper; Andras Forgacs brought the first “steak chips” that Modern Meadow is beta-“tasting.”
Suchitra Sebastian’s demonstration during her proposal on a new generation of superconductors.

In an effort to include more people in the Solve for X experience, this year we ran 10 experiments to bring our exploration session format into other organizations’ events, including TEDx Beacon Street, SXSW and Tribeca Film Festival; we even held an event on Capitol Hill. FabLab, ReWork and AAAS recently became collaborators, joining Singularity University, XPrize, TED and others. We hope we’ll run into you at an event in your area.

To learn more, watch our video “On Taking Moonshots” in which several moonshot pioneers talk about the mindset needed to do this kind of breakthrough work. You can find all 18 of the proposals from the 2014 Summit, as well as 200+ moonshots posted by other pioneers, at SolveforX.com. You can also submit moonshots—your own or others that fit the tech moonshot proposal format. Join our #TechMoonshots conversations on Google+ and Twitter.

Back From Vacation

Work

Just got back into the 'hot' seat and going through my emails and post histories, and I am in full agreement with the posts in the forum that the next few releases should be heavy on core stuff. Personally, I would like to focus entirely on core to the exclusion of all else, but development life is rarely that pure 

I'm downloading the HOOD demo now, posted on the Reloaded forum, to get a feel for the feedback first hand, and having read the thread you will get no argument from me on the items mentioned. More performance and proper AI seem to be high on your list, and mine too, so I'll be looking to push these as the highest priorities in the development weeks to come. I have also extended the testing period for the next update, so you only get it once a few more eyeballs have tested the release over a few more days. I have a development meeting Wednesday to discuss the exact order of importance in which core issues will be addressed for the next release, and in the meantime I will be knocking on the head the 'easy to fix silly ones' that crept into the beta since the last version.

I would like to post on the forum more, and I do jump in once a week to reply to every post on the first page (when I am not skiing), but I feel going to the forum every day might detract from actual coding and I usually end up repeating myself after a while.

Hopefully this blog provides enough information on the daily stuff and a 'once a week' appearance in the forum will fill in any gaps from me. If you have any recommendations how I could further reduce the apparent communication gap without adversely affecting my coding hours, please get in touch!

I came back from my vacation with a few more ideas how I could get more speed from the engine, so unless I am molested by higher priority items, I will be spending today and Tuesday trying those out (along with fixing those silly items I mentioned).

We also had the foresight to get some parallel developments coded in my absence, so I have returned to a new installer revision which will cleanly uninstall all your installed files from the next version onwards, but leave your user content and levels intact.

We've also rewritten the LUA scripting engine module, which now gives us full control of the underlying source code, not to mention adding some nifty extras like multi-scope LUA scripting which will eventually allow you to create scripts for HUD and interface displays, plus other independent logic within the game (more on that in the future). Other work has been done for the Construction Kit and Object Importer, but I am strongly inclined to defer these releases as there is an integration cost that would distract me from the singular pursuit of core work.

Time for me to open up the engine now and start working on performance and the sillier core issues in there. If you feel I should be working on something else until Wednesday (as it's very likely I have forgotten stuff since the start of this post), please let me know by posting here so we can get a general overview of how you would prioritize my next two days. I will post again after the Wednesday meeting to let you know what priorities were decided (or you can read the blog post on that day, as it will be repeated there as well).


For those with some patience left, thanks for sticking with us. For those who just ran out of it, welcome to the world of game engine development and I'll see what I can do to restore your faith. To take the guesswork out of it, please post here by Tuesday PM, one item that you would like to see in the next update and I'll add it to the meeting on Wednesday.

Thursday, 6 February 2014

Chromebox, now for simpler and better meetings

The best meetings are face-to-face—we can brainstorm openly, collaborate closely and make faster decisions. But these days, we often connect with each other from far-flung locations, coordinating time zones and dialing into conference calls from our phones. Meetings need to catch up with the way we work—they need to be face-to-face, easier to join, and available from anywhere and any device. Starting today, they can be: Any company can upgrade their meeting rooms with a new Chromebox, built on the Chrome principles of speed, simplicity and security.

Chromebox for meetings brings together Google+ Hangouts and Google Apps in an easy-to-manage Chromebox, making it simpler for any company to have high-definition video meetings. Here are a few highlights:

  • Instant meeting room. Chromebox for meetings comes with a blazing-fast Intel Core i7-based Chromebox, a high-definition camera, a combined microphone and speaker unit and a remote control. Set up your entire room in minutes and easily manage all meeting rooms from a web-based management console. All you need is the display in your room, and you’re good to go.
  • Simpler and faster meetings. Walk into the room, click the remote once and you’re instantly in the meeting. No more complex dial-in codes, passcodes or leader PINs. Share your laptop screen wirelessly, no need for any cords and adaptors. Integration with Google Apps makes it easy to invite others and add rooms to video meetings, directly from Google Calendar.
  • Meetings with anyone, anywhere. Up to 15 participants can join the video meeting from other conference rooms, their laptops, tablets or smartphones. Need to meet with a customer who doesn’t use Chromebox for meetings? That’s easy too—all they need is a Gmail account. You can also connect to rooms that have traditional video conferencing systems using a new tool from Vidyo, and participants who prefer phones can join your meeting with a conference call number from UberConference.
Chromebox for meetings is available in the U.S. today starting at $999, which includes the ASUS Chromebox and everything you need to get going. That means for the same price that companies have typically paid for one meeting room, they'll be able to outfit 10 rooms—or more. CDW and SYNNEX will help bring Chromebox for meetings to customers and resellers, and Chromeboxes from HP and Dell will be available for meetings in the coming months. Later this year, we plan to launch in Australia, Canada, France, Japan, New Zealand, Spain and the U.K.

Companies like Eventbrite, Gilt, oDesk and Woolworths have been testing Chromebox for meetings, and have told us that they love the simple setup, the ease of use, and being able to see their colleagues in other offices. More importantly, the low price will enable them to extend these benefits to even more employees, rooms and offices. Find out how Chromebox for meetings can help you and your coworkers see eye-to-eye. Happy meetings, everyone!

Wednesday, 5 February 2014

Art, made with code: calling all future interactive artists

In between creating masterpieces like the Sistine Chapel and “Madonna and Child,” Michelangelo dissected cadavers in the hopes of understanding how the human body worked so he could paint it accurately. He’s not the only one: there has long been a connection between science and art. And it’s true today more than ever, as modern artists use technology for inspiration, inventing ways to give life to code, letting it spill from the screen and onto the canvas. We call this “DevArt,” and this summer, we’re teaming up with the Barbican in London and their Digital Revolution exhibition to celebrate DevArt in an interactive gallery. And we want you to be a part of it.

As part of this exhibition, we’re looking for the next up-and-coming developer artist. This is your opportunity to express your creativity, and to have your work featured in the Barbican and seen by millions of people around the world. To throw your hat in the ring, build a project on the DevArt site and show us what you would create. From there, we’ll pick one creator whose work will sit alongside three of the world’s finest interactive artists who are also creating installations for DevArt: Karsten Schmidt, Zach Lieberman, and the duo Varvara Guljajeva and Mar Canet.


The exhibition will open at the Barbican this summer. Until then, visit g.co/devart, where you can submit your own project. If you’re not the creative coding type, visit the site to see some incredible art and follow the artists’ creative process—from concept and early sketches to the finished piece—on their respective Project Pages. You'll get a rare look into artists’ ways of working with modern technologies (including some Google products), and maybe even get inspired to create something yourself.

If you had the chance to make your mark in today’s art world with technology as your canvas, what would you create? We’d like you to show us.

Tuesday, 4 February 2014

It’s time to Doodle 4 Google! How would you make the world a better place?

Before there was an airplane, there were doodles of flying machines, and before there was a submarine, there were doodles of underwater sea explorers. Ideas big and small, practical and playful, thought-provoking and smile-inducing, have started out as doodles. And we’re ready for more!

Doodle 4 Google is the chance for young artists to think and dream big. Our theme this year, "If I could invent one thing to make the world a better place…” is all about curiosity, possibility and imagination.
Creating the best doodle comes with major perks: this year—for the first time ever—the winner of the competition will become an honorary Google Doodler for a day and animate his or her Doodle for the homepage with the Doodle team. The winning Doodle will then be featured on the Google homepage for a day for millions to see. If that’s not cool enough, the winner will also receive a $30,000 college scholarship and a $50,000 Google for Education technology grant for his or her school.

If you feel like your young artist may need a little nudge to get their creative juices flowing, we’re partnering with Discovery Education to offer videos and activities for teachers and parents as well as a virtual field trip to Google’s headquarters. We’re also offering interactive “Meet the Doodler” Connected Classrooms sessions where kids can meet Google Doodlers, learn about their process from idea to a Doodle, and ask questions along the way.

Mark your calendar to send in your kids’ submissions by March 20. Judging starts with Googlers and a panel of guest judges, including astronaut Ron Garan, author of the Percy Jackson series Rick Riordan, Google[x] Captain of Moonshots Astro Teller, directors of The LEGO Movie Chris and Phil, President of RISD Rosanne Somerson, robotics designer Lee Magpili, and authors Lemony Snicket and Mary Pope Osborne.

On April 29, we’ll announce the 50 state finalists and open up a public vote to select the national winner. These 50 kids will all get to visit Google’s headquarters in Mountain View, Calif. on May 21 for a day full of creative workshops and other fun activities—and the winning (animated!) doodle will be revealed on google.com in June.

Participating is easier than ever. Teachers and parents can download entry forms on our Doodle 4 Google site. Doodles can be uploaded digitally to our site or mailed in. We encourage full classrooms to participate too! There’s no limit to the number of doodles from any one school or family... Just remember, only one doodle per student.

That’s all I’ve got. Now get to doodling!

Monday, 3 February 2014

Supporting computer science education with the 2014 RISE Awards

"We need more kids falling in love with science and math.” That's what Larry Page said at last year's I/O, and it's a feeling shared by all of us. We want to inspire young people around the world not just to use technology, but to create it. Unfortunately, many kids don’t have access to either the education or encouragement they need to pursue computer science. So five years ago we created the Google RISE (Roots in Science and Engineering) Awards, which provide funding to organizations around the world that engage girls and underrepresented students in extracurricular computer science programs.

This year, the RISE Awards are providing $1.5 million to 42 organizations in 19 countries that provide students with the resources they need to succeed in the field. For example, Generating Genius in the U.K. provides after-school computer science programs and mentoring to prepare high-achieving students from disadvantaged communities for admission into top universities. Another awardee, North Carolina-based STARS Computer Corps, helps schools in low-income communities gain access to computing resources for their students to use. Visit our site for a full list of our RISE Award recipients.
Created in 2007, the Children’s University Foundation has been carrying out educational programs for more than 20,000 children aged 6-13. Click on the photo to learn more about this and other RISE Awardees.

This year we’re also expanding the program with the RISE Partnership Awards. These awards aim to encourage collaboration across organizations in pursuit of a shared goal of increasing global participation in computer science. For example, more than 5,000 girls in sub-Saharan Africa will learn computer science as a result of a partnership between the Harlem based program ELITE and the WAAW Foundation in Nigeria.

We’re proud to help these organizations inspire the next generation of computer scientists.

Shedding some light on Foreign Intelligence Surveillance Act (FISA) requests

We believe the public deserves to know the full extent to which governments request user information from Google. That’s why for the past four years we’ve shared and continuously expanded and updated information about government requests for user information in our Transparency Report.

Until now, the U.S. Department of Justice (DoJ) opposed our efforts to publish statistics specifically about Foreign Intelligence Surveillance Act (FISA) requests. Under FISA, the government may apply for orders from a special FISA Court to require U.S. companies to hand over users’ personal information and the content of their communications. Although FISA was passed by elected representatives and is available for anyone to read, the way the law is used is typically kept secret. Last summer’s revelations about government surveillance remind us of the challenges that secrecy can present to a democracy that relies on public debate.

Last year we filed a lawsuit asking the FISA Court to let us disclose the number of FISA requests we may receive and how many users/accounts they include. We’d previously secured permission to publish information about National Security Letters, and FISA requests were the only remaining type of demands excluded from our report.

Today, for the first time, our report on government requests for user information encompasses all of the requests we receive, subject only to delays imposed by the DoJ regarding how quickly we can include certain requests in our statistics.
Publishing these numbers is a step in the right direction, and speaks to the principles for reform that we announced with other companies last December. But we still believe more transparency is needed so everyone can better understand how surveillance laws work and decide whether or not they serve the public interest. Specifically, we want to disclose the precise numbers and types of requests we receive, as well as the number of users they affect in a timely way. That’s why we need Congress to go another step further and pass legislation (PDF) that will enable us to say more.

You have the right to know how laws affect the security of your information online. We’ll keep fighting for your ability to exercise that right by pushing for greater transparency around the world.