Spring and summer are great seasons for rendering programmers: they let us observe all those lighting effects so much better implemented by Mother Nature ;) …not to mention holidays, “far niente”, French pastis (in moderation) and pétanque under the sun.
More seriously, don’t worry, new stuff is coming to this blog. A little teaser: EASTL, Light Pre-Pass and FXAA iPhone benchmarks and personal reviews… Stay tuned.
Need For Speed Undercover, Shift 2 Unleashed, UFC Undisputed, Ridge Racer Unbounded, Tony Hawk Underground, Sonic Unleashed, Tomb Raider Underworld, Mortal Kombat: Unchained, Rock Band Unplugged
I’d like to share with you some thoughts I’ve been having about unit test execution in my ‘Z’ engine. By default, I chose to have the tests run each time my application starts (except in my “master” build target). Of course, I can skip all tests or run them standalone (i.e. without running the full game) using command-line arguments.
This default behavior was chosen because I want to force myself to run them regularly on every tested platform (Windows, Mac, iOS) and avoid the classic workflow: tests launched “sometimes”, roughly every two or three months after some deadline or milestone is reached (i.e. big features implemented; I don’t have any ship date to meet, that’s the benefit of an entirely personal project), discovering that many of them are broken on some or all platforms, and wasting time debugging and fixing them all. (Also, at home, I have no always-on continuous-integration server that could run them periodically for me.)
Currently I don’t have that many tests to run, so the execution is not very expensive, but I’m thinking about a way to have each test executed only when required (i.e. when there was a code modification that could affect it), thanks to the __DATE__ & __TIME__ standard C++ preprocessor macros.
The pseudo code inside a zFooClassTest.cpp implementation (not representative):

// __DATE__ and __TIME__ expand to the build time of this translation unit
zTime buildTime = zTime::ParseDateTime(__DATE__, __TIME__); // convert the strings to my datetime format

// retrieve the last launch of this test (from a local file, a database, etc.)
zTime lastLaunchedTime = zUTEST::RetrieveLastLaunch(testId); // testId could be *this, a string name, a hash code or whatever test identification

if (buildTime > lastLaunchedTime)
{
    // do the test execution, then store the new launch time
}
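ParseDateTime is the ‘Z’ engine’s own function, but as a minimal sketch (the name and signature below are illustrative, not the actual zTime API), the conversion can be done with the C standard library, since __DATE__ and __TIME__ have fixed formats:

```cpp
#include <cstdio>
#include <cstring>
#include <ctime>

// Sketch: convert __DATE__ ("Mmm dd yyyy") and __TIME__ ("hh:mm:ss")
// into a comparable std::time_t.
std::time_t ParseBuildDateTime(const char* date, const char* time)
{
    static const char months[] = "JanFebMarAprMayJunJulAugSepOctNovDec";
    char monthStr[4] = {};
    std::tm tm = {};

    // __DATE__ looks like "Apr  1 2011" (the day is space-padded)
    std::sscanf(date, "%3s %d %d", monthStr, &tm.tm_mday, &tm.tm_year);
    tm.tm_mon  = static_cast<int>((std::strstr(months, monthStr) - months) / 3);
    tm.tm_year -= 1900; // tm_year counts from 1900

    // __TIME__ looks like "23:59:01"
    std::sscanf(time, "%d:%d:%d", &tm.tm_hour, &tm.tm_min, &tm.tm_sec);

    tm.tm_isdst = -1; // let mktime figure out daylight saving
    return std::mktime(&tm);
}
```

The resulting time_t values compare with plain operators, which is all the “buildTime > lastLaunchedTime” check above needs.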
This is really simple. The major issue to solve is: how can I ensure my zFooClassTest.cpp is rebuilt (and thus “buildTime” refreshed) each time a modification occurs in the tested zFooClass or one of its dependencies?
- Adding a #include “zFooClass.cpp” directly in zFooClassTest.cpp. You get a kind of uber-cpp behavior, but you also have to exclude the zFooClass.cpp file from your project’s regular builds to avoid multiple definitions at link time. This can be quite unfriendly to maintain, especially if your per-platform IDE projects are managed by hand instead of generated automatically (CMake or the like). Moreover, this works only if your test is very low-level and doesn’t directly use other objects, which would then have to be added to several .cpp test files… unless you generate an executable or module for each of your tests, loaded and launched separately by the main application. That is viable only if you have a tool to manage all this horrible stuff automatically for you, and if that kind of execution cost (loading modules, etc.) is still much lower than the test execution itself… I really doubt it.
- Having __DATE__ & __TIME__ also stored in a static member of each tested class (zFooClass, …) and retrieved by the test. The test’s own buildTime is still kept, because we of course want the test to run when the test itself is modified.
At runtime, the test compares each used class’s build date, plus its own, with lastLaunchedTime. The per-class part could easily be implemented with macros, but there is more to write and to keep in mind when implementing a test, and a buggy test silently not launched when required doesn’t sound pretty. Furthermore, you can’t detect modifications to inlined code in the .h header file.
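As a minimal sketch of that per-class part (the macro names and class are hypothetical, not the ‘Z’ engine’s actual ones), each instrumented class can expose the __DATE__/__TIME__ strings of its own .cpp:

```cpp
// Hypothetical helper macros: the declaration goes in the header,
// the implementation in the class's .cpp, so __DATE__/__TIME__
// expand to the build time of that translation unit.
#define Z_DECLARE_BUILD_TIME() \
    static const char* BuildDate(); \
    static const char* BuildTime();

#define Z_IMPLEMENT_BUILD_TIME(Class) \
    const char* Class::BuildDate() { return __DATE__; } \
    const char* Class::BuildTime() { return __TIME__; }

// --- zFooClass.h ---
class zFooClass
{
public:
    Z_DECLARE_BUILD_TIME()
    int DoSomething() const { return 42; }
};

// --- zFooClass.cpp ---
// Refreshed each time zFooClass.cpp itself is rebuilt.
Z_IMPLEMENT_BUILD_TIME(zFooClass)
```

The test would then parse zFooClass::BuildDate()/BuildTime() the same way as its own build time and take the most recent of the two. Note the caveat from above still applies: a change to the inline DoSomething in the header does not rebuild zFooClass.cpp.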
Finally, another issue (and there are probably more I’m not thinking of): you can’t apply this kind of smart management if your test uses a static or dynamic library. This is definitely a low-level-test trick.
I plan to implement one of these propositions in the ‘Z’ engine someday, but only for really small low-level tests that require minimal refactoring. Fortunately, I write tests essentially for low-level critical things. Maybe I’ll think about a more heavy-to-implement-but-convenient-to-use solution once I have enough big, long tests to justify the work. I’m aware this is not an essential part of an engine, but hey, I’m here to play…
…with wrong screen settings.
You probably know that gamma control across the whole production chain (third-party tools, level builder, final platform rendering on the monitor, …) is a hot and recurring presentation topic at the Game Developers Conference & co. (I recommend Naughty Dog’s talk at GDC10, fairly basic but well illustrated and understandable).
But here is an extract from Crytek’s Crysis key rendering features document:
“Any color/contrast/brightness differences are most likely due to an improper HDMI setup for the user’s display. To ensure a correct set-up, the user can follow these simple steps:
- If the TV supports HDMI 1.3 or higher:
- 360: pick “Expanded” in console dashboard display setup, in reference levels
- Ps3: enable “RGB full range” in console display setup
- If incorrect settings are used, the brightness calibration icon might be always visible (too bright), resulting in a lack of contrast (insufficient darks) and visible (color) banding. This very likely also means the TV does NOT support HDMI 1.3 and higher, or is not properly detected by hardware.
- If the TV does not support HDMI 1.3 or higher:
- 360: pick “Standard” on console dashboard display setup, in reference levels
- Ps3: disable “RGB Full range” in console display setup
- If the user has the wrong setting, the brightness calibration icon will always not be visible, or almost invisible, resulting in crushed darks.
- Another common mistake from users is not picking the correct resolution to match the display’s native resolution, resulting in an additional image upsample from the TV. If the display monitor’s native resolution is 1080p, the user should pick it as default on either XBox 360 or PS3. “
This is something I didn’t know as a rendering programmer, so I guess most players don’t either… It’s curious that consoles can’t retrieve the HDMI protocol version used by the display they are connected to and automatically choose the correct settings.
Now, all I have to do is find the detailed specifications of my old LG 26LC45 to know whether HDMI 1.3 is supported or not… which sounds like an impossible task. Can anyone help? ;)
Just this once won’t hurt: I’m not talking about OpenGL or iPhone today.
I find the DirectX 10/11 control panel really awful for setting up the debug layer. Sometimes, on some computers, for no known reason it has no effect even if you set it to “forced on”. Moreover, it can be hard to verify whether the path added for the application you want to debug is correct, etc.
Here is a little but useful trick I use to know, inside my application, whether the debug layer is really activated (it can also tell you if the user overrode your device-creation layer settings via the control panel):
bool IsDebugLayerActivated(ID3D10Device& a_rDevice)
{
    // The ID3D10InfoQueue interface only exists when the device was
    // created with the debug layer enabled.
    ID3D10InfoQueue* pInfoQueue = NULL;
    a_rDevice.QueryInterface(__uuidof(ID3D10InfoQueue), (void**)&pInfoQueue);

    bool bActivated = (pInfoQueue != NULL);
    if (pInfoQueue != NULL)
        pInfoQueue->Release(); // QueryInterface added a reference

    return bActivated;
}
If you are an iOS developer, you probably know that the brand-new build of Apple’s Xcode was released in March and is available for free with SDK 4.3. The new features are quite sexy, especially the single window, as I was totally disappointed and stressed by Xcode 3 and its 23 dialogs across the screen and beyond…
The first 30 minutes were troublesome, and I was lost again, just like when I first ported my engine to Mac on Xcode 3.
It was hard to find all the things I often use; I had to remap my keyboard shortcuts to ones friendlier to me, figure out how to correctly use the new LLVM 2.0 and the new “schemes” concept in my existing projects, etc… but after this little acclimatization period, I’m really seduced.
However, I’m not a heavy user of Xcode ([a little bit of my life story] I like to code cross-platform things on my Windows laptop, comfortably installed on the sofa, then try things on device using the desktop Mac [/end]), so the move was not a big pain. I don’t have a lot of practice and tricks on this IDE. I guess it’s probably different for an old Mac maniac.
To finish, just one big inconvenience for me: SVN and Git seem really nicely integrated, but Perforce isn’t supported yet… Apple, if you hear me.