Haxe Preloader For Flash – Written in Haxe.

There has been some talk about creating flash preloaders for haxe. However, these methods step outside the haxe toolchain and add additional complication.

I have come up with a reasonably simple method for creating a haxe preloader in haxe, and then linking it to an (almost completely) unmodified swf generated in haxe, using a small neko program to produce a single file. The neko program uses code from the hxformat project, some of which is provided so you can easily recompile the tool.

[Image: Output.swf]

Since each original haxe swf contains one frame, the resulting swf
contains two frames. The first frame contains the preloader
which waits for the whole file to load and then locates the
PreloaderBoot class by name. This class runs the appropriate
initialisation code, creating and running the correct “main” class.
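
As a rough illustration (not the actual code from the zip below), the first-frame logic could look something like this minimal sketch. The class name, the "run" method on PreloaderBoot and the progress handling are all placeholders:

class Preloader
{
   static var mStarted = false;

   static function onFrame(inEvent:flash.events.Event)
   {
      if (mStarted)
         return;

      var info = flash.Lib.current.loaderInfo;
      if (info.bytesLoaded >= info.bytesTotal)
      {
         mStarted = true;
         // Everything has arrived - find the boot class by name and
         // hand control over to it ("run" is a placeholder name).
         var boot = Type.resolveClass("PreloaderBoot");
         if (boot != null)
            Reflect.callMethod(boot, Reflect.field(boot, "run"), []);
      }
      // else: update a progress bar from bytesLoaded/bytesTotal here.
   }

   static function main()
   {
      flash.Lib.current.addEventListener(flash.events.Event.ENTER_FRAME, onFrame);
   }
}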

For classes that appear in both the preloader and the main swf,
flash takes the first one – the one from the preloader. This means
that both classes will have the same “flash.Lib.current” and
(almost) everything will just work.

One complication comes from the fact that the flash.Boot class
is given a unique name for each of the swfs. This means the flash.Boot
class in the main swf is not automatically “new”ed and placed
on the stage, and the standard haxe initialisation code is not
run. To compensate, we manually set the trace function and
call the Boot initialisation code explicitly.

This sounds a little dodgy to me, but it seems to work – I will
have to do some more testing.

The “Main” class in the example zip contains a resource to pad it out. The preloaded swf can be seen on its own screen. You can refresh to see it loading.

The example code is in
haxe-preloader-0.1.zip

All code & data there is public domain, except for the hxformat code,
which has its own license. Use at your own risk.

Cross-platform again

[Image: blinkdemo.png]
So far, I’ve mostly looked at the flash/swf version, but now I will return my attention to cross-platform development.

There are a number of existing libraries that can be used with haXe, but most of these are low level, whereas what I’m after is a higher-level option. So the plan is to build a higher-level layer on top of an existing module. I have chosen to build on top of NME, which is SDL based. My decision was mainly to do with its support for opengl, sound/music, fonts, input and screen management.

In the end, the design wrote itself, based on the simple rule “it should be easy to port something from existing flash code”. Initially I tried writing a substitute library called “flash”, but the haxe compiler rejected me. This is probably for the best because, although the alternative requires slightly more porting for the flash case, I think it allows greater possibilities for minor architectural changes. Basing the design on the flash API has two big advantages – half the work is already done for me and there is an excellent design document for the rest.

The result is a library I have called “blink”. There is essentially one blink class definition for each flash class. On the flash platform, a simple “typedef” is used to get exactly the same code as native flash. On the neko platform, there is a haxe implementation that ultimately falls through to an NME call.
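
As a rough sketch of the pattern (the real blink classes are in the demo zip; the members shown here are just placeholders):

// blink/Sprite.hx
package blink;

#if flash
// flash target: blink.Sprite is exactly the native class - zero overhead.
typedef Sprite = flash.display.Sprite;
#else
// neko target: a haXe stand-in; a real implementation ends up calling
// through to NME to do the actual work.
class Sprite
{
   public var x : Float;
   public var y : Float;

   public function new()
   {
      x = 0;
      y = 0;
   }
}
#end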

The library is only at the demo stage, and only implements enough to get the APE demos off the ground, but I think it shows the possibilities. The only changes required were to change “flash.” to “blink.”, modify the main-line boot function slightly and make sure to use cross-platform constructs (eg, no “__as__” casting).

The code here (BlinkDemo.zip) shows the same code compiled for flash and neko. It uses a slightly extended NME library, which is provided as a dll in the bin directory – to use the dll, make sure you run the neko.exe in the bin directory so it finds the right one.

The updated performance figures are (note: using “cast” not “as”):

| | Car Demo | Robot Demo |
|---|---|---|
| Original | 2.0ms | 9.5ms |
| haXe | 1.58ms | 9.45ms |
| hx->as3 | 1.56ms | 9.47ms |
| neko/nme | 4.0ms | 16.9ms |

At first glance it would appear the numerical processing takes about twice as long under neko as it does under flash. However, this code might not be the greatest test, because we can see how much the performance of the “cast” operator can affect the results.

Also of note is that the graphics are quite capable of reaching 100fps, so I do not think the SDL code will be a bottleneck.

I am very pleased with this approach, and I think it might be the way forward for cross platform game development. In some ways (certain) games are easier because they use a generally smaller sub-set of graphics primitives – mostly image drawing.

Change a few lines, get a big speedup.

It was pointed out to me that there was a better way to do a “cast”, and a few simple changes to the porting script yielded some big improvements. So, the new bundle [here](https://hughsando.com/wp-content/uploads/2007/11/apeport2-a045.zip) now gives:

| | Car Demo | Robot Demo |
|---|---|---|
| Original | 2.0ms | 9.5ms |
| haXe | 1.58ms | 9.45ms |
| hx->as3 | 1.56ms | 9.47ms |

So now you can add speed as a reason to use haXe.
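
For reference, haXe has both a checked and an unchecked form of cast, and the difference in runtime cost matters on flash9 – I won't claim this is exactly the change the script makes, but it illustrates the idea (CastDemo is just a throwaway example class):

class CastDemo
{
   static function describe(o:Dynamic)
   {
      // Checked cast: verifies the type at run time and throws if the
      // value is not actually a String - safe, but it costs a test.
      var a : String = cast(o, String);

      // Unchecked cast: no run-time check at all, essentially free,
      // but the caller must guarantee the type really is correct.
      var b : String = cast o;

      trace(a + " / " + b);
   }

   static function main()
   {
      describe("hello");
   }
}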

Porting APE (Actionscript Physics Engine) to haXe

I see that APE [http://www.cove.org/ape](http://www.cove.org/ape) has moved on to version 0.45 alpha, and has an extremely beautiful “robot” demo. So, with the faster version of haXe, and improved knowledge, I thought it was time to try porting it again. This time, I took a different approach – I wrote a program to do the porting for me. This has a few advantages. It allows for easy porting of future versions. It provides a list of things required, and it allows for modifications (such as the FPS counter) to be done only once (to the as3 code) and ported automatically to the haXe code.

[The full project can be found here.](https://hughsando.com/wp-content/uploads/2007/11/apeport-a045.zip) It contains source, conversion program and demos.

The timings for the calculations are as follows:

| | Car Demo | Robot Demo |
|---|---|---|
| Original | 2.0ms | 9.5ms |
| haXe | 2.04ms | 12.1ms |
| hx->as3 | 2.3ms | 24.1ms |

Which I think is pretty good – except for the last entry – not sure what happened there.
Note that the haXe speed required a hack to avoid the “as” and “is” cast/query operators – and used a virtual function to achieve the same result in a neat way.

The conversion program is not a complex parser, rather a bunch of regular expressions that relied on coding style as much as syntax. However, it worked pretty well in the end, once I got the “properties” sorted out – APE uses these quite a bit – and you must have “strong” types to use them in haXe. This program may be reusable to a small extent, but it is pretty much tied to APE.

An outline of the porting tasks is as follows:

– Convert “int”, “void”, “Number” etc.
– Convert “package xxx {” to “package xxx;”.
– Expand out “import xxx.*” imports.
– Remove “private”, “final”, “internal” etc.
– Scan the class for “get” and “set” functions and insert “var prop(get_,set_):type” where appropriate (see the sketch after this list). This was complicated by the fact that some of these were “override” properties and should not have this extra insertion. (I should have looked for the “override” keyword to make this easier).
– Add return statements to set functions.
– Fix POSITIVE_INFINITY.
– Make sure arrays are strongly typed – need this for properties.
– Change in-line array declarations when array is not of type Dynamic.
– Convert “indexOf” function in array.
– Convert “for(a ;b ;c )” to “a; while(b) { … c }”.
– Fix scoping of variables resulting from variables declared inside for statements.
– Add semicolons to lines that needed them.
– Change constructors to “new”.
– Add static main function to main class, and “addChild” it.
– Call “super()” where required.
– Convert default-arguments to optional-arguments.
– Remove “break” from switch statements.
– Change “is” and “as” operators.
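
To make the property step concrete, here is a sketch of what the conversion might produce for a hypothetical “radius” property (the class and member names are made up, not taken from APE):

class Particle   // hypothetical class, for illustration only
{
   // AS3:  public function get radius():Number { return _radius; }
   //       public function set radius(r:Number):void { _radius = r; }
   // haXe: a property declaration plus renamed accessor functions.
   public var radius(get_radius, set_radius) : Float;

   var _radius : Float;

   public function new()
   {
      _radius = 0.0;
   }

   function get_radius() : Float
   {
      return _radius;
   }

   function set_radius(inValue:Float) : Float
   {
      _radius = inValue;
      return inValue;   // return added - haXe setters must return a value
   }
}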

AS3 and haXe are reasonably close and with a consistent coding style, I think the automatic porting is a very viable option. If I had control over both sources, I would have done a few little things to the AS3 to make it slightly easier – ensure “;” on all lines, explicit call to super(), don’t double up on variable names inside for loops and other minor stuff. But the reg-ex engine makes most of these things pretty easy to work around.

Huge speedups for flash9 with haXe 1.15, hxasm investigated.

Due to the great work of Nicolas Cannasse, most of the results below have to be re-written! HaXe now has strong typing in flash9, significantly improving performance. I also have a new machine, so some of the results will not be directly comparable, but you will get the idea. I have also added a new one: inline-grid-while, which uses while loops instead of for loops.

With the new version of haXe comes some very interesting technology – hxasm. This allows you to use haXe syntax to write flash9 “bytecode”. This gives the possibility of decoupling the “per object” bit of the grid iteration from the looping bit by concatenating chunks of bytecode. In theory, you should be able to achieve optimal performance using this method, since you can write any bytecode you like. However, currently I can't quite get the performance, I think because ultimately the function is called through a “dynamic” interface, rather than a strongly typed one.

Writing hxasm from scratch can be quite difficult. For starters, the flash api requires time to compile the code, so the api involves a callback to complete the compilation. Also, the haXe syntax is not that of a “proper” assembler, so jumps etc take a bit of work. And sometimes it is a bit hard to know where to start. To help with this, I’ve written a tool that takes compiled hx code, via the output of “abcdump”, and converts it to hxasm. You can find this code in abctools.zip.

Examining the hxasm code, you can see the difference between the for and while loops. Interestingly, other “hand optimisations” did not seem to give much better results – I suspect the flash vm is doing some pretty good optimisation as it goes. So I think the way to optimise is probably to change the original hx code, rather than the hxasm code (eg, using while loops instead of for loops).

Another optimisation I looked at was to “burn in” runtime values. So rather than using the op code to get a member variable, you can burn the variable's value into the bytecode as a constant. I think this gave a small improvement – I could not really tell. In fact, this last optimisation is really the only performance increment to be gained from runtime compilation – the rest could in theory be done in the production of the swf file. However, it does present a very interesting solution to the code decoupling!

The source code can be found in src2.zip. Unfortunately, this breaks the ability to compile for neko. Also, it requires a small mod to hxasm 1.03, using an additional offset of -4 on the “backwardJump” call in Context.hx.

| Method | Time (ms/frame) | Pros | Cons |
|---|---|---|---|
| Object List | 8.1 | Easy to understand/debug. | Slowest. Causes stutter while garbage collection runs. |
| HaXe Iterator | 10.1 | Improved performance over Object List. Direct “drop in” replacement for Object List. Decoupled data. | Slightly complex to write. Slightly slower than most. |
| While Iterator | 7.1 | Slightly faster than for-iterator. Slightly easier to write. | Slightly more complex to use. |
| Closure/Callback | 13.9 | Slightly faster than for-iterator. Decoupled. Interesting way of writing code. | Interesting way of writing code. |
| Member Callback | 6.0 | Faster than anonymous callback. | Member function name is explicit in code. |
| Inline GOB | 6.4 | Faster. | Couples GOB code to grid implementation. Requires separate code for each function. |
| Inline Grid – for | 4.5 | Fast. Easy to understand/debug. Not as badly coupled as Inline GOB. | Couples Grid code to GOB implementation. Requires separate code for each function. |
| Inline Grid – while | 4.0 | Fastest. Same as “for” loop, but slightly faster, and slightly more verbose. | Couples Grid code to GOB implementation. Requires separate code for each function. |
| HxASM inline code | 5.1 | Fast and decoupled. | Requires writing “raw” hxasm callback. 2-phase setup. |

Out of all this, the conclusion is pretty similar – the tighter coupling creates faster code – but all the code is faster now, which is great. The inline hxasm is very interesting, and while probably not appropriate for this application, shows some promise for certain applications.

Iteration/looping

The following discussion is based on the source code: 1000OgresSource.zip. This code uses the “xinf” haxelib module to provide support for cross platform (browser, downloadable) structures.

The Ogre demo uses a grid to check for collisions between objects. So, rather than checking 1000 sprites against 1000 others, requiring 1000000 checks per frame, each sprite only checks sprites in the local vicinity, which runs much faster. The 2D grid is independent of the tile grid, and its spacing can be optimised based on object size and density etc.

The code deals, in part, with “GOB”s (Game OBjects) and the GOBGrid. I tried to decouple the grid from the objects, but I could not, because the haXe template system is not powerful enough. The coding issue I’m going to talk about here is how to best separate the task of examining objects in the local vicinity from how the objects are stored in the grid. In other words, iterators.

The algorithm I’m going to talk about is something like the following pseudo code fragment:

GOB::Move()
{
   x += velocity_x
   y += velocity_y

   for_all_nearby_objects_in_grid
     if (obj_is_close_to_me)
        -> don't move.
}

The question is, what does the “for_all_nearby_objects_in_grid” look like? I have tried the following:

Object List. Here, the GOBGrid produces an Array of candidate objects. The GOB then iterates over these, checking distances between the potential move position and these candidate objects. An important point to note is that the following:

   var objs = mGrid.GetCloseObjs(x,y);
   for(obj in objs)
      ...

was *much* slower than:

   var objs = mGrid.GetCloseObjs(x,y);
   for(i in 0...objs.length)
   {
      var obj = objs[i];
      ...

This should be considered when writing high-performance code.
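
For reference, “for(obj in objs)” over an Array roughly expands to an iterator object plus two method calls per element, which is where the extra cost comes from (a sketch, reusing the names from the snippets above):

   var objs = mGrid.GetCloseObjs(x,y);
   var it = objs.iterator();
   while( it.hasNext() )
   {
      var obj = it.next();
      // ... distance checks as before
   }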

HaXe Iterator. Writing the iterator was slightly tricky, because you need to think in a slightly different way than you would normally. Here I have made the assumption that “next” will be called exactly once after each successful “hasNext” call. I’m pretty sure this is right. This assumption places all the logic in “hasNext” and makes “next” trivial. The big advantage of the iterator is that it is syntactically identical to the object list code above (first example), eg:

   var objs = mGrid.GetCloseObjs(x,y);
   for(obj in objs)
      ...

and runs much faster. This leaves open the possibility of starting with a list and then moving to an iterator if the performance is required. The iterator code looks like this:

class GOBIterator
{
   var mGrid:GOBsList;
   var mGridPos:Int;
   var mGridEnd : Int;
   var mYStep:Int;
   var mWidth:Int;

   var mCurrentList : GOBs;
   var mListPos : Int;
   var mX:Int;

   var mNext : GOB;

   public function new(inGrid:GOBsList,
            inX0:Int,inY0:Int, inX1:Int,inY1:Int, inWidth:Int)
   {
      mGrid = inGrid;
      mWidth = inX1-inX0;
      mYStep = inWidth - mWidth + 1;
      mX = 0;
      mGridPos = inY0*inWidth + inX0;
      mGridEnd = (inY1-1)*inWidth + inX1;
      mCurrentList = mGrid[mGridPos];
      mListPos = 0;
   }

   // Haxe iterator interface
   public function hasNext()
   {
      if (mGridPos >= mGridEnd)
         return false;

      while(true)
      {
         // Still objects left in the current cell's list?
         if (mListPos < mCurrentList.length)
         {
            mNext = mCurrentList[mListPos];
            mListPos++;
            return true;
         }

         // Current cell exhausted - move to the next cell in the window.
         mX++;
         if (mX == mWidth)
         {
            mX = 0;
            mGridPos += mYStep;
            if (mGridPos >= mGridEnd)
               return false;
         }
         else
         {
            mGridPos++;
         }
         mCurrentList = mGrid[mGridPos];
         mListPos = 0;
      }
      return false;
   }

   public function next() : GOB
   {
      return mNext;
   }

}
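
For completeness, this is roughly how the iterator gets used from the GOB side. GetCloseIter is a hypothetical grid method that constructs a GOBIterator for the cells around the move position; any object with hasNext()/next() drops straight into a for loop:

   for(obj in mGrid.GetCloseIter(mMoveX, mMoveY))
   {
      if (obj != this)
      {
         var dx = mMoveX - obj.mX;
         var dy = mMoveY - obj.mY;
         if (dx*dx + dy*dy < 2)
            return false;   // something is too close - don't move
      }
   }
   return true;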

The mGrid is an Array of cells, each of which is an array of GOBs that are centred in that cell. To go from an (x,y) coordinate to a cell, the x and y are first quantised and then an index is calculated using cell=y*xcells + x. Another possibility would be to have a 2D array of cells. I have not tried this, and it may be better or worse, I don’t know.
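
A sketch of that (x,y) to cell lookup, with mCellSize and mXCells as hypothetical members holding the grid spacing and the number of cells across:

   function GetCellIndex(inX:Float, inY:Float) : Int
   {
      var cellX = Std.int(inX / mCellSize);   // quantise x
      var cellY = Std.int(inY / mCellSize);   // quantise y
      return cellY*mXCells + cellX;           // cell = y*xcells + x
   }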

HaXe while loop. This is very similar to the above code, except that the hasNext and next code are combined into a single “getNext” function, which returns “null” at the end. The code is similar – it uses the same constructor and the following function:

   // This combines hasNext with next, and returns null when done.
   public function getNext() : GOB
   {
      if (mGridPos >= mGridEnd)
         return null;

      while(true)
      {
         //var n = mWidth + mYStep - 1;
         //trace( "[" + (mGridPos%n) + "," + Math.floor( mGridPos/n ) + "]" );
         // Still objects left in the current cell's list?
         if (mListPos < mCurrentList.length)
         {
            var result = mCurrentList[mListPos];
            mListPos++;
            return result;
         }

         // Current cell exhausted - move to the next cell in the window.
         mX++;
         if (mX == mWidth)
         {
            mX = 0;
            mGridPos += mYStep;
            if (mGridPos >= mGridEnd)
               return null;
         }
         else
         {
            mGridPos++;
         }
         mCurrentList = mGrid[mGridPos];
         mListPos = 0;
      }
      return null;
   }

The problem with this is that you have to use the “while” loop, rather than the “for”, taking 3 lines instead of 1.
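
Side by side, the difference looks something like this (GetCloseIter, GetCloseWhileIter and check are hypothetical names):

   // for-iterator version: one line of loop scaffolding.
   for(obj in mGrid.GetCloseIter(x,y))
      check(obj);

   // while version: fetching, testing and stepping are spelled out.
   var iter = mGrid.GetCloseWhileIter(x,y);
   var obj = iter.getNext();
   while(obj != null)
   {
      check(obj);
      obj = iter.getNext();
   }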

Closure/Callback. This method keeps the grid and GOB decoupled by asking the grid to iterate over the nearby objects, calling a callback function for each candidate object.

      var self = this;

      return mGrid.VisitCloseClosure( mMoveX, mMoveY, m2Rad,
                 function(inObj:GOB)
                 {
                    var obj:GOB = inObj;
                    if (obj==self) return true;

                    var dx = self.mMoveX-obj.mX;
                    var dy = self.mMoveY-obj.mY;
                    return dx*dx+dy*dy >= 2;
                 } );

This type of inline-function definition is just the sort of thing I’ve been craving in C++ for years. It takes a bit to get your brain around, but it does provide a very elegant way of decoupling code.

The above 4 methods are attractive because there is a large decoupling between the grid and the objects it stores. The grid could quite easily deal simply with dynamic objects, and the GOB need only know that the grid returns some kind of logical list. Unfortunately, they are not the fastest methods. The following methods introduce tighter coupling between the grid and the GOB in order to improve speed.

Visitor Callback. This method is very similar to the callback method above, except that the grid is passed an object of known type and calls a particular member function on it, rather than an anonymous function, for each candidate object. The problem is that this can only call that one particular function, and so can’t be adapted to a different test.
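
As a sketch, the GOB-side member function the grid calls for each candidate might look like this (TestMove is a hypothetical name; the body is the same test as the anonymous closure above, but no “self” capture is needed because the grid calls a fixed, known member function):

   public function TestMove(inObj:GOB) : Bool
   {
      if (inObj == this)
         return true;
      var dx = mMoveX - inObj.mX;
      var dy = mMoveY - inObj.mY;
      return dx*dx + dy*dy >= 2;
   }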

Inline GOB. In this method, the GOB knows everything about the grid implementation and iterates over the elements directly. While it is not *too* much code in this case, this may soon grow unwieldy if we consider such things as multi-resolution grids. This does not allow us to change the grid implementation without changing the GOB code too.

Inline Grid. Here the grid knows about GOB collisions and interrogates the objects directly. This binds part of the GOB implementation to the Grid and is also specialised for one particular function (eg, “collision detection”). However, it does let us change the grid implementation without changing the GOB code.

Results

The results are summarised in the following table.

| Method | Time (ms/frame) | Pros | Cons |
|---|---|---|---|
| Object List | 31.8 | Easy to understand/debug. | Slowest. Causes stutter while garbage collection runs. |
| HaXe Iterator | 21.0 | Improved performance over Object List. Direct “drop in” replacement for Object List. Decoupled data. | Slightly complex to write. Slightly slower than most. |
| While Iterator | 20.0 | Slightly faster than for-iterator. Slightly easier to write. | Slightly more complex to use. |
| Closure/Callback | 20.7 | Slightly faster than for-iterator. Decoupled. Interesting way of writing code. | Interesting way of writing code. |
| Member Callback | 16.4 | Faster than anonymous callback. | Member function name is explicit in code. |
| Inline GOB | 17.6 | Faster. | Couples GOB code to grid implementation. Requires separate code for each function. |
| Inline Grid | 14.8 | Fastest. Easy to understand/debug. Not as badly coupled as Inline GOB. | Couples Grid code to GOB implementation. Requires separate code for each function. |

So, there you have it. *No definitive answers!* Decoupling is sacrificed for performance in most cases. Except perhaps that the grid should loop over the objects, rather than the other way around. I think I will use the Inline Grid method for collision detection.

However, if I need to write code like “All ogres run away from all skeletons” then I will need one of the first 4 generic ways of iterating. The iterator methods may get too complex if I have “multi-resolution” grids, in which case, the anonymous function callback may be the way to go. There may also be a way to bring the anonymous function performance up to match the member-function performance – this would be the best of all worlds (fully customisable, and only slightly slower than Inline Grid). Any ideas anyone?

You can download the code and comment/uncomment these various options in GOB.hx.