Cross compile from Mac to Linux with hxcpp

It is possible to use the hxcpp build tool to compile linux 32/64 binaries from a Mac Box. “Why would you want to do this?” I hear you ask. Well, I use this so I can run all my builds from a mac-mini, with windows running in VirtualBox. I know there are other ways, but having a single little box doing all the work is pretty easy to manage.

To do this, you just need to set up the HXCPP_XLINUX variables. There are a number of ways you can do this, but the ~/.hxcpp_config.xml file is probably the cleanest place.

Having installed the cross-compilers from http://crossgcc.rts-software.org/doku.php?id=compiling_for_linux, it is just a matter of pointing to the executables using some xml entries:

<set name="HXCPP_XLINUX64_CXX" value="/usr/local/gcc-4.8.1-for-linux64/bin/x86_64-pc-linux-g++" />
<set name="HXCPP_XLINUX64_STRIP" value="/usr/local/gcc-4.8.1-for-linux64/bin/x86_64-pc-linux-strip" />
<set name="HXCPP_XLINUX64_RANLIB" value="/usr/local/gcc-4.8.1-for-linux64/bin/x86_64-pc-linux-ranlib" />
<set name="HXCPP_XLINUX64_AR" value="/usr/local/gcc-4.8.1-for-linux64/bin/x86_64-pc-linux-ar" />

<set name="HXCPP_XLINUX32_CXX" value="/usr/local/gcc-4.8.1-for-linux32/bin/i586-pc-linux-g++" />
<set name="HXCPP_XLINUX32_STRIP" value="/usr/local/gcc-4.8.1-for-linux32/bin/i586-pc-linux-strip" />
<set name="HXCPP_XLINUX32_RANLIB" value="/usr/local/gcc-4.8.1-for-linux32/bin/i586-pc-linux-ranlib" />
<set name="HXCPP_XLINUX32_AR" value="/usr/local/gcc-4.8.1-for-linux32/bin/i586-pc-linux-ar" />

Then you can build from a “build.n” file (such as the one in $HXCPP/project) using:

neko build.n linux

FastCGI For Neko On Shared Hosting

In my previous post, I described how you could set up neko web services on a shared host using CGI. This method is not as efficient as it might be, because a separate process is required for each request. However, it is possible to extend this to “Fast CGI” (FCGI), which starts a single process and keeps it alive. Apache talks to this over a socket, sending requests and receiving data in a very efficient manner.

If you got CGI working, and your server supports FCGI, then the transition is pretty simple.

The first thing to do is to download the new “fastcgi” haxelib. From a shell use:

haxelib install fastcgi

If haxelib asks you for a project directory, the following discussion assumes you specify your “haxeneko/lib” directory.
There is one bit of housekeeping you should do at this time – copy the “nekoapi.dso” object from “~/haxeneko/lib/fastcgi/0,1/ndll/Linux” into your “~/haxeneko” directory. This ensures that this dso will be found when neko is run by the web server.

Now it is time to change the cgi script. The code is very similar, except the extension should be “.fcgi”. Here is the script I used, Site.fcgi:

#!/bin/sh
export HAXENEKO=~/haxeneko
export LD_LIBRARY_PATH=$HAXENEKO
export PATH=$PATH:$HAXENEKO
export NEKOPATH=$HAXENEKO
export HAXE_LIBRARY_PATH=$HAXENEKO/std
cd ../../site
exec neko SiteFCGI.n

Note the final “exec” call to ensure the pipes are all correctly plumbed.
And the obvious change to the .htaccess file (.fcgi extension):

RewriteEngine on
RewriteRule \.(css|jpe?g|gif|png)$ - [L]
RewriteRule ^(.*)?$ cgi-bin/Site.fcgi [L]

Finally, compile the “Test.hx” file that came with the fastcgi lib. I have a slightly altered version here:

class Test
{
   static var processed = 0;
   static public function main()
   {
      // Called in single threaded mode...
      fastcgi.Request.init("");
      // This can be called multi-threaded...
      var req = new fastcgi.Request();
      while( req.nextRequest() )
      {
         req.write( "Content-type: text/html\r\n" +
            "\r\n" +
            "<title>Neko FastCGI<title/>" +
            "<h1>Fast CGI</h1> Requests processed here: " + (processed++) );
         req.write( "\n page = " + req.getParam("REQUEST\_URI") );
         req.close();
      }
   }
}

This version prints the request URI too. To compile, use:

haxe -main Test -neko SiteFCGI.n -lib fastcgi

And that should be that! When you visit your web page, you should see the “processed” counter increase, verifying that the same process is handling each request.

Currently the system does not provide an easy way to kill the FCGI process, which is something you must do whenever you update the “.n” neko file. The only way at the moment is to use the shell: “ps -x” to identify the process number, and then “kill -9 number”, where number is the process number of the neko executable.
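
One workaround I have been toying with – purely a sketch of mine, not a feature of the fastcgi lib – is to have the request loop bail out when a marker file appears (the file name here is just an example). You “touch restart.flag” after uploading a new build, and the web server should then spawn a fresh process on the next request, depending on how your host configures FCGI:

class RestartableTest
{
   static public function main()
   {
      fastcgi.Request.init("");
      var req = new fastcgi.Request();
      while( req.nextRequest() )
      {
         // If someone has touched this marker file, answer this request
         // and then exit, so a fresh process (with the new ".n") gets
         // started for the next one.
         var quitting = neko.FileSystem.exists("restart.flag");

         req.write( "Content-type: text/plain\r\n\r\n" +
            "Hello from FCGI" + (quitting ? " (restarting)" : "") );
         req.close();

         if (quitting)
         {
            neko.FileSystem.deleteFile("restart.flag");
            break;
         }
      }
   }
}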

Neko on Shared Hosting

I started to think about using neko web technology, but since I have shared hosting, it was not obvious how this could be done. Currently I’m with site5.com, which is very cheap for running multiple websites, but since it is shared hosting, you don’t get to install anything. However, it does have a few features that made getting a neko site up and running quite possible.
The key features are:

  1. Shell access – not really required if you can copy files to the site (eg, via ftp) but very useful for debugging and getting things going. The shell I have is “jailshell”, which I think prevents directory listings outside your home directory, but otherwise is pretty functional (based on bash).
  2. gcc access – again, not really required once things work, but as you will see, pretty much required if things go wrong. And also good if you want to compile a c++ target!
  3. CGI access. Since we can’t modify the apache installation, the only way we can get our code to “run” is via an external process – this is what cgi is for. I will talk a bit about “fast-cgi” later (once I get it going).

First thing is to check you have cgi access. When I first set up the site, I had nothing but an empty “cgi-bin” directory. To test this, create a file “test.cgi” in there containing:

#!/bin/sh
echo "Content-type: text/plain"
echo
echo "Hello from CGI!"
set


Now to enter this code, I used an old-school remote ssh shell (putty) and vi. You may choose to ftp it up using filezilla or similar. You will also need to add executable permission (chmod a+x test.cgi over ssh – not sure how to do this via ftp). You can then test it with yoursite.com/cgi-bin/test.cgi. With any luck, you should see the expected greeting, plus the “set” command should dump all the environment variables available to your application.

If you get a “500 – server error” at this stage, it must be fixed. The error is spectacularly unhelpful – not sure where to find the additional error info. Start by trying to run the file from the command line, ie type “~/www/cgi-bin/test.cgi” (assuming this is where your script is located). You should see the output, or perhaps a better error message. Also check for “execute” permission for “all”, as the apache server will run this script with limited privileges. Finally, make sure you have specified a “Content-type” and an additional blank line in the output.

Ok, now we have cgi working! Next step is neko – and haxe too since I will be doing some compiling on the server to help with testing. Haxe is not strictly required if you are deploying pre-compiled solutions.

The hard way

As I said before, you can’t “install” anything on the shared host (no package managers), so it all has to go in your home directory. First thing I did was to download the linux binary distro from nekovm.org. This is easy with the magic “wget” shell command. With your desktop browser, go to the download page and find the link – right click and “copy” the link address. Then go to the shell (putty) window and paste the link in, so you get something like “wget http://nekovm.org/_media/neko-1.8.1-linux.tar.gz?id=download” – hey presto, a gzipped-tar file (it may have a funny name – that’s ok, use “tab” to tab-complete the filename and save typing). Make a suitable directory and “tar xvzf” the file to extract the neko files. Now go to the directory and try to run neko (ie, “./neko”).

You will probably get an error like “libneko.so not found”. But it’s right there, wtf? So you need to set your LD_LIBRARY_PATH (“export LD_LIBRARY_PATH=~/dir/neko-1.8.1-linux”).

Ok, now you are told libgc.so.1 is missing, which indeed it is. The easiest way I found to fix this was to use wget to download the source from “http://www.hpl.hp.com/personal/Hans_Boehm/gc/gc_source/gc.tar.gz”, unpack it, “./configure” it and “make” it. You end up with the required libgc.so.1 file in the “.libs” directory, which I then copied to sit next to the neko executable. And now “./neko” works – apparently. I will save you the suspense – you also need to do the same thing with “libpcre” from “http://sourceforge.net/projects/pcre/files/pcre/7.9/pcre-7.9.tar.gz/download” – I used the 7.9 version, not sure if 8.0 works. This is required for haxelib later.

See, I told you that compiler access would come in handy.

Ok, neko done, time for haxe. Again the installer is not much use, so I downloaded the binaries from “http://haxe.org/file/haxe-2.04-linux.tar.gz”, however when I went to run this, I found the “tls” library required a GCC 2.4 runtime, which I did not have and could not upgrade. So – you guessed it, linux fans – compile from source. One small hump to get over first: haxe requires ocaml to compile. Of course, ocaml is not installed, but if you are still with me at this stage you know the answer – compile from source. So “wget” it, and here is the trick – make an ocaml directory in your home directory (or somewhere under it), extract the source and use “./configure -prefix your_ocaml_dir” – this provides the “install” directory, since ocaml can’t be used without installing it. Then the make is 3-phase: “make world opt install”, and now you should have an ocaml install. You will need to put this in your executable path before you can think about compiling haxe.

The online doco suggests that you download and run “install.ml”. I tried this, but the cvs timed out. So I ran this on my windows box (which already had ocaml installed!), tarred up the result and ftp-ed it over to my site. Painful, but it worked. One thing to note is that this uses the cvs “head” – anyone know where to get the 2.0.4 source tar-ball? Once I had the source, I commented out the “download” call in install.ml and “ocaml”ed it. And haxe was built. The haxe distro has a “tools” directory under it, and you can build “haxelib” from there if you have neko set up correctly.

Getting the paths right is a bit tricky, so I decided to simplify things. I made a directory “haxeneko” in my home directory and “cp -r *” the files from the neko distro (including the new gc and pcre libraries) into this new directory. Also, I copied the built bin/haxe executable in there, and haxelib too (once it was built). Finally, I copied (“-r”) the “haxe/std” files from the haxe distro into this directory too. Now I have everything required in the one spot – and you can too!

The easy way

I have saved you the pain, and you can simply download the files from haxeneko-1.0.tgz. So you should be able to “wget” this, untar it and be almost ready. You may run into problems if there is some incompatible library somewhere – in which case, back to the hard way for you!

Finally, we need to set up the paths. Because my hosting provides the “bash” shell, this setup goes in ~/.bashrc. The required “install” is:

export HAXENEKO=~/haxeneko
export LD_LIBRARY_PATH=$HAXENEKO
export PATH=$PATH:$HAXENEKO
export NEKOPATH=$HAXENEKO
export HAXE_LIBRARY_PATH=$HAXENEKO/std

You may need to log in again for this to take effect (or you could paste it directly into your command-line), but now you should be ready to compile some code!

Start by creating your site-code in a directory that is not under your www (public_html) folder – I have called mine “site”. And here is a simple example haxe file:

class Site
{
   public function out(inString:String)
   {
      neko.Lib.print(inString);
   }
   public function new()
   {
      out("Content-type: text/plain\n\n");
      out("Hello World!\n");
      out("Page : " + neko.Sys.getEnv("REQUEST\_URI") + "\n" );
   }

   static public function main() { return new Site(); }
}

which can now be compiled with “haxe -main Site -neko Site.n”, and tested with “neko Site.n” to give:

Content-type: text/plain

Hello World!
Page : null

Alright – I think you can see where I’m going here, but we are not quite there yet. The problem is that the setup variables in the .bashrc file are not used by the apache server. Apparently, you can use “SetEnv” in a .htaccess file to get this to work, but I could not get it to work (maybe the module was not enabled). But all is not lost. You can simply use a script to launch neko. Back in the cgi-bin directory, you can replace the “test.cgi” script with a “Site.cgi” script containing:

#!/bin/sh
export HAXENEKO=~/haxeneko
export LD_LIBRARY_PATH=$HAXENEKO
export PATH=$PATH:$HAXENEKO
export NEKOPATH=$HAXENEKO
cd ../../site
neko Site.n

Now point your browser at http://yoursite.com/cgi-bin/Site.cgi, and you should see the glorious neko output:

Hello World!
Page : /cgi-bin/Site.cgi

Now creating a bunch of cgi files is painful, and you do not want users to see this kind of implementation detail, so we use one more trick – the almighty “mod_rewrite”.

In your base “public_html” (www) directory, create a file called “.htaccess”, and add the following lines:

RewriteEngine on
RewriteRule \.(css|jpe?g|gif|png)$ - [L]
RewriteRule ^(.*)?$ cgi-bin/Site.cgi [L]

This leaves the css and image files in the www directory, but it redirects all other URLs to your neko script, where they show up in REQUEST_URI. So now if you use the URL “http://yoursite.com/some_dir/file.html?param=abc&other=xyz”, you get the output:

Hello World!
Page : /some_dir/file.html?param=abc&other=xyz

Now the world (wide web) is your oyster – you can parse the URL any way you like, and generate any output you like.
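
For example, here is a minimal sketch (the class name and layout are mine) of splitting REQUEST_URI into a path and its query parameters using nothing but string splits:

class Router
{
   static public function main()
   {
      var uri = neko.Sys.getEnv("REQUEST_URI");
      if (uri == null) uri = "/";

      // Split "/some_dir/file.html?param=abc&other=xyz" into a path
      // and a set of key/value parameters.
      var parts = uri.split("?");
      var path = parts[0];
      var params = new Hash<String>();
      if (parts.length > 1)
         for (pair in parts[1].split("&"))
         {
            var kv = pair.split("=");
            params.set(kv[0], kv.length > 1 ? kv[1] : "");
         }

      neko.Lib.print("Content-type: text/plain\n\n");
      neko.Lib.print("Path : " + path + "\n");
      for (key in params.keys())
         neko.Lib.print("Param : " + key + " = " + params.get(key) + "\n");
   }
}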

This certainly gets you up and running with neko on a shared-hosting web server. One problem is that 2 processes are created for every request. I have done some initial work with the “fast-cgi” interface, and think I should be able to get this going, in which case there should be a big boost in efficiency.

There should also be no reason why you could not compile the site to a c++ native executable. However, this may reduce your ability to use the neko “.n” template system.
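As a rough idea of what that might look like – and this is only a sketch, assuming the c++ backend defines “cpp” and provides cpp.Lib/cpp.Sys mirrors of the neko classes, which I have not verified – the Site code could hide the target behind a couple of helpers:

class SitePortable
{
   static function print(s:String)
   {
      #if neko
      neko.Lib.print(s);
      #end
      #if cpp
      cpp.Lib.print(s);   // assumption: cpp.Lib mirrors neko.Lib
      #end
   }

   static function getEnv(name:String) : String
   {
      var result : String = null;
      #if neko
      result = neko.Sys.getEnv(name);
      #end
      #if cpp
      result = cpp.Sys.getEnv(name);   // assumption: cpp.Sys mirrors neko.Sys
      #end
      return result;
   }

   static public function main()
   {
      print("Content-type: text/plain\n\n");
      print("Hello World!\n");
      print("Page : " + getEnv("REQUEST_URI") + "\n");
   }
}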

Hxcpp 0.4, NME 0.9, Neash 0.9 Released!

What the flash?

What is Hxcpp? Hxcpp is the c++ backend for haxe. This means you can compile haxe code to c++ code, and then compile this to a native executable, for Windows, Linux or Mac.

What is NME? NME is the “Neko Media” library that wraps SDL, providing gaming interfaces for neko and, now, natively compiled haxe code.

What is Neash? Neash is a compatibility layer that presents the flash API to haxe code running on other systems, such as js, neko or c++ native code.

Together these allow you to write code that targets flash SWF files, and also cross compile it to native code for Windows, Linux or Mac.

Hxcpp on haxelib

I have finally packaged up a bunch of changes into official haxelib releases. Hxcpp is now on haxelib, which means you can get it with “haxelib install hxcpp”. This effectively creates a whole separate install of haxe, which can be run side-by-side so you can test it out without risk.

The cpp backend now supports Mac(intel) and Linux as well as the original Windows platform.

The main change to hxcpp is the packaging – moving towards the final installation form. Currently there are a whole bunch of files distributed in this release that should become redundant once the c++ backend is merged into the main branch. Also, the library coverage has been expanded a bit, but it is still not complete.

Usage

Firstly, you will need to run “haxecpp” instead of “haxe”. This executable is found in the appropriate bin subdirectory. I’m not sure if the “executable” flag will survive the compression, so you may need to “chmod a+x” the file.

It is probably best to place the appropriate bin directory in your executable path. On windows, this will also solve the problem of finding the dynamic link library, hxcpp.dll. And on all systems, this will allow you to use the “make_cpp” command from the hxml files. On Linux systems, you will have to allow the executable to find hxcpp.dso. This is most easily done by setting LD_LIBRARY_PATH to the bin/Linux directory, or by copying this file into an existing library path. Similarly, on Mac you should set DYLD_LIBRARY_PATH.

To build haxe code, use “haxecpp” in place of “haxe”, with a target specified by “-cpp directory”.
This will place source code and a makefile in the given directory. Then you need to do a “make” on Linux/Mac, or “nmake” on Windows, to build the executable. You may need to set the environment variable “HXCPP” to point to the directory that contains this file. On windows, this will be something like: c:\Progra~1\Motion-Twin\haxe\lib\hxcpp\0,4\

As a shortcut, if you are using a hxml file, you can use “-cmd make_cpp”, which will do the build for you, assuming you used “-cpp cpp” as the output directory.
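
As a sanity check of the whole pipeline, a trivial program is enough – the class name here is just an example. Save it as HelloCpp.hx, run “haxecpp -main HelloCpp -cpp cpp”, then make (or nmake) in the “cpp” directory and run the resulting executable:

class HelloCpp
{
   static public function main()
   {
      // The trace output should show up on stdout when the
      // native executable is run.
      trace("Hello from the c++ backend!");
   }
}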

Neash/NME

The big change for NME is that it now supports Linux and Mac (intel) for the neko and c++ targets. There have been a few bug fixes as well as a few new features:

  • Bitmap class
  • Expanded and optimised TileRenderer for rendering scaled and rotated sub-rects from a surface
  • A few smarts for finding fonts, if no ttf is supplied
  • Some blend modes have been added
  • Added scale9Rect
  • Added drawTriangles, with perspective correct textures

ToDo

There is still plenty to do, including, but not limited to:

Hxcpp:

  • Proper coverage of all APIs.
  • Resolve the order-of-operation problem: in c++, f(x++,x++) is ambiguous as to the order in which the increments are performed (see the sketch below). Or perhaps agree to live with it.
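
To make that last point concrete, here is a tiny example of my own (not from the hxcpp sources):

class OrderDemo
{
   static function pair(a:Int, b:Int) : String
   {
      return a + "," + b;
   }

   static public function main()
   {
      var x = 0;
      // On the neko and flash targets the arguments are evaluated left to
      // right, so this traces "0,1".  If the call is translated literally
      // to C++ as pair(x++, x++), the order of the two increments is
      // unspecified and the native build could print something else.
      trace(pair(x++, x++));
   }
}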

NME:

  • Add all blend modes
  • Add all filters
  • Discuss with experts the merits of static vs dynamic linking on Mac and Linux.

Neash:

  • Sound is a big omission
  • Loader code
  • Unit testing of supported APIs.

Despite these issues, I think there is a useful core of functionality here.

Let me know what you think.

Porting NME to linux

Linux Port
So, I decided to try porting NME to Linux. The first thing you do is find an empty hard drive, download an ISO, and off you go, right? Wrong. I did this – but I wanted to keep my windows partition without risk of rooting it over, so I didn’t overwrite my MBR. Then I could not actually boot my linux partition. Eventually I got around this by booting from a rescue CD (I do not, and never will again, own a working floppy drive) and specifying my /dev/sd partition in a bit of a roundabout sort of way. Anyhow, I got Linux booting (with a little bit of pain per boot – but I could live with this), but then the wireless network did not work. This I could not live with, so I thought about resurrecting some crappy old hardware I have lying around, but in the end I sought, and found, an infinitely better solution.

The answer is virtualization! I can’t overstate how much easier this was than actually getting the hardware together. First, I downloaded the free version of [vmware’s “player” product](http://vmware.com/products/player/), and then the Ubuntu 7.10 virtual machine that included a dev environment: [http://jars.de/linux/ubuntu-710-vmware-image-download-english](http://jars.de/linux/ubuntu-710-vmware-image-download-english). The reference to “jars” seems to be a java reference, but we won’t hold that against them. It “just worked”. I edited the xorg.conf file to change the keyboard to US (so I could use the ‘|’ key), and that was about it. With only minimal clicking, the network was fully working, *and* I could switch to my windows emails etc. This was what I wanted – much better than dual boot. Compiling NME was just a matter of adding the “dev” version of packages (vim, subversion, SDL, SDL_mixer, SDL_image, TTF) with the GUI “Synaptic Package Manager”, which also “just worked”.

The download from code.google, via svn, worked easily once I added the “subversion” package.

Then a little [makefile](http://nekonme.googlecode.com/svn/trunk/project/makefile) to bring it all together. I had to make some code hacks. I don’t know if this is a gcc “bug” or “feature”, but when I tried to call a class member function that was templated on an int, eg:

int x = Class.Member<4>(10);

the gcc compiler thought I was using “operator<”, which is a bit of a shame. I also had a quick go at static linking, via the “.a”s rather than the “.so”s, but the linker told me I needed a “version section” for one of the SDL symbols, so I gave up.

The image you see in the top left is of a window, on my windows desktop, running a virtual machine, running ubuntu, running the neko virtual machine, running a neko script linking to the Linux version of NME. And running it very well, I might add.

The only problems at the moment are text (when a font is not supplied), the “mod” music in the “blox” demo, and OpenGL on my setup. The PNG, JPG, WAV and OGG stuff all worked first time. I think I might be able to get the music going if I get the right plugin dso. Also, I’m not sure about what “.so” files need to be distributed with the program to resolve all the dependencies. Any SDL linux guru have any comments on this?