Aros/Platforms/Support for *nix

Windows
The mingw32 port is particularly interesting. It's a hosted port to Windows that in essence uses the OS threading system to implement a minimal virtual machine, all within kernel.resource. It has a small bootloader that loads an ELF kernel, so that stock AROS i386 code can be used even on Windows, which doesn't use ELF itself. The other thing it does is neatly split modules into host-side and AROS-side parts. The AROS parts are handled as normal modules, but in their initialisation they call into hostlib.resource (which is now contained within kernel.resource) to load and link the host-side part. These are standard shared libraries (i.e. DLLs) which can bring in any library dependencies they need, neatly avoiding the problem seen in the X11 and SDL drivers, where finding the needed libraries at runtime is rather painful. This way, you just find what you need at link time.
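The split described above can be sketched roughly as follows. This is a pseudocode-style sketch of an AROS-side module init (it cannot be compiled outside AROS); the hostlib.resource calls HostLib_Open/HostLib_GetPointer/HostLib_Close do exist in AROS, but the exact signatures here are from memory, and the DLL and symbol names (mydriver_host.dll, MyHostInit) are made up for illustration:

```c
/* AROS-side part of a hosted driver: at init time, pull in the
 * host-side DLL and resolve the entry points it exports. */
#include <proto/hostlib.h>

static void *host_handle;
static int (*MyHostInit)(void);

static int MyDriver_Init(void)
{
    char *err;

    host_handle = HostLib_Open("Libs/Host/mydriver_host.dll", &err);
    if (!host_handle)
        return 0;   /* the DLL (or one of its own dependencies) failed to load */

    MyHostInit = HostLib_GetPointer(host_handle, "MyHostInit", &err);
    if (!MyHostInit) {
        HostLib_Close(host_handle, &err);
        return 0;
    }
    return MyHostInit();
}
```

Since the host's own loader resolves the DLL's dependencies, the AROS side never has to hunt for host libraries at runtime.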

Introduction
Best option is to use one of the virtual machines - see more here

Download latest mingw32-i386-system from here

Confirmed working with Windows XP SP2 (2010), ?

x86 32bit discussion thread here.

Experimental x86_64 bit build discussed here.

At the moment there is no network support under Windows-hosted; I suggest using a virtual machine to test AROS networking under Windows until the work on hostio.hidd is done. It's a platform-independent remake of unixio.hidd which can also work on Windows (using overlapped I/O). As soon as it's done, porting the Windows TAP driver becomes trivial. P.S. It will even be possible to port the X11 display driver to Windows :)

Exec
See arch/all-mingw32/host_scheduler.c, task exception handling. I have to pass the argument to the exception handler via a register because I can't modify the stack inside a Windows exception; the Windows exception handler runs on the same stack as the main code. Also note that I moved some code back to exec.library/Switch and exec.library/Dispatch. Now kernel.resource does not have to #include exec's private stuff (etask.h).

There are issues...

 * It spreads the scheduler among several modules. Part of scheduling is done by kernel.resource, part by exec.library. The only win is getting rid of the etask.h include.
 * Currently we support only a round-robin scheduler. What if we need to add more? Having Switch and Dispatch back in exec means we need to implement different schedulers in exec, too.
 * Some schedulers (PPC Efika, Sam440, soon also x86) access etask.h in order to store some statistics there. Currently, the task re-dispatch timestamp and the total CPU time spent in the task are stored there. What would you do in that case?

But it's not scheduling itself; it's some internal exec state maintenance, which is always the same. This idea came to me after examining the UNIX-hosted code. I know it is old, but it uses Dispatch for state changes, and I liked the idea. I am working on a fully functional kernel.resource for UNIX. I believe this follows the AmigaOS philosophy of having these entry points patchable, in order to be able to install some kind of CPU usage monitor. Switch is called when some task loses the CPU and Dispatch is called when another task gets the CPU. The moved code is always the same; it resets exec state. It's up to kernel.resource when to call them (and that's a matter of task scheduling).

Don't different schedulers mean only a core_Schedule replacement? It can also be moved to exec. This will automatically make it work on all architectures, provided we give exec some kind of access to CPU time. BTW, maybe it can be based on timer.device's EClock? kernel.resource will then just call Switch and Dispatch in order to notify exec about task switching (note that kernel.resource still does the actual switching itself, so there is no arch-specific code in exec). Exec would do the status update and accounting then.

If this file is mingw32-specific, I think the path should be aros/mingw32/irq.h (or aros/windows/irq.h). This will still allow cross-compiling from anywhere and, in the end, having one SDK for all CPUs and archs.

Well, I can do it. But here is my last argument for this location... In fact the location is $prefix/i386-mingw32/include/aros/irq.h. If you are working on AROS and you install a mingw cross-compiler, the cross-compiler's default paths will also end up in $prefix/i386-mingw32/include and $prefix/i386-mingw32/lib. This way the file appears right where it is expected to be.

And yes, there is libaroskernel.a which goes to i386-mingw32/lib, but it needs the mingw32 compiler to be built, so it is built only during a Windows-hosted build. It just can't be built when doing e.g. a Linux-hosted build.

So far, the thing is really half-done. If you still don't like it, I can make this include installed only for the Windows-hosted build. This way we'll get it only in the "mingw32-sdk" target, but together with its linklib. Maybe that's even better (anyway, you need libaroskernel.a to use it).

This affects all hosted ports. On non-Linux OSes it's impossible to acquire address 4 at all. After hitting a problem with the entry point, I introduced a new macro, AROS_ENTRY, to mitigate this.

Debugging
Previously we always had a serial port; currently we don't, and this is a pain. Theoretically we could substitute it with some other device (Ethernet, USB, etc.), but these are much more complex and require drivers to function. I leave out the necessary protocols, since with Ethernet, for example, we could use some simple raw protocol, just to be able to read messages.

So comes the first idea: we should have something that can be called "debug channels". Every debug channel can be identified, for example, by a name supplied by the device driver.

This device driver needs:

1. Early initialization (while booting up the system).
2. Some way to supply its output channel to the system.

(1) is solved by modular kernels. Now about (2). We already have KrnPutChar and KrnMayGetChar in kernel.resource. For simplicity, let's talk only about KrnPutChar for the moment. We could have some additional function, like:

KrnRegisterDebugChannel(STRPTR name, APTR function)

where name is the channel name and function is a callback function like MyCallback(char c).

What happens when it is called? Let's remember that we have the debug=XXX kernel argument which actually specifies where to direct the output (we may run several device drivers providing debug output, but we need only one of them). XXX could be some string of the form:



For example, for serial port no. 2 at 19200 bps the argument could be:

debug=serial:2@19200

The argument is processed at early init; the kernel remembers it. When something (a driver) calls KrnRegisterDebugChannel("serial", SerOutput), the channel name is compared with what is given by the debug= argument. If there's a match, the kernel starts using the supplied function for debug output from then on. What about the parameters? Only the driver itself knows how to decode them. Obviously, before starting to use the channel, it needs to be initialized. For this purpose the driver should provide one more callback, like MyInit(char *params). So registration takes the following form:

KrnRegisterDebugChannel(STRPTR name, APTR InitCallback, APTR OutputCallback)

But what about before the driver plugs in? Well, we have our screen. The only bad thing: we don't know how to drive it. But, if we are more intelligent...

Generally there is either a text-mode or a graphical-mode framebuffer. At boot time it is known to us: it's either text mode or a graphical VESA framebuffer. The only problem is warm reboot, where the screen has been left by the driver in some state unknown to us. There are options:

a) The driver should install a reset callback in which it resets the card back into text mode.

b) The driver could have another callback which would return a VESA-like description of the current state, so that the existing display can be picked up during reboot.

By the way, the same thing would let us display friendly guru screens instead of simply rebooting. This routine in the display driver just has to be crash-proof (it may not rely on interrupts, other libraries, etc., or even on exec itself). And again we need some way to register this routine.

Can anyone suggest some framework for this? I'm out of ideas here, especially taking into account that:

a) Display drivers can be unloaded in some situations.

b) There can be several displays; how do we select the correct one (some displays may not be physically connected, for example an unused connector on a dual-headed card)?

Now let's remember KrnMayGetChar. Currently AROS features the built-in SAD debugger. I picked it up from the i386-native version and adapted it a little to the new environment. Now it runs perfectly on Windows-hosted; I can run it using the C:Debug command. In fact you may run it on other ports too; you'll see a prompt in the debug output but won't be able to talk to it - KrnMayGetChar is implemented only in the Windows version.

The current SAD doesn't do much, but it just needs some attention from developers; it can become a useful tool, especially if we invent some way to run it when a guru happens (like on the classic Amiga(tm)).

So, back to the topic. First, not all devices are bidirectional. Second, why not, for example, use the machine's own keyboard? The driver then needs the following functions: InitCallback, InputCallback, ReleaseCallback. The release function could be called when, for example, you leave SAD, to give the device back to normal OS usage.

So far, we already have five functions (assuming that debug output is never released; if it somehow is, then we have six). Maybe we should not have two registration functions, but just one, taglist-based?

How to pair input and output channels? Assuming that devices are not bidirectional? Or may be we should not pair them at all, just allowing the user to specify the second argument, like "debuginput=ps2kbd"?

Introduction
Download from here

or

Would recommend installing git on your MacOS X machine and cloning the AROS sources via git:

$ git clone git://repo.or.cz/AROS.git

Then, you can use ./configure to set up your cross-compilation environment.

$ mkdir AROS.darwin-x86_64
$ cd AROS.darwin-x86_64
$ ../AROS/configure --target=darwin-x86_64
$ make -s   # This is a lot less verbose [get a coffee. Or five.]

Then, to run AROS:

$ cd bin/darwin-x86_64/AROS
$ boot/AROSBootstrap

Some things you might like to do:

1) Modify bin/darwin-x86_64/AROS/S/Startup-Sequence to run 'NewCLI' instead of 'WANDERER:WANDERER' - that will at least get you a full-screen text console.

2) Use the attached patch to create a 'C:RawEcho' command (build this with 'make -s workbench-c-quick') that you can use to emit debugging messages to the Mac OS text console that ran boot/AROSBootstrap

 * | Your X Server seems to have backing store disabled!

You can safely ignore this warning; I've been running in this mode for the last 2 years. Confirmed working with x86 10.5.8 Leopard and 10.6.2 and 10.6.4 Snow Leopard. Don't forget to install X11 (XQuartz); it's on the installation DVD. There should be an X11.app in /Applications/Utilities. If not, there's your problem.

When the bounty was created the Mac was still PPC-only; from recent svn logs it looks like it should also run hosted on Darwin PPC.

You don't need SDL, but you do need an X server for graphics. Newer versions of OSX usually have one installed already. I don't know if SDL would also be possible; on iOS it won't be, as Cocoa SDL needs special preparations in the program's main to work.

Please check arch/all-darwin/README.txt for build instructions. Don't try to build contrib; some packages will fail, SDL_mixer and gmake IIRC. Perhaps you forgot to add --target=darwin-i386 to configure? But, really strange, it isn't needed on my machine.

Yep, the darwin 32 bit set. I had to move the binaries by hand into the path with names like i386-aros-gcc to make configure happy.

Isn't /usr/local/bin in your path already? The toolchain was built with --prefix=/usr/local. It should work fine if you extract these two archives into your /usr/local.

There has been no time to complete GPT partition table handling (it's currently read-only; you can install AROS on a GPT partition, but you have to use 3rd-party tools to partition the drive). Writing GPT is currently enabled in the code, but DO NOT TRY IT: it WILL destroy your partition table, as the CRC and the backup table are not updated!

1. If you get hangups, perhaps AROS crashes behind the scenes. If you use VESA mode, you can see the debug log by adding 'vesahack' to the command line. This sets up a split-screen mode: in the upper half you'll see the AROS screen, in the bottom the debug log.

2. If you get crashes at early boot, try adding 'NOACPI' to the command line. The ACPI stuff is very poorly tested because discovery fails on the Mac (different ROM).

Self-build
You need to download Xcode or install MacPorts. Warning: MacPorts doesn't work with Xcode 4.3; use an older version...

Run "./configure --target=darwin-ppc --disable-crosstools" and send the results if it fails (config.log and the terminal output)

If it succeeds... then any build errors 'make -s' generates.

If *that* succeeds... then any fatal errors 'cd bin/darwin-ppc/AROS; boot/AROSBootstrap' generates.

If *that* succeeds... it works!

If 'gcc -v' and 'make -v' work, you are 90% of the way there.

If you get a build error for missing libpng and netpbm, you can resolve that with Fink. Even better, go with Homebrew; there's a PPC fork which works fine on my old PowerBook :).

Once installed, it will allow you to get additional development packages that don't come with Xcode, e.g.:

$ sudo apt-get install libpng3

For building the Darwin-hosted port, try to follow the guide in arch/all-darwin/README.txt. There's no need to build cross-tools with the --enable-crosstools switch (which may not work at all), because you should install Sonic's prebuilt cross-compilers.

The crosstools build process is in fact wrong. It relies on already existing link libraries and startup code, which can be built only using the wrapper scripts. Of course, on non-ELF hosts the wrappers won't work. It should be done in quite another way, involving more steps:


 * Build binutils; they don't have any prerequisites.
 * Build collect-aros. Its env.h.cross file contains instructions how.
 * Unpack gcc, configure it and execute 'make all-gcc' and 'make target-libgcc'. This will produce a bare-bones C compiler.
 * Using this compiler, build the AROS linklibs.
 * After this, complete building gcc using 'make'. It will also build g++ with its libraries.

Of course the AROS includes should be generated too before building gcc, but gcc itself isn't needed for that.

On Darwin, AROS is built using preinstalled crosstools. Jason, your new configure stopped supporting this and seems to use the host's compiler with the wrapper. This is impossible on Darwin. So the Darwin build needs to set --with-toolchain=... and --with-kernel-tool-prefix=ppc-aros-

The same goes for Windows. I guess a similar issue affects Android (there it uses the Android cross-compiler as $KERNEL_CC and the wrapper as $TARGET_CC). It's impossible to compile Android binaries using Linux gcc; their ABIs are a bit different!

If I specify --with-kernel-toolchain-prefix=i386-aros-, 'make query' still prints /usr/bin/gcc for the kernel cc.

If you're compiling for darwin-hosted on Darwin, you *don't* use --with-kernel-toolchain-prefix at all.

Or just use '--with-kernel-toolchain-prefix='.

Remember that 'kernel' is used to compile the darwin side of the AROS Bootstrap.

And everything that has compiler=kernel specified in the build macros, like arch/all-unix/kernel.

There's no more compiler=kernel for AROS modules. It's impossible to link together different objects on non-ELF systems (Windows and Darwin). compiler=kernel is used for:
 * 1) Bootstraps
 * 2) Host-side DLLs (intensively used in Windows-hosted).
 * 3) Support linklibs like libarosbootstrap.a

AROS modules directly interfacing with the host OS now use another trick. They are still built with $TARGET_CC, but with -nostdinc -isystem $GENINCDIR. This allows producing AROS ELF objects which still adhere to the correct host-side ABIs. If we want to know what our underlying host will be, we explicitly add -DAROS_HOST_$(AROS_HOST_OS) flags. This is because there's no __linux__ (__darwin__, __win32, whatever) in AROS code; it's always __AROS__.

This can't be the elf-wrapper for hosted. In order to build something that runs under an OS, you need an SDK for that OS. In fact the elf-wrapper is the same aros-gcc, but it produces statically linked (ET_EXEC) binaries. This is suitable only for building native bootstraps (which run on bare hardware and are self-contained).

Building Darwin-hosted under Darwin? That's configure's bug. Actually these architectures are Darwin, Windows and Android. Their KERNEL_CC can't be used for building AROS code even with the wrapper. So these three arches must enforce AROS crosstools, either built or preinstalled.

Support
How do you substitute the right mouse button with the keyboard on a laptop? CTRL doesn't do the trick. Go to System Preferences -> Trackpad, check the "Secondary Click" box and select "Bottom right corner" from the drop-down. Then you can press the bottom right corner of the trackpad for a right click.

BTW to access the AROS file-structure from OS X, just right-click and 'Show Package Contents'

For hosted systems we have a driver that wraps AHI to the Open Sound System; see arch/all-unix/libs/oss. It is probably possible to write a similar thing for Darwin-hosted.

In order to resolve the async I/O issue, I wonder why AROS can't handle this by means of AROS-internal threading. This is exactly how asyncio.library does it as well. The threading could be inside 68k AROS or outside, i.e. Linux host threads.

From the host's point of view AROS is just one thread. So if one of AROS' processes calls blocking I/O, it blocks the whole of AROS; no task switching will be done since no signals will be delivered. So this is related: AROS would have to be sub-threaded, with those threads being mapped to native threads (this is, by the way, something that is sometimes done in JVMs, also under Linux).

As for host threads, there's another large problem. I tried this with the Android-hosted port and it failed. When SIGALRM arrives, you can't tell which thread was interrupted. This completely screws up AROS' task switcher. Maybe it's worth checking how this is solved in common JVM implementations.

You might want to try eSound: There's one problem with esound - it lacks asynchronous I/O capabilities, providing only blocking calls. You cannot use blocking I/O from within AROS; this would block the whole of AROS, including interrupts, and provide a very negative user experience. There is already a solution in the form of unixio.hidd; however, it operates only on file descriptors, and AFAIK eSounD provides only a library-based API with blocking functions - there's no UNIX socket/pipe/whatever behind it.

Of course it's possible to implement an oss.library which would work on top of Core Audio. But I think it would be a much cleaner solution to write a self-contained AHI driver without any extra API wrappers. It would make better use of CoreAudio's possibilities and would not be restricted only to what oss.library has - not to mention that oss.library is a quickly hacked-together thing.

I'd favour PortAudio over eSound - I don't know if the mentioned I/O issues apply to that one as well or not. In general, however, direct OSS access does not seem to be the best solution, as eSound - and probably also PortAudio - as an additional abstraction layer ensures that AROS does not block the host audio and that proper mixing is done. In order to resolve the async I/O issue, I wonder why AROS can't handle this by means of AROS-internal threading. This is exactly how asyncio.library does it as well. The threading could be inside 68k AROS or outside, i.e. Linux host threads.

Talking about sockets... maybe a native Linux audio client, instructed by AROS over a localhost IP connection, would be a workaround. I mean, asyncio and SIGURG could work then...

Here's an example of how some guys developed a babyphone client/server (i.e. sender/receiver) using OSS and UDP sockets:

Of course it gets more complex if you want it connection-based, with more reliability and upfront negotiation of audio settings - possibly this could be done using TCP instead, with an appropriate header.

Debugging
Sashimi

Several days earlier I ran it on Linux PPC (where I wrote the initial version of emul_dir.c) and it worked fine too. Is it a new build? It was a complete rebuild. The problem only happened with --enable-debug, because ASSERT_VALID_PTR was called during AllocateExt, which was called during PrepareExecBase.

PrepareExecBase is really tricky. If you want to debug it, you can temporarily add statically linked calls to KrnBug (copy the stub from kernel_debug.h and use it). Exec debugging is really not up there yet. However, this routine should be simple enough not to require any debugging.

I reverted this and now link against libgcc.a. The problem with the missing symbol did not happen on Rosetta, but I was able to verify that it now works on a 10.3 G4 PPC iBook.

Now some debug output from Rosetta with my latest commits; without them you'll get only garbage. As you can see, the value of klo is trashed after HostLib_Open. The trashing may differ depending on where you place the debug output.

--- workbench/libs/mesa/src/mesa/mmakefile.src   (revision 35709)
+++ workbench/libs/mesa/src/mesa/mmakefile.src   (working copy)
@@ -131,6 +131,7 @@
            -I$(SRCDIR)/$(CURDIR)/../talloc \
            -I$(SRCDIR)/$(CURDIR)/../gallium/include \
            -I$(AROS_DEVELOPMENT)/include/gallium \
+           -I$(AROS_DEVELOPMENT)/include \

USER_CFLAGS := -DUSE_MGL_NAMESPACE -ffast-math $(FFIXED) -DMAPI_GLAPI_CURRENT

BTW, shouldn't it be possible to use --with-crosstools? (And isn't that the default for most other archs now?) AFAIK --with-crosstools works incorrectly. Most archs currently use a wrapper script around the host's gcc. Crosstools are built only for MESA, and only g++ is used then. Yes, it's really incorrect.

In fact I am thinking about changing this. I think that a real cross-compiler should be enforced on a host basis, not on a target basis, i.e. if we compile on a non-ELF host (for example on Windows or Darwin), $cpu-aros-gcc is used; otherwise the wrapper script is used. This will make cross-compiling any port on any build machine quite an easy task.

It looks like you installed the cross-compiler but didn't install the AROS SDK into its directory (/usr/local/i386-aros/include and /usr/local/i386-aros/lib). The C++ compiler is not wrapped by the AROS build system, so it doesn't supply Development/include to it. Perhaps adding this to mmakefile.src can be an option.

Known flaws:

1. The stack-swap test crashes in NewStackSwap; the reason is under investigation. However, executables run flawlessly. I suspect the problem happens because of swapcontext nesting.

2. The X11 driver has somewhat bad performance, and there are some small problems with it. I suspect they happen because of missing backing store. It's possible to enable it, but it's somewhat tricky.

In fact the X11 driver needs to be seriously rewritten so that it supports screen composition and works without backing store. There are also several obvious coding faults in it. Unfortunately I don't know X11 programming well, so I won't take on this task for now.

Future:
 * x86-64 Darwin build.
 * ppc-Darwin build (with someone's help to test).
 * iPhone hosted port.

During testing of parts of the Darwin-hosted build I came up against this again. Currently we have the AROS.boot file where we put the name of the architecture. Is it really necessary to have the full name there?

This actually prevents having one boot disk for different machines using the same CPU. For example a ppc-chrp-efika partition will not boot on ppc-sam440, despite them being the same! Personally, I hit this issue when I built the aros-base and aros-strap kernel modules on Darwin and then tried to boot them on my existing Windows-hosted installation (since there's no aros-bsp-darwin yet).

I would consider two options:

1. Revert to checking the C:Shell file. LoadSeg will not load binaries for alien CPUs, so we will be able to detect which partitions are bootable on which CPU (I remember this file was provided as a solution to the problem of coexisting x86-64 and i386 installations).

2. Check only the CPU with this file (e.g. 'i386' or 'x86-64'). This way we solve both problems. Additionally, in the future we will be able to add more data to this file (like boot priority). This is the reason why we should keep the file.

A possibility to run AROS hosted on x86-64 MacOS. There's a problem.

Current AROS executables use the small code model, so they need to be loaded into the lower 2GB of address space. This is no problem on native AROS, and on Linux we can use the MAP_32BIT flag. However, there's no such flag in BSD and, consequently, none on Darwin. Additionally, Darwin reserves the whole low address space for its own needs; userspace begins at 0x1000000000. This means there's no way to get into the lower 2 GB. The problem can be solved only by changing the code model. This is so for the small code model for x86_64, right? Does there also exist a 'large' code model? I would also like to get an idea of the implications of the different code models on code size, stack usage, speed, etc. Yes, of course. The large model imposes no limitations, but has a negative impact on binary size (all references are 64-bit). The small code model uses a special form of CPU instructions where all references are still 32-bit. See the reference for the models.

Darwin itself uses the small PIC model for its executables (the -fpic option is forced to be always on), which allows it to work around this problem. I suggest using the same on 64-bit AROS. However, this means that the -fpic option needs to be made to work on AROS in general. In order to do this, we need to change the linking process a bit: instead of -r, -q needs to be supplied to ld. This will instruct it to link a final executable (ET_EXEC) file, but keep the relocs in it. Linking a final executable involves additional magic like creating a global offset table. I think it does not hurt to support different code models in the executables, and the default one used for code should be able to run on all ports of AROS. The default code model could also differ between CPUs, e.g. absolute on m68k, i386, PPC; relative on x86_64, PPC64, etc. Of course. On i386 the default is small; it still has some differences, I don't know which ones. I suggest making small PIC the default code model on x86-64 AROS. However, for this, PIC has to be made working in general. Currently it isn't (try to compile a hello world with -fpic and see what happens).

The question is if it should be implemented now in main trunk or if it would be best to delay it to after the ABI_V1 fork where the current ABI would be branched in a separate branch. You could for example implement your feature now in branches/ABI_V1/trunk-piccodemodel64

I think we should look at how many people have a current x86_64 installation on which they want to run newer programs, and also how involved it would be to upgrade an existing installation. It's not a real problem, as the first set of changes is fully backwards compatible: AROS will still load existing executables. I can even add one #ifdef to collect-aros so that the change affects only 64-bit AROS; 32-bit executables will still have the old format. I just don't like code pollution with #ifdefs, and additionally, if I implement the change for all CPUs, all of them will get a working -fpic option. This will not change the default code model for anything other than x86-64; ELFs will just become real executables (it's quite simple to modify our InternalLoadSeg to support them). I thought we had broken x86_64 binary compatibility several times in the past without a second thought (e.g. changing ULONGs to IPTRs in system structures). IMO it doesn't matter if we do the same with this change.

Anyway, I am now quite advanced in the implementation of the clib split; the split is fully done and contrib-gnu compiles and mostly runs. BTW, are you aware of the fact that the current arosc.library stores its thread-local data in a private field of exec's ETask structure? I hope you changed this to something that uses only the public API (like using AVL trees to associate thread data with a process)? I just have a bug left in setenv/getenv and then it's mostly done. So I don't think waiting for the ABI_V1 implementation to start this work would delay it for several months. We also need to align with the m68k people. This was one of the big things I have done in the ABI_V1 branch: the data is stored in an AVL tree. Is this sort of "handled in shared module per caller task storage"? If so, could you point me to the code in SVN?

It can be implemented right now. There will be only one problem: new executables will not run on older AROS. Old executables will still run on new AROS; I'll take care of that. Implementing this will allow us to move further and implement special support for -fPIC, which would enable nice things like moving global variables into the library base. Is it okay to go for this change? This implementation should not be the final one, but should keep the possibility of adapting the code model for AROS during the ABI_V1 implementation phase. The code model can be specified using gcc's -mcmodel switch. Without any changes we can use all code models in non-PIC mode. My change's goal is to make it possible to build PIC binaries. PIC changes the x86-64 small model in such a way that it works with any address range, limiting only the size, but not the position, of the code.

Work on x86-64-Darwin hosted AROS continues. It already boots up into Wanderer, but lots of components crash. This happens because there are still many places where pointers are truncated by conversion to ULONG. On Linux-x86-64 you won't notice this because AROS memory is allocated in the low region (below the 2GB limit).

On Darwin  (like  any  other BSD) you don't have such a possibility. Darwin reserves  the  whole  low memory for OS use, and application's memory starts from 0x001000000000. This puts  some restrictions on what you can do in AROS. For example AROS can't  load  bitmap  fonts  and  diskfont.library  crashes. This happens because diskfont.library hits wrong addresses when it tries to process AmigaDOS hunk file loaded a high addresses. In order  to  prevent  loading  these  files  at  high  addresses  i introduced  MEMF_31BIT flag. This flag is set for memory regions whose end address is not larger than 0x7FFFFFFF. This flag is effective only on 64-bit machines. On 32-bit architectures exec.library ignores it, so there's no need to set it there. If someone is working on x86-64-pc port (Alain ?), he should take this into account. Memory initialization routine should make use of this flag and mark the appropriate region with it. On x86-64-Linux  hosted port this flag is set, so this port can load AmigaDOS  hunk files (and use bitmap fonts). This is because bootstrap supplies MAP_32BIT flag to mmap calls. Currently only  InternalLoadSeg_AOS  routine supplies this flag to AllocMem. However i  would say that there are more places where it should be used (for example, drivers for devices with 32-bit PCI DMA). I have  problems  with  implementing PIC support, so currently i use large  code  model to compile Darwin-hosted AROS. I will implement PIC later, this ends up in implementing own BFD backend in binutils. I don't know yet what code model should be used by default for gcc, but  it's definitely not small one. ELFs with small model create problem, because code  model  is not recorded anywhere in ELF, and i don't  know  yet  if it's possible to use some heuristics to guess it (for  example  detect specific  relocation  types). Currently program using small  code  model will  simply  crash  on Darwin-hosted AROS. 
Programs compiled with the large code model (and PIC in the future) will run on any x86-64 port. Even if I succeed with auto-detection, this will not make small-model code magically run on all ports. Attempting to load it will just end in "File is not executable", nothing more.

An additional feature I would like is being able to decide which variables go into the libbase and which are global to all libbases, implemented perhaps with some AROS_xxx macro around the variable definition.

Additionally, I can make use of the ELFOSABI_AROS definition (the ELF ABI number reserved for AROS). However, this will impose a requirement to use the AROS version of binutils for linking; Linux ld will no longer do. I have always wanted to make AROS programs real ELF programs (with relocation info still present, though), but I also think this should be something for ABI V1. OK, I'll go for an #ifdef in collect-aros then.

Currently the AROS build system does not support having different .conf files for different arch/cpu combinations. If that is implemented, this becomes possible. Anyway, binary compatibility with PPC is a proposal for the future. Perhaps it will even be a separate AROS version; this is open to discussion. Some of our leaders dislike the MorphOS ABI and would not like to make it the default ABI for PPC AROS.

Anyway, I think that the LVOs should be as similar as possible. In fact, this should be the only place where LVO swapping takes place. There seem to be no other conflicting extensions between AmigaOS 3.9 and MorphOS.

Will the ABI V1 C library allow memory to be malloc'ed in one task and freed in another? Not at the moment; I have not reimplemented this feature yet because I did not find any code that seemed to use it. It is required at least by MPlayer; I had to add an ugly sharecontextwithchild function to the current arosc.library to get this behaviour. So if I can get the source code, I can think about the best way to implement it. It does seem to be thread-related.


 * http://en.wikibooks.org/wiki/Aros/Developer/ABIv1

If you talk about PIC and GOT, will this also mean we can have shared objects? We may be able to, but please, please don't. Don't copy bad design decisions from OS4. OS4 programs using shared objects have the same startup slowness as on other OSes (Linux, Windows, ...). In fact, we can have them right now: shared objects on AROS do not have to be PIC, because every binary is relocatable by definition. However, yes, I agree that load-time linking is not good.

Find the PPC Darwin ABI documentation and compare it with the Linux ABI. If you carefully check the layout of the stack frames you'll notice the differences and what causes the trashing: on PPC Darwin, called functions are allowed to write into the parameter area of the caller's stack frame. I think this can be worked around with some C or asm stubs around each host OS call.

PS3 Port
So I should be searching for graphics, sound and keyboard drivers, etc. Am I right to assume that I should look for them in the Linux PS3 ports? There is another problem: Linux used to rely on the OtherOS option, and that is no longer present in PlayStation 3 firmwares above 3.21, so I can't use a custom otheros.bld file to boot.

I have set up the psl1ght SDK from http://psl1ght.com/ which makes it possible to create pkg or elf files that run on a PS3 with custom firmware. Now I am interested in doing a new port of AROS to the PS3 GameOS with the help of that SDK as a start. I know there has been a port of SDL to the PS3 GameOS. You can use that as a reference on how to do certain things, but also:

http://git.kernel.org/?p=linux/kernel/git/geoff/ps3-linux.git;a=summary

http://repo.or.cz/w/freebsd-src.git/tree/HEAD:/sys/powerpc/ps3

http://www.ki.nu/hardware/ps3/

Remember the first one is GPL and even Sony-copyrighted; do not copy and paste code from there. I seem to remember that even the last one used GPL code for display at some point. I'm not sure whether drivers from the projects I've linked might be of help; some of them, maybe.

I am not certain how this could be done on the present firmwares, although we have the advantage of running homebrew code on the GameOS. So I guess I need a pkg application file to act as a boot loader for AROS. Maybe I should find out more about petitboot? That was a boot loader for the PS3; could I make a pkg file out of it? I know that Linux boot loaders on the PS3 needed the OtherOS function, but since this is for jailbroken machines that have geohot's lv2 peek hack, we have access to any part of the PS3's memory, so in theory the lack of the OtherOS option would not prohibit running another OS (AsbestOS comes to mind).

I am trying to see what is needed to port AROS to the PS3 GameOS. What would I need? I know of the PPC Linux port; I am trying to make a new configure file to use.

As far as I understand, the SDK uses cross compilers, so it might be possible to add sections to configure similar to those of the Mac OS X hosted port, which also uses prebuilt tool chains, or to check other native ports which are usually cross-compiled, like the sam440 or ppc-efika ports.

I would suppose the SDK should provide a lot of what is needed, maybe not yet but over time. To sum it up: we need a boot application that initializes gfx, sound, USB, keyboard and Blu-ray and opens a screen? Is that how AROS works on Linux?

Not exactly. A hosted port like the one on Linux starts as a normal host program (arch/all-hosted/main.c) which acts as the boot loader, then jumps to the starting address (arch/all-unix/kernel/kernel_startup.c). A native port is similar, but it has to do a lot more work to take control of the whole machine. One part of this is device drivers, but there is more: for example, the sam440 port has to initialize some registers, the interrupt controller and the MMU very early in startup. You can learn about all this by studying the AROS source (after all, device drivers have to be done the AROS way), the SDK examples and other GameOS hackers' code, and it's probably also a good idea to search the net for Cell chip documentation.

Then you need to set up things in the arch directory so it contains the parts specific to the PS3, and make the SDK produce a binary that is loaded and started on the PS3. This is usually the boot loader, which loads all the needed modules from the boot media and then jumps to a starting address in one of them. The boot loader will probably already have to set up some of the hardware so it can do its job; the rest can be done at a later stage. Additionally, you will need hardware drivers for the display, USB (at least for keyboard and mouse), the hard disk and the Blu-ray drive.

Is it possible? What problems will it present? I think it is possible, but it's not an easy task; the biggest problem might be not giving up at some point.

Btw. AROS does not need SDL, but we have an SDL display driver for hosted ports.
