The goal of this initiative is to move the whole codec configuration out of kernel space. The core of this change is a virtual machine interpreting a special instruction set designed for the C language.
The current code on the kernel side should be retained (at least until the whole userspace solution is rock stable). The only acceptable solution is to have one source code base (C language) shared between the built-in kernel HDA driver and the HDA driver using the userspace firmware files (containing the HDA bytecode):
The whole implementation:
The codec can be controlled either using built-in kernel code (a module) or using userspace firmware files (containing the HDA bytecode with equal functionality). The kernel builder may choose to support only the native HDA kernel modules, only the bytecode firmware files, or both solutions.
- users can quickly replace the userspace code (good for testing, updates)
- the kernel code / modules will be reduced
- little overhead (see #Overhead for the virtual machine)
- acceptance as a standard (virtual machine / software interfaces) - a bad example is the em28xx development - http://lwn.net/Articles/306601/
- HDA modem drivers - LSI appears to be developing a closed-source solution for their modems which interacts with the kernel HDA codec interfaces
IKIB (In-Kernel Interpreted Bytecode)
The "virtual machine" is the core of this solution. The virtual machine interprets IKIB (In-Kernel Interpreted Bytecode) and interacts with the kernel API and the HDA API. Data structures may be shared (except pointers, which must be translated). The data sharing and the bytecode interpretation must be safe in every respect so the kernel cannot be damaged: for example, the interpreter must enforce an instruction count limit to prevent CPU lockups, verify data structures passed to kernel functions to avoid bad behaviour, and so on.
The ELF loader for the bytecode modules (firmware files) has the same functionality as the loader for native kernel modules: it takes care of the final code linking (relocations). This means the firmware can be split into logical modules. All firmware files must contain the bytecode version and the required library version.
Note that IKIB is quite universal and might also be used for other Linux kernel parts (projects).
- tcc (TinyCC) with an IKIB backend producing ELF object files (unlinked, like native kernel modules)
- simple disassembler
- tcc is quite functional - it can compile all the HDA driver sources including the complicated data structures
- the text sections are about 10-20% bigger than for the x86 instruction set - more optimizations should be applied to the resulting IKIB
- data/bss sections are the same as on other platforms (as expected)
- disassembler works
- the ELF bytecode loader/linker works - requires some additions to support multiple modules (firmware files)
- the virtual machine implementation is almost complete (without the library); some simple C test utilities can already be run
Overhead for the virtual machine
- interpreted bytecode is slower than native CPU instructions - we need to measure the slowdown factor
- more memory is required - the bytecode interpreter and the bytecode interface library are "extra" which the built-in kernel solution does not have