Sometimes you hate a bug so much that when you fix it you have to blog about it.
Archive for July, 2009
GRUB for Sparc just landed in Debian Sid. I’m looking for victim users who want to test it and report how well it worked for them.
I still don’t know if there are any hidden tricks in Microsoft’s Community Promise. Hopefully the SFLC will clarify. In the meantime, I notice some people pointed at a pair of possible problems:
[Microsoft] promises […] to the extent it conforms to one of the Covered Specifications, and is compliant with all of the required parts of the mandatory provisions of that specification
New versions of previously covered specifications will be separately considered for addition to the list.
Now, what I find interesting here is that in both cases these are NOT necessarily blockers for free software. I repeat: I don’t think they are necessarily a problem. Whether they are or not depends a great deal on the response the Mono community gives.
Is #1 a source of patent liability for companies deploying Mono? It depends. What if the Mono project sticks to a covered version of the standard, and only declares the next one a “stable release” after complete compliance has been achieved?
And #2? It’s only a problem if a version of the standard is implemented and pushed as suitable for production before it has been approved and added to this Community Promise.
I don’t know how the Mono community is going to respond to this. After reading phrases like “patents are worthless” and “we piss on patents” from some of its members, it’s hard to imagine they’re all going to take a constructive attitude. Nevertheless, and much to his credit, that’s what Miguel de Icaza has been doing for the last few days:
In the next few months we will be working towards splitting the jumbo Mono source code that includes ECMA + A lot more into two separate source code distributions. One will be ECMA, the other will contain our implementation of ASP.NET, ADO.NET, Winforms and others.
As for Moonlight’s covenant, not only we are on the same page, most importantly, Microsoft is on the same page. But things take time.
Replacing System.Data is a trivial exercise. The code in question is some 200 lines of C# interfaces (a contract that providers implement) that need to be replaced with something else.
So is this just a PR stunt, or is it going to last? I suppose time will tell. If you’re looking for an answer to that question, the existing dependency Banshee and F-Spot have on System.Data (which is not covered by the ECMA spec) is an interesting place to watch.
But for now, to each what he deserves. Miguel: admitting that there is a problem doesn’t make you weaker. On the contrary, it’s the first step towards solving it. Thank you for doing this.
Most people would agree that the x86 design is full of legacy junk. But to truly understand this, I think one has to dive in and see for oneself. I’d like to talk about my little journey of discovery, in which I learnt the horrors of i8086 legacy.
Roughly three weeks ago, I decided it would be a nice experiment to pick GRUB 2 and make an i386 firmware out of it. GRUB can already run as a standalone bootloader and be part of your firmware when you combine it with coreboot (which initializes the motherboard), but I wanted an easy way to test this standalone mode in QEMU. The result (which, btw, is packaged in Debian as grub-firmware-qemu) behaves in exactly the same way a coreboot/GRUB combination would (except, of course, that it will only work in QEMU).
Initially I thought this would be a piece of cake. In QEMU there’s no motherboard to initialize, so basically the steps would be:
– Process the VGA rom with a far call.
– Switch to protected (i386) mode.
– Done! Jump to grub_main() and start as usual.
Hah! So far from reality. First of all, we start with code segment 0xf000 and offset 0xfff0, which corresponds to virtual address 0xffff0. Our ROM is memory-mapped in the 0xf0000-0x100000 range. So we’re exactly 16 bytes before the end of our code. With no room for anything else, all we can do is jump.
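To make the segment arithmetic concrete, here’s a quick Python sketch of how a real-mode segment:offset pair maps to a linear address, using the reset vector values above:

```python
def real_mode_linear(segment: int, offset: int) -> int:
    """Translate a real-mode segment:offset pair into a linear address:
    the segment is shifted left by 4 bits and the offset added.
    The mask models the 20-bit address bus of the original i8086."""
    return ((segment << 4) + offset) & 0xFFFFF

# The reset vector: CS=0xf000, IP=0xfff0, i.e. 16 bytes below the 1 MiB mark.
print(hex(real_mode_linear(0xF000, 0xFFF0)))  # -> 0xffff0
```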
Not so bad, right? Let’s jump to the beginning of our whole ROM image, and put the initialization code there?
No way. The 0xf0000-0x100000 range in which we’re mapped is just 64 kiB in size, and our image might be bigger (we generate it dynamically with grub-mkimage, and can even include an embedded filesystem). Only the high 64 kiB are mapped there. The rest of our code is near the top of virtual memory, which we can’t access yet because we’re still in i8086 mode (and 640 kiB are enough for everybody, remember?).
I opted for creating a small image with entry code, boot.img, using a hardcoded size (512 bytes). This image will later be picked up by grub-mkimage and placed at the end of our ROM. So we do a relative jump to the beginning of this image:
. = GRUB_BOOT_MACHINE_SIZE - 16
. = GRUB_BOOT_MACHINE_SIZE
and proceed with (finally!) processing the VGA rom:
/* Process VGA rom. */
call $0xc000, $0x3
and switching to 32-bit i386 mode:
/* Transition to protected mode. We use pushl to force generation
of a flat return address. */
pushl $1f
DATA32 jmp real_to_prot
But before we leave boot.img, we need to figure out where the rest of our code is. It’s not relative to our current location because, ugh, the beginning of our ROM was truncated.
We know it’s mapped at the top of memory, and for the sake of simplicity (which was greatly missed in this experience), its 32-bit entry point is at the beginning of it. So we only need to subtract the ROM size from the 4 GiB barrier. But all this was already known to grub-mkimage when it generated our ROM, and it was kind enough to embed this address in a variable:
movl grub_core_entry_addr, %edx
The problem is, our toolchain puts the BSS right after our code, which ends really close to the 4 GiB limit. The BSS might not even fit in the address space! There’s a chance it might, depending on the size of our module selection (GRUB modules are placed right after the main body of code), but no guarantee of it! Isn’t the top of memory a practical location?
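A back-of-the-envelope sketch of that arithmetic in Python, with made-up sizes (the real ROM and BSS sizes depend on the module selection):

```python
GIB4 = 1 << 32  # top of the 32-bit address space

def core_entry_addr(rom_size: int) -> int:
    """The ROM ends exactly at the 4 GiB mark, so the 32-bit entry
    point (the start of the image) is 4 GiB minus the ROM size."""
    return GIB4 - rom_size

# Hypothetical 128 kiB ROM:
entry = core_entry_addr(0x20000)
print(hex(entry))  # -> 0xfffe0000

# The toolchain puts the BSS right after the code, so with a hypothetical
# 120 kiB image plus 16 kiB of BSS it would spill past the 4 GiB limit:
image_size, bss_size = 0x1E000, 0x4000
print(entry + image_size + bss_size > GIB4)  # -> True
```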
So let’s relocate elsewhere. Recipe for relocation: current location, destination address, size. Our destination address is somewhat arbitrary; we just pick whatever we used at link time. Our size was known to grub-mkimage when it generated this ROM, so we arranged to have it embedded in a variable, just like we did for boot.img:
Whoops, too bad, we can’t even read it, because… memory access is always absolute, and we don’t know our absolute location, so we need to make this position-independent somehow. Fortunately, we know the ROM size is a multiple of 64 kiB, so we obtain %eip and round it:
/* Relocate to low memory. First we figure out our location.
We will derive the rom start address from it. */
call 1f
1: popl %esi
/* Rom size is a multiple of 64 kiB. With this we get the
value of `grub_core_entry_addr’ in %esi. */
xorw %si, %si
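What the `xorw %si, %si` achieves can be sketched in Python (the address in the example is hypothetical):

```python
def rom_start_from_eip(eip: int) -> int:
    """Zeroing the low 16 bits of the address (what `xorw %si, %si`
    does to %esi) rounds it down to a 64 kiB boundary.  Since the ROM
    size is a multiple of 64 kiB and the ROM ends at the 4 GiB mark,
    this yields the ROM start address, i.e. `grub_core_entry_addr`,
    provided %eip was within the ROM's first 64 kiB."""
    return eip & ~0xFFFF

# If the popped return address falls somewhere in that first 64 kiB chunk:
print(hex(rom_start_from_eip(0xFFFE1234)))  # -> 0xfffe0000
```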
At last! We can read grub_kernel_image_size:
/* … which allows us to access `grub_kernel_image_size’
before relocation. */
movl (grub_kernel_image_size - _start)(%esi), %ecx
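That addressing mode amounts to “runtime base plus link-time offset”; roughly, in Python (all addresses hypothetical):

```python
def pic_read_addr(symbol_link_addr: int, start_link_addr: int,
                  runtime_base: int) -> int:
    """The assembler encodes the constant offset (symbol - _start);
    adding the runtime base recovered in %esi gives the symbol's
    actual address, wherever the image happens to be mapped."""
    return runtime_base + (symbol_link_addr - start_link_addr)

# Hypothetical link-time addresses, with the runtime base found via %eip:
print(hex(pic_read_addr(0x9F00, 0x8200, 0xFFFE0000)))  # -> 0xfffe1d00
```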
and then proceed to relocate,
movl $_start, %edi
ljmp $GRUB_MEMORY_MACHINE_PROT_MODE_CSEG, $1f
zero the BSS, and jump to grub_main():
/* Call the start of the main body of C code. */
The rest is business as usual.
So, was it so hard to just map the damn thing at a fixed address, say, 0xf0000, without truncating it or using weird memory locations, and use this same address as entry point?
I think I learnt my lesson: never underestimate what 30 years of legacy constraints can do to your sanity. Well, for what it’s worth, it was a nice learning experience, with a byproduct you might find useful and/or interesting yourself.
Apparently, “RAND” must mean something, because I find it being referenced in (supposedly serious) discussions about .NET licensing.
The acronym stands for “Reasonable And Non-Discriminatory”. So far so good, except I don’t have a clue what it means. What does “reasonable” mean when applied to a patent licensing policy? Well, according to my own interpretation of the word, a licensing policy is reasonable when it prevents the patent from being used to impose a tax on the users of any program. But that is just my point of view on what is reasonable. Can you expect patent holders to agree with your point of view on what “reasonable” means when interpreting their own promises?
Obviously, only if you trust them, which defeats the point of them issuing promises in the first place. This is why some of us reject the “RAND” term. It’s essentially deceitful, because it implies there’s an agreement on what is reasonable and what isn’t. The proposed alternative, UFO (for “uniform fee only”), has a clear meaning, and it usually corresponds to what patent lawyers mean when they say their policy is “reasonable” (mind you, I don’t consider uniform taxation reasonable at all).
So whenever you read about Microsoft promising a license under “reasonable” terms to anyone who asks for it, as if being reasonable had some sort of standard meaning, don’t fall for the trap. Stop for a while, check what they’re actually delivering (or whether they’re delivering anything at all) and consider what “reasonable” means to you.
I read Richard Stallman’s post in which he expresses his concern about a serious danger in relying on .NET for free software development. I think Richard makes very good points here, and I do agree that there’s a serious danger, but I don’t think Microsoft would ever drive all .NET implementations underground. If you think that, my opinion is that you’re underestimating them.
Microsoft is smarter than that. They are a sworn enemy of free software, they’re ruthless, and they know all the anti-competitive tactics in the IT world. There’s no doubt they want to make our community divided and helpless. And when they look at the free software development ecosystem, they see two big groups:
A- Highly profitable vendors like Red Hat or Sun/Oracle.
B- Non-profit communities like Debian or Ubuntu (technically, Canonical is a for-profit venture, but they operate at a loss).
There are also third parties that sell hardware or services and contribute “collateral” improvements to our codebase. I’ll ignore those for the sake of simplicity.
It would be silly to try to harm group B with their patents, since it’s composed of grass-roots efforts which can’t be irreparably injured just by driving a company out of business. Besides, group B actually helps them promote their patent-encumbered standards. Why attack those who are helping you?
Ah, but as for group A, maybe they could use patents to shut it down? Perhaps, but I think they’re even smarter than that. Sun Tzu said: “When you surround an army, leave an outlet free. Do not press a desperate foe too hard.” If Mono-based applications become a significant competitive advantage (and it is on their agenda that they do), and their competitors are forbidden from using them, those competitors will put all their effort into pushing for alternatives, even at great expense. I really think they know better.
I recently came across this very interesting article, written in 1999, which details the tactics used by Microsoft to fight IBM. They obviously saw OS/2 as a threat. Back then, Windows 95 was the trading token. They could have caused IBM a great deal of harm had they refused to license it to them, but it seems the idea of subjugating IBM was more appealing. This is how Garry Norris (IBM) put it:
“Microsoft repeatedly said we would suffer in terms of prices, terms, conditions and support programs, as long as we were offering competing products.”
“[Microsoft] insisted that IBM sell 300,000 copies of Windows 95 in the first five months or face a 20 percent price increase”
Nice deal, eh? Make your dependency on Windows 95 stronger, or else we’ll use your existing dependency on Windows 95 against you. No surprise IBM abandoned the PC market. Are Red Hat and Sun/Oracle headed in the same direction?
Draw your own conclusions. From my point of view, projects like Debian and Ubuntu are completely safe from direct patent threats. Should we care if Red Hat or Sun/Oracle succumb? Perhaps not; after all, what are they doing for us?