Thursday, August 11, 2016

Concerns with Xen PVH / HVMLite boot on Linux x86




I've been helping a bit with streamlining proper upstream support for Xen on x86 Linux. One of the items I decided to take on is the so-called "dead code" concern theoretically present on x86 Linux Xen guests, largely due to the radical way in which old PV Xen x86 Linux guests boot. This topic is a bit complex, so I had previously written two posts to help shed some light into these dark corners of the technical Linux universe that only a few really care about.


Xen has evolved over the years, but so has hardware to help with virtualization. Some say and believe KVM is a much better platform for virtualization than Xen since KVM never had to deal with the lack of hardware virtualization support. To a certain degree this is true -- the KVM design has an upper hand in that it has not had to implement any of the legacy complexities in hardware. If you follow the money in terms of investment, you will notice Moshe Bar, who had co-founded XenSource (later acquired by Citrix), also co-founded Qumranet (later acquired by Red Hat), which was the main company originally behind KVM. In these regards KVM is a natural architectural evolution over Xen. Despite the technical leap forward, this is not to say KVM is simply better, that KVM cannot possibly have dead code, or that Xen could not do better. There may be less dead code in KVM on the Linux kernel, but in analyzing how dead code comes about I've come to the realization that dead code should be a generic concern all around; the Xen design just exacerbated the concern and took the situation to a whole new level. As it turns out there is also a shit ton of dead code possible in qemu... so perhaps some is saved on KVM, but qemu still has to address this very same problem. This is also not to say that KVM does not paravirtualize. Quite the contrary, it has had to learn from the Xen design -- so it has a paravirtualized clock and devices, but it doesn't have a paravirtualized interface for timers and interrupts; it uses an emulated APIC, and so you end up with qemu as a requirement for KVM.

As hardware virtualization features evolved, Xen has obviously had to provide support for them as well. This has led to the complex paravirtualization spectrum described best in this page. The "sweet spot" for paravirtualization has evolved over the years, and the latest proposal on the Xen front is called HVMLite. A previous incarnation of this was the Xen PVH design, but that old incarnation is going to be ripped out of the Linux kernel completely as it never really took off for production. HVMLite is the proper replacement, but to avoid complexities with branding the same old name PVH will be used. From here forward I refer to PVH as the new shiny HVMLite design, not the old code in the kernel as of Linux v4.8 days.

What interested me the most about the new PVH design was its proposed alternative boot protocol, which should hopefully address most of the concerns folks had with the previous old legacy PV design. Xen PVH will also not use qemu. With these two things in mind, from one perspective one could actually argue that Xen PVH guests may suffer from less possible dead code than KVM guests. The rest of this post covers some basics of this new PVH design with a focus on the boot protocol, a bit of the evolution of the Linux x86 boot protocol, and where we might be going. I really am writing this mostly for my own note taking and future reference, and only secondarily in the hopes it may be useful to others.


I've been told by Xen maintainers that the PVH boot ABI was apparently settled long ago... As someone new to this world, this came as a huge surprise to me, given I was not aware of any Linux x86 maintainer having done a thorough evaluation of it. Most importantly, if it were an agreed-upon, acceptable, and reasonable protocol, this should have been reflected by the fact that those who likely had the biggest concerns over Xen's old boot protocol would have been fans of the new design. That's at least the litmus test I would have used had I tried to handle a technical revamp. Unfortunately, as I spoke to different folks, I got the impression most x86 folks had either completely given up on Xen or were completely unaware of this new PVH design.

The given-up part here is a bit serious and worrisome. Some folks can give two shits over what goes into Xen, to the extent that they are OK with Xen merging anything so long as it does not interfere with or regress Linux in any way, shape, or form. This lost-cause attitude has a bit of history, and the PV design I mentioned above is to blame for some of it -- the Xen PV design interfered with and regressed Linux often enough that it became a burden. The danger in taking a laissez-faire attitude with Xen in Linux is that we are simply not doing our best; users can suffer, and you can then only count on the Xen community to fix things. This... perhaps is the way it should be -- however, it also implies we may not be learning anything from this other than a fear of such intrusive technologies in Linux. I believe there is quite a bit to learn from this experience, and there are things we can do better. This latter part is the emphasis of my post, given that, as I'll explain below, I've also partly given up.

There are benefits to taking a proactive approach here, and Xen is not the only one that could benefit. It sounds counterintuitive, but helping Xen with a clean boot design is not just about addressing a cleaner boot protocol for Xen alone. For instance, consider the loose semantics sprinkled over the kernel for guests, which even ended up in a few device drivers -- paravirt_enabled() was one, which thanks to some recent efforts by a few is now long gone. This sort of stupid epidemic is not Xen specific -- even KVM has had its own hacks. For instance, an audio driver had an "inside_vm" hack for guests; when I tried to look for an alternative I was told no possible solution existed, when in fact only 4 days later a completely sensible replacement was found. Clean, well understood semantics for guests are needed early in boot. We should not allow nasty hacks for virtualization in the kernel; understanding why these hacks creep up and finding proper solutions for them is extremely important. Helping review Xen's boot design should help us all avoid seeing cruft land in the kernel long term. It should also pave the way for supporting new radical technologies and architectures using a well streamlined boot protocol.

Let's review the new PVH boot protocol. The last patch set proposal to add PVH to Linux added yet-another-entry-point (TM) by annotating it as an ELF note; this entry was Xen PVH specific (a sketch of such an annotation follows the list below). It had some asm code, and finally it copied boot params and then handed things off to Linux. I was a bit perplexed. I had looked so much into the flaws of the previous PV boot design that I was super paranoid any new entry was simply doomed to be a disaster, so naturally I was extremely suspicious from the very beginning, despite the delta being small and it still using startup_32() and startup_64(). These have become de facto entry points -- grub2 and kexec use them -- so another thing using them seems fair. However, I learned that:


  1. Linux Xen ARM guests use Linux's EFI entry to boot when on Xen
  2. Windows guests will rely on Windows' EFI entry to boot when on Xen
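
For reference, here is roughly what such an ELF note entry annotation looks like, going by the proposed patches. Treat this as a sketch: the note type (XEN_ELFNOTE_PHYS32_ENTRY) and the entry symbol (pvh_start_xen) are the names used in the proposals floating around at the time, and may well differ in whatever finally lands.

/* Sketch: advertise a dedicated PVH entry point via an ELF note in the
 * kernel image (assembly context). The hypervisor's domain builder scans
 * for this note and jumps straight to the symbol it points at, in 32-bit
 * protected mode with paging disabled -- no 16-bit stub involved. */
#include <linux/elfnote.h>
#include <xen/interface/elfnote.h>

ELFNOTE(Xen, XEN_ELFNOTE_PHYS32_ENTRY, _ASM_PTR pvh_start_xen)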


Naturally, my own first observation was to wonder why we can't use EFI to boot x86 Linux on Xen as well. There are a few reasons for this, but perhaps the situation is best summarized by Matt Fleming, the Linux kernel's EFI maintainer:

"Everyone has a different use case in mind and no one has all of them in mind"

Regular guests are known as domU guests. Guests with special privileges are known in Xen as dom0. So if you boot into Xen, and then a Linux control guest OS, that's the dom0; you can then spawn domU guests using dom0.

The first obvious concern over exclusively using EFI is that, contrary to Windows, Linux needs to support dom0, so hypercalls would need to talk to EFI. Xen does support dom0 on Linux ARM guests, but in that case, as George Dunlap clarified to me, it relies on the native ARM entry path (as used by uboot) and depends completely on device tree for hardware information. x86 Linux supports device tree, and has used it on some odd x86 hardware, however there are assumptions made about what type of hardware is present. ACPI can and should be used for ironing out discrepancies, however it remains unclear whether this would suffice to support all cases required for x86 Linux guests when supporting dom0.

For domU guests an EFI emulation would need to be provided by Xen somehow. But if Windows requires EFI, this should be a shared concern. Upon review with Matt: if one wanted a minimal EFI environment, one could provide only the EFI services really needed. We'd also need a way to distinguish bare metal boot from PVH when using EFI; Matt noted that using an EFI GUID seems to be one way to fill in the required semantics. If EFI were required for domUs, though, that would mean Xen unikernels (Linux or not) would need to boot EFI. To be clear, unikernels can be Linux based as well; they consist of very slim kernels with a small ramdisk and a single process running as init. George notes that in these cases even an extra megabyte of guest RAM and an extra second of boot time is a significant cost to incur on guests. He further notes that using OVMF (which would provide EFI) is an excellent solution for domUs when you boot a full Linux distribution, but that it would impose a significant cost on using Linux in unikernel-style VMs. This seems like a fair concern, however it's not a reason why Linux should not be able to use EFI. In fact, supporting booting Linux x86 with EFI using OVMF seems like a design goal for Xen; after all, that would also allow Xen to boot Windows guests without qemu to emulate devices, since OVMF would be able to access the PV devices until the PV drivers come around for Windows. Another concern over requiring EFI is that other open operating systems may not support EFI entry points (do NetBSD and FreeBSD not support EFI boot?). The biggest concerns then are the implications of using EFI for dom0, requiring it for small unikernel guests (Linux or not), and the lack of other guest OS support for EFI.
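
To make the GUID idea concrete, here is a minimal sketch of what detection could look like: the hypervisor installs an EFI configuration table under a well-known GUID, and the kernel's EFI boot path scans for it to learn it is booting as a PVH guest. Everything below -- the GUID value, the struct layouts, and the function name -- is a made-up illustration; no such table is defined anywhere today.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Simplified EFI GUID and configuration table entry layouts. */
typedef struct { uint32_t a; uint16_t b, c; uint8_t d[8]; } efi_guid_t;

struct efi_config_table {
        efi_guid_t guid;
        void *table;
};

/* Hypothetical GUID a hypervisor could register -- invented here. */
static const efi_guid_t XEN_PVH_BOOT_GUID =
        { 0xdeadbeef, 0x1234, 0x5678, { 0, 1, 2, 3, 4, 5, 6, 7 } };

/* Walk the EFI system table's configuration tables; a match tells us we
 * are booting as a PVH guest and hands us its boot information. */
static void *find_pvh_boot_info(const struct efi_config_table *tables,
                                size_t nr_tables)
{
        for (size_t i = 0; i < nr_tables; i++)
                if (!memcmp(&tables[i].guid, &XEN_PVH_BOOT_GUID,
                            sizeof(efi_guid_t)))
                        return tables[i].table;
        return NULL; /* bare metal, or no such table installed */
}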


Even though we were supposed to have a good technical session at the last Xen Hackathon in London in 2016, when it came down to talking about alternatives to the existing PVH boot ABI, David Vrabel stonewalled the discussion by indicating the decisions had already been made, and as such found it pointless to discuss the topic. That's the very moment I gave up on helping with this topic for Xen. The rest of the details here and below come from hallway tracks between me, Matt Fleming, Daniel Kiper, Andrew Cooper, Jürgen Gross, and later Alexander Graf. If you want to help change things for the better for Xen PVH on Linux you'll have to coordinate with them. My own personal interest has since shifted to the longer-term picture for Linux.



With regards to using EFI to boot Xen PVH -- the devil is in the details. Even if we go the EFI route there's a slight discrepancy between how Xen boots Linux and how Linux's first five pre-decompression x86 entry points work -- in particular, Linux's EFI entry supports and requires decompression to be done as part of the kernel boot code. On the other hand, the Xen hypervisor runs domU Linux guests just like any other regular userspace application: paging is enabled. Linux decompression runs in 32-bit mode with paging disabled, and the code relies on this. The hypervisor does not do the decompression for the domU guest, the toolstack does, so in this regard the toolstack must support each decompression algorithm used by each supported guest. Also, some VT-x hardware can't run real-mode code, which makes up the 16-bit boot stub. The exception to this is when Xen boots dom0 Linux; in that case, as Andrew Cooper explains, "the hypervisor contains just enough domain builder code in .init to construct dom0, but this code is discarded before dom0 starts to execute". If one were to resolve the EFI boot issue for Linux, it would not only be useful for PVH; old HVM guests could use it as well, the only difference being that HVM guests would use qemu for legacy devices.

Can these issues be resolved though? For instance, can we add a decompression algorithm type that simply skips decompression? Additionally -- even if these are the reasons to have this new boot method used by Xen for the new PVH -- has this really been fully vetted by everyone? Are there really no issues with it? One concern expressed by Alexander Graf recently was that without a boot loader (grub2) you lose the ability to boot from an older btrfs snapshot. In this light, booting directly is a bad idea.
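
The skip-decompression idea is conceptually trivial. Here is a toy sketch of what a pass-through "decompressor" could look like; note the signature below only loosely mimics the shape of the kernel's decompressor hooks and is simplified for illustration -- a real one would have to match the kernel's decompress_fn exactly.

#include <string.h>

/* Toy "none" decompressor: the image is already a flat, uncompressed
 * payload, so decompression degenerates into a bounds-checked copy. */
static int decompress_none(const unsigned char *in, long in_len,
                           unsigned char *out, long out_len,
                           void (*error)(char *))
{
        if (in_len > out_len) {
                error("uncompressed image larger than output buffer");
                return -1;
        }
        memcpy(out, in, (size_t)in_len);
        return 0;
}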

It turns out that if you want to boot Xen you rely on the Multiboot protocol, originally put out by the FSF long ago. The last proposed PVH boot patches had borrowed ideas from Multiboot to add an entry to Linux, only it was Xen'ified. Multiboot 2 seemed flexible enough to allow all sorts of custom semantics and information to be stacked into a boot image. The last thought I had on this topic (before giving up) was -- if we're going to add yet-another-entry (TM), why not extend Multiboot 2 support with the semantics we need to boot any virtual environment, and then add a Multiboot 2 entry to Linux? In fact, could such work help unify boot entries across architectures in Linux long term? Is a single unified Linux entry possible?
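
For a sense of what Multiboot 2 offers, here is a minimal sketch of its header as laid down in a boot image, per the Multiboot 2 spec: magic, architecture, length, and checksum, followed by a series of tags, which is where the extensibility lives. A "virtual environment info" tag as mused about above would be a new, hypothetical tag type; nothing like it exists in the spec today.

#include <stdint.h>

#define MB2_MAGIC     0xE85250D6u  /* Multiboot 2 header magic */
#define MB2_ARCH_I386 0u           /* 32-bit protected mode, i386 */

/* Minimal Multiboot 2 header: the four fixed fields plus the mandatory
 * terminating end tag (type 0, size 8). Real headers place additional
 * tags between the fixed fields and the end tag. */
struct mb2_header {
        uint32_t magic;
        uint32_t architecture;
        uint32_t header_length;
        uint32_t checksum;   /* magic + arch + length + checksum == 0 */
        uint16_t end_type;
        uint16_t end_flags;
        uint32_t end_size;
};

const struct mb2_header mb2
        __attribute__((section(".multiboot"), aligned(8))) = {
        .magic         = MB2_MAGIC,
        .architecture  = MB2_ARCH_I386,
        .header_length = sizeof(struct mb2_header),
        .checksum      = (uint32_t)-(MB2_MAGIC + MB2_ARCH_I386
                                     + sizeof(struct mb2_header)),
        .end_type = 0, .end_flags = 0, .end_size = 8,
};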

Using EFI seems to require work and a proof of concept; is there an alternative? For instance -- Alexander Graf wonders why the 32-bit entry points can't be used directly. We would need a PV IO description table -- could merging what we need into ACPI tables suffice to address concerns? Again, this gets into semantics, as we'd still need to find out whether whoever entered the entry point is a Xen PVH guest or not, so we can set up the boot parameters accordingly. One option, for instance, is to use CPUID; however, the CPUID instruction was introduced with the Pentium, so this would fail on i486. Jürgen has noted that we could probably just detect CPUID support first, and thus avoid the invalid opcode.
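
A rough sketch of what Jürgen's suggestion amounts to, assuming 32-bit x86: probe for CPUID by toggling the ID bit (bit 21) in EFLAGS, which pre-Pentium CPUs cannot flip, and only then query the hypervisor vendor leaf at 0x40000000, where Xen answers with the "XenVMMXenVMM" signature. (Real detection code also walks the leaves in 0x100 increments in case another hypervisor interface sits at the base leaf; that is omitted here.)

#include <stdint.h>
#include <string.h>

/* Probe CPUID availability: if we can toggle EFLAGS.ID, CPUID exists. */
static int have_cpuid(void)
{
        uint32_t before, after;

        asm volatile("pushfl\n\t"
                     "popl %0\n\t"           /* before = EFLAGS */
                     "movl %0, %1\n\t"
                     "xorl $0x200000, %1\n\t" /* flip EFLAGS.ID (bit 21) */
                     "pushl %1\n\t"
                     "popfl\n\t"             /* try to write it back */
                     "pushfl\n\t"
                     "popl %1"               /* after = EFLAGS */
                     : "=&r" (before), "=&r" (after) : : "cc");
        return !!((before ^ after) & (1 << 21));
}

/* Check the hypervisor vendor leaf for Xen's signature. */
static int running_on_xen(void)
{
        uint32_t eax, ebx, ecx, edx;
        char sig[13];

        if (!have_cpuid())
                return 0;
        asm volatile("cpuid"
                     : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx)
                     : "a" (0x40000000));
        memcpy(&sig[0], &ebx, 4);
        memcpy(&sig[4], &ecx, 4);
        memcpy(&sig[8], &edx, 4);
        sig[12] = '\0';
        return strcmp(sig, "XenVMMXenVMM") == 0;
}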

In the end, talk is cheap, so we need to see code. But hopefully this summarizes the issues on both sides well enough. Good luck!

Saturday, February 27, 2016

I'm part of Conservancy's GPL Compliance Project for Linux

I am one of the Linux copyright holders who has signed an agreement for the Software Freedom Conservancy to enforce the GPL on my behalf, as part of the Conservancy's GPL Compliance Project For Linux Developers. I'm also a financial supporter of Conservancy. We're a group of Linux kernel developers who give input and guidance on Conservancy's strategy in dealing with compliance issues on the Linux kernel.


  1. I don't take this lightly
  2. "Don't be evil" is hard
  3. Why things are hairy when it comes to the Linux kernel and GPL enforcement
  4. Why we need GPL enforcement
  5. How can we enforce the GPL responsibly
  6. Evolving copyleft

I don't take this lightly


Joining was not something I took lightly. When I started hacking on Linux I was at odds with the arguments over morality in free software put forward by the FSF, and simply felt the GPLv2 on Linux was a nice coincidence; I felt I just wanted to hack and be productive. It took me over 10 years of philosophical thought to make a final decision about where I stand with regards to software freedom. I've made my motivation and intent in the community clear before, but it's worth reiterating now: work harder, always in the spirit of what I believe is right, and accept no compromises on shit engineering.

"Don't be evil" is hard


I've been hacking on Linux since I was in college. After doing kernel development in the industry for a while, I have learned the hard way that "Don't be evil" or "Do the right thing" is easier said than done for companies, especially with regards to software freedom. I've determined that without a mathematical and economic framework that takes into consideration and appreciates freedom, it will take either a lot of foresight, or Free and Open Source software principles being part of your company DNA, for a company to appreciate the freedoms behind free software. To help companies embrace copyleft, within the community we really need to figure out how copyleft can affect and help businesses, understand the complexities it brings about, and work with both the community and companies on evolving copyleft and businesses in amicable ways. It's easier said than done.

Why things are hairy when it comes to the Linux kernel and GPL enforcement





Consider answering these questions in today's business world when contributing to Linux.
  • Who owns the copyrights or patents to the software that Joe Hacker wrote prior to joining Yoyodyne, Inc.?
  • Who owns the copyrights or patents to the software that Joe Hacker will write for Yoyodyne, Inc.?
  • What software projects can Joe Hacker contribute to while at Yoyodyne, Inc.?
There are four challenges that the above complexities bring about for businesses that affect their capacity to contribute to the Linux kernel and participate in GPL enforcement:

  1. How to replace proprietary solutions
  2. The Linux kernel is licensed under GPLv2 and as such only gets implicit patent grants
  3. These days companies have no option but to address patents considerations
  4. Addressing possible company conflict of interests

I've covered these issues before; what follows is a terse summary. Copyleft obviously is an imminent threat to proprietary software that relies on copylefted software, such that the proprietary software is arguably subject to the conditions of the license that the copylefted software is distributed under. An implicit requirement, however, is that copyright holders of the copylefted software are both willing and able to seek legal remedies against distributors of the proprietary software. In this light, if a business does not know how to phase out proprietary software it can be affected, short term or long term. Patents can be implicated by some free software licenses. Paying for patent licensing also adds up. Patents can also be used to sue people. If you have signed a conflict of interest agreement with business partners, things can get really hairy, and this puts the industry at odds when it comes to free and open source software, even if you're an "open source company". Since we lack the mathematical and economic framework to tangibly appreciate freedoms over patents, and since patents can ultimately be endangered by certain free software licenses, it's only natural that corporate interests will want to undermine certain free software licenses.




As businesses evolve, copyleft evolves. Patents were one of the latest additions to free software licenses, both through the GPLv3 and the Apache 2.0 license. I consider the Apache 2.0 license one of the best legal innovations in our arsenal in the free software world: if you want to really test what seems to be a claim of opposition only to copyleft, ask if the Apache 2.0 license can be used instead. In the Linux kernel, though, we have an issue: since it's GPLv2 it only provides an implicit patent grant, and since we can't add GPLv3 or Apache 2.0 licensed material to the Linux kernel, the patent question is still left open for businesses to address. To help with this, linux-firmware now also requires an explicit or implicit patent grant. We need to close all the gaps that prevent copyleft evolution. And sure, we can use permissive licenses on Linux, but that should only be used as a compromise -- not a de facto practice. For instance, getting ZFS relicensed to the ISC license might be a great compromise for all parties involved. Fully permissive licenses without patent provisions should be our last resort and compromise. Since patents are prevalent everywhere, businesses have to deal with a lot of issues implicitly behind the scenes.

Case in point: as covered recently by LWN, at linux.conf.au 2016 Bradley talked about corporate opposition to copyleft. He explained how corporations will typically not do GPL enforcement in the name of the community, unless of course it fits their business model. He gave the example where Red Hat was sued by a patent troll, and in response Red Hat alleged GPL infringement against Twin Peaks; with this Red Hat got a patent license, but the Twin Peaks software remained proprietary. Red Hat is an example of a company with Open Source software built into its business DNA, and even they seem to walk on eggshells when it comes to GPL enforcement. They are not to blame though; doing GPL enforcement for the community responsibly is hard, especially these days in such a complex technology business sector, where anyone can be your partner and business contracts typically forbid you from engaging in actions that may harm any of your business partners.

Why we need GPL enforcement


Because of the challenges explained above, even the best of Free and Open Source companies are walking on eggshells when it comes to GPL enforcement. By now you should have a sense of why some corporate interests may be trying to undermine copyleft licenses to make them effectively as good as permissive licenses. We can't let that happen. Evidence shows the number of GPL violations has skyrocketed over the years, to the extent that we cannot deal with them all. There were only a few community groups dealing with GPL violations, and those were outside of the Linux kernel; Linux kernel GPL violations remain common and unenforced. For this reason GPL enforcement is critical for the Linux kernel and community.

How can we enforce the GPL responsibly?


To address this, Conservancy published a set of principles that should govern GPL enforcement; the primary objective is simply to bring about license compliance. We are not out for money, or blood, simply compliance with the license to strengthen the commons. We give input and guidance on Conservancy's strategy in dealing with compliance issues on the Linux kernel. Responsibly enforcing the GPL for the community, within the community, with due care should be of utmost interest to any business contributing to Linux. If you're a Linux developer and would like to chime in and help us with these efforts, consider joining the Conservancy's GPL Compliance Project For Linux Developers; please contact <linux-services@sfconservancy.org> for more details.


Evolving copyleft


In the post where I describe the epiphany that, after over 10 years, allowed me to finally cope with software freedom philosophy, I explained how helping evolve copyleft is important; I'll provide a summary of that in light of the Linux kernel and its GPLv2 license. I believe some of the challenges described above are self-inflicted, as we were not able to move to GPLv3, given all these patent considerations. I don't necessarily think we should move to GPLv3, but I do consider the tensions that arose from those discussions really unfortunate. Lesson learned: we should evolve copyleft openly, in the community, with the community. If you'd like to help with that, I invite you to take a look at copyleft-next; there is a github tree and mailing list. copyleft-next is GPLv2 compatible.

Thursday, February 25, 2016

ZFS, Linux, illumos and the ISC license

People are discussing whether Canonical including and shipping ZFS as a kernel module of the GPLv2 licensed Linux kernel might be a GPL violation. James Bottomley recently posted an interesting opinion: although it is a technical GPL violation, "it's difficult to develop a theory of harm and thus the combination is allowable", given that you'd need to prove harm to prosecute. Meanwhile, just today Conservancy released a Statement on ZFS and Linux combinations. In it are very important pieces of information on serious incompatibilities, which take this a bit beyond simply adhering to GPL compliance standards to make people happy and not harm people. I'll review those, but also explain a bit of the history of why ZFS is under CDDLv1 and why Oracle no longer benefits from ZFS being licensed under CDDLv1. We should be focusing more on the illumos community and the BSD community, what their goals are, what they can do, and why they should do anything anyway. If we want a middle ground where we can all benefit, including the proprietary folks, we should all just lobby for the ISC license as a reasonable compromise for the ZFS community. I'll explain why.

You can currently only use CDDLv1 for ZFS



CDDLv1 says that if you redistribute any binaries the software must be distributed only under the CDDLv1. There are a series of issues with this. The easiest to grok is that modules can be built-in, and the kernel as a whole is GPLv2. It seems Canonical will ensure ZFS only ever lives as a Linux kernel module, however there are a series of serious issues with that as well. I won't list them all, and I'll purposely be vague about it as I do not want to do anyone's dirty homework, but I'll at least describe one item that you can find discussed in the archives today. We have only:

MODULE_LICENSE("Dual BSD/GPL")

We do not have:

MODULE_LICENSE("GPL-Compatible")

This is on purpose. I know because I actually proposed such a change years ago! I did this because at that time I was on the hippie bandwagon wanting to help Linux and the BSD camp sing kumbaya together on the 802.11 front. The "Dual BSD/GPL" declaration was added for historical purposes to account for old BSD incompatibility, but for all intents and purposes all upstream Linux kernel modules currently using the dual declaration might as well just outright be declared as:

MODULE_LICENSE("GPL")

This hasn't been done, and we keep the dual declaration just to avoid confusion, but it's perfectly possible to use the GPL declaration even on permissively licensed Linux kernel modules. Another utterly stupid issue with this incompatibility is you can't hack on ZFS unless you use the CDDLv1 license. As I'll describe below, perhaps this might have been a good thing for Sun, but as things stand now, even for Oracle, this is not really a good thing.
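
For context, this is roughly how the kernel decides whether a module's declared license taints the kernel. The shape below paraphrases include/linux/license.h; the exact string list may differ across kernel versions.

#include <string.h>

/* Paraphrased from include/linux/license.h: any MODULE_LICENSE() string
 * outside this list marks the module as proprietary, taints the kernel,
 * and blocks access to GPL-only symbols. Note there is no generic
 * "GPL-Compatible" entry. */
static inline int license_is_gpl_compatible(const char *license)
{
        return (strcmp(license, "GPL") == 0
                || strcmp(license, "GPL v2") == 0
                || strcmp(license, "GPL and additional rights") == 0
                || strcmp(license, "Dual BSD/GPL") == 0
                || strcmp(license, "Dual MIT/GPL") == 0
                || strcmp(license, "Dual MPL/GPL") == 0);
}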

When shipping binaries the GPLv2 applies


CDDLv1 prohibits you from abiding by this. This is perhaps one of the more obscure incompatibilities, but I've tried to summarize it as best as possible with the above statement.

CDDLv1 was not purposely incompatible with GPLv2


CDDLv1 was not just the license of ZFS, it was the license chosen for OpenSolaris. Some ex-Sun employees have claimed that the CDDLv1 was purposely made incompatible with the GPLv2, but Bryan M. Cantrill, one of the Sun employees who actually ended up staying even after Oracle acquired Sun, clarified at USENIX LISA XXV that this is not true. He explained (starting at 22:00 in the video) that part of the incompatibilities came from the fact that although they wanted copyleft, they needed a form of copyleft that enabled proprietary drivers, such as drivers for partners like EMC and Veritas. This shows that even if you have great intentions and want to use copyleft, if you have any proprietary strings attached you'll be affected, and can only produce GPL incompatible solutions.

Oracle does not benefit from CDDLv1 ZFS anymore



To understand this we'll have to review a bit of history. ZFS was just part of OpenSolaris. Let's consider the original motivation at Sun -- to enable them to keep proprietary drivers -- how this aligned with Sun's old business model, and then review Oracle's current business model for "Solaris", what remains from the OpenSolaris effort, and how any of this can impact Oracle's business.

First, credit where due. Bryan credits Jonathan Schwartz for making it a priority to open source the operating system; he mentions that OpenSolaris started in January of 2005, when DTrace became the first part of the system to be open sourced, and that the rest of the OS was open sourced in June 2005. Sun was bought out by Oracle in 2009, and the acquisition closed in February 2010. Bryan stayed at Oracle until July 25, 2010.

On August 3, 2010 illumos was born, not as a fork but rather as an entirely open downstream repository of OpenSolaris with all the proprietary pieces rewritten from scratch or ported from BSD. On Friday, August 13, 2010, however, an internal memo was circulated by the new Solaris leadership saying that they would no longer distribute source code for the entirety of the Solaris Operating System in real-time as it is developed. It seems this was never publicly announced, and updates just stopped on August 18, 2010. Solaris 11 was released on November 9, 2011, with no source code released to go with it.

That marked the end of OpenSolaris...

Oracle decided to keep Solaris proprietary, and they were able to do this as OpenSolaris development required copyright assignment. Although OpenSolaris died, the illumos project continued to chug on, independent of Oracle, with one striking difference: copyright assignment is not required. This means Oracle does not own copyright on the illumos project and its new innovations. Oracle cannot use illumos versions of ZFS unless they also release their own Oracle Solaris source under the copyleft CDDLv1; the same little pieces of GPLv2 incompatibility cut both ways, requiring Oracle to open source their code again before they can reap the benefit of illumos ZFS.

illumos innovations can never be part of proprietary Oracle Solaris






illumos has seen critical innovations and bug fixes to ZFS, DTrace, Zones and other core technologies. The real kernel architects behind ZFS have left Oracle, are not in favor of Oracle's decision to stop OpenSolaris, and have gone to great lengths to ensure that Oracle plays by the archaic copyleft CDDLv1 license. Examples of features added to illumos ZFS are SPA versioning that allows disjoint features from different vendors without requiring conflicting versions, UNMAP support for STMF allowing for better ZFS-backed iSCSI LUNs, and estimates for ZFS send and receive. To top this all off, even if the Linux community made changes to ZFS to fix issues or add new innovations, Oracle could not benefit from them. The BSD community would have contributed to ZFS before the Linux community, but those contributions also could not be used by Oracle.

Why the ISC is a win for all



Are the old reasons for Sun to use CDDLv1, to enable proprietary drivers, still part of illumos' and the BSD community's own goals? If not, can someone confirm whether the illumos or BSD community is forever stuck with the CDDLv1? If so, would they be perfectly happy with that? Is the potential gain of collaborating with the Linux community worthy enough for illumos to want a relicense that would make things work for all parties involved? What would it take for them to relicense? Does the illumos community really want Oracle to release Oracle Solaris under the CDDLv1? If Oracle wanted to keep up the Oracle Solaris solution, help illumos collaborate on the Linux front, and enable contributions on Linux to be usable even on proprietary Solaris solutions, the ISC license would make a good middle ground for all parties involved. We did this on the 802.11 front; it should easily apply as a reasonable compromise for ZFS as well, if the parties really wanted a good middle ground.

Friday, January 29, 2016

Support software freedom now!


Free Software is in a critical state today. Bradley Kuhn recently made an urgent call for supporters of free software to help a campaign to strengthen both the Free Software Foundation and Software Freedom Conservancy, especially if you donate before January 31st, 2016, as your donation will be matched! I've learned the hard way that without such organizations we could be in for a dark age of user software freedoms. No other entity is doing what they do, and they are both of critical importance to the community. Because of this, I'm not only contributing now but have decided to donate to each organization at the very least 1% of my salary each year. If you are employed because of free software, I urgently encourage you to consider contributing. If you're in dire straits economically, at least give $20; for fuck's sake, it's probably just two city whiskey shots or a long Uber / Lyft ride.

Thursday, January 28, 2016

Why open hardware must succeed



To the average person, open hardware simply sounds like a good idea... They may have heard of this thing called "open source" that some "disruptive" hipster companies have used and embraced to create new business models, so open source hardware seems like a natural progression. There's more to this though. The average person will not understand why it's not just a great idea, but that we are in dire need of open hardware in the industry; the average person will not understand why it's vital to the success of the open source movement. The average person will not understand that because open hardware follows a better development model -- the collaborative model -- it will grow very fast but also face a lot of very serious challenges. This post tries to address this gap.

Back in 2013 I wrote a trilogy on the dangers to Free Software, the Free Software patent paradox, and even threw in a quip on this topic and its relationship to the cosmos. I wrote this in desperation because, as I saw it at that time, there were really no good prospects in near sight; it was unclear when we'd see a steady change in the right direction. My post was meant more as an alarm -- to create consciousness over fundamental issues in our community. The tide is changing though, fast, and for the better. I recall reading about the open hardware summit efforts in 2010; back then I was not impressed and the prospects seemed fuzzy. The 2015 Open Hardware Summit passed a little while ago, and upon reading about some of the talks and presentations, it's clear now that momentum has built up significantly. This is slightly relieving but it's not enough. We really need to create awareness that open hardware is not just cool, fun and trendy, but also:
  • Open Hardware development is a key requirement for the success of the open source community
  • Open Hardware development is very likely where the best evolutionary methodology for combining the best hardware and software will come about
Ignoring these two principles will relegate any serious disruptive open hardware efforts to side projects.



Statistically there really are only a few who will care about this topic -- those folks should know there is an uphill fight for the success of Open Hardware. Open hardware can be extremely disruptive, and the type of changes to be expected from it can have significant economic effects on existing businesses, if those businesses do not adapt. I've learned the hard way that businesses are not only hard to change, they simply may not want to, even if you are certain you have a solution for them. Companies may have really good reasons not to change, and one cannot take this personally. You have to really think of the bigger picture: if we shift the conversation about the possible disruption of "open hardware" from the impact on existing businesses towards the possible economic impact at large, things change considerably. Having an impact on a few companies should really be the least of concerns to the Open Hardware movement. The dangers involved with open hardware could mean huge shifts in state economics, but only for those companies that could not afford to change or embrace change. In the worst case these days, where a Trump presidency sadly seems statistically possible, that's loose lingo which could easily be twisted by the craziest in America towards the topic of "national security". There is always plenty of work in trying to prevent unexpected huge tidal economic shifts in nation states -- the TPP is one example -- but one should also consider funding in research as well.

Although not related to open hardware, I'll mention a recent issue of relevance with an amazing FOSS project: Sage. Last year William Stein made some effort to create awareness over issues of funding for his open source mathematical suite, which I'd like to use for some perspective. For a bit of background on how he started Sage, read his "Mathematical Software and Me: A Very Personal Recollection". You really do have to ask yourself why the Simons Foundation in their right mind would pick a proprietary product over an open source project at a funding event which actually listed as a goal "to investigate what sorts of support would facilitate the development, deployment and maintenance of open-source software used for fundamental research in mathematics". Stein explained his trials over this effort in detail in his post "The Simons Foundation and Open Source Software" (refer to the Hacker News discussion). One of the only sensible explanations that comes to mind is the possible impact on economics: disruptive economics to existing proprietary mathematical suites in the United States. Naturally, you should then expect different economic regions with different interests to have different motivations, and to perhaps be more keenly interested in supporting these efforts; that actually happens to be the case, refer to the European "OpenDreamKit: Open Digital Research Environment Toolkit for the Advancement of Mathematics", which will "provide substantial funding to the open source computational mathematics ecosystem, and in particular popular tools such as LinBox, MPIR, SageMath, GAP, Pari/GP, LMFDB, Singular, MathHub, and the IPython/Jupyter interactive computing environment". The point I'm trying to make using Sage as an example is that if you do not get much support for open hardware research at your university, don't be surprised; realize what you're up against -- the entire evolution of Silicon Valley and the economics behind it.

Fret not though, I wrote this post also to emphasize both principles stated above; the second one deals with my own conjecture that open models will win. What we need is math, tons of fucking math, semantics, grammar, and more precise science behind what we do with open models. If you are not using the scientific method for evaluation of progress / gains / bugs / etc. in any way for your own project, I highly suggest you consider it. Be pedantic over everything you can measure. Do not get discouraged to find out that the respective proprietary piece you are trying to replace has no such metrics for comparison. That's no coincidence; it's why it survives, after all. I'm delighted to report that since my last call for metrics on FOSS we have had a huge shift: not only are people really meeting up just for this topic alone (the FLOSS community metrics meeting in Brussels), but there are companies spawning (Bitergia) and dedicating themselves to this end, and I'm also seeing a lot of folks starting to talk about this at conferences I attend.

Academia can also help shape economics for the better. When it stops doing this, academia has failed us. It's fairly understandable that economic pressures may have historically influenced academia, but if business and economics theories do not account for the need for self-reinvention and dynamic changes in the market, then those economics and market theories should also be reconsidered, especially in an age where talk of exponential growth, evolution and change is becoming standard. Intellectual property remains a clear challenge. I've been dealing with patent issues since my university days, when, upon learning about polynomiography, I was told I could not release a piece of code as open source to the community since my professor had patents over the subject; surprisingly it's still a lingering issue even for new trendy university efforts such as Singularity University. Open Hardware will suffer because of patents; it's why Open Hardware is key to the success of the open source movement. The open source movement has historically faced challenges with patents, but in the worst case the dangers here lie in that free software developers could become a dying breed (for details refer to this trilogy post on the patent paradox). Open source developers include free software developers, but free software hackers do it for a cause as well. Patent-bloated companies want to hire zombie open source developers who do not care about these issues, and they will do anything in their power to keep the status quo. Because of this, Open Hardware development is a key requirement to the success of the open source community: not only will it provide an outlet for free software developers, it also enables open source developers to get better hardware without bullshit restrictions.