I am mcgrof's smirking revenge -- a blog about active living... hacking, and philosophical brain farts.<br />
<br />
<b>COVID19: don't think about tomorrow, it's too small</b><br />
<br />
Among the few events I enjoy the most in San Francisco are the <a href="https://en.wikipedia.org/wiki/Long_Now_Foundation">Long Now seminars</a>. Not all seminars are great, but the push to get folks to come together and think about the really long term through talks is pretty unique, especially if afterwards you can discuss the topics with a group of friends or family. After COVID19 kicked our teeth in, I knew I had to eventually take some time to sit and think about what the implications of this pandemic are for the long term. I knew this could transform my perspective and expectations a bit, but quite frankly I also dreaded the philosophical turmoil it might bring on me for a while. My procrastination has proven futile: a conversation with a good friend yesterday set the thought process in motion, and as expected my outlook on this pandemic is quickly shifting for the better. This delayed exercise has brought an aspect of our human evolution into the light for me, and it is now clear, as if I were looking at it in a mirror. Once it is there, one cannot ignore it. It is only day 1 since I started trying to look at COVID19 through this lens, and the mental fruits I am collecting are rewarding enough that I wanted to share and encourage this exercise with others. I'll do so by going through a few of the brain farts floating around right now. I cannot help but wonder, with optimism, what revelations or ideas you might come up with as well.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgjaZGCPIj2E78YYRRHyJvW663fQ1Sq9MyxA8zao3II0hQi3riAnbrhyphenhyphenaACcXfUfupCHQAX2qUdPFhHhnsJQKe8EKpcxM49N7yubINi3ruHdO3PSRKpy_8IaS5g7sCA334_5Dv8zA/s1600/future-like.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="1154" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgjaZGCPIj2E78YYRRHyJvW663fQ1Sq9MyxA8zao3II0hQi3riAnbrhyphenhyphenaACcXfUfupCHQAX2qUdPFhHhnsJQKe8EKpcxM49N7yubINi3ruHdO3PSRKpy_8IaS5g7sCA334_5Dv8zA/s320/future-like.jpg" width="230" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
First, was COVID19 expected? It really depends who you ask. It certainly caught many off guard, but I'm starting to accept it as nothing but a small episode in our chaotic human evolution. There are concepts which try to create awareness of highly unexpected events with huge impact. The <a href="https://en.wikipedia.org/wiki/The_Black_Swan:_The_Impact_of_the_Highly_Improbable">Black Swan</a> is one; however, even the author of the concept, Nassim Nicholas Taleb, states this is not a black swan: <a href="https://www.youtube.com/watch?v=Tb2pXXUSzmI">Taleb believes that this is a White Swan. He goes on to state that there is no excuse for companies or governments to be unprepared for this</a>, and because of this he <a href="https://www.youtube.com/watch?v=kixi_Ob4hCM">strongly disagrees with the bailouts for companies</a>, so far as to call them immoral, especially if CEOs keep rewarding themselves handsomely with bonuses. In terms of the long term, Taleb's logic for immorality here is that companies that were not prepared should simply suffer their due economic course, and that a bailout takes taxpayer money away from those in need -- taxpayers bear no responsibility for companies' lack of preparation. While I can see arguments being made against the immorality claim <i><b>if</b></i> one claims complete ignorance of the possibility of such an event, that is still not a strong argument if one takes the position that CEOs should be well educated and prepared. We could at least probably agree that the businesses being bailed out completely failed to prepare for any serious pandemic. In terms of the predictability of this event, though, Taleb is right: not only did we have movies predicting some of what could happen, we even had government operations simulating this with mind-boggling similarity, such as <a href="https://en.wikipedia.org/wiki/Crimson_Contagion">Crimson Contagion</a> in the United States. Yet many countries were still shocked by what happened. Countries with enough intelligence capability to run a full-swing government-funded simulation have absolutely no justification for saying they could not have seen this coming. Let me be very clear: the United States fucked up, and it fucked up big time. I'm only willing to give the benefit of the doubt to countries without intelligence communities predicting such events. Perhaps you might still disagree that this could have been expected; however, I'd be surprised if you believed it can catch us off guard again. <b>Fool me once, shame on you; fool me twice, shame on me</b>. In the long term, there is simply no fucking excuse to be unprepared.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhriCrDoUrGgTd-Jqyo2Q3WfTnmuKMt-Myrdiii40pNbbvSytZxKwsls2uqnrc6j7p1dnJhbcIWOtrMb5rMDZUYI94WILN3J6kATZRrpQq4Yx2x7_C1gm-s-xGIOzxEB5HWYxLppw/s1600/star-wars-nosara.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="539" data-original-width="960" height="179" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhriCrDoUrGgTd-Jqyo2Q3WfTnmuKMt-Myrdiii40pNbbvSytZxKwsls2uqnrc6j7p1dnJhbcIWOtrMb5rMDZUYI94WILN3J6kATZRrpQq4Yx2x7_C1gm-s-xGIOzxEB5HWYxLppw/s320/star-wars-nosara.jpg" width="320" /></a></div>
<br />
The first long-term implication that came to mind was that despite how amazing modern technology is, it had never witnessed a pandemic of this scale. This is evident in the knee-jerk reaction of technology companies trying to help with contact tracing applications, as well as in the privacy concerns bubbling up over them. To me, modern technology was born with the birth of the transistor in 1947. Since then we have seen pandemics such as HIV, Ebola, and Swine Flu, but none of these had such a drastic global impact on health, economics, and social life as COVID19 has. Pandemic or not, no event since the birth of the transistor has pushed the entire planet to huddle indoors for months, all at the same time. We are now even having <a href="https://www.nytimes.com/2020/04/24/podcasts/the-daily/coronavirus-deaths-grief.html">successful full video conference funerals</a>. If modern technology could talk, it could easily bark, <i>What The Fuck</i>. Although a lot of what has happened is negative, in light of how new all this is to technology there are many opportunities to improve things, and many opportunities for paradigms to succeed and flourish. Trends such as working from home, which many Silicon Valley companies famously and historically loathed, will have to be accepted as the norm. Working from home can also easily be supported by the observed <a href="https://www.washingtonpost.com/weather/2020/04/09/air-quality-improving-coronavirus/">quantitative improvements in air quality across cities worldwide</a>. Companies engaged with open source and decentralized, distributed software / hardware development models should thrive without much disruption beyond economic collateral from partners, in comparison to those requiring in-person meetings and centralized development models. I invite you to consider what things we could do better in technology.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYQhwuNKWt0M02lafIocg2B-zfncnVUcGjFi1p_bgvdvsqNwkexJIN0UWNlF9hsrPvodKawbGuULveXAqvEvS6uzM0aD0Zqf3JaQY79EaujVPMiw0ecGfuNuSgfW0jXVTuejxOMw/s1600/cartago-star-wars.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="967" data-original-width="1434" height="215" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYQhwuNKWt0M02lafIocg2B-zfncnVUcGjFi1p_bgvdvsqNwkexJIN0UWNlF9hsrPvodKawbGuULveXAqvEvS6uzM0aD0Zqf3JaQY79EaujVPMiw0ecGfuNuSgfW0jXVTuejxOMw/s320/cartago-star-wars.jpg" width="320" /></a></div>
<br />
I've had conversations with friends who argued that only communism could have prepared countries with the strict measures needed to successfully and quickly flatten the COVID19 curve. I disagreed back then, and now I have stronger evidence. Although the United States completely fucked up, there are modern capitalist democracies which will remain in history as shining beacon examples of what countries <i>should have done</i>. Costa Rica is one of them. Although Costa Rica depends heavily on tourism for its economy, it shut down its borders, all of its national parks and beaches, and heavily restricted vehicle traffic with heavy fines. I am very well aware how much it sucks not to be able to surf during this pandemic, and yet the majority of Costa Rican surfers, and even surf shop owners, led by example and promoted stay-at-home best practices. I'm sure that with time some folks will try to study and dissect how Costa Rica was so successful at this. Although I can't claim to be able to explain all the factors that played into this, I am confident that at least one large factor is the amount of money Costa Rica has been able to re-purpose into education and health by not having a military; that investment has paid off handsomely today. I'm also sure that the solidarity that Costa Rica's politicians and ministry of health have fostered among its citizens has helped tremendously. The ability of a government to create solidarity throughout its entire country has never been more important than now.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGd1CaUWzM89U-pU6Hr0pVa6CTv_ErtRS7Ms8fWcUMAXdhr_jskrZTuXSMZfINYI3PxYxNnn6PgHk5nyDvLFCtx1CL9cwZ3FLA3Hitxvh3MvxrM1wlDenXL7yA5Ni7f4FbZS4-lw/s1600/cr-2020-04-26.jpeg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="902" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGd1CaUWzM89U-pU6Hr0pVa6CTv_ErtRS7Ms8fWcUMAXdhr_jskrZTuXSMZfINYI3PxYxNnn6PgHk5nyDvLFCtx1CL9cwZ3FLA3Hitxvh3MvxrM1wlDenXL7yA5Ni7f4FbZS4-lw/s320/cr-2020-04-26.jpeg" width="180" /></a></div>
<br />
Yet strong capitalist countries have suffered deeply. As with technology, has capitalism not already suffered a similar tragic event to learn from? Quite the contrary. The Spanish Flu was much worse: it killed more humans than the First and Second World War <b>combined</b>. And during the Spanish Flu pandemic the First World War was in full swing, which pushed many governments involved in the war to downplay its effects. So why hasn't capitalism evolved? I really don't know. What I do know is that, as with technology, there is room for a lot of improvement. I've argued before that <a href="https://www.do-not-panic.com/2013/08/evolving-capitalism-with-ethical.html">Capitalism is broken, and that a quantitative ethical capitalism can help</a>. I stand by that statement, and I think there is more evidence for it now than ever before. The flaws of modern capitalism point to metrics and assessments which today we simply do not measure or quantify. There are a lot of opportunities here. The tangible economic impact that drastic stay-at-home measures have had on industrialized agriculture and farming is a breeding ground for quantitative economic evolution and business development. The losses from not having been prepared for such situations, or from <a href="https://www.youtube.com/watch?v=-OoT2OZWCOI&feature=youtu.be">having pushed the agrarian and farming industry to the brink, to which COVID19 could be nothing but possible collateral</a>, are areas of huge opportunity for the evolution of a new age of capitalism, or the <a href="https://orb.binghamton.edu/cgi/viewcontent.cgi?article=1002&context=sociology_fac">Capitalocene</a>. If we had quantitative assessments of the risks brought on by certain practices, markets would react negatively to brain-dead dangerous practices which would otherwise sink the global economy. If the concept of the Anthropocene is new to you, I highly recommend <a href="https://astrobiology.nasa.gov/news/earth-in-human-hands/">David Grinspoon's Earth in Human Hands</a>; it introduces the concept from an astrobiology perspective, but focusing on Earth instead of what life might be like on far-away planets, as astrobiology typically does. Call it what you will: if you want to see this in a positive light, as awareness of the age of the Anthropocene spreads, capitalism must evolve to take <i><b>Anthropocene responsible investing</b></i> into consideration. If you want to look at it in a negative light, those companies and markets which do not get their shit together are doomed to fail.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiikEwPX6xwfHtKZZP9zpDLtm80wosvqjZqq7TKEchLtsmkZV-t_e3SzQ2E7jc_BNE_0O_zpaiQr52JjtLajPrlZhNURVEgZa2FwEpOy9FOoez0uyjHTz64YHmx9R33PBT5o1LPHA/s1600/tech-drawing.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="951" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiikEwPX6xwfHtKZZP9zpDLtm80wosvqjZqq7TKEchLtsmkZV-t_e3SzQ2E7jc_BNE_0O_zpaiQr52JjtLajPrlZhNURVEgZa2FwEpOy9FOoez0uyjHTz64YHmx9R33PBT5o1LPHA/s320/tech-drawing.jpg" width="190" /></a></div>
<br />
This long-term exercise on capitalism and government is also revealing to me that countries without their own intelligence community are at a loss when it comes to pandemic predictions. This has a few implications. One is that those which <b>do</b> have intelligence organizations capable of putting together pandemic simulations such as <a href="https://en.wikipedia.org/wiki/Crimson_Contagion">Crimson Contagion</a> have a <i>global moral responsibility</i> to share awareness of tangible risks with <b>both</b> sovereign nations and markets. Evidence shows that knowledge of the risks of such a pandemic paid handsomely in stock market bets to those with access to this information who illegally abused it for personal gain. Second, in the absence of such intelligence organizations in many countries, and if governments which have them will not share intelligence on pandemics globally, perhaps countries without enough resources should band together to prepare, as a <i>collaborative effort</i>.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjArMctmparuCNZolk3JU-YT7t7svW7NnkEls7uqan9gt-ztmSRQZDSeBVYtoTblFCwbO3e1O0MrTFOvEjKb-GRTN6e6iIpMT9rBHD1gj9UsL26jIsx2AymUDg7843fUpD0haVAxQ/s1600/heart.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1306" data-original-width="1600" height="261" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjArMctmparuCNZolk3JU-YT7t7svW7NnkEls7uqan9gt-ztmSRQZDSeBVYtoTblFCwbO3e1O0MrTFOvEjKb-GRTN6e6iIpMT9rBHD1gj9UsL26jIsx2AymUDg7843fUpD0haVAxQ/s320/heart.jpg" width="320" /></a></div>
<br />
Despite the philosophical turmoil that thinking about the long term in light of COVID19 has brought, I'm starting to accept that thinking about the long term, instead of just tomorrow, is not a luxury; it is our responsibility.<br />
<br />
<b>To be an encrypted ninja or not to be...</b><br />
<br />
It's debated whether or not <b>foreign powers</b> hacked into Hillary Clinton's private email server. There is <a href="https://www.washingtonpost.com/world/national-security/fbi-no-evidence-clintons-email-was-hacked-by-foreign-powers-but-it-could-have-been/2016/07/05/93334ba0-42dc-11e6-8856-f26de2537a9d_story.html?utm_term=.0ae3390ad041">consensus, however, that the private email server was hacked</a>, and this is precisely how emails can easily get leaked. To solve this sort of problem you can either have Hillary and Hillary's friends become cryptography ninjas, have cryptography tools become mainstream and transparent, or find some middle ground. For this year's 2017 Hackweek at <span style="color: #274e13;"><b>SUSE</b></span> and <a href="https://www.aaronswartzday.org/sanfrancisco/">Aaron Swartz Day</a> I have worked on a middle-ground solution as a <b>proof of concept</b> using GPG, forcing all incoming emails to you to be encrypted, even if you use gmail or yahoo to store your emails. In this post I will explain the motivation for this work and document how to accomplish it, should you want to implement it yourself.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgF_zEPlYfKvYAGM2j3pAcJMBJvIjsZDnGK88QuwgAAQsYWOS_wjgsGeNxxhXAJkuphaf_mh2CWYPrUfmuD7TmbnQs2Z8Ul0OL4Gs0ZVMRHvSe5yaOghIUdon9VCnicqU7BAeFC_w/s1600/encrypted-ninja.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="514" data-original-width="521" height="315" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgF_zEPlYfKvYAGM2j3pAcJMBJvIjsZDnGK88QuwgAAQsYWOS_wjgsGeNxxhXAJkuphaf_mh2CWYPrUfmuD7TmbnQs2Z8Ul0OL4Gs0ZVMRHvSe5yaOghIUdon9VCnicqU7BAeFC_w/s320/encrypted-ninja.png" width="320" /></a></div>
<b>Motivation</b><br />
<br />
Emails sent to you when you are using popular email services such as gmail or yahoo are encrypted only on the wire, as they make their way to the email servers hosted by the companies providing these services. The emails are, however, stored unencrypted. Likewise for typical private email servers. You are left at the whims of the security best practices of these companies, and even if you had your own private email server, getting things done right requires substantial work. In fact, even if you used a good company to store your email, you may still face issues ensuring your email privacy remains outside the control of intelligence agencies, which may argue they should be able to read everyone's email.<br />
<br />
Reasons for wanting your emails stored with good cryptography vary, but here are a few:<br />
<br />
<ul>
<li>You're a politician</li>
<li>You're a therapist</li>
<li>You're a journalist</li>
<li>You're a human rights advocate</li>
<li>You just give a damn about privacy</li>
</ul>
<br />
<br />
For most people's day-to-day email, the diagram below gives a simplified view of how email transactions work, Exhibit-A:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj6UB55QGnNrTq2lVqKLJhCrbBffnM7P6z3yHkUqpcCmmkKVMcwPFt68BZ7Q1wl_jNhYEVkzjfUCVhlB_hmPcihgx5tQZ8QmXuuqPbmOtIxrzSqejT3bG7XRS0flZMsdhfRCZdkmw/s1600/exhibit-a.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="165" data-original-width="885" height="71" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj6UB55QGnNrTq2lVqKLJhCrbBffnM7P6z3yHkUqpcCmmkKVMcwPFt68BZ7Q1wl_jNhYEVkzjfUCVhlB_hmPcihgx5tQZ8QmXuuqPbmOtIxrzSqejT3bG7XRS0flZMsdhfRCZdkmw/s400/exhibit-a.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
One solution to this is to have everyone, for example, use encryption tools when crafting and sending emails, Exhibit-B:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjA8wjbyskFVQiN-IH5u2Zw5ZAa8YwJzP90cxDbp3IkvAVsm8M-x2p3_ZRRpYzO5YC6CuCyX0B7aMBDCDVaJj9iij4k7Z11LI_-454VCSz2qSXxGskgWd9xBBVk_ha6g_2LgvMUxg/s1600/exhibit-b.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="227" data-original-width="767" height="117" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjA8wjbyskFVQiN-IH5u2Zw5ZAa8YwJzP90cxDbp3IkvAVsm8M-x2p3_ZRRpYzO5YC6CuCyX0B7aMBDCDVaJj9iij4k7Z11LI_-454VCSz2qSXxGskgWd9xBBVk_ha6g_2LgvMUxg/s400/exhibit-b.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
This is a bit unrealistic; however, for some folks it is possible, for instance if you're a journalist working with very sensitive material. If you fall into one of the categories below you might not be able to get to this point:</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
</div>
<ul>
<li>You're a human rights watch group worker dealing with folks who can't easily become ninjas... </li>
<li>You're a therapist, who obviously deals with folks who don't even care what a crypto ninja is</li>
<li>You're a politician and just want to encrypt everything</li>
<li>You want to <a href="http://www.do-not-panic.com/2013/05/making-your-e-mail-public.html">open up your email</a> on a certain date, using an escrow to stash your PGP key so that it becomes public after that date</li>
<li>You want to ask company admins to set up a secure and sensible way to forward some company emails to a public mail server safely (say, a way to get work email on public servers)</li>
<li>You just care about cryptography</li>
<li>You cannot trust your email provider's data store at all</li>
<li>You don't want your data to be scraped by the company hosting it</li>
</ul>
<br />
<div class="separator" style="clear: both; text-align: left;">
Making cryptography more easily accessible is a much better approach. Such efforts exist; one example I found was <a href="https://flowcrypt.com/">FlowCrypt</a>, which lets you use Public Key Cryptography, however that means trusting the plugin to store a private key locally. Another effort, which doesn't use Public Key Cryptography, is <a href="https://www.streak.com/securegmail">SecureGmail</a> by Streak, where you encrypt emails using a symmetric cipher. Both and similar solutions still require some effort, or deploying some sort of software, on the sender's side.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
What I've worked on means that, as a ninja, or if you have a ninja friend, you get the benefit of having your emails stored encrypted on your preferred email server, provided you can trust a particular middle service provider, which I'll describe how to set up and secure. You end up with the following, Exhibit-C:</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyp7RWy1QC83P4qUvu_i93JfEByndbGmCqCORsB3JGPjmcJTi3ZKoK_36bdCM00nJljthOgNc2y6PaD8wuN6dB9QDa5o2F_CAxkjmISu3xrBvNq4Ce_UzoFMCBVjVblZMWLX3nKA/s1600/exhibit-c.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="216" data-original-width="824" height="103" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyp7RWy1QC83P4qUvu_i93JfEByndbGmCqCORsB3JGPjmcJTi3ZKoK_36bdCM00nJljthOgNc2y6PaD8wuN6dB9QDa5o2F_CAxkjmISu3xrBvNq4Ce_UzoFMCBVjVblZMWLX3nKA/s400/exhibit-c.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
To accomplish this we need a middle-end system which does the actual encryption for you using your public key. Email providers such as Google, Yahoo, and others won't do this for us today, and they have some reasons not to: by scraping your email they get the ability to provide search facilities, to scrape emails as they might legally see fit, and to advertise to you. This is how they make money off of storing our emails for free. Using a middle layer to encrypt your email is reflected in the following diagram, Exhibit-D:</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnXX7jmeYzkV5pdOm83Qwe4hlUCGBvFH_6WffM1HGqJuLRVZnMKOXgm89qdiBYmvrD_TMuHJ2pQTdZ8f84OhyphenhyphenWFvMbNgeprvucwiE4O_CWlhWQBrEe6AyTUalHEQpwmAehcOitlg/s1600/exhibit-d.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="190" data-original-width="833" height="90" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnXX7jmeYzkV5pdOm83Qwe4hlUCGBvFH_6WffM1HGqJuLRVZnMKOXgm89qdiBYmvrD_TMuHJ2pQTdZ8f84OhyphenhyphenWFvMbNgeprvucwiE4O_CWlhWQBrEe6AyTUalHEQpwmAehcOitlg/s400/exhibit-d.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
One must admit that this shifts trust to the particular server admin who sets this server up, and to trusting the setup to parse and bounce emails to your preferred email server properly. Your emails are still at risk, but if this is done properly they are not stored on the middle server; they are just piped through. Also, <b>with unencrypted emails even your old emails are at risk</b>: once an email server is compromised, all emails stored on that server are at risk. With a super simple service such as the one I am describing, it would be fairly easy to monitor against attacks, since it only has to do one thing: receive emails via TLS, encrypt them right away without writing them to disk, and immediately bounce them. Nothing unencrypted lands on disk or storage.</div>
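<div class="separator" style="clear: both; text-align: left;">
In shell terms, the heart of that bounce step is a pipe along the following lines. This is a minimal sketch with a hypothetical address, not the actual script (which comes further below), and it skips the MIME wrapping the real thing has to do:</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
<span style="font-family: "courier new" , "courier" , monospace;"># postfix hands the incoming message on stdin; encrypt it to the<br /># recipient's public key and re-inject it, never touching disk<br />gpg --batch --trust-model always --armor --encrypt \<br />&nbsp;&nbsp;--recipient you@example.org | sendmail you@example.org</span></div>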
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
<b>How do I get this?</b></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
If you're curious to try it for a few test cases, and you trust me for those test cases, shoot me an email and I can set you up with an account on my proof of concept email system, <a href="https://encrypted.ninja/">encrypted.ninja</a>. If an email is sent to that address, <b>it will be immediately bounced back to you, encrypted with your PGP key</b>.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
I would not recommend using this setup just as-is though; it'd be best to have spam detection done on your behalf, otherwise your email provider's spam detection tool may not pick up spam, and you could end up getting tons of it.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
As such, this is just a <b>proof of concept </b>at this point.</div>
<div class="separator" style="clear: both; text-align: left;">
<b><br /></b></div>
<div class="separator" style="clear: both; text-align: left;">
<b>How do I replicate your setup?</b></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Even though this uses PGP keys to encrypt data, you'll need to set up an email server with proper TLS certificates to encrypt communication between senders and when bouncing emails to email servers. Fortunately <a href="https://letsencrypt.org/">letsencrypt</a> can give you a free certificate; it must be renewed periodically (easy to do). The same SSL certificate you get from them for your apache setup can be used for email as well. So the first thing you should do is get a DNS name, then get a simple website up with an SSL certificate from <a href="https://letsencrypt.org/">letsencrypt</a>.</div>
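<div class="separator" style="clear: both; text-align: left;">
For instance, with certbot (letsencrypt's client) and its apache plugin, getting and renewing the certificate can look roughly like this -- a hedged sketch, adjust the domain to your own:</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
<span style="font-family: "courier new" , "courier" , monospace;"># request a certificate and wire it into apache<br />sudo certbot --apache -d encrypted.ninja<br /># renew anything nearing expiry; typically run from cron<br />sudo certbot renew</span></div>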
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
If you have control over the email server you may not want to give a full shell login account to all users, but just an email alias. I used postfix for my email server, as it's easy to set up and has some hooks we'll use later. So get postfix installed and set up; no need to configure TLS for your first setup, just get it receiving emails locally first. Once you have that working, use the same SSL certificate you used for your apache setup in your postfix configuration. The following is my setup, roughly.</div>
<div class="separator" style="clear: both; text-align: left;">
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div class="separator" style="clear: both; text-align: left;">
<a href="https://gist.github.com/mcgrof/3ca6ff5a005a198c808cdff31d782dbd"><span style="font-family: "courier new" , "courier" , monospace;">/etc/postfix/main.cf</span></a></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
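<div class="separator" style="clear: both; text-align: left;">
The gist above has the full file; the TLS-relevant bits look roughly like the following. I'm assuming a letsencrypt/certbot layout for the certificate paths here, so adjust them to wherever your certificate actually lives:</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
<span style="font-family: "courier new" , "courier" , monospace;"># offer STARTTLS for incoming mail, use it opportunistically outbound<br />smtpd_tls_cert_file = /etc/letsencrypt/live/encrypted.ninja/fullchain.pem<br />smtpd_tls_key_file = /etc/letsencrypt/live/encrypted.ninja/privkey.pem<br />smtpd_tls_security_level = may<br />smtp_tls_security_level = may</span></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
You can verify STARTTLS is being offered with <span style="font-family: "courier new" , "courier" , monospace;">openssl s_client -starttls smtp -connect yourhost:25</span>.</div>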
<div>
You'll then need to edit /etc/postfix/master.cf, add the following pgphook line, and replace your smtp line with the one below as well:</div>
<div>
<br /></div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">pgphook unix - n n - - pipe flags=F user=www-data argv=/opt/bin/mail2pgp.sh ${sender} ${size} ${recipient}<br />smtp inet n - - - - smtpd -o content_filter=pgphook:dummy</span><br />
<div>
<br /></div>
<div>
Then set up the virtual aliases; /etc/postfix/address.txt looks like this:</div>
<div>
<br /></div>
<div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">mcgrof@encrypted.ninja FILTER pgphook:dummy</span></div>
<div>
<br /></div>
</div>
<div>
Add one entry per email address you want handled. After updating the file you must run:</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">postmap /etc/postfix/address.txt</span></div>
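<div>
<br /></div>
<div>
For the FILTER action above to take effect, main.cf (the gist above has my authoritative version) has to consult this map. A typical way to wire that up -- shown here as an assumption, not a copy of my config -- is via a recipient access check:</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">smtpd_recipient_restrictions =<br />&nbsp;&nbsp;check_recipient_access hash:/etc/postfix/address.txt,<br />&nbsp;&nbsp;permit_mynetworks,<br />&nbsp;&nbsp;reject_unauth_destination</span></div>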
<div>
<br /></div>
<div>
Then it's all a matter of just one script and one procmailrc file, and of ensuring the script, its gpg directory, and the keyring are all owned by the user the email server runs as. That's it.</div>
<div>
<br /></div>
<div>
I stashed the script, procmailrc, gpg directory, and keyring for the email server in:</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">/opt/mail2pgp/</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">mkdir /opt/mail2pgp/.gnupg</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">chmod o-rwx /opt/mail2pgp/.gnupg</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">chmod g-rwx /opt/mail2pgp/.gnupg</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">sudo chown -R www-data /opt/mail2pgp/</span></div>
<div>
<br /></div>
<div>
To create a keyring with keys, or to update it later with new keys as you update the alias file and the script provided later:</div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">gpg --search-keys hexkeyid</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">gpg --export --output keyring.gpg</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">cp keyring.gpg /opt/mail2pgp/keyring.gpg</span></div>
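<div>
<br /></div>
<div>
To sanity-check that the mail user can actually read the keyring, something along these lines should work (the GNUPGHOME and keyring paths simply match the layout above):</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"># run gpg as the mail user against the stashed keyring<br />sudo -u www-data GNUPGHOME=/opt/mail2pgp/.gnupg \<br />&nbsp;&nbsp;gpg --no-default-keyring --keyring /opt/mail2pgp/keyring.gpg --list-keys</span></div>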
<div>
<br /></div>
<div>
The script:</div>
<div>
<a href="https://gist.github.com/mcgrof/5b7e3fa4d6c9a5f176958f43c58b1711"><br /></a></div>
<div>
<a href="https://gist.github.com/mcgrof/5b7e3fa4d6c9a5f176958f43c58b1711"><span style="font-family: "courier new" , "courier" , monospace;">/opt/bin/mail2pgp.sh</span></a></div>
<div>
<br /></div>
<div>
You'll also need a MIME preamble and postamble:</div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<a href="https://gist.github.com/mcgrof/ef483af303c40ff555b39f4de4d4173a"><span style="font-family: "courier new" , "courier" , monospace;">/opt/mail2pgp/gpg-mime-start</span></a></div>
<div>
<a href="https://gist.github.com/mcgrof/88ec8bd8fcb6ffb4b30eb40b54d9fe52"><span style="font-family: "courier new" , "courier" , monospace;">/opt/mail2pgp/gpg-mime-end</span></a></div>
<div>
<br /></div>
<div>
And finally, the procmailrc file:</div>
<div>
<br /></div>
<div>
<a href="https://gist.github.com/mcgrof/3819e61038f3caeab8cc6d03fdd7bbbe"><span style="font-family: "courier new" , "courier" , monospace;">/opt/mail2pgp/.procmailrc</span></a></div>
<div>
<br /></div>
<div>
That's it. In fact, you can use the MIME preamble and postamble and the procmailrc file as a template on a system you *don't* have root on, to bounce encrypted emails out to you in a much more secure way as well.</div>
<div>
<br /></div>
<div>
Now I'll surely see someone try to hack this server :) and I'm sure they will ;)</div>
<b>Concerns with Xen PVH / HVMLite boot on Linux x86</b><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/-aUi5SDUjeSw/V60iRXfDDRI/AAAAAAAC64k/_gfiIBVTuhod_87qIjR-B81qzcqtm5fvgCPcB/s1600/20160419_002009.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="137" src="https://4.bp.blogspot.com/-aUi5SDUjeSw/V60iRXfDDRI/AAAAAAAC64k/_gfiIBVTuhod_87qIjR-B81qzcqtm5fvgCPcB/s400/20160419_002009.jpg" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
I've been helping a bit with streamlining proper upstream support for Xen on x86 Linux. One of the items I have decided to take on is the so-called "dead code" concern, in theory present on x86 Linux Xen guests largely due to the radical way in which old PV Xen x86 Linux guests boot. This topic is a bit complex, so I had previously written two posts to help shed some light into these dark corners of the technical Linux universe that only a few really care about:<br />
<br />
<ul>
<li><a href="http://www.do-not-panic.com/2015/12/avoiding-dead-code-pvops-not-silver-bullet.html">Avoiding dead code: pv_ops is not the silver bullet</a></li>
<li><a href="http://www.do-not-panic.com/2015/12/xen-and-x86-linux-zero-page.html">Xen and the x86 Linux zero page</a></li>
</ul>
<br />
Xen has evolved over the years, but so has hardware to help with virtualization. Some say and believe KVM is a much better platform for virtualization than Xen, since KVM never had to deal with the lack of hardware virtualization support. To a certain degree this is true -- the KVM design has the upper hand in that it has not had to implement any of the legacy complexities in software. If you follow the money in terms of investment, you will notice <a href="https://en.wikipedia.org/wiki/Moshe_Bar_(investor)">Moshe Bar</a>, who had co-founded <a href="https://en.wikipedia.org/wiki/Xen">XenSource</a> (later acquired by Citrix), then also co-founded <a href="https://en.wikipedia.org/wiki/Qumranet">Qumranet</a> (later acquired by Red Hat), which was the main company originally behind KVM. In these regards KVM is a natural <i>architectural evolution</i> over Xen. Despite the technical leap forward, this is not to say KVM is simply better, that KVM cannot possibly have dead code, or that Xen could not do better. There may be less dead code in KVM in the Linux kernel, but in analyzing how dead code comes about I've come to the realization that dead code should be a generic concern all around; the Xen design just exacerbated the concern and took the situation to a whole new level. As it turns out there is also a shit ton of dead code possible in qemu... so perhaps some is saved in KVM, but qemu still has to address this very same problem. This is also not to say that KVM does not paravirtualize. Quite the contrary, it has had to learn from the Xen design -- so it has a paravirtualized clock and devices, but it doesn't have a paravirtualized interface for timers and interrupts; it uses an emulated APIC, and so you end up with qemu as a requirement for KVM. As hardware virtualization features evolved, Xen has obviously had to provide support for them as well. This has led to the complex <a href="https://wiki.xen.org/wiki/Virtualization_Spectrum">paravirtualization spectrum best described on this page</a>. The "sweet spot" for paravirtualization has evolved over the years, and the latest proposal on the Xen front is called <i>HVMLite</i>. A previous incarnation of this was the <i>Xen PVH</i> design, but that old incarnation is going to be ripped out of the Linux kernel completely as it never really took off for production; HVMLite is the proper replacement, but to avoid branding complexities the same old name, PVH, will be used. From here forward I refer to <i>PVH</i> as the new shiny HVMLite design, not the old existing code in the kernel now, as of Linux v4.8 days. What interested me the most about the new PVH design was its proposed alternative boot protocol, which should hopefully address most of the concerns folks had with the previous old legacy PV design. Xen PVH will also not use qemu. With these two things in mind, from one perspective <i>one could actually argue that Xen PVH guests may suffer from less possible dead code than KVM guests</i>. The rest of this post covers some basics of this new PVH design with a focus on the boot protocol, a bit of the evolution of the Linux x86 boot protocol, and where we might be going. I am writing this mostly for my own note taking and future reference, and only secondarily in the hopes it may be useful to others.<br />
<br />
<blockquote class="tr_bq">
The <i>given up</i> part here is a bit serious and worrisome. Some folks couldn't give two shits about what goes into Xen, to the extent that they are OK with merging anything so long as it does not interfere with or regress Linux in any way, shape, or form.</blockquote>
<br />
<blockquote class="tr_bq">
Clean, well understood semantics for guests are needed early in boot; we should not allow nasty hacks for virtualization in the kernel, and understanding why these hacks creep up and finding proper solutions for them is extremely important.</blockquote>
<br />
I've been told by Xen maintainers that the PVH ABI boot protocol was apparently settled long ago... As someone new to this world, this came as a huge surprise, given I was not aware of any Linux x86 maintainer having done a thorough evaluation of it; most importantly, if it were an agreed-upon, acceptable, and reasonable protocol, this should have been reflected by those who likely had the biggest concerns over Xen's old boot protocol being fans of the new design. That's at least the litmus test I would have used if I were handling a technical revamp. Unfortunately, as I spoke to different folks, I got the impression most x86 folks had either completely <i>given up</i> on Xen or were completely unaware of this new PVH design. The <i>given up</i> part here is a bit serious and worrisome. Some folks couldn't give two shits about what goes into Xen, to the extent that they are OK with merging anything so long as it does not interfere with or regress Linux in any way, shape, or form. This <i>lost cause</i> attitude has a bit of history, and the PV design I mentioned above is to blame for some of it -- the Xen PV design interfered with and regressed Linux often enough that it became a burden. The danger in taking a <a href="https://en.wikipedia.org/wiki/Laissez-faire">Laissez-faire</a> attitude with Xen in Linux is that we are simply not doing our best, and in doing so users can suffer; you can then only count on the Xen community to fix things. This... <i>perhaps</i> is the way it should be -- however, it also implies we may not be learning anything from this other than fear of such intrusive technologies in Linux. I believe there is quite a bit to learn from this experience, and there are things we can do better. This latter part is the emphasis of my post, given that, as I'll explain below, <b>I've also partly given up</b>. There are benefits to taking a proactive approach here, and Xen is not the only one that could benefit. It sounds counter-intuitive, but helping Xen with a clean boot design is not just about addressing a cleaner boot protocol for Xen alone. For instance, consider the loose semantics for guests sprinkled over the kernel, which even ended up in a few device drivers -- <span style="font-family: Courier New, Courier, monospace;">paravirt_enabled()</span> was one which, thanks to some recent efforts by a few, is now long gone. This sort of stupid epidemic is not Xen specific -- even KVM has had its own hacks. For instance an audio driver had an <a href="http://lkml.kernel.org/r/s5hvb4151v1.wl-tiwai@suse.de">"inside_vm" hack for guests</a>; when trying to look for an alternative I was told no possible solution existed, when in fact only 4 days later a <a href="https://www.spinics.net/lists/alsa-devel/msg48627.html">completely sensible replacement was found</a>. Clean, well understood semantics for guests are needed early in boot; we should not allow nasty hacks for virtualization in the kernel, and understanding why these hacks creep up and finding proper solutions for them is extremely important. Helping review Xen's boot design should help us all avoid seeing cruft land in the kernel long term. It should also pave the way for supporting new radical technologies and architectures using a well streamlined boot protocol.<br />
<br />
Let's review the new PVH boot protocol. The last patch set proposal to add PVH to Linux added <i>yet-another-entry-point</i> (TM), annotated as an ELF note (a quick way to poke at such notes is shown right after the list below); this entry was Xen PVH specific. It had some asm code, and finally it copied boot params and handed things off to Linux. I was a bit perplexed. I had looked so much into the flaws of the previous PV boot design that I was <b>super paranoid</b> that any new entry was simply doomed to be a disaster, so naturally I was extremely suspicious from the very beginning, despite the amount of delta being small and it still using <span style="font-family: Courier New, Courier, monospace;">startup_32()</span> and <span style="font-family: Courier New, Courier, monospace;">startup_64()</span>. These have become de-facto entry points -- grub2 and kexec use them -- so another thing using them seems fair. However I learned both that:<br />
<br />
<br />
<ol>
<li>Linux Xen ARM guests use Linux' EFI entry to boot when on Xen</li>
<li>Windows guests will rely on Window's EFI entry to boot when on Xen</li>
</ol>
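<br />
As an aside on the ELF note mechanism mentioned above: on a kernel image built with Xen support you can poke at the embedded notes with readelf. A hedged example -- the exact note names you'll see depend on your kernel version and config:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;"># list the ELF notes in a built vmlinux and look for the Xen ones<br />readelf -n vmlinux | grep -i -A 1 xen</span><br />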
<br />
<br />
Naturally, my own first observation was to wonder why we can't use EFI to boot x86 Linux on Xen as well. There are a few reasons for this, but perhaps the situation is best summarized by Matt Fleming, the Linux kernel's EFI maintainer:<br />
<blockquote class="tr_bq">
<br /><i>"Everyone has a different use case in mind and no one has all of them in mind"</i></blockquote>
<br />
Regular guests are known as domU guests. Guests with special privileges are known in Xen as dom0. So if you boot into Xen, and then into a Linux control guest OS, that's the dom0; you can then spawn domU guests using dom0.<br />
<br />
The first obvious concern over exclusively using EFI is that, contrary to Windows, Linux needs to support dom0, so hypercalls would need to talk to EFI. Xen does support dom0 on Linux ARM guests though; in that case, as George Dunlap clarified to me, it relies on the native ARM entry path (as used by uboot) and relies completely on device tree for hardware information. x86 Linux supports device tree, and has used it on some odd x86 hardware; however, there are assumptions made about what type of hardware is present. ACPI can and should be used for ironing out discrepancies, however it remains unclear if this would suffice to support all the cases required for x86 Linux guests when supporting dom0.<br />
<br />
For domU guests an EFI emulation would need to be provided by Xen somehow. But if Windows requires EFI, this should be a shared concern. Upon review with Matt -- if one wanted a minimal EFI environment one could provide only the EFI services really needed; we'd also need a way to distinguish bare metal boot from PVH via EFI, and Matt has noted that using an EFI GUID seems to be one way to fill in the required semantics. If EFI were required for domUs, though, that would mean Xen unikernels (Linux or not) would need to boot EFI. To be clear, unikernels can be Linux based as well; they consist of very slim kernels with a small ramdisk and a single process running as init. George notes that in these cases even <a href="http://lkml.kernel.org/r/5710BB74.2060409@citrix.com">an extra megabyte of guest RAM and an extra second of boot time is a significant cost to incur on guests</a>. He further notes that using OVMF (which would provide EFI) is an excellent solution for domUs when you boot a full Linux distribution, but that it would impose a significant cost on using Linux in unikernel-style VMs. This seems like a fair concern; however, it's not a reason why Linux should not be able to use EFI. In fact, supporting booting x86 Linux with EFI using OVMF seems to be a design goal of Xen; after all, that would also allow Xen to boot Windows guests without qemu to emulate devices, since OVMF would be able to access the PV devices until PV drivers come around for Windows. Another concern over requiring EFI is that other open operating systems may not support EFI entry points (do NetBSD and FreeBSD not support EFI boot?). The biggest concerns, then, are the implications of using EFI for dom0, requiring it for small unikernel guests (Linux or not), and the lack of other guest OS support for EFI.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-aTBdIMayeHE/Vxb9t474iII/AAAAAAACxms/ZYVIFCZdhFQncgXc4pJfEUxIoIe_2hxggCPcB/s1600/16%2B-%2B1" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://3.bp.blogspot.com/-aTBdIMayeHE/Vxb9t474iII/AAAAAAACxms/ZYVIFCZdhFQncgXc4pJfEUxIoIe_2hxggCPcB/s400/16%2B-%2B1" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
Even though we were supposed to have a good technical session at the last Xen Hackathon in London in 2016, when it came down to talking about alternatives to the existing PVH boot ABI, David Vrabel stonewalled the discussion by indicating the decisions had already been made, and as such found it pointless to discuss the topic. That's the very moment I gave up on helping with this topic for Xen. The rest of the details here and below are due to hallway tracks between me, Matt Fleming, Daniel Kiper, Andrew Cooper, Jürgen Gross, and later Alexander Graf. If you want to help change things for the better for Xen PVH on Linux you'll have to coordinate with them. My own personal interest in this has morphed into the more real long term for Linux.<br />
<br />
<a href="https://docs.google.com/drawings/d/1pcoG2bcYYY7xKYGMEhDhyUXTLHgjCPrpuk8c9yP_kUg/pub?w=1440&h=1080" imageanchor="1" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img border="0" height="480" src="https://docs.google.com/drawings/d/1pcoG2bcYYY7xKYGMEhDhyUXTLHgjCPrpuk8c9yP_kUg/pub?w=1440&h=1080" width="640" /></a><br />
<br />
With regards to using EFI to boot Xen PVH -- the devil is in the details. Even if we go the EFI route there's a slight discrepancy between how Xen boots Linux and how Linux's first 5 <i>pre-decompression</i> x86 entry points work -- in particular, Linux's EFI entry supports and requires decompression to be done as part of the kernel boot code. On the other hand, the Xen hypervisor runs domU Linux guests just like any other regular userspace application: paging is enabled. Linux decompression runs in 32-bit mode with paging disabled, and the code relies on this. The hypervisor does not do the decompression for the domU guest; the toolstack does, so in this regard the toolstack must support each decompression algorithm used by each supported guest. Also, some VT-x hardware can't run the real-mode code which makes up the 16-bit boot stub. The exception to this is when Xen boots dom0 Linux; in that case, as Andrew Cooper explains, "<i>the hypervisor contains just enough domain builder code in .init to construct dom0, but this code is discarded before dom0 starts to execute</i>". If one were to resolve the EFI boot issue for Linux it would not only be useful for PVH; old HVM guests could use it as well, the only difference being that HVM guests would use qemu for legacy devices.<br />
<br />
Can these issues be resolved though? For instance, can we add a decompression algorithm type that simply skips decompression? Additionally -- even if these are the reasons to have this new boot method used by Xen for the new PVH -- has this <u><i>really</i></u> been fully vetted by everyone? Are there really no issues with it? One concern expressed by Alexander Graf recently was that without a boot loader (grub2) you lose the ability to boot from an older btrfs snapshot. Directly booting, in this light, is a bad idea.<br />
<br />
It turns out that if you want to boot Xen you rely on the <a href="https://en.wikipedia.org/wiki/Multiboot_Specification">Multiboot protocol</a>, originally put out by the FSF long ago; the last proposed new PVH boot patches borrowed ideas from Multiboot to add an entry to Linux, only Xen'ified. What would be <b>Multiboot 2</b> seems flexible enough to allow all sorts of custom semantics and information stacked into a boot image. The last thought I had on this topic (before giving up) was: if we're going to <i>add yet-another-entry</i> (TM), why not extend Multiboot 2 support with the semantics we need to boot any virtual environment, and then add a Multiboot 2 entry to Linux? In fact, could such work help unify boot entries across architectures long term in Linux? Is a single unified Linux entry possible?<br />
<br />
Using EFI seems to require work and a proof of concept; is there an alternative? For instance -- Alexander Graf wonders why the 32-bit entry points can't be used directly. We would need a PV IO description table -- could merging what we need into ACPI tables suffice to address the concerns? Again, this gets into semantics, as we'd still need to find out whether whatever entered the entry point is a Xen PVH guest or not, so we can set up the boot parameters accordingly. One option, for instance, is to use CPUID; however, the CPUID instruction was introduced as of the Pentium, so this would fail on i486. Jürgen has noted that we could probably just <a href="https://web.archive.org/web/20110307080258/http://www.intel.com/Assets/PDF/appnote/241618.pdf">detect CPUID support</a> first, and thus avoid the invalid opcode.<br />
<br />
In the end talk is cheap, so we need to see code. But hopefully this summarizes enough to understand the issues on both sides. Good luck!<br />
<br />
<b>I'm part of Conservancy's GPL Compliance Project for Linux</b><br />
<br />
I am one of the Linux copyright holders who has signed an agreement for the Software Freedom Conservancy to enforce the GPL on my behalf, as part of the <a href="https://sfconservancy.org/copyleft-compliance/">Conservancy's GPL Compliance Project For Linux Developers</a>. I'm also a financial supporter of Conservancy. We're a group of Linux kernel developers who give input and guidance on Conservancy's strategy in dealing with compliance issues on the Linux kernel.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfIIFGnzyKD15RzaF1WDWiEyWikk2ja9ONxAxR73GR25ECzS4_gvLRuKc6NpLBGqWmsLBmq92JkU2FnNynjZBEqSONcDwzckMpiHGqsnogOA2C5pXC8-GCq-wb3eRVpXSFfISqaw/s1600/20151201_123621.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="130" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfIIFGnzyKD15RzaF1WDWiEyWikk2ja9ONxAxR73GR25ECzS4_gvLRuKc6NpLBGqWmsLBmq92JkU2FnNynjZBEqSONcDwzckMpiHGqsnogOA2C5pXC8-GCq-wb3eRVpXSFfISqaw/s400/20151201_123621.jpg" width="400" /></a></div>
<br />
<ol>
<li>I don't take this lightly</li>
<li>"Don't be evil" is hard</li>
<li>Why things are hairy when it comes to the Linux kernel and GPL enforcement</li>
<li>Why we need GPL enforcement</li>
<li>How can we enforce the GPL responsibly</li>
<li>Evolving copyleft</li>
</ol>
<br />
<h3>
I don't take this lightly</h3>
<br />
Joining was not something I took lightly. When I started hacking on Linux I was at odds with the arguments over the morality of free software put forward by the FSF, and simply felt the GPLv2 on Linux was a nice coincidence; I felt I just wanted to hack and be productive. It <a href="http://www.do-not-panic.com/2012/07/gay-boring-gay-google-and-copyleft-next.html">took me over 10 years of philosophical thought</a> to make a final decision about where I stand with regards to software freedom. I've made <a href="http://www.do-not-panic.com/2012/03/connecting-dots.html">my motivation and intent</a> in the community clear before, but it's worth reiterating now: work harder, always in the spirit of what I believe is right, and accept no compromises on shit engineering.<br />
<br />
<h3>
"Don't be evil" is hard</h3>
<div>
<br /></div>
I've been hacking on Linux since I was in college. After doing kernel development in industry for a while, I have learned the hard way that "<a href="https://en.wikipedia.org/wiki/Don%27t_be_evil">Don't be evil</a>" or "Do the right thing" is easier said than done for companies, especially with regards to software freedom. I've determined that without a <a href="http://www.do-not-panic.com/2013/08/evolving-capitalism-with-ethical.html">mathematical and economics framework that takes freedom into consideration and appreciates it</a>, it will take a lot of foresight, or Free and Open Source software principles being part of your company's DNA, for a company to appreciate the freedoms behind free software. To help companies embrace copyleft, within the community we really need to figure out how copyleft can affect and help businesses, and the complexities it brings about, and work with both the community and companies on evolving copyleft and businesses in amicable ways. It's easier said than done.<br />
<br />
<h3>
Why things are hairy when it comes to the Linux kernel and GPL enforcement</h3>
<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4kVQwXwyM0YazVDfOQ45rRAsovo7TZ5jD2kbSo3ZCIR12zqywgSD7jiFoElC1K4ptft7jdcCcym1hxRrTgYsS1s-xDMGQL3np1TNBPs-qX5Wafd-48_ESsf72fzsIiFuOD4afXQ/s1600/patent-paradox.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em; text-align: center;"><img border="0" height="388" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4kVQwXwyM0YazVDfOQ45rRAsovo7TZ5jD2kbSo3ZCIR12zqywgSD7jiFoElC1K4ptft7jdcCcym1hxRrTgYsS1s-xDMGQL3np1TNBPs-qX5Wafd-48_ESsf72fzsIiFuOD4afXQ/s400/patent-paradox.png" width="400" /></a><br />
<br />
Consider answering these questions in today's business world when contributing to Linux.<br />
<ul>
<li>Who owns the copyrights or patents to the software that Joe Hacker wrote prior to joining Yoyodyne, Inc.?</li>
<li>Who owns the copyrights or patents to the software that Joe Hacker will write for Yoyodyne, Inc.?</li>
<li>What software projects can Joe Hacker contribute to while at Yoyodyne, Inc.?</li>
</ul>
There are four challenges, brought about by the above complexities, that affect businesses' capacity to contribute to the Linux kernel and to participate in GPL enforcement:<br />
<br />
<ol>
<li>How to replace proprietary solutions</li>
<li>The Linux kernel is licensed under GPLv2 and as such only gets implicit patent grants</li>
<li>These days companies have no option but to address patents considerations</li>
<li>Addressing possible company conflict of interests</li>
</ol>
<br />
<a href="http://www.do-not-panic.com/2014/03/free-software-patents-surveillance-and.html">I've covered these issues before</a>; what follows is a terse summary. Copyleft is obviously an imminent threat to proprietary software that relies on copylefted software, such that the proprietary software is arguably subject to the conditions of the license the copylefted software is distributed under. An implicit requirement, however, is that copyright holders of the copylefted software are both <b>willing to and capable</b> of seeking legal remedies against distributors of the proprietary software. In this light, a business that does not know how to phase out proprietary software can be affected, short term or long term. Patents can be implicated by some free software licenses. Paying for patent licensing also adds up. Patents can also be used to sue people. If you have signed conflict of interest agreements with business partners, things can get really hairy, and this puts the industry at odds when it comes to free and open source software, even if you're an "open source company". Since we lack the <a href="http://www.do-not-panic.com/2013/08/evolving-capitalism-with-ethical.html">mathematical and economics framework to tangibly appreciate freedoms</a> over patents, and since patents can ultimately be endangered by certain free software licenses, it's only natural that corporate interests will want to undermine those licenses.<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://docs.google.com/drawings/d/1-LnC6065LlOILu0AnkwXnbAvkR_oTUdJSHMtm8a45MU/pub?w=331&h=230" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em; text-align: center;"><img border="0" height="278" src="https://docs.google.com/drawings/d/1-LnC6065LlOILu0AnkwXnbAvkR_oTUdJSHMtm8a45MU/pub?w=331&h=230" width="400" /></a></div>
<br />
<br />
As businesses evolve, copyleft evolves. Patents were one of the latest additions to free software licenses, through both the GPLv3 and the Apache 2.0 license. I consider the Apache 2.0 license one of the best legal innovations in the free software world's arsenal: if you want to really test what seems to be a claim of opposition only to copyleft, ask if the Apache 2.0 license can be used instead. In the Linux kernel though we have an issue: since it's GPLv2 it only provides an implicit patent grant, and since we can't add GPLv3 or Apache 2.0 licensed material to the Linux kernel, the patent question remains open for businesses to address. To help with this, <a href="https://git.kernel.org/cgit/linux/kernel/git/firmware/linux-firmware.git/commit/?id=d3cf09a9765672a7f67991ec4fb64f3d92b387ba">linux-firmware now also requires an explicit or implicit patent grant</a>. We need to close all the gaps that prevent copyleft evolution. And sure, we can use permissive licenses on Linux, but that should only be a compromise -- not a de facto practice. For instance getting <a href="http://www.do-not-panic.com/2016/02/zfs-linux-and-illumos-and-isc-license.html">ZFS relicensed to the ISC license</a> might be a great compromise for all parties involved. <b>Fully permissive licenses without patent provisions should be our last resort and compromise.</b> Since patents are prevalent everywhere, businesses have to deal with a lot of issues implicitly behind the scenes.<br />
<br />
Case in point: as covered recently by LWN, <a href="https://lwn.net/Articles/675232/">at linux.conf.au 2016 Bradley talked about corporate opposition to copyleft</a>. He explained how corporations will typically not do GPL enforcement in the name of the community, unless of course it fits their business model. He gave the example where Red Hat was sued by a patent troll; in response Red Hat alleged GPL infringement against Twin Peaks, and with this Red Hat got a patent license while Twin Peaks' software remained proprietary. Red Hat is an example of a company with Open Source software built into its business DNA, and even they seem to walk on eggshells when it comes to GPL enforcement. They are not to blame though; doing GPL enforcement for the community responsibly is hard, especially these days in such a complex technology business sector, where anyone can be your partner and business contracts typically forbid you from engaging in actions that may harm any of your business partners.<br />
<br />
<h3>
Why we need GPL enforcement</h3>
<br />
Because of the challenges explained above, even the best of Free and Open Source Software companies are walking on eggshells when it comes to GPL enforcement. By now you should have a sense of why some corporate interests may be trying to undermine copyleft licenses so that they are effectively as good as permissive licenses. We can't let that happen. Evidence shows the number of GPL violations has skyrocketed over the years, to the extent that we cannot deal with them all. Only a few community groups were dealing with GPL violations, and that was outside of the Linux kernel; Linux kernel GPL violations remain common and unenforced. For this reason GPL enforcement is critical for the Linux kernel and its community.<br />
<br />
<h3>
How can we enforce the GPL responsibly?</h3>
<br />
To address this Conservancy published a <a href="https://sfconservancy.org/copyleft-compliance/principles.html">set of principles that should govern GPL enforcement</a>; <b>the primary objective is simply to bring about license compliance</b>. We are not out for money, or blood -- simply compliance with the license to strengthen the commons. We give input and guidance on Conservancy's strategy in dealing with compliance issues on the Linux kernel. Responsibly enforcing the GPL <b>for the community, within the community</b>, with due care, should be of utmost interest to any business contributing to Linux. If you're a Linux developer and would like to chime in and help us with these efforts, consider joining <a href="https://sfconservancy.org/copyleft-compliance/">Conservancy's GPL Compliance Project For Linux Developers</a>; please contact <<a href="mailto:linux-services@sfconservancy.org">linux-services@sfconservancy.org</a>> for more details.<br />
<h3>
Evolving copyleft</h3>
<br />
In the post where I describe the epiphany that, after over 10 years, allowed me to finally come to terms with software freedom philosophy, I explained why <a href="http://www.do-not-panic.com/2012/07/gay-boring-gay-google-and-copyleft-next.html">helping evolve copyleft is important</a>; I'll provide a summary of that in light of the Linux kernel and its GPLv2 license. I believe some of the challenges described above are self-inflicted, as we were not able to move to GPLv3 given all these patent considerations. I don't necessarily think we should move to GPLv3, but I do consider the tensions that arose from those discussions really unfortunate. Lesson learned: we should evolve copyleft openly, in the community, with the community. If you'd like to help with that I invite you to take a look at copyleft-next; there is a <a href="https://github.com/richardfontana/copyleft-next">github</a> tree and a <a href="https://lists.fedorahosted.org/admin/lists/copyleft-next.lists.fedorahosted.org/">mailing list</a>. Copyleft-next is GPLv2 compatible.mcgrofhttp://www.blogger.com/profile/06081818694231731816noreply@blogger.com1tag:blogger.com,1999:blog-29679292.post-36608473593746852622016-02-25T18:16:00.000-08:002016-02-26T08:37:00.840-08:00ZFS, Linux, illumos and the ISC license<div class="separator" style="clear: both; text-align: left;">
People are discussing whether or not Canonical including and shipping ZFS as a kernel module of the GPLv2 licensed Linux kernel is a GPL violation. James Bottomley recently <a href="http://blog.hansenpartnership.com/are-gplv2-and-cddl-incompatible/">posted an interesting opinion</a>: although it is a technical GPL violation, "it’s difficult to develop a theory of harm and thus the combination is allowable", given that you'd need to prove harm was done to prosecute. Meanwhile just today Conservancy released a <a href="https://sfconservancy.org/blog/2016/feb/25/zfs-and-linux/">Statement on ZFS and Linux combinations</a>. In it are very important pieces of information on serious incompatibilities, which take this a bit beyond the scope of simply adhering to GPL compliance standards to make people happy and not harm people. I'll review those, but also explain a bit of the history of why ZFS is under CDDLv1 and why Oracle no longer benefits from ZFS being licensed under CDDLv1. We should be focusing more on the illumos community and the BSD community, what their goals are, and thinking about what they can do and why they should do anything anyway. If we want a middle ground where we can all benefit, including the proprietary folks, we should all just lobby for the ISC license as a reasonable compromise for the ZFS community. I'll explain why.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<h3>
You can currently only use CDDLv1 for ZFS</h3>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5r7r936k_aFSjxovpE1g1O5S4_t7qHF9O2chyphenhyphen0xfEDwG6edvSWbNrNBVLXxn3sUMYJSO8IW6QE3AF14DjBnn6Z6rMRIoavRX9a1isfuhDsy1gwJ6Xmm0exhjn0RivjoWKazTfMA/s1600/otro+robot.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5r7r936k_aFSjxovpE1g1O5S4_t7qHF9O2chyphenhyphen0xfEDwG6edvSWbNrNBVLXxn3sUMYJSO8IW6QE3AF14DjBnn6Z6rMRIoavRX9a1isfuhDsy1gwJ6Xmm0exhjn0RivjoWKazTfMA/s400/otro+robot.jpg" width="372" /></a></div>
<br />
CDDLv1 says that if you redistribute any binaries the software must be distributed only under the CDDLv1. There are a series of issues with this. The easiest to grok is that modules can be built-in, and the kernel as a whole is GPLv2. It seems Canonical will ensure ZFS only ever lives as a Linux kernel module, however there are a series of serious issues with this as well. I won't list them all, and I'll purposely be vague about it as I do not want to do anyone's dirty homework, but I'll at least describe one item that you can find discussed on the archives today. We have only:<br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">MODULE_LICENSE("Dual BSD/GPL")</span><br />
<br />
We do not have:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">MODULE_LICENSE("GPL-Compatible")</span><br />
<br />
This is on purpose. I know because <b>I actually proposed such a change years ago!</b> I did this because at that time I was on the hippie bandwagon wanting to help Linux and the BSD camp sing kumbaya together on the 802.11 front. The "Dual BSD/GPL" declaration was added for historical purposes, to account for an old BSD incompatibility, but for all intents and purposes all upstream Linux kernel modules currently using the dual declaration might as well just outright be declared as:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">MODULE_LICENSE("GPL")</span><br />
<br />
This hasn't been done and we keep the dual declaration just to avoid confusion, but it's perfectly possible to use the GPL declaration even on only permissively licensed Linux kernel modules. Another utterly stupid issue with this incompatibility is that you can't hack on ZFS unless you use the CDDLv1 license. As I'll describe below, perhaps this was once a good thing for Sun, but as things stand now, even for Oracle this is not really a good thing.<br />
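<br />
To make the mechanics concrete, here is a minimal sketch of a hypothetical, permissively licensed "foo" module (not from any real driver), showing where this declaration lives; MODULE_LICENSE() is the only license metadata the module loader checks:<br />
<br />
<pre class="prettyprint">/* Minimal sketch -- hypothetical "foo" module. The module loader
 * treats "Dual BSD/GPL" exactly like "GPL": both mark the module as
 * GPL-compatible and allow it to use GPL-only exported symbols. */
#include &lt;linux/module.h&gt;
#include &lt;linux/init.h&gt;

static int __init foo_init(void)
{
	return 0;
}

static void __exit foo_exit(void)
{
}

module_init(foo_init);
module_exit(foo_exit);

MODULE_LICENSE("Dual BSD/GPL");
</pre>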
<br />
<h3>
When shipping binaries the GPLv2 applies</h3>
<br />
CDDLv1 prohibits you from abiding by this. This is perhaps one of the more obscure incompatibilities, but I've tried to summarize it as best as possible with the above statement.<br />
<br />
<h3>
CDDLv1 was not purposely incompatible with GPLv2</h3>
<div>
<br /></div>
CDDLv1 was not just the license of ZFS, it was the license chosen for OpenSolaris. Some ex-Sun employees have claimed that the CDDLv1 was purposely made incompatible with the GPLv2, but according to Bryan M. Cantrill, one of the Sun employees who actually stayed on even after Oracle acquired Sun, this is not true. At <a href="http://static.usenix.org/events/lisa11/">USENIX LISA XXV</a> <a href="https://www.youtube.com/watch?v=-zRN7XLCRhc&t=20m00s">he clarified</a> (starting at 22:00 in the video) that part of the incompatibilities came from the fact that although they wanted copyleft, they needed <i>a form of copyleft</i> that enabled proprietary drivers for partners such as EMC and Veritas. This shows that even if you have great intentions and want to use copyleft, <i>if you have any proprietary strings attached</i> you'll be affected and can only produce GPL-incompatible solutions.<br />
<br />
<h3>
Oracle does not benefit from CDDLv1 ZFS anymore</h3>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizlDEp2DiXs4aQ-5KGG6NN4Ue0QYMnTGkYvLAxpyTpbM24SFgcX8X6UTm7P31LGQIowZ1CmsxVQ0K9E_xzIt3EYA5hejgh01bqPMv5DFfDYl_Z1dnZWBzAJI39xhiD1WQOMRj-HA/s1600/Moondancing.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizlDEp2DiXs4aQ-5KGG6NN4Ue0QYMnTGkYvLAxpyTpbM24SFgcX8X6UTm7P31LGQIowZ1CmsxVQ0K9E_xzIt3EYA5hejgh01bqPMv5DFfDYl_Z1dnZWBzAJI39xhiD1WQOMRj-HA/s400/Moondancing.jpg" width="315" /></a></div>
<div>
<br /></div>
To understand this we'll have to review a bit of history. ZFS was just part of OpenSolaris. Let's consider the original motivation at Sun to be enable them to keep proprietary drivers, how this aligns to Sun's old business model and then review Oracle's current business model for "Solaris", and obviously what remains from the OpenSolaris effort and how this can impact in any way Oracle's business.<br />
<br />
First, credit where due. Bryan credits Jonathan Schwartz for making it a priority to open source the operating system; he mentions that OpenSolaris started in January of 2005 when DTrace became the first part of the system to be open sourced, and that the rest of the OS was open sourced in June 2005. Sun was bought by Oracle in 2009, and the acquisition closed in February 2010. Bryan stayed at Oracle until July 25, 2010.<br />
On August 3, 2010 illumos was born, not as a fork but rather as an entirely open downstream repository of OpenSolaris with all the proprietary pieces rewritten from scratch or ported from BSD. On Friday August 13, 2010, however, an internal memo was circulated by the new Solaris leadership saying that they would no longer distribute source code for the entirety of the Solaris Operating System in real time as it is developed. It seems this was never publicly announced, and updates just stopped on August 18, 2010. Solaris 11 was released on November 9, 2011 and no source code was released to go with it.<br />
<br />
<b>That marked the end of OpenSolaris...</b><br />
<br />
<b>Oracle decided to keep Solaris proprietary then</b>, and they were able to do this as OpenSolaris development required copyright assignment. Although OpenSolaris died, the illumos project continued to chug on, independent of Oracle, with a striking difference: copyright assignment is not required. This means Oracle does not own the copyright on the illumos project and its new innovations. Oracle Solaris cannot reap the benefits of the illumos version of ZFS <b>unless they open source their own source code again</b>, and the reason is that<b> the little pieces of GPLv2 incompatibility require them to use the CDDLv1</b>.<br />
<br />
<h3>
<span style="font-family: "courier new" , "courier" , monospace;">illumos innovations can never be part of proprietary Oracle Solaris</span></h3>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbboqVtVp21Akv1e4tRUBp7KNklHIGmPucLZpSDFN7vwixavvjKk8wn6HQlfuYde1S90h4t9Ar-Oa-lLobQycivxJDvyHKwsH52NmrgyXUbR7C7u53H87j5jCRCmIGCmuy-TwlkA/s1600/Guepardo+de+H%25C3%25A9ctor.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="290" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbboqVtVp21Akv1e4tRUBp7KNklHIGmPucLZpSDFN7vwixavvjKk8wn6HQlfuYde1S90h4t9Ar-Oa-lLobQycivxJDvyHKwsH52NmrgyXUbR7C7u53H87j5jCRCmIGCmuy-TwlkA/s400/Guepardo+de+H%25C3%25A9ctor.jpg" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
<br />
illumos has seen critical innovations and bug fixes to ZFS, DTrace, Zones and other core technologies. The real kernel architects behind ZFS have left Oracle, are not in favor of Oracle's decision to stop OpenSolaris, and have gone to great lengths to ensure that Oracle plays by the archaic copyleft CDDLv1 license. Examples of features added to illumos ZFS are SPA versioning that allows disjoint features from different vendors without requiring conflicting versions, UNMAP for STMF, allowing for better ZFS-backed iSCSI LUNs, and estimates for ZFS send and receive. To top this all off, even if the Linux community made changes to ZFS to fix issues or add new innovations, Oracle could not benefit from them. The BSD community contributed to ZFS before the Linux community did, but those contributions also could not be used by Oracle.<br />
<br />
<h3>
Why the ISC is a win for all</h3>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvg8rBq8VOWGNfe9ObGdQHzloWWpAbaB8zGLRo0nZiwWtC-7GQAaYKr13m6yFmTptQJrJyuj8nFkc3UbnFVLpRLrXj2681rFEKj3rpr7SUPhNznsjzXf4-uNrS2wOMRc7oaMoFFQ/s1600/20160220_175146.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvg8rBq8VOWGNfe9ObGdQHzloWWpAbaB8zGLRo0nZiwWtC-7GQAaYKr13m6yFmTptQJrJyuj8nFkc3UbnFVLpRLrXj2681rFEKj3rpr7SUPhNznsjzXf4-uNrS2wOMRc7oaMoFFQ/s400/20160220_175146.jpg" width="400" /></a></div>
<div>
<br /></div>
Are the old reasons for Sun to use CDDLv1, to enable proprietary drivers, still part of illumos' and the BSD community's own goals? If not, can someone confirm whether the illumos or BSD community is forever stuck with the CDDLv1? If so, would they be perfectly happy with that? Is the potential gain of collaborating with the Linux community worthy enough for illumos to want a relicense that would make things work for all parties involved? What would it take for them to relicense? Does the illumos community really want Oracle to release Oracle Solaris under the CDDLv1? If Oracle wanted to keep up the Oracle Solaris solution, help illumos collaborate on the Linux front, and enable contributions on Linux to be usable even on proprietary Solaris solutions, the ISC license would make a good middle ground for all parties involved. We did this on the 802.11 front; it should easily apply as a reasonable compromise for ZFS as well, if the parties really wanted a good middle ground.mcgrofhttp://www.blogger.com/profile/06081818694231731816noreply@blogger.com13tag:blogger.com,1999:blog-29679292.post-57842976826561137022016-01-29T11:22:00.003-08:002016-01-29T11:22:56.377-08:00Support software freedom now!<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi20V2QFE4EPVVBlRYegIb0o5gvPfvWtbXT4goFiw0T2f0PSyEjK2y9OF4EEedFNAlUbhu-lPQUihXBkCN5K9BM6vOOWSqnaBcLS-a0GljudBQmEVLY0WfG3zoXmEhJHZ3TpkaDnw/s1600/20130129_155408.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi20V2QFE4EPVVBlRYegIb0o5gvPfvWtbXT4goFiw0T2f0PSyEjK2y9OF4EEedFNAlUbhu-lPQUihXBkCN5K9BM6vOOWSqnaBcLS-a0GljudBQmEVLY0WfG3zoXmEhJHZ3TpkaDnw/s400/20130129_155408.jpg" width="400" /></a></div>
<br />
Free Software is in a critical state today. Bradley Kuhn recently made an <a href="https://sfconservancy.org/blog/2016/jan/25/supporter-urgent/">urgent call for supporters of free software</a> to help a campaign to strengthen both the Free Software Foundation and Software Freedom Conservancy, especially since if you donate before January 31st 2016 your donation will be matched! I've learned the hard way that without such organizations we could be in for a dark age of user software freedoms. No other entity is doing what they do, and they are both of critical importance to the community. Because of this I'm not only contributing now, I've decided to donate to each organization at the very least 1% of my salary each year. If you are employed because of free software I urgently encourage you to consider contributing. If you're in dire straits economically, at least give $20; for fuck's sake it's probably just 2 whiskey shots in the city or a long Uber / Lyft ride.mcgrofhttp://www.blogger.com/profile/06081818694231731816noreply@blogger.com0tag:blogger.com,1999:blog-29679292.post-19739818419409892212016-01-28T13:26:00.000-08:002016-01-28T13:26:17.545-08:00Why open hardware must succeed<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-kKi0ashfWpc/Vpq_GJrpAiI/AAAAAAACjL0/zPAfVwLN5C4/s1600/GOPR3785.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="http://3.bp.blogspot.com/-kKi0ashfWpc/Vpq_GJrpAiI/AAAAAAACjL0/zPAfVwLN5C4/s400/GOPR3785.JPG" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
To the average person open hardware simply sounds like a good idea... They may have heard of this thing called "open source" that some "disruptive" hipster companies have used and embraced to create new business models, so open source hardware seems like a natural progression. There's more to this though. The average person will not understand why it's not just a great idea but also that the industry is in dire need of open hardware; the average person will not understand why it's vital to the success of the open source movement. The average person will not understand that because open hardware follows a better development model -- the collaborative model -- it will grow very fast but also face a lot of very <b>serious</b> challenges. This post tries to address this gap.<br />
<br />
Back in 2013 I wrote a <a href="http://www.do-not-panic.com/2014/03/free-software-patents-surveillance-and.html">trilogy on the dangers facing Free Software and the Free Software patent paradox, and even threw in a quip over this topic and its relationship to the cosmos</a>. I wrote this in desperation because, as I saw it at that time, there were really no good prospects in near sight; it was unclear when we'd see a steady change in the right direction. My post was meant more as an alarm -- to create consciousness over <b>fundamental issues in our community</b>. The tide is changing though, fast, and for the better. I recall reading about the open hardware summit efforts in 2010; back then I was not impressed, and the prospects seemed fuzzy. The <a href="http://2015.oshwa.org/">2015 Open Hardware Summit</a> passed a little while ago, and upon reading about some of the talks and presentations it's clear now that momentum has built up significantly. This is slightly relieving but it's not enough; we really need to create awareness that open hardware is not just cool, fun and trendy, but also that:<br />
<ul>
<li>Open Hardware development is a key requirement to the success of the open source community</li>
<li>Open Hardware development is very likely where the best evolutionary methodology for the combination of best hardware and software will come about</li>
</ul>
<b>Ignoring these two principles will belittle any serious disruptive open hardware efforts as side projects.</b><br />
<br />
<a href="http://3.bp.blogspot.com/-W7ScM8HA_0w/VnrxVlWXDvI/AAAAAAACfaw/ZJJOrDHbK94/s1600/Copy%2Bof%2B20141220_120629.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img border="0" height="225" src="http://3.bp.blogspot.com/-W7ScM8HA_0w/VnrxVlWXDvI/AAAAAAACfaw/ZJJOrDHbK94/s400/Copy%2Bof%2B20141220_120629.jpg" width="400" /></a><br />
<br />
Statistically there really are only a few who will care about this topic -- those folks should know <b>there is an uphill fight for the success of Open Hardware</b>. Open hardware can be extremely disruptive, and the type of changes to be expected from it can have significant economic effects on existing businesses if those businesses do not adapt. I've learned the hard way though that businesses are not only hard to change, they simply may not want to, even if you are certain you have a solution for them. Companies may have really good reasons not to change, and one cannot take this personally. You have to really think of the bigger picture: if we shift the conversation about the possible disruption of "open hardware" from its impact on existing businesses towards its possible economic impact at large, it changes things considerably. Having an impact on a few companies should really be the least of the Open Hardware movement's concerns. The dangers involved with open hardware could mean huge shifts in state economics, <i>but only for those companies who could not afford to change or embrace change</i>. In the worst case these days, where a Trump presidency sadly seems statistically possible, that's loose lingo which could easily be twisted by the craziest in America towards the topic of "national security".
There is always plenty of work in trying to prevent unexpected huge tidal economic shifts in nation states -- the TPP is one example -- but one should also consider research funding. Although not related to open hardware, I'll mention a recent issue of relevance with an amazing FOSS project: Sage. Last year William Stein made an effort to create awareness over issues of funding for his open source mathematical suite, which I'd like to use for some perspective. For a bit of background on how he started Sage read his "<a href="http://wstein.org/mathsoftbio/history.pdf">Mathematical Software and Me: A Very Personal Recollection</a>". You really do have to ask yourselves why the Simons Foundation would, in their right mind, pick a <b>proprietary product</b> over an open source project at a funding event which actually listed as a goal <i>"to investigate what sorts of support would facilitate the development, deployment and maintenance of <b>open-source software</b> used for fundamental research in mathematics"</i>. Stein explained the details of his trials over this effort in his post "<a href="http://sagemath.blogspot.com/2015/09/the-simons-foundation-and-open-source.html">The Simons Foundation and Open Source Software</a>" (refer to the <a href="https://news.ycombinator.com/item?id=10175563">hacker news discussion</a>). One of the only sensible explanations that comes to mind is the possible impact on economics -- disruptive economics for existing proprietary mathematical suites in the United States. Naturally, you should then expect different economic regions with different interests to have different motivations, and perhaps to be more keenly interested in supporting these efforts; that actually happens to be the case, refer to the European "<a href="http://opendreamkit.org/">OpenDreamKit: Open Digital Research Environment Toolkit for the Advancement of Mathematics</a>", which will "<i>provide substantial funding to the open source computational mathematics ecosystem, and in particular popular tools such as LinBox, MPIR, <b>SageMath</b>, GAP, Pari/GP, LMFDB, Singular, MathHub, and the IPython/Jupyter interactive computing environment"</i>. The point I'm trying to make here using Sage as an example: if you do not get much support for open hardware research at your University, don't be surprised; realize what you're up against -- <b>the entire evolution of Silicon Valley and the economics behind that</b>.<br />
<br />
Fret not though, I wrote this post also to place emphasis on both principles stated above; the second one deals with my own <a href="http://www.do-not-panic.com/2015/04/god-complex-why-open-models-will-win.html">conjecture that open models will win</a>. What we need is math, tons of fucking math, semantics, grammar, and more precise science behind what we do with open models. If you are not using the scientific method in any way to evaluate progress / gains / bugs / etc. in your own project, I highly suggest you consider it. Be pedantic over everything you can measure. Do not get discouraged if you find out that the respective proprietary piece you are trying to replace has no such metrics for comparison. That's no coincidence; it's why it survives after all. I'm delighted to report that since my last call for metrics on FOSS we have had a huge shift: not only are people really meeting up just for this topic alone (the <a href="http://flosscommunitymetrics.org/">FLOSS community metrics meeting in Brussels</a>), but there are companies spawning (<a href="http://bitergia.com/">bitergia</a>) and dedicating themselves to this end, and I'm also seeing a lot of folks starting to talk about this at conferences I attend.<br />
<br />
Academia can also help shape economics for the better; when it stops doing this, academia has failed us. It's fairly understandable that economic pressures may have historically influenced academia, but if business and economics theories do not account for the need for self-reinvention and dynamic changes in the market, then those economics and market theories should also be reconsidered, especially in an age where talk of exponential growth, evolution and change is becoming standard. Intellectual property remains a clear challenge. I've been dealing with patent issues since my <a href="http://www.do-not-panic.com/2011/09/educational-issues-with-polynomiography.html">University study days when learning about polynomiography</a>, when I was told I could not release a piece of code as open source to the community as my professor had patents over the subject; surprisingly it's still a <a href="http://www.do-not-panic.com/2014/04/open-research-through-collaborative-development.html">lingering issue even for new trendy University efforts such as Singularity University</a>. Open Hardware will suffer because of patents; it's why Open Hardware is key to the success of the open source movement: the open source movement has historically faced challenges with patents, but in the worst case the danger here lies in that free software developers <b>could</b> become a <b><u>dying breed</u></b> (for details refer to this <a href="http://www.do-not-panic.com/2014/03/free-software-patents-surveillance-and.html">trilogy post on the patent paradox</a>). Open source developers include free software developers, but free software hackers do it for a cause as well. Patent-bloated companies want to hire zombie open source developers that do not care about these issues, and they will do anything in their power to keep the status quo. Because of this, Open Hardware development is a key requirement for the success of the open source community: not only will it provide an outlet for free software developers, it also enables open source developers to get better hardware without bullshit restrictions.<br />
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
mcgrofhttp://www.blogger.com/profile/06081818694231731816noreply@blogger.com0tag:blogger.com,1999:blog-29679292.post-35170049255900526802015-12-22T18:14:00.000-08:002016-01-19T14:09:23.989-08:00Linux asynchronous probe - let's try this again<div style="text-align: center;">
<a data-flickr-embed="true" href="https://www.flickr.com/photos/phunk/4962031817/in/photolist-8ytHbH-nFqGrf-cisP7A-C8rz8E-gYLsSt-yp3FVE-7H511n-cujWXU-2Y6Dn-nDpgD5-cv6vYy-da4xbr-nFf3UE-4UK83x-7PbuD9-dxh4H1-s1YF5P-kTKoTL-8vrTx8-roazat-rbC9EU-rnUw1z-7VZQaN-6SkcBk-AAFjoc-75S3Rz-tut3pv-edHwCt-rnq2mL-eXC7W7-coW2id-cPn8GC-8vrSGx-p8no91-afC7Ky-A3g1BZ-rwbzUD-9VNK3H-wa3JZK-pt2XXF-zGiFTa-oqNVXh-eaoxo2-oS83pq-wmCW3B-5eVpRk-db2oZ9-dHmFdg-cpndgb-cxk5ZC" title="Bake, Satyr"><img alt="Bake, Satyr" height="182" src="https://farm5.staticflickr.com/4145/4962031817_8ed4bfcf3e_b.jpg" width="400" /></a><script async="" charset="utf-8" src="//embedr.flickr.com/assets/client-code.js"></script>
</div>
<div>
<br />
<span style="color: #cc0000;"><b>Updated on 2016-01-19</b> with description on issue of how systemd limits the number of devices on a Linux system and references to asynchronous work on memory. Edits reflected in this color.</span><br />
<br />
Hipster and trendy init systems want to boot really fast. As of v4.2 the Linux kernel sports asynchronous probe support (<a href="http://lkml.kernel.org/r/1450516664-4200-1-git-send-email-mcgrof@do-not-panic.com">this fix posted December 19, 2015 is needed</a> for use of the generic async_probe module parameter). This isn't the first time this type of work has been attempted on Linux though; <a href="https://lwn.net/Articles/611226/">this lwn article</a> claims that a long time ago some folks tried to enable asynchronous probe and that ultimately it was reverted due to a large number of issues. Among other things, one major difference with the new solution is that it's <b>opt-in</b>: userspace or drivers must specifically request that it be used on a driver. We also support blacklisting of asynchronous behavior by annotating that a driver requires synchronous probe. All this enables new shiny hipster userspace while remaining compatible with old userspace and its expectations. At the 2015 Kernel Summit it became apparent a few folks still had questions over this, so I decided to write this post to recap why the work was done and its caveats, describe its relationship with using<span style="color: #c27ba0;"> -EPROBE_DEFER</span> in your probe routine to make use of the kernel's <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/base/dd.c#n33">deferred probe mechanism</a>, help testing and productizing with asynchronous probe, and also explain a bit of the short term and long term road map. This post also collects a bit of the history of what gave rise to Linux asynchronous probe, which I think we can use as a small educational experience on how we can better evolve systemd in the community.<br />
<br />
<div style="text-align: center;">
<a data-flickr-embed="true" href="https://www.flickr.com/photos/colorblindpicaso/3035456441/in/photolist-5CevrT-9R2KUF-8xovfz-faQakH-dKHAt-6hNQcD-9SnZrA-fkdhAq-aavRvw-gFZRGm-7TStts-nWDT9n-obdUX-6RrGRn-jUTTq-3iDzy5-o6vB8-6ue6pg-57Feca-87X2tN-b4Q1mz-7zHxAZ-eiYGsn-8ALqy6-521omE-9U2NDz-axarr6-5CcvXi-u1i1Lb-pkPHsU-Dw5VT-c4H6Ed-7LcTmF-6sHcth-8f5rTQ-aszjZd-ABb849-afDaWN-5qtsgV-9XRkpu-faQa76-obdTQ-mYUwE-5P5LJC-wQVLqR-87PieR-9T5fLt-aaEfvc-9UxiCz-6fcKAk" title="So imagine you are wandering around the jungle and you come up on THIS around a corner. Ok... so he's only about an inch tall, but if you're a cherry tomato or grub that is bad news."><img alt="So imagine you are wandering around the jungle and you come up on THIS around a corner. Ok... so he's only about an inch tall, but if you're a cherry tomato or grub that is bad news." height="300" src="https://farm4.staticflickr.com/3070/3035456441_e346e0bde2_b.jpg" width="400" /></a><script async="" charset="utf-8" src="//embedr.flickr.com/assets/client-code.js"></script>
</div>
<br />
First, to be clear --<u> asynchronous probe isn't supposed to magically make your kernel boot faster</u>, <i>it should however <u>help</u></i> if you happen to have any driver which for whatever reason tends to<b> have a lot of work done in its probe routine</b>. Even if that's not the case, at times using asynchronous probe can shave down kernel boot time, even if minimally. Other times <i>it may have no impact at all</i>, or perhaps you may see a small increase for any number of reasons. <span style="color: #cc0000;">A clear but not obvious gain is the increase in the number of devices a device driver can support; this is explained below. </span>Since this is a new feature we simply don't have enough metrics and enough test coverage yet to determine how widely helpful it can be, or what issues could creep up, however it was clear some folks wanted and needed it. <b>More importantly</b>, using it can also get driver developers and subsystem maintainers thinking about different asynchronous behavior considerations in the kernel that <b>long term</b> should help us as a community. An example: although <i>asynchronous probe should help with long probes</i>, <a href="http://lkml.kernel.org/r/CAB=NE6UBRa0K7=PomJzKxsoj4GzAqkYrkp=O+UfVvu2fwM25pA@mail.gmail.com">we recently determined</a> that <u>you should by no means</u> consider it a solution if your driver needs to load firmware on probe and you have experienced race issues between this and the filesystem being mounted -- that problem needs to be resolved separately (see <a href="http://kernelnewbies.org/KernelProjects/firmware-class-enhancements">this firmware_class feature enhancement wiki</a> and this <a href="http://kernelnewbies.org/KernelProjects/common-kernel-loader">common kernel file loader wiki page</a> for more details and ideas). In lieu of concrete bulletproof solutions for that problem you might be <b>tempted</b> to think asynchronous probe could <b>help</b>, and you'd be correct, <u>but</u> you should be aware that this is not a rock solid solution to such problems -- it'd be a hack, and this is why it's incorrect to use asynchronous probe to try to fix that problem. Another example is how this begs the question of <i><b>where else we should be using asynchronous mechanisms</b></i>, and <i><b>how we resolve any possible run time dependency issues</b></i>.<br />
<br />
Asynchronous probe support was added for a few reasons, <b>the last <span style="color: #cc0000;">three</span></b> listed here being the major driving factors for getting this developed and merged upstream.</div>
<div>
<ul>
<li>Over time there's been a general interest in reducing the kernel's boot time</li>
<li><b>A long time ago in a galaxy far far away...</b> systemd made a <i>really</i> <b>well-intentioned</b> but ultimately incorrect assumption that device driver initialization should take less than 30 seconds; to be more specific, that the driver's init routine should not take more than 30 seconds. Even as issues started to creep up, quite a few systemd and kernel developers vocalized strong support for it being a reasonable timeout value. Some users were really upset over this though -- driver loading was being killed after 30 seconds, preventing some drivers from loading completely, and in the worst cases, if the driver at fault was a storage driver, you would not even be able to boot Linux. Because of the strong agreement in both camps there were no exceptions to this rule, and the consensus seemed to be that a lot of drivers should simply be fixed. One puzzle was that issues over drivers being killed due to the timeout were only reported circa 2014, although the timeout had been in place in systemd for a long time. The reason for this was that commit <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=786235eeba0e1e85e5cbbb9f97d1087ad03dfa21">786235eeba0 by Tetsuo Handa ("kthread: make kthread_create() killable")</a> enabled kthread_create() to be killed; this was done in particular to enable out of memory killers to kill these types of threads (<a href="https://lwn.net/Articles/611226/">refer to this lwn article for more details</a>). Prior to this kernel change, the 30 second timeout was never an issue for systemd users, given that the SIGKILL signal was never actually respected for these types of threads. Even though the Linux kernel now has asynchronous probe support, the original systemd 30 second timeout caused enough headaches for users that on July 29, 2014 Hannes Reinecke ended up merging a way to enable Linux distributions to override the timeout through the command line, refer to <a href="https://github.com/systemd/systemd/commit/9719859c07aa13539ed2cd4b31972cd30f678543">systemd commit 9719859c07aa13 ("udevd: add --event-timeout commandline option")</a>. That didn't seem to be enough to help users, so on August 30, 2014 Kay Sievers bumped the timeout to 60 seconds via <a href="https://github.com/systemd/systemd/commit/2e92633dbae52f5ac9b7b2e068935990d475d2cd">systemd commit 2e92633dbae ("udev: bump event timeout to 60 seconds"</a>). In the end though, on September 10, 2014 Tom Gundersen changed the default timeout to 180 seconds via <a href="https://github.com/systemd/systemd/commit/b5338a19864ac3f5632aee48069a669479621dca">systemd commit b5338a19864a ("udev: timeout - increase timeout")</a>; the purpose of the timeout, as per the commit log message, is now "<i>to make sure that nothing stays around forever</i>". To help capture in logs possible faulty drivers (or any jobs dispatched), Tom Gundersen also made systemd spit out a warning after 1/3 of the timeout value before killing the job via <a href="https://github.com/systemd/systemd/commit/671174136525ddf208cdbe75d6d6bd159afa961f">systemd commit 671174136525ddf2 ("udev: timeout - warn after a third of the timeout before killing")</a>.</li>
<li><i>It turns out though that...</i> Linux batches calling a driver's init routine and, immediately after that, its probe routine, synchronously, so naturally any delays in probe contribute to delays as well. So <b>the systemd timeout is in effect for the combined run time of both init and probe of a device driver</b>. If we provide a way for userspace to ask the driver core to detach these and call probe asynchronously, we'd be giving systemd what it, and a few kernel developers, thought was actually in place.</li>
<li>A delay in your probe means delaying the user experience at boot time. If you know off hand that your driver might take a while to load, preemptively annotating this on your driver can mean giving users a better experience. Dmitry Torokhov ran into this issue while working on productizing a solution for a popular company where fast boot and a good user experience were critical.</li>
<li><span style="color: #cc0000;">It turns out that... a systemd timeout on the kmod loader (loading modules) has an effect not only on the combination of init + probe of a device driver; since the kernel serially probes all devices in the same code path, if your driver has several devices the amount of time taken to load it will be init time + (number of devices * probe time for each device). What this means is that the systemd timeout also places an upper bound on the number of devices you can use on a system; this is bound by the driver's init and probe time, and can be computed as follows:</span></li>
</ul>
<pre class="prettyprint">                 systemd_timeout
number_devices = -------------------------------
                 max known probe time for driver
</pre>
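<span style="color: #cc0000;">To put numbers on this, take systemd's current 180 second default and a hypothetical driver whose worst case probe takes 3 seconds (ignoring init time for simplicity):</span><br />
<pre class="prettyprint">number_devices = 180 s / 3 s = 60 devices
</pre>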
<div>
<span style="color: #cc0000;"><br /></span></div>
<div>
Drivers can be built into the kernel or built as modules, so you can load them after the kernel boots as independent and self-contained objects. It turns out that in practice <i><b>striving towards having all modules be probed asynchronously tends to work pretty well</b></i>, whereas <b>probing all built-in drivers asynchronously will likely crash your kernel with a <span style="color: #cc0000;">high degree of certainty</span></b>. This latter issue has to do with the fact that as the kernel boots certain assumptions may be made which are not satisfied early on, and there's currently no easy way to order this well. It's similar to why the <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/base/dd.c#n33">deferred probe mechanism</a> was added to the kernel -- sometimes the kernel doesn't have dependency information well sorted out. But fret not, <i>future work</i> should help with this, and such work should help curtail uses of <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/base/dd.c#n33">deferred probing</a> and enable broader asynchronous probe use; a sketch of how deferred probing is used today follows below.<br />
<br /></div>
</div>
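<div>
As a minimal sketch of how that deferred probe mechanism is used today (a hypothetical "foo" platform driver; devm_clk_get() stands in for any resource another driver provides):<br />
<br />
<pre class="prettyprint">#include &lt;linux/clk.h&gt;
#include &lt;linux/err.h&gt;
#include &lt;linux/module.h&gt;
#include &lt;linux/platform_device.h&gt;

static int foo_probe(struct platform_device *pdev)
{
	struct clk *clk;

	/* If the clock provider has not probed yet this returns
	 * ERR_PTR(-EPROBE_DEFER); propagating that return value asks
	 * the driver core to retry this probe once more drivers have
	 * bound, instead of failing the device outright. */
	clk = devm_clk_get(&amp;pdev-&gt;dev, NULL);
	if (IS_ERR(clk))
		return PTR_ERR(clk);

	return 0;
}

static struct platform_driver foo_driver = {
	.probe = foo_probe,
	.driver = {
		.name = "foo",
	},
};
module_platform_driver(foo_driver);

MODULE_LICENSE("GPL");
</pre>
</div>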
<div style="text-align: center;">
<a data-flickr-embed="true" href="https://www.flickr.com/photos/atin800/5668162207/in/photolist-9CSP9B-b1ixn-ag1qWd-rqdsc4-4Cxgd6-6xs6Ei-7N6Mqn-8iSdn9-fYGBjf-afZs1z-oh4NGu-hSivfB-bVNm1L-odXo7s-4DaVbw-6od114-iiizQZ-2hLYq-qGNdnz-bCHvCg-8Nuyjh-7DVH6T-9sZ4G3-9NCSwV-aqzGuJ-ebuGKN-7Kdud8-fvmdqE-amF5QP-b9w2NF-eL5yS8-7SLcW5-mwfZXZ-5RoyVU-bo2eHB-8TiHAy-bo2eHP-8jWH9F-7915Fu-6eqs5u-aqCmLv-nrp1sc-ehoU1u-8jZUoy-4AZFRn-Cb6A2A-C689aB-BLgQ5d-816wLh-bVbsWb" title="Blood Wolves: Engineer"><img alt="Blood Wolves: Engineer" height="320" src="https://farm6.staticflickr.com/5070/5668162207_f495aee756_b.jpg" width="320" /></a><script async="" charset="utf-8" src="//embedr.flickr.com/assets/client-code.js"></script>
</div>
<div>
<br /></div>
<div>
If you are in control of both hardware and software -- that is, you have engineers you can pay to productize a solution -- you could likely engineer a solution to vet and ensure boot will happen properly and in order, for both built-in drivers and modules on your kernel. There is no easy way to do this, and it is difficult to estimate the amount of work required for a device, but if you want to try it you can use this out of tree <a href="http://drvbp1.linux-foundation.org/~mcgrof/patches/2015/12/19/debug-async.patch">debug-async patch</a> and then use the kernel parameters documented there; I summarize them here, with a usage sketch after the list. Note that using either of these will taint your kernel.<br />
<br />
<ul>
<li><span style="white-space: pre-wrap;"><span style="font-family: "courier new" , "courier" , monospace;"><span style="color: #38761d;">__DEBUG__kernel_force_builtin_async_probe</span> - async probe all built-in drivers</span></span></li>
<li><span style="white-space: pre-wrap;"><span style="font-family: "courier new" , "courier" , monospace;"><span style="color: #38761d;">__DEBUG__kernel_force_modules_async_probe</span> - async probe all modules</span></span></li>
</ul>
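As a usage sketch (the kernel image path and version here are made up), forcing all modules to probe asynchronously on a test boot just means appending the second parameter above to the kernel command line:<br />
<br />
<pre class="prettyprint">linux /boot/vmlinuz-4.4 root=/dev/sda1 ro __DEBUG__kernel_force_modules_async_probe
</pre>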
If you don't have the luxury of dedicated hardware and software engineers you could at the very least <b>enable all modules to probe asynchronously, hope for the best, and report any issues found</b>. It's after all what systemd, and a lot of developers (many kernel developers included), originally thought was happening, so naturally bug reports are welcomed by the driver maintainer if any issues occur. Soon you may see Linux distributions enabling asynchronous probe by default for all modules. The way I'd implement this on systemd is to enable a Linux distribution to opt in to async_probe for specific kernels; given a <a href="http://lkml.kernel.org/r/1450516664-4200-1-git-send-email-mcgrof@do-not-panic.com">fix is needed</a> for using the generic async_probe module parameter, one should only enable it once this fix has been merged. This makes it tricky to detect whether the module parameter is properly supported or not; enabling it and booting an older kernel might obviously cause a crash.<br />
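One way such a distribution opt-in could look (a sketch only -- the file name is made up, and it assumes the running kernel carries the fix above) is a modprobe configuration fragment setting the parameter persistently:<br />
<br />
<pre class="prettyprint"># /etc/modprobe.d/async-probe.conf (hypothetical file name)
# Ask the driver core to probe cxgb4 devices asynchronously
options cxgb4 async_probe
</pre>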
<br />
Getting drivers to load correctly is just one step; remember that prior to asynchronous probe some userspace expected some device functionality to be available immediately after loading a driver. With asynchronous probe that is no longer the case: userspace must be vetted and tested to ensure it does not rely on synchronous loading of drivers.</div>
<div style="text-align: center;">
<br /></div>
<br />
<a data-flickr-embed="true" href="https://www.flickr.com/photos/sharif/2423144088/in/photolist-4G8fjm-7PH1jG-s8ZX5E-nWP32m-656BtK-qqC3k6-4XWeYv-Lp4VW-6NuTMt-827KYQ-dTogvP-4ancGz-5FcqiQ-nGgBMj-xi6g46-fwhGBp-2LGEi-7WhM1N-tEKpW1-oygTgA-5kxn9W-4oPBCg-t2TLAj-pkKPd8-7ePGLe-7eTzBC-6kpdHP-p1Njc2-6mri9L-o884kM-ARNnB-52ntSJ-6UvkSj-cSrzts-mrPBBv-bhiAwF-8H5wUM-vaxXZ9-nTLa1w-77suxn-mXNsG8-c13aK5-ba4can-bqW9yE-aoJCxZ-8qYDu-bFZeFZ-fEgdw-ag7D4W-7hxT1e" title="parallel"><img alt="parallel" height="259" src="https://farm3.staticflickr.com/2368/2423144088_cb47aa7b45_b.jpg" width="400" /></a><script async="" charset="utf-8" src="//embedr.flickr.com/assets/client-code.js"></script>
<br />
<br />
<div>
If you're a driver developer and know that your driver takes a while to probe, you should be aware that it can delay boot and the user experience, so you should likely annotate in the driver's source that it prefers asynchronous probe. You can do so as follows:<br />
<br />
<br />
<pre class="prettyprint">static struct pci_driver foo_pci_driver = {
...
.driver.probe_type = PROBE_PREFER_ASYNCHRONOUS,
};
</pre>
<br />
<br />
An alternative (<a href="http://lkml.kernel.org/r/1450516664-4200-1-git-send-email-mcgrof@do-not-panic.com">provided you have this fix merged</a>) is to pass the generic "async_probe" module parameter to the module you want to load, for instance:<br />
<br />
<pre class="prettyprint">modprobe cxgb4 async_probe
</pre>
<div style="text-align: center;">
<br /></div>
<div style="text-align: left;">
<a data-flickr-embed="true" href="https://www.flickr.com/photos/lawrence_evil/164135592/in/photolist-fveN1-kpT2vD-pdE77P-b6r4PV-aiXghK-hXHAn-dSHUJB-8YbMM3-pxfFC8-4KufFb-qv1uRf-pYzJ2N-rpJN4-9aZUc6-r5cpkg-pxFwpZ-dJwsoW-BgFko-ojvy7m-4YGSJg-K3z4v-beuYJp-m7AWem-opGAGQ-ognSb1-9eC9an-r1E9Ji-rn1TgC-n6E8sL-5mL4mF-4x9g8p-4tk4SG-rYtqjs-7EFJwx-bVZz89-yHhwE-7Sug1X-kxZy8-jgZbDd-9pUKjF-xikx3T-4GdxHb-dVWcot-4v7H6G-h9JV7B-rmqjRr-q2McXb-eamBGv-4RSmPu-otMmo" style="text-align: center;" title="Broken Family"><img alt="Broken Family" height="200" src="https://farm1.staticflickr.com/68/164135592_5f8e3c1718_b.jpg" width="640" /></a></div>
<br />
Sadly a few drivers cannot work with asynchronous probe at all today, so if after testing it poops out you should annotate this sort of hard incompatibility. You can do so as follows:<br />
<br />
<pre class="prettyprint">static struct pci_driver foo_pci_driver = {
...
.driver.probe_type = PROBE_FORCE_SYNCHRONOUS,
};</pre>
<br />
It should be made clear that this sort of incompatibility should likely be seen more <span style="color: #990000;">as an issue</span> -- if your driver fails at using asynchronous probe, chances are the issues are some subtle architectural design flaw in the driver or its dependencies. Fixing it may not necessarily be easy, and it's precisely for this reason that we have such a flag to force synchronous probe. Our hope though is that with time we can phase these issues out.<br />
<br />
<div style="text-align: center;">
<a data-flickr-embed="true" href="https://www.flickr.com/photos/ianduffy/4871825092/in/photolist-8qvnSm-7yKJW9-7owLVP-ibNgGh-8TsCf-9UPZvF-i8g9q5-2X53-8yQxKj-jRXtVM-9HpLgF-9YG2nT-8kc79f-i8g6v7-9XpnFC-JByTr-9CVRhk-i8gfaw-8k6NEf-8dQKGU-8tvDEM-QNv1Z-8k6NA3-4wWrH-i8gvK6-fpVtGe-ijXFP-8k8UMv-bXnXj5-5WEtus-9QoUAA-7wYzkq-as9F5E-3FtN5-azPG8P-stwF-j7SJvg-jjyndw-jx8nVc-azPGmx-4GzxSq-aCrsw2-dkb15k-k8u2wR-Jm5UG-ibNqVh-okyFb4-anXc2u-ibFbjD-9vDXAN" title="Emperor penguin chicks at play"><img alt="Emperor penguin chicks at play" height="206" src="https://farm5.staticflickr.com/4082/4871825092_2b34929d70_b.jpg" width="400" /></a><script async="" charset="utf-8" src="//embedr.flickr.com/assets/client-code.js"></script> </div>
<br />
Even if we manage to get all drivers working with asynchronous probe we cannot remove synchronous probe, as old userspace exists which relies on such behavior; removing synchronous probe support would break old userspace. What we can strive for <b>long term</b> though is to enable <b>new userspace</b> as best as possible and deal with all asynchronous issues as they come up, <b>slowly</b>; this will take time and serious effort. Over time you should be seeing more work in this area across subsystems, internals, and perhaps even architectures. <span style="color: #cc0000;">Just to give you a taste of and provide an example of this type of work, review the recent asynchronous work by Mel Gorman on memory init through commits 1e8ce83cd17fd0f549a7ad145ddd2bfcdd7dfe37..0e1cc95b4cc7293bb7b39175035e7f7e45c90977; please note these also have a few follow-on fixes</span>. Lastly, obviously some systemd design decisions should be taken with a grain of salt, but they seem to be very well-intentioned; we could use a bit more open and objective communication and design review between <b>more</b> kernel developers and systemd developers. The smoother this gets, the smoother the experience we provide to users should be.</div>
mcgrofhttp://www.blogger.com/profile/06081818694231731816noreply@blogger.com3San Francisco, CA 94108, USA37.7909427 -122.4084993999999837.7783947 -122.42866939999998 37.803490700000005 -122.38832939999999tag:blogger.com,1999:blog-29679292.post-16049242448707835952015-12-14T16:24:00.000-08:002016-01-19T14:58:00.663-08:00Xen and the x86 Linux zero page<br />
This is part II, for part I - refer to "<a href="http://www.do-not-panic.com/2015/12/avoiding-dead-code-pvops-not-silver-bullet.html">Avoiding dead code: pv_ops is not the silver bullet</a>".<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://upload.wikimedia.org/wikipedia/commons/thumb/e/e8/Xen_project_logo.svg/635px-Xen_project_logo.svg.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="125" src="https://upload.wikimedia.org/wikipedia/commons/thumb/e/e8/Xen_project_logo.svg/635px-Xen_project_logo.svg.png" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
On x86 Linux the boot sequence is rather complicated, so much so that it has its own dedicated boot protocol. This is documented upstream in <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/x86/boot.txt">Documentation/x86/boot.txt</a>. The protocol tends to evolve as the x86 architecture evolves, in order to account for new features or extensions which we may need to learn about at boot time. Of interest to this post is the "zero page". The first step when loading a Linux kernel is to load the zero page; this consists of the structure struct boot_params, defined in <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/arch/x86/include/uapi/asm/bootparam.h">arch/x86/include/uapi/asm/bootparam.h</a>. It's called the zero page because, unless you're relocating data around, it is the first physical page of the operating system. The x86 boot protocol originally only supported a 16-bit boot protocol, which first required loading the real-mode code (boot sector and setup code). For modern bootloaders what needs to be loaded is a bit larger, but new bootloaders must still load the same original real-mode code. The struct boot_params accounts for this evolution in requirements: the real-mode section is what is defined in struct setup_header. The zero page is not only something we must load, it's also part of the actual bzImage we build on x86. One can therefore read a kernel file's struct boot_params to extract some details about the kernel. To try this you can play around with parse-bzimage, part of the <a href="https://github.com/mcgrof/table-init">table-init tree on github</a>. All this sort of stuff is what bootloaders end up working with. Since hypervisors can also boot Linux, they must somehow do the same. This post is about Xen's zero page setup design; we'll contrast it to lguest's zero page setup. lguest is a demo 32-bit hypervisor on Linux.<br />
<br />
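To make this concrete, here is a minimal standalone sketch of the kind of parsing parse-bzimage does: it reads the "HdrS" header magic and the boot protocol version straight out of a bzImage file, using the offsets documented in boot.txt, and assuming a little-endian host.<br />
<br />
<pre class="prettyprint">#include &lt;stdio.h&gt;
#include &lt;stdint.h&gt;

int main(int argc, char *argv[])
{
	FILE *f;
	uint32_t magic = 0;
	uint16_t version = 0;

	if (argc != 2) {
		fprintf(stderr, "usage: %s bzImage-file\n", argv[0]);
		return 1;
	}
	f = fopen(argv[1], "rb");
	if (!f) {
		perror("fopen");
		return 1;
	}
	/* the "HdrS" magic lives at offset 0x202 of the setup header */
	fseek(f, 0x202, SEEK_SET);
	if (fread(&magic, sizeof(magic), 1, f) != 1 ||
	    magic != 0x53726448) { /* "HdrS", read little-endian */
		fprintf(stderr, "%s: not a bzImage\n", argv[1]);
		fclose(f);
		return 1;
	}
	/* the boot protocol version follows at offset 0x206 */
	fseek(f, 0x206, SEEK_SET);
	if (fread(&version, sizeof(version), 1, f) != 1) {
		fclose(f);
		return 1;
	}
	printf("boot protocol version: %u.%02u\n",
	       (unsigned)(version >> 8), (unsigned)(version & 0xff));
	fclose(f);
	return 0;
}
</pre>
<br />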
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-TeYQmCPR6iY/VkoZFzZZZvI/AAAAAAACdO4/ooOqoohMI5M/s1600/20151115_161220.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="222" src="http://1.bp.blogspot.com/-TeYQmCPR6iY/VkoZFzZZZvI/AAAAAAACdO4/ooOqoohMI5M/s400/20151115_161220.jpg" width="400" /></a></div>
<br />
<br />
If a hypervisor boots Linux it must also set up the zero page. We'll dissect Xen's setup of the zero page backwards, tracing from what we see on Linux down to Xen's setup of the zero page. Xen's entry into x86 Linux for PV guest types (PV, PVH) is set up and annotated on the ELF binary as an ELF note, in particular XEN_ELFNOTE_ENTRY. On Linux this is visible in <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/arch/x86/xen/xen-head.S#n110">arch/x86/xen/xen-head.S</a> as follows:<br />
<br />
<pre class="prettyprint">ELFNOTE(Xen, XEN_ELFNOTE_ENTRY, _ASM_PTR startup_xen)
</pre>
<div>
<br />
startup_xen is the first entry point of code used on Linux by Xen PV guest types; it's defined earlier in the asm code in the same file. Its implementation is rather simple, enough so that we can include it here:<br />
<br />
<pre class="prettyprint">ENTRY(startup_xen)
	cld
#ifdef CONFIG_X86_32
	mov %esi,xen_start_info
	mov $init_thread_union+THREAD_SIZE,%esp
#else
	mov %rsi,xen_start_info
	mov $init_thread_union+THREAD_SIZE,%rsp
#endif
	jmp xen_start_kernel
</pre>
<br />
On x86-64 this saves what the hypervisor left in rsi into xen_start_info, and loads the initial stack into rsp, before jumping to the first C Linux entry point, xen_start_kernel. The Xen hypervisor must have set up rsi for us beforehand. This is a bit different than what we expected...<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiAzlfx2HsFG2ni36MCyqfvJcaoJOrxJ1CbIsIYaHnuFfBf7RY8we6lncvYqPyujnGvCCydB6s2KT6s9iVowCLOtr2hmPI5b3np7vyI8diwu0t8By2jdw23_LXqZHspVzXXEeGXYA/s1600/20150530_152523.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiAzlfx2HsFG2ni36MCyqfvJcaoJOrxJ1CbIsIYaHnuFfBf7RY8we6lncvYqPyujnGvCCydB6s2KT6s9iVowCLOtr2hmPI5b3np7vyI8diwu0t8By2jdw23_LXqZHspVzXXEeGXYA/s400/20150530_152523.jpg" width="225" /></a></div>
<br />
<br />
Let's backtrack and show what perhaps a sane Linux kernel developer expected you to set up; to do this let's look at how lguest loads Linux. lguest's launcher is implemented in <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/tools/lguest/lguest.c">tools/lguest/lguest.c</a>. Of interest to us, it parses the file we pass it as a Linux kernel binary and tries to launch it via load_kernel(). Read load_bzimage(): it reads the kernel passed, checks that the magic string is present, loads the zero page from the file onto its own memory's zero page, and finally returns boot.hdr.code32_start. This latter value is used to kick off control into the kernel as its starting entry point. Of importance to us as well is that the zero page was read from the file and used as a base to set up the "zero page". The lguest zero page is further customized after load_kernel(); let's see a few entries below.<br />
<br />
<pre class="prettyprint">int main(int argc, char *argv[])
{
	...
	/* Boot information is stashed at physical address 0 */
	boot = from_guest_phys(0);

	/*
	 * Map the initrd image if requested
	 * (at top of physical memory)
	 */
	if (initrd_name) {
		initrd_size = load_initrd(initrd_name, mem);
		/* start and size of the initrd are expected to be found */
		boot->hdr.ramdisk_image = mem - initrd_size;
		boot->hdr.ramdisk_size = initrd_size;
		/* The bootloader type 0xFF means "unknown"; that's OK. */
		boot->hdr.type_of_loader = 0xFF;
	}

	/*
	 * The Linux boot header contains an "E820" memory
	 * map: ours is a simple, single region.
	 */
	boot->e820_entries = 1;
	boot->e820_map[0] = ((struct e820entry) { 0, mem, E820_RAM });

	/*
	 * The boot header contains a command line pointer:
	 * we put the command line after the boot header.
	 */
	boot->hdr.cmd_line_ptr = to_guest_phys(boot + 1);

	/*
	 * We use a simple helper to copy the arguments
	 * separated by spaces.
	 */
	concat((char *)(boot + 1), argv+optind+2);

	/* Set kernel alignment to 16M (CONFIG_PHYSICAL_ALIGN) */
	boot->hdr.kernel_alignment = 0x1000000;

	/*
	 * Boot protocol version: 2.07 supports the
	 * fields for lguest.
	 */
	boot->hdr.version = 0x207;

	/*
	 * The hardware_subarch value of "1" tells the
	 * Guest it's an lguest.
	 */
	boot->hdr.hardware_subarch = 1;
</pre>
And that's how sane Linux kernel developers expected you to do Linux kernel loading. Why does Xen's setup look so odd? What's this xen_start_info crap? Let's brace ourselves and dare to have a look at the Xen hypervisor setup code.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-BvMJxx7mq0Y/Vm9c2R7IbSI/AAAAAAACfIM/AphIrV7AUd0/s1600/20151212_170715.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://4.bp.blogspot.com/-BvMJxx7mq0Y/Vm9c2R7IbSI/AAAAAAACfIM/AphIrV7AUd0/s400/20151212_170715.jpg" width="371" /></a></div>
<br />
<br />
Xen defines what it ends up putting into xen_start_info through a data structure it calls struct start_info, defined in <a href="http://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/include/public/xen.h;h=ff5547eb2458905727f460858207b0d0af0face4;hb=HEAD">xen/include/public/xen.h</a>; it refers to this as the "<i>Start-of-day memory layout</i>". Of interest to us is who sets this up: for x86-64 this is done via vcpu_x86_64(), the relevant parts of which are listed below.<br />
<br />
<pre class="prettyprint">
memset(ctxt, 0, sizeof(*ctxt));
ctxt->user_regs.rip = dom->parms.virt_entry;
ctxt->user_regs.rsp = dom->parms.virt_base +
		      (dom->bootstack_pfn + 1) * PAGE_SIZE_X86;
ctxt->user_regs.rsi = dom->parms.virt_base +
		      (dom->start_info_pfn) * PAGE_SIZE_X86;
</pre>
<br />
The dom's params are set up via xc_dom_parse_bin_kernel(); as with lguest it has a file parser and uses this to set up some information, and it also extends some information, but <b>it never really sets up the zero page</b>. Instead it sets up its own set of data structures representing the struct start_info. It turns out the setting of the zero page for PV guests is done later, from within Linux kernel code, on the first Xen C entry point for Linux: xen_start_kernel() in <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/arch/x86/xen/enlighten.c">arch/x86/xen/enlighten.c</a>!<br />
<br />
<pre class="prettyprint">/* First C function to be called on Xen boot */
asmlinkage __visible void __init xen_start_kernel(void)
{
	...
	if (!xen_start_info)
		return;
	...
	/* Poke various useful things into boot_params */
	boot_params.hdr.type_of_loader = (9 << 4) | 0;
	boot_params.hdr.ramdisk_image = initrd_start;
	boot_params.hdr.ramdisk_size = xen_start_info->mod_len;
	boot_params.hdr.cmd_line_ptr = __pa(xen_start_info->cmd_line);
	...
}
</pre>
<br />
It's not documented, so I can only infer that the architectural reason for this was to account for the different operating systems that Xen has to support: it's perhaps easier to work with a generic data structure, populate that, and then have the kernel-specific solution parse it out. While this might have been an original design consideration, it has also resulted in a diverging entry point solution for Linux, which as I've highlighted recently in <a href="http://www.do-not-panic.com/2015/12/avoiding-dead-code-pvops-not-silver-bullet.html">my last post on dead code on pv_ops</a>, isn't ideal for Linux. The challenge for any alternative is to not be disruptive, remain compatible, not extend pv_ops, and provide a generic solution which might be useful elsewhere.</div>
mcgrofhttp://www.blogger.com/profile/06081818694231731816noreply@blogger.com0San Francisco, CA, USA37.7749295 -122.4194155000000137.373501499999996 -123.06486250000002 38.1763575 -121.77396850000001tag:blogger.com,1999:blog-29679292.post-73213345217840005792015-12-10T12:09:00.000-08:002016-01-19T14:55:44.681-08:00Avoiding dead code: pv_ops is not the silver bullet<br />
This is part I; for part II, see "<a href="http://www.do-not-panic.com/2015/12/xen-and-x86-linux-zero-page.html">Xen and the Linux x86 zero page</a>"<br />
<br />
<div style="text-align: center;">
<b><span style="font-size: large;">"<i>Code that should not run should never run"</i></span></b></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-7qk8MmAdVz0/VmnW9DDKEmI/AAAAAAACe_Q/XMy_uQFtdbs/s1600/20151102_210833.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://1.bp.blogspot.com/-7qk8MmAdVz0/VmnW9DDKEmI/AAAAAAACe_Q/XMy_uQFtdbs/s400/20151102_210833.jpg" width="370" /></a></div>
<br />
<br />
The fact that code that should not run should never run seems like something <b>stupid and obvious</b>, but it turns out that it's actually easier said than done on very large software projects, particularly on the Linux kernel. One term for this is "<b><i>dead code</i></b>". The amount of <i>dead code</i> on Linux has increased over the years due to the desire by Linux distributions to have <b>a single Linux kernel binary</b> work on <b>different run time environments</b>. The size and complexity of certain features increases the difficulty of proving that <i>dead code</i> never runs. Using a single kernel binary is desirable given that the alternative is to ship different Linux kernel binary packages for each major custom run time environment we wish to use, which among other things means testing and validating multiple kernels. A really complex modern example, which this post will focus on, is <i>dead code which is possible as a consequence of how we handle support for different <a href="https://en.wikipedia.org/wiki/Hypervisor">hypervisors</a> on the Linux kernel</i>. The purpose of this post is to <b>create awareness about the problem</b>; clean resolutions to these problems have already been integrated upstream for a few features, and you should be seeing a few more soon.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-57uTGQnXm58/VmnVFVgWVOI/AAAAAAACe-0/KqTqFFIlC5g/s1600/20151029_130135.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="250" src="http://2.bp.blogspot.com/-57uTGQnXm58/VmnVFVgWVOI/AAAAAAACe-0/KqTqFFIlC5g/s400/20151029_130135.jpg" width="400" /></a></div>
<br />
Back in the day you needed a custom kernel binary if you wanted to use the kernel with specific hypervisor support. To solve this the Linux kernel paravirtualization operations, aka paravirt_ops, or even shorter just <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a>, was chosen as the<b> mechanism to enable different hypervisor solutions to co-exist with a single kernel binary</b>. Although <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a> was welcomed with open arms back in the day as a reasonable compromise, these days just the mention of "<a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a>" to any kernel developer will cause a cringe. There are a few reasons to hate <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a> these days; given the praise over it back in the day it's perhaps confusing why people hate it so much now, so this deserves some attention. Below are a few key reasons <i>why developers <b>hate</b> <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a> today</i>.<br />
<br />
<ul>
<li><a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a> was designed at a time when <b><a href="https://en.wikipedia.org/wiki/Hardware-assisted_virtualization">hardware assisted virtualization</a> solutions were relatively new</b>, and it remained unclear how fully paravirtualized solutions would compare. KVM is a hypervisor solution that requires <a href="https://en.wikipedia.org/wiki/Hardware-assisted_virtualization">hardware assisted virtualization</a>. These days, even originally <b>fully paravirtualized</b> hypervisor solutions such as the Xen hypervisor have integrated support for the hardware virtualization extensions put out by several hardware vendors. This makes it difficult to label hypervisors that are no longer "<b><i>fully paravirtualized</i></b>"; the different possibilities of what could be paravirtualized versus dealt with by hardware have given rise to a slew of <b>different types of paravirtualized guests</b>. For instance, Xen now has PV, HVM, PVH; check out the <a href="http://wiki.xen.org/wiki/Virtualization_Spectrum">virtualization spectrum page</a> for a clarification of how each of these vary. What remains clear though is that <a href="https://en.wikipedia.org/wiki/Hardware-assisted_virtualization">hardware assisted virtualization</a> features have been welcomed, and in the future you should count on all <b><u>new</u> systems</b> running virtualization to take advantage of them. In the end <b>Xen PVH will provide that sweet spot for the best mixture of "paravirtualization" and hardware virtualization</b>. Architectures which gained hypervisor virtualization support after <a href="https://en.wikipedia.org/wiki/Hardware-assisted_virtualization">hardware assisted virtualization</a> solutions were in place can support different hypervisors without <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a>. Such is the case for ARM, which supports both <a href="http://systems.cs.columbia.edu/projects/kvm-arm/">KVM</a> and <a href="http://wiki.xen.org/wiki/Xen_ARM_with_Virtualization_Extensions">Xen on ARM</a>. In this light, in a way <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a> is a thing of the past. If Xen slowly deprecates and finally removes <i>fully paravirtualized</i> PV support from the Linux kernel, Konrad has noted that at the very least we could <a href="https://lkml.org/lkml/2013/7/31/294">deprecate pv_ops MMU components</a>.</li>
<li>Although <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a> was conceived as an <b>architecture agnostic solution</b> to support different hypervisors, hardware assisted virtualization solutions are now common and evidence shows you can support different hypervisors cleanly without <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a>. Xen support on ia64 was deprecated and removed, and so <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=e55645ec5725a33eac9d6133f3bce381af1e993d">pv_ops was also removed from ia64</a>; <b>x86 is now the <u>only</u> remaining architecture using <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a></b>.</li>
<li>Collateral: changes to <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a> can cause regressions and impact code for all x86-64 kernel solutions; as such, kernel developers are extremely cautious about making additions or extensions, and even about adding new users. To what extent do we not want extensions to <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a>? Well, Rusty Russell wrote the <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/tools/lguest/lguest.txt">lguest</a> hypervisor and launcher code; he did this not only to demo <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a> but also to set sanity on how folks should write hypervisors for Linux using <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a>. Rusty wrote this with only 32-bit support though. <a href="https://lkml.org/lkml/2013/7/31/161">Although there has been interest in developing 64-bit support on lguest, it's simply not welcomed</a>, for at least one of the reasons stated above -- as per hpa: "<i>extending <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a> is a permanent tax on future development</i>". With the other reasons listed above, this is even more so. If you want to write a demo hypervisor with 64-bit support on x86, the approach you could take is to write it with all the fancy new hardware virtualization support, and you should avoid <b><a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a> </b>as much as is humanly possible.</li>
</ul>
<br />
So <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a> was originally the solution put in place to help support different hypervisors on Linux through an architecture agnostic solution. These days, provided we can phase out full Xen PV support, we should strive to only keep what we need to provide support for Xen PVH and the other hardware assisted hypervisors.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-Ace13TGVhDo/VmnU4Q1MJ4I/AAAAAAACe-c/Qqclny6owc4/s1600/20151029_131813.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="225" src="http://4.bp.blogspot.com/-Ace13TGVhDo/VmnU4Q1MJ4I/AAAAAAACe-c/Qqclny6owc4/s400/20151029_131813.jpg" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
The only paravirtualized hypervisors supported upstream on the Linux kernel are Xen for PV guest types (PV, PVH) and the demo <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/tools/lguest/lguest.txt">lguest</a> hypervisor. <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/tools/lguest/lguest.txt">lguest</a> is just demo code though; I'd hope no one is using it in production... I'd be curious to hear... Assuming no one sane is using lguest as a production hypervisor and we could phase it out, that leaves us with Xen PV solutions as the remaining solution to study to see how we can simplify <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a>. A highly distinguishing factor of Xen PV guest types (Xen PV, Xen PVH) is that they have a unique separate entry point into Linux when Linux on x86 boots. <b>Xen PV and Xen PVH guest types share this same entry path</b>. That is, even if we wanted to remove as much as possible from <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a>, we'd still currently have to take into account that Xen's modern ideal solution with a mixture of "paravirtualization" and hardware virtualization uses this separate entry path. Trying to summarize this without going into much detail, the different entry points and how x86-64 init works can be summarized as follows.<br />
<br />
<pre class="prettyprint">Bare metal, KVM, Xen HVM Xen PV / dom0
startup_64() startup_xen()
\ /
x86_64_start_kernel() xen_start_kernel()
\ /
x86_64_start_reservations()
|
start_kernel()
[ ... ]
[ setup_arch() ]
[ ... ]
init
</pre>
<br />
Although this is a small difference, it can actually have a huge impact on possible "<i>dead code</i>". You see, prior to <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a> different binaries were compiled, and features and solutions which you knew you would not need could simply be<b> negated via Kconfig</b>. These negations were not done upstream -- they were only implemented and integrated on SUSE kernels, as SUSE was perhaps the only enterprise Linux distribution fully supporting Xen. Doing these negations ensured that code we determined should never run never got compiled in. Although this Kconfig solution was never embraced upstream, that doesn't mean the issue didn't exist upstream; quite the contrary, it obviously did, there was just no clean proposed solution to the problem and frankly no one cared too much about resolving it properly. However, an implicit consequence of embracing <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a> and supporting different hypervisors with one binary is that we're now forced to have large chunks of code always enabled in the Linux kernel, some of which we know should not run once we know what path we're taking on the above init tree. Code cannot be compiled out, as our differences are now handled at run time. Prior to <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a> the Kconfig solution was used to negate features that should not run when on Xen, so issues would come up at compile time and could be resolved that way. This Kconfig solution was in no way a proactive solution, but it's how Xen support on SUSE kernels was managed. Using <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a> means we need this resolved through alternative upstream friendly means. A schematic contrast of the two approaches is sketched below.<br />
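To illustrate the difference, here is a schematic sketch; setup_native_only_feature() is a made-up stand-in for any feature known to misbehave under Xen, while xen_domain() is the real run time check from include/xen/xen.h.<br />
<br />
<pre class="prettyprint">#include &lt;linux/init.h&gt;
#include &lt;xen/xen.h&gt;	/* for the xen_domain() run time check */

/* hypothetical stand-in for a feature known to misbehave under Xen */
extern void setup_native_only_feature(void);

/*
 * (a) pre-pv_ops, SUSE-style Kconfig negation: the incompatible code
 * is compiled out entirely, so it provably can never run on a Xen
 * kernel binary.
 */
static void __init old_style_init(void)
{
#ifndef CONFIG_XEN
	setup_native_only_feature();
#endif
}

/*
 * (b) pv_ops era, single binary: the code is always compiled in and
 * can only be skipped at run time -- this is where dead code creeps in.
 */
static void __init single_binary_init(void)
{
	if (!xen_domain())
		setup_native_only_feature();
}
</pre>
<br />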
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-MOMxVERfzSA/VmnTuDaC7TI/AAAAAAACe-Q/amw6OaMs4q4/s1600/20151025_111128.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="292" src="http://3.bp.blogspot.com/-MOMxVERfzSA/VmnTuDaC7TI/AAAAAAACe-Q/amw6OaMs4q4/s400/20151025_111128.jpg" width="400" /></a></div>
<br />
Next are just <b><u>a few</u> examples of dead code concerns</b> I have looked into, but please note that there are more; I also explain a few of these. Towards the end I explain what I'm working on to do about some of these dead code concerns. Since I hope to have convinced you that people hate <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a>, <b><i><u>the challenge here</u></i></b> is to come up with a really <b>clean generic solution</b> that 1) does not extend <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/virtual/paravirt_ops.txt">pv_ops</a>, and 2) could also likely be repurposed for other areas of the kernel.<br />
<ul>
<li><a href="https://en.wikipedia.org/wiki/Memory_type_range_register">MTRR</a></li>
<li>IOMMU - initialization (resolved cleanly), IOMMU API calls, IOMMU multifunction device conflicts, exposed IOMMU ACPI tables (Intel VT-d)</li>
<li>Microcode updates - both early init and changes at run time</li>
</ul>
As I've studied some of the <i>dead code</i> concerns for some of the above features, I've also identified an issue where the main x86-64 entry path is modified but Xen's init path is forgotten. When this happens, in the worst case you end up crashing Xen. I list two of these cases, one of which is still an issue for Xen. I call these <b>init mismatch</b> issues.<br />
<ul>
<li>cr4 shadow</li>
<li>KASan</li>
</ul>
<div>
So both <b>dead code</b> concerns and <b>init mismatch</b> issues can break things, sometimes really, really badly. Some of the solutions in place today, and some that will be developed, are what I like to refer to as <b><u>paravirtualization yielding solutions</u></b>. When reviewing some of these issues below, keep in mind that this is essentially what we're doing; it should help you understand why we're doing what we're doing, or why we need some more work in certain areas of the kernel.</div>
<div>
<br /></div>
<br />
<h3>
Death to MTRR:</h3>
<div>
<br /></div>
<a href="https://en.wikipedia.org/wiki/Memory_type_range_register">MTRR</a> is an example of code that we know should not run when we boot Linux as Xen dom0 or as a guest, given that on upstream Linux we never implemented a solution to deal with <a href="https://en.wikipedia.org/wiki/Memory_type_range_register">MTRR</a> through the hypervisor. <a href="https://en.wikipedia.org/wiki/Memory_type_range_register">MTRR</a> calls however are in most cases not fatal if they fail; typically if <a href="https://en.wikipedia.org/wiki/Memory_type_range_register">MTRR</a> calls fail you'd just suffer a performance hit. Since <a href="https://en.wikipedia.org/wiki/Memory_type_range_register">MTRR</a> is really old, we had the option to either add Linux <a href="https://en.wikipedia.org/wiki/Memory_type_range_register">MTRR</a> hypervisor call support for Xen, or work on an alternative that avoided <a href="https://en.wikipedia.org/wiki/Memory_type_range_register">MTRR</a> somehow amicably. Fortunately a long time ago <a href="https://twitter.com/amluto">Andy Lutomirski</a> figured we could replace direct <a href="https://en.wikipedia.org/wiki/Memory_type_range_register">MTRR</a> calls with a no-op on <a href="https://en.wikipedia.org/wiki/Page_attribute_table">PAT</a> capable systems, provided you also used a PAT friendly respective ioremap call. So he added arch_phys_wc_add() to be used in combination with ioremap_wc(). This solved it for write-combining <a href="https://en.wikipedia.org/wiki/Memory_type_range_register">MTRR</a> calls. He did a bit of the driver conversions needed for this work; it however was never fully completed. If you're following my development upstream you may have noticed that, among other things, for <a href="https://en.wikipedia.org/wiki/Memory_type_range_register">MTRR</a> I completed where Andy left off, replacing all direct users of write-combining <a href="https://en.wikipedia.org/wiki/Memory_type_range_register">MTRR</a> calls upstream on Linux with an architecture agnostic write-combining call, arch_phys_wc_add(), in combination with ioremap_wc(). Instead of adding Linux <a href="https://en.wikipedia.org/wiki/Memory_type_range_register">MTRR</a> hypervisor calls we now have a wrapper which will call <a href="https://en.wikipedia.org/wiki/Memory_type_range_register">MTRR</a> only when we know that is needed, and instead <a href="https://en.wikipedia.org/wiki/Page_attribute_table">PAT</a> interfaces are used when available. Addressing write-combining <a href="https://en.wikipedia.org/wiki/Memory_type_range_register">MTRR</a> is just one small example though of what we needed to address; there are other types of MTRRs you could use, and in the worst cases they were being used in incredibly hackish, but functional, ways. For instance one driver was using two overlapping MTRRs: the PCI BAR was 16 MiB, but the MMIO region for the device was in the last 4 KiB of that same PCI BAR. You want to avoid write-combining on MMIO regions, but since MTRRs must cover power-of-two sized regions, if we used one MTRR for write-combining without affecting the MMIO region we'd end up with only 8 MiB of write-combining and lose out on the rest of graphics memory, while using a 16 MiB write-combining MTRR meant we'd write-combine the MMIO region. The implemented hacky MTRR solution was to issue a 16 MiB write-combining MTRR followed by a 4 KiB UC MTRR. There were also two overlapping ioremap calls for this driver.
The resolution, in a PAT friendly way, included adding ioremap_uc() upstream, which would set PCD=1, PWT=1 on non-PAT systems and use a PAT value of UC for PAT systems. We used this for the MMIO region; doing this ensures that if you then issue an MTRR on this region the MMIO region remains unaffected. The framebuffer was also carved out cleanly, and ioremap_wc() used on it. For details refer to the commits below; a sketch of the resulting driver pattern follows the list.<br />
<br />
<a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=8c7ea50c010b2f1e006ad37c43f98202a31de2cb">x86/mm, asm-generic: Add IOMMU ioremap_uc() variant default</a><br />
<a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=eacd2d542610e55cad0be445966ac8ae79124c6e">drivers/video/fbdev/atyfb: Carve out framebuffer length fudging into a helper</a><br />
<a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=f55de6ec375da89f89f1a76e1b998e5f14878c06">drivers/video/fbdev/atyfb: Clarify ioremap() base and length used</a><br />
<a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=3cc2dac5be3f23414a4efdee0b26d79bed297cac">drivers/video/fbdev/atyfb: Replace MTRR UC hole with strong UC</a><br />
<a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=7d89a3cb159aecb1b363ea50cb14c967ff83b5a6">drivers/video/fbdev/atyfb: Use arch_phys_wc_add() and ioremap_wc()</a><br />
<br />
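Below is a minimal sketch of the resulting pattern, assuming a made-up device with a 16 MiB BAR whose last 4 KiB are MMIO registers; the foo structure and constants are invented for illustration, and only ioremap_uc(), ioremap_wc() and arch_phys_wc_add() are the real interfaces discussed above.<br />
<br />
<pre class="prettyprint">#include &lt;linux/errno.h&gt;
#include &lt;linux/io.h&gt;
#include &lt;linux/types.h&gt;

#define FOO_FB_LEN	(16 * 1024 * 1024 - 4096)	/* framebuffer carve out */
#define FOO_MMIO_LEN	4096				/* MMIO tail of the BAR */

struct foo_dev {
	void __iomem *fb;	/* framebuffer, write-combined where possible */
	void __iomem *mmio;	/* registers, must never be write-combined */
	int wc_cookie;		/* MTRR cookie; 0 when PAT made it a no-op */
};

static int foo_map(struct foo_dev *dev, resource_size_t base)
{
	/* strong UC for the MMIO hole, so a later MTRR cannot override it */
	dev->mmio = ioremap_uc(base + FOO_FB_LEN, FOO_MMIO_LEN);
	if (!dev->mmio)
		return -ENOMEM;

	/* a write-combining friendly mapping for the framebuffer itself */
	dev->fb = ioremap_wc(base, FOO_FB_LEN);
	if (!dev->fb) {
		iounmap(dev->mmio);
		return -ENOMEM;
	}

	/*
	 * A no-op on PAT systems; on non-PAT systems this adds the
	 * write-combining MTRR for the framebuffer for us.
	 */
	dev->wc_cookie = arch_phys_wc_add(base, FOO_FB_LEN);
	return 0;
}
</pre>
<br />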
But that's not all... even if all drivers have been converted over to never issue MTRR calls directly, the BIOS might still issue MTRRs on bootup, and the kernel still has to know about that to avoid conflicts with PAT. More work on this front is therefore needed, but at least the crusade to remove direct access to MTRR was completed on Linux as of v4.3.<br />
<h3>
IOMMU:</h3>
<div>
<br /></div>
A really clean solution to dead code, although it wasn't the only reason why this went upstream, came from how IOMMU initialization code was handled with the IOMMU_INIT macros and struct iommu_table_entry. The solution in place had to account for different dependencies between IOMMU code; this dependency map is best explained by a diagram.<br />
<br />
<pre class="prettyprint">
[xen-swiotlb]
|
+----[swiotlb *]--+
/ | \
/ | \
[GART] [Calgary] [Intel VT-d]
/
/
[AMD-Vi]
</pre>
<br />
Dependencies are annotated, detection routines are made available, and there's a sort routine which makes this all execute in the right order. The full dependency map is handled at run time; to review some of the implementation see git log -p 0444ad93e..ee1f28 and just check out the code. When this code was proposed <a href="https://marc.info/?l=linux-kernel&m=128285216913266&w=2">hpa had actually suggested that this sort of problem is common enough that perhaps a generic solution could be implemented on Linux</a>, and that the solution developed by the gPXE folks might be a good one to look at. As neat as this is, it still doesn't address all concerns. Expect to see some possible suggested updates in this area.<br />
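To give a rough feel for the annotations, here is a sketch of how an IOMMU might register itself in this scheme; the foo_* names are made up, and the four-argument IOMMU_INIT_FINISH(detect, depend, early_init, late_init) form follows my reading of arch/x86/include/asm/iommu_table.h, so treat the exact signature, and the pci_swiotlb_detect_override dependency used here, as approximate.<br />
<br />
<pre class="prettyprint">#include &lt;linux/init.h&gt;
#include &lt;asm/iommu_table.h&gt;

/* returns non-zero if the (made-up) foo IOMMU is present */
static int __init foo_detect(void)
{
	return 0;
}

/* runs early, before memory allocators are available */
static void __init foo_early_init(void)
{
}

/* runs late, once memory can be allocated */
static void __init foo_late_init(void)
{
}

/*
 * The sort routine guarantees the detect routine we depend on runs
 * before ours; the _FINISH variant also stops the scan once we've
 * been detected, which is how the swiotlb relationships above work.
 */
IOMMU_INIT_FINISH(foo_detect,
		  pci_swiotlb_detect_override,
		  foo_early_init,
		  foo_late_init);
</pre>
<br />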
<br />
<h3>
Microcode updates:</h3>
<div>
<br /></div>
A CPU often needs software updates; these are known as CPU microcode updates. If using a hypervisor though, your hypervisor should take care of these updates for you, as a guest should not have to fix real hardware. Additionally, if you do enable a guest to do updates on behalf of a full system you may want to be selective about which guests are allowed to do this. Then there are the run time update considerations. Some CPU microcode updates might disable some CPU ops; if you do this on a hypervisor with code already running, some of that code might break as it assumes those CPU ops are still valid. This could cause some unexpected situations for guests. Doing run time CPU microcode updates after a system has booted should therefore be avoided, and only done if you are 100% certain you can do it and have full hardware and software vendor support for it. The CPU microcode update must be designed for a run time update. As far as Linux is concerned, we avoid enabling CPU microcode updates by bailing out of the CPU microcode init code if pv_enabled() returns true. This works, but it turns out this is not an ideal solution; the reason is that pv_enabled() really should probably be renamed to something such as pv_legacy(), as it really only returns true if you have a legacy PV solution. Expect some updates on this upstream soon. If folks desire run time CPU microcode updates on Xen, work is required on the Xen side to copy the buffer to Xen, scan the buffer for the correct patch, and finally rendezvous all online cpus in an IPI to apply the patch, keeping the processors in until all have completed the patch. <a href="http://www.gossamer-threads.com/lists/xen/devel/364704?page=last">I hacked up a version for the hypervisor which just does quiescing by pausing domains</a>; that obviously needs more work, someone interested should pick up on that. Refer to <a href="http://wiki.xenproject.org/wiki/XenParavirtOps/microcode_update">Xen microcode updates</a> for Xen specific documentation or to read the latest notes on developing this for the Xen hypervisor. At this time, it's not clear where KVM keeps this documentation.<br />
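The bail-out itself is tiny; here is a paraphrased sketch of its shape, using the pv_enabled() shorthand from above rather than the exact upstream code:<br />
<br />
<pre class="prettyprint">static int __init microcode_init(void)
{
	/*
	 * pv_enabled() is only true for legacy PV guests -- hence the
	 * suggested pv_legacy() rename; the hypervisor owns microcode
	 * updates there, so just bail.
	 */
	if (pv_enabled())
		return -ENODEV;

	/* ... the rest of the regular microcode loader init ... */
	return 0;
}
</pre>
<br />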
<br />
<h3>
Init mismatch issues:</h3>
<br />
We have a dual entry with x86; we have to live with that for now, but at times this is overlooked, and it can happen to the best of us. For instance, when Andy Lutomirski added support to <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=1e02ce4cccdcb9688386e5b8d2c9fa4660b45389">shadow the CR4 per CPU on the x86-64 init path</a> he forgot to add a respective call for Xen. This caused a crash on all Xen PV guests and dom0. <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=5054daa285beaf706f051fbd395dc36c9f0f907f">Boris Ostrovsky fixed this for 64-bit PV(H) guests</a>. I'm told code review is supposed to catch these issues but I'm not satisfied; the fix here was purely reactive. We could and should do better. A perfect example of further complications is when Linux got KASan support, the kernel address sanitizer. Enabling KASan on x86 will crash Xen today, and this issue is not yet fixed. We need a proactive solution. If we could unify init paths, would that help? Would that be welcomed? How could that be possible?<br />
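For reference, the shape of Boris' fix is roughly the following sketch: the separate Xen entry point simply needed the same call the common x86-64 path already had.<br />
<br />
<pre class="prettyprint">asmlinkage __visible void __init xen_start_kernel(void)
{
	...
	/*
	 * x86_64_start_kernel() already does this for the common path;
	 * Xen's separate entry point needs its own call.
	 */
	cr4_init_shadow();
	...
}
</pre>
<br />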
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEge9ZmRnjIxzQRVTCYHPRNiTgcOY8GnFsjF5Zr7FMS3lqGWKkJvYFX01eAIh_NpsaCbforBp21aU3lJpB8CMJsg6RKAFpGDU7dLyShYIbIURASf1wTctFSSDNzTibZ3UD1pHpn8sw/s1600/20151029_132001-EFFECTS.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEge9ZmRnjIxzQRVTCYHPRNiTgcOY8GnFsjF5Zr7FMS3lqGWKkJvYFX01eAIh_NpsaCbforBp21aU3lJpB8CMJsg6RKAFpGDU7dLyShYIbIURASf1wTctFSSDNzTibZ3UD1pHpn8sw/s400/20151029_132001-EFFECTS.jpg" width="400" /></a></div>
<div>
<br /></div>
<div>
<h3>
What to do</h3>
</div>
<div>
<br /></div>
<div>
The purpose of this post is to <b>create awareness of what dead code is</b>, make you believe its real, its important, and that if we could come up with a <b>clean solution</b> that we could probably <b>re-use it for other purposes</b> -- it should welcomed. I'm putting a lot of emphasis on <b>dead code and init mismatch issues</b> as without this post I probably would not be able to talk to anyone about it and expect them to understand what I'm talking about, let alone have them understand the importance of the issue. The virtualization world is likely not the only place that could use a solution to some of the <b>dead code</b> concern problems. I'll soon be posting RFCs for a possible mechanism to help with this, if you want a taste of what this might look like, you can take a peak at the <a href="https://github.com/mcgrof/table-init.git">userspace table-init mockup solution</a> that I've implemented. In short, its a merge of what the gPXE folks implemented with what Konrad worked on for IOMMU initialization, giving us the best of both worlds.</div>mcgrofhttp://www.blogger.com/profile/06081818694231731816noreply@blogger.com1San Francisco, CA, USA37.7749295 -122.4194155000000137.373501499999996 -123.06486250000002 38.1763575 -121.77396850000001tag:blogger.com,1999:blog-29679292.post-34521820248057615542015-04-01T12:25:00.002-07:002020-05-19T06:25:38.778-07:00God complex - why open models will win<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-tkBjhbzeTgw/VRWpJDaDmvI/AAAAAAAB8qM/H1NPNpskYfw/s1600/20150321_160910.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="225" src="http://3.bp.blogspot.com/-tkBjhbzeTgw/VRWpJDaDmvI/AAAAAAAB8qM/H1NPNpskYfw/s1600/20150321_160910.jpg" width="400" /></a></div>
<b><br /></b>
<b>Engineering and science can never be about religion</b>; they are both about trial and error, empirical evidence supporting trials, precision, and formulating the math behind all this. It's really easy to forget this though, especially if you've hired really good engineers / scientists. With good engineers / scientists you might cut corners, or simply expect and assume that you'll always have the best answers possible on board. A good thesis can only be good if it really covered all possible known grounds and provides an in depth analysis that likely was never considered before. See <a href="http://www.do-not-panic.com/2013/06/bubbles-law-and-bubble-bang.html">my article and review of the Big Bang theory</a> for my high bar expectation for what I mean by <i>good science</i>. <b>Because of all this, with the rapid pace of change in science and technology and in knowledge and information flow, I suspect there should be a limit at which closed development models can outpace open development models</b>; although I have no evidence for this, I believe the reasoning should be relatively trivial to follow. Folks who disagree might find it harder to prove the counter, which leaves me content without having to provide a full proof. I have found that this particular issue in Engineering / Science is best described by Tim Harford in a Ted Talk titled "<a href="http://www.youtube.com/watch?v=K5wCfYujRdE">God Complex</a>", and I highly encourage anyone who might have hesitation about the above "open models outpacing closed models" premise to go watch it. I'll use this premise in this post, <b>just as an example, </b>to argue that, for instance, <b>open hardware development should outpace closed hardware development models --</b> just as open software development models very likely already outpaced closed proprietary software development models (we can't prove this as we don't have metrics on private development models). I'll go into the details of my conjecture next and provide a brief guideline for folks who want to test this conjecture on open hardware development.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-aPbz1_D4wJQ/VRWpJHhP5CI/AAAAAAAB8qM/1xJNWOUhXv0/s1600/20150321_184030.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="225" src="http://4.bp.blogspot.com/-aPbz1_D4wJQ/VRWpJHhP5CI/AAAAAAAB8qM/1xJNWOUhXv0/s1600/20150321_184030.jpg" width="400" /></a></div>
<br />
<br />
Engineering is not supposed to be easy, <b>it's fucking hard</b>, and if you have it any other way you're fooling yourself that what you are doing is Engineering. Kernel development is not supposed to be easy, and considering that on Linux we're engaging with the entire planet openly on the <b>largest collaborative development project on the planet</b>, it's no surprise that the engineering on Linux has a steeper curve than the average software engineering project. Even though we've prided ourselves on informality in much of our engineering practices, over time our growing pains have taught us a few principles and best practices to help us both scale and engineer collaboratively more effectively. A few easy to follow examples of this are:<br />
<br />
<ul>
<li>The practice of using Subsystem Maintainers, where parts of our software are broken down into components and folks are then in charge of keeping up each component. Linus just pulls all the strings of all maintainers together during the merge window.</li>
<li>The <a href="http://www.do-not-panic.com/2014/02/developer-certificate-of-origin.html">Development of the Developer Certificate of Origin (DCO)</a> whereby after some legal considerations we realized its best to throw in some Signed-off-by / provenance guarantees on software in such a way that it would allow us to upkeep our pace of development.</li>
<li>A <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/Documentation/CodeOfConflict?id=ddbd2b7ad99a418c60397901a0f3c997d030c65e">Code of Conflict</a> to enable us to deal with unfortunate extreme mishaps on the outright difficult nature of engaging with grumpy overloaded maintainers and community on the open peer review process.</li>
</ul>
Many software projects have learned from Linux. The Subsystem Maintainers model is prevalent, although likely <b>not invented on Linux</b>, and as I've described in <a href="http://www.do-not-panic.com/2014/02/developer-certificate-of-origin.html">a previous post the DCO is also heavily embraced by other projects already</a>, and other projects are encouraged to use it now thanks to our effort to separate it from Linux. Many projects have Code of Conflict agreements; that is not unique to Linux. There's one aspect of the Code of Conflict that is important to highlight, one that is only implicit, which I'd now like to make explicit and use as a primary premise for this post. Here is the language I'd like to highlight:<div>
<blockquote class="tr_bq">
<i>Your code and ideas behind it will be carefully reviewed, often resulting in <b>critique and criticism</b>. The review will <b>almost always require improvements</b> to the code before it can be included in the kernel. Know that this happens because everyone involved wants to see the best possible solution for the overall success of Linux.</i></blockquote>
<div>
I'm going to summarize this as: Engineering is hard as fuck, expect people to call you out on your shit. Deal with it, but if you feel we're unreasonable you can tap out. But most importantly: <b><span style="color: lime;"><u>Expect your first iteration on ideas to likely not be correct and require improvements</u></span></b>. <b>Even the most seasoned developers should expect this</b>. Before working for a purely software company I used to work at a hardware company, <a href="https://en.wikipedia.org/wiki/Qualcomm_Atheros">Atheros</a>, and the role I engaged in was unique given that <a href="https://en.wikipedia.org/wiki/Qualcomm_Atheros">Atheros</a> was providing full ASIC silicon designs for 802.11 technologies without requiring any CPU on the devices themselves. This meant that, contrary to most 802.11 devices in the industry, we worked without any firmware; all operations of the device were completely transparent to the device driver. Since I worked on an open device driver, that meant all 802.11 hardware operations were completely open and transparent to the community, whereas device drivers that relied on firmware would have hardware operations performed behind the scenes, offloaded onto the device's own CPU / proprietary firmware. Before I joined <a href="https://en.wikipedia.org/wiki/Qualcomm_Atheros">Atheros</a> I used to believe that <a href="https://en.wikipedia.org/wiki/Qualcomm_Atheros">Atheros</a> had the best 802.11 hardware in the industry. After I joined <a href="https://en.wikipedia.org/wiki/Qualcomm_Atheros">Atheros</a>, and particularly as other peers got hired by other 802.11 silicon companies and we collaborated, I became convinced that it was not just Atheros' unique hardware that made it stand out.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgYxaUI3Vj3bYXpPKh31nliD5Rr-tp3TXr5s9tcdt2zzKLVBhyphenhyphentvnpydRfxww0UaXFj6Z_C4Qhc8LneSb3lKYwgDuPgRX2I5WB5_OE5Gmi3exkmL6E9BmKMZ79rtrsJp9U3Ewym4w/s1600/ath9k-contributions.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="190" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgYxaUI3Vj3bYXpPKh31nliD5Rr-tp3TXr5s9tcdt2zzKLVBhyphenhyphentvnpydRfxww0UaXFj6Z_C4Qhc8LneSb3lKYwgDuPgRX2I5WB5_OE5Gmi3exkmL6E9BmKMZ79rtrsJp9U3Ewym4w/s1600/ath9k-contributions.png" width="400" /></a></div>
<br />
<br />
The success and quality of support of Atheros' 802.11 devices can also be attributed to:<br />
<br />
<ol>
<li><b>The full ASIC design nature of it (not requiring firmware), with hardware issues punted out to the device driver, which made the devices operate much better than others</b></li>
<li>A strong community commitment / know-how and engagement</li>
</ol>
One thing which I'd like to highlight from the above graph is that at times the community was performing more contributions to the <a href="https://wireless.wiki.kernel.org/en/users/Drivers/ath9k">ath9k</a> device driver than <a href="https://en.wikipedia.org/wiki/Qualcomm_Atheros">Atheros</a> (later known as QCA). Both of the above are instrumental for a healthy openly developed device driver, but <b>I cannot stress enough how critical to success it was to not require firmware</b>. I told folks repeatedly that we should not feel embarrassed about having hardware bugs. We should accept this as part of the nature of hardware design and silicon development. It's the rate at which you can fix these, even if through software workarounds, which will ultimately create the best experience for users. If you have firmware, the pipeline for fixes requires engaging with a team of engineers inside a company, and the time to fix issues there is typically significant. Without firmware even the community was able to participate in creating fixes for extremely complex issues, and this is extremely important for complex technologies such as 802.11. As we combine more RF technologies and things get more complex we will have no option but to work and engage with the community; thinking anything contrary to this makes you fumble and fall into the "God complex" trap.<br />
<br />
At <a href="https://en.wikipedia.org/wiki/Qualcomm_Atheros">Atheros</a>, during the good ol' days, we were able to leverage the belief that we'd gain more successful contributions / a healthier development model by opening up firmware on devices where firmware was actually needed. We first tested this with <a href="https://wireless.wiki.kernel.org/en/users/Drivers/carl9170">carl9170</a> and later with <a href="https://wireless.wiki.kernel.org/en/users/Drivers/ath9k_htc">ath9k_htc</a>, both of which did require firmware, but whose firmware we managed to open source. I believed our efforts to be pivotal, and an engaged open enthusiast reader might wish to gather metrics on <a href="https://wireless.wiki.kernel.org/en/users/Drivers/carl9170">carl9170</a> and <a href="https://wireless.wiki.kernel.org/en/users/Drivers/ath9k_htc">ath9k_htc</a> to help evaluate the impact of openness on software quality.<br />
<br />
At the last Linux wireless summit that I actively participated in, before <a href="http://www.do-not-panic.com/2013/11/i-quit-qualcomm-today-whoohoo.html">joining SUSE</a>, it was made clear that <b>all manufacturers were moving away from full ASIC designs for 802.11 and that all silicon companies were going to be using proprietary firmware</b>. There are a lot of reasons for this; some of it has to do with the combination of different RF technologies (not just 802.11), but nevertheless the saddest part to me of all this was that the good lessons learned from the success of fully open drivers and open firmware models were not being seriously considered by future 802.11 device drivers and architectures. Part of this is that the above argument for "goodness" has no direct hard science associated with it; it's why I ended up <a href="http://www.do-not-panic.com/2013/08/evolving-capitalism-with-ethical.html">working towards a hard science for ethical attributes</a>.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-2HSOVXOloKw/VRWpJKj6lkI/AAAAAAAB8qI/F0G18krRNSA/s1600/20150322_191023-PANO.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="102" src="http://2.bp.blogspot.com/-2HSOVXOloKw/VRWpJKj6lkI/AAAAAAAB8qI/F0G18krRNSA/s1600/20150322_191023-PANO.jpg" width="400" /></a></div>
<br />
Lacking hard science as proof of "goodness" might seem like a bad thing, but it's also a chance for great opportunity. New startups and folks designing new hardware who already "get it", and do not have any hard requirements tying them down with legacy archaic business requirements, have a fully open arena for exploration; <b>this is the best situation to be in</b>. Venture capital should easily be able to test my conjecture with a few simple test cases. At least within the realm of open hardware designs, since <b>existing</b> silicon companies (not startups) might <a href="http://www.do-not-panic.com/2014/03/the-dangers-of-free-software.html">face the dangers of free software</a>, they should consider using hoards of unused / closeted / legacy designs and testing new innovative approaches with the community. And then there are the folks who have been perfecting collaborative development models: companies / organizations which have already been perfecting open collaborative development models have much to bring to the table for new startups / business models which perhaps never had explored such things. There's room for a lot of experimentation and trial and error. I'm happy for my conjecture to be disproved given that all this is not about religion, but rather the best fucking engineering possible. I remain optimistic though.</div>
</div>
mcgrofhttp://www.blogger.com/profile/06081818694231731816noreply@blogger.com0tag:blogger.com,1999:blog-29679292.post-24698856784402246052015-03-05T14:26:00.000-08:002015-03-05T14:26:33.891-08:00VMware law suit and an Apology to the BSD camp<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-hhXok5hB3lU/VPjSc91ENeI/AAAAAAAB4d0/PuQA915QGbk/s1600/20150117_171312.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-hhXok5hB3lU/VPjSc91ENeI/AAAAAAAB4d0/PuQA915QGbk/s1600/20150117_171312.jpg" height="225" width="400" /></a></div>
<br />
I <a href="http://www.do-not-panic.com/2012/03/connecting-dots.html">started hacking on Linux</a> without any consideration for software licensing; I did it more out of <b>pure joy</b>, getting a kick out of seeing hardware work which didn't before, and collaborating with an amazing set of folks. Through my years of working on Linux though I've somehow stumbled onto the front lines of licensing debacles, due to reverse engineering, copyright infringement claims, and later patent considerations. The only way I can explain why I kept working on things despite these debacles is that perhaps most people give up and I guess I just don't. It's been years now working in the community, and in fact for a while I even went on a hippy '<a href="http://www.do-not-panic.com/2013/03/killing-proprietary-drivers-for-all.html"><i>FreeBSD / Linux lets work together kumbaya</i>!</a>' run with real technical solutions in place (<a href="http://www.do-not-panic.com/2012/04/localizing-gpl.html">part I</a>, <a href="http://www.do-not-panic.com/2013/03/killing-proprietary-drivers-for-all.html">part II</a>) -- in this post I'd like to provide some background and explain why I <b>now</b> fully support the GPL on Linux, believe<b> it's critical to enforce the GPL on Linux</b>, and would like to declare that <b>I've given up on working on permissively licensed drivers on Linux</b> and explain why. I write this also to explain in detail why I fully support <a href="https://sfconservancy.org/news/2015/mar/05/vmware-lawsuit/">Christoph Hellwig's lawsuit against VMware filed today</a>.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-CdsGlOEknJw/VPjSRtV7KOI/AAAAAAAB4ds/-M0GEnvcU1c/s1600/IMG_20150120_100033.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-CdsGlOEknJw/VPjSRtV7KOI/AAAAAAAB4ds/-M0GEnvcU1c/s1600/IMG_20150120_100033.jpg" height="400" width="400" /></a></div>
<br />
I've gone into details before about how <a href="http://www.do-not-panic.com/2012/03/connecting-dots.html">I first got involved with hacking on Linux</a> just to get my damn wifi to work, later jumped onto the MadWifi project, and so began the '<a href="http://www.do-not-panic.com/2012/03/connecting-dots.html">ath5k wars</a>'. Later as we put out the <a href="https://wireless.wiki.kernel.org/en/users/drivers/ath9k">ath9k</a> device driver I also engaged with <a href="http://adrianchadd.blogspot.com/">Adrian Chadd</a> from the FreeBSD camp quite a bit; eventually we ended up becoming coworkers and did our best at <a href="http://www.do-not-panic.com/2013/03/killing-proprietary-drivers-for-all.html">ending proprietary drivers for good</a> by working together somehow. To prefix this, I had called out to my Linux peers that we should consider simply <a href="http://www.do-not-panic.com/2012/04/localizing-gpl.html">localizing the GPL</a> and look to work and engage collaboratively with the BSD camp. To this day I stand behind the <b>technical ideas</b> we put out together to share drivers between both <b>BSD and Linux</b> -- in the end, however, the pitfalls were what really set this effort back. I'll summarize them as follows:<br />
<br />
<ul>
<li>Software teams at companies who do care about proprietary and permissively licensed solutions tend to be super sloppy and in no way motivated to do much work</li>
<li>Compared to the number of Linux developers, the BSD camp stood no chance of keeping up with what we were doing or putting out</li>
<li>Given the above issues the real folks who stand to gain from a joint venture between <b>BSD and Linux</b> folks working together on device drivers are simply the <b>proprietary vendors selling proprietary solutions</b></li>
<li>Patents are a wild card, and <b>best we have them on our side</b></li>
<li>Proprietary vendors with patent interests will play their cards carefully and you are at their mercy</li>
</ul>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-zb1t9khCRAI/VPjR8--rZ9I/AAAAAAAB4dg/BfcDQCzV4n4/s1600/20150130_173638-MOTION.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-zb1t9khCRAI/VPjR8--rZ9I/AAAAAAAB4dg/BfcDQCzV4n4/s1600/20150130_173638-MOTION.gif" height="400" width="225" /></a></div>
<div>
<br /></div>
<div>
<br /></div>
I've written a <a href="http://www.do-not-panic.com/2014/03/free-software-patents-surveillance-and.html">trilogy</a> on my reasoning over the real dangers of Free Software (GPL, Copyleft), the patents problem, and the evolution of copyleft and business models. With the above problems and the points I made in the <a href="http://www.do-not-panic.com/2014/03/free-software-patents-surveillance-and.html">trilogy</a> in mind -- here is <b>my apology to the BSD camp</b>: as much as I'd like to help my BSD counterparts, I now consider permissive licenses, especially ones that <b>do not consider patents, brutally archaic</b>, and do not see a way forward with them. While there might be some ambiguity with GPLv2 and patents, we are at least upholding some more modern collaborative development best practices which should help uplift our community. While evolving copyleft has not been easy (see my notes on <a href="http://www.do-not-panic.com/2012/03/importance-of-gpl.html">why the GPLv3 really failed with kernel folks</a>) we still have the chance to help evolve copyleft the right way -- openly and with the community. We should be allying ourselves in the community with those companies who are actively engaged in evolving copyleft and the commons for the better of the community (hey, SUSE's hiring); when and if companies decide to cut corners -- simply quit and seek to ensure that they meet their fate in a court of law some day.<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/--2jP1_3uPxw/VPjSvj7NcNI/AAAAAAAB4eI/F7x_JoDknNM/s1600/20150214_124014-EFFECTS.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/--2jP1_3uPxw/VPjSvj7NcNI/AAAAAAAB4eI/F7x_JoDknNM/s1600/20150214_124014-EFFECTS.jpg" height="225" width="400" /></a></div>
<div>
<br /><div>
<br /></div>
<div>
It seems VMware has done <b>nothing</b> even remotely like the work I did to help with the use of permissively licensed drivers on Linux, which would likely be the <i><u>minimum</u></i> expected for coexistence with proprietary platforms without raising any eyebrows. Trust me, it was not easy work, and just above I've declared I've given up on it and consider it pointless. Despite the best efforts by <a href="http://sfconservancy.org/">Conservancy</a> to ask nicely that the problem be addressed, VMware has decided to opt out and play its cards. It seems VMware is trying to cut corners and reap benefits from our ecosystem on Linux in broad daylight. <b>That's a bloody shame</b>. Best of luck to Christoph, I fully support him in this lawsuit against VMware. If you feel the same I would like to encourage you to <a href="http://sfconservancy.org/linux-compliance/vmware-lawsuit-appeal.html">donate to Conservancy to support the VMware lawsuit</a>; if you are a Linux kernel developer and share these sentiments, consider joining the loose knit set of <a href="http://sfconservancy.org/linux-compliance/about.html">kernel developers under Conservancy wishing to seek GPL compliance on Linux</a>. You can email <a href="mailto:compliance@sfconservancy.org">compliance@sfconservancy.org</a> for further information.</div>
</div>
mcgrofhttp://www.blogger.com/profile/06081818694231731816noreply@blogger.com0San Francisco, CA, USA37.7749295 -122.4194155000000137.373502 -123.06486250000002 38.176356999999996 -121.77396850000001tag:blogger.com,1999:blog-29679292.post-24844559638715724482014-11-17T17:19:00.000-08:002014-11-17T17:19:46.062-08:00Automating backport kernel integration supportI cringe when I see a task which could be automated done manually, but complex tasks are not trivially automated -- to even fathom the possibility you sometimes have to divide the work into subtasks and see which of them can be automated and which cannot. I've had a hunch about the prospects of fully automating Linux kernel backporting for a while now; over the years a set of advances and practices on the <a href="https://backports.wiki.kernel.org/">backports project</a> has increased my confidence in these prospects, one of them being <a href="https://github.com/mcgrof/paper-backports/raw/master/paper.pdf">Increasing Automation in the Backporting of Linux Drivers Using Coccinelle SmPL [paper]</a>. If a long paper is too much to digest, check out the <a href="https://www.youtube.com/watch?v=ZXATzae7eng">Automatically backporting the Linux kernel video presentation</a> (and if you want to learn about Coccinelle SmPL check out <a href="https://www.youtube.com/watch?v=buZrNd6XkEw">Julia Lawall's Coccinelle tutorial at the 2014 SUSE Labs Conference</a>) from my presentation at the 2014 SUSE Labs Conference, or my <a href="http://www.do-not-panic.com/2014/04/automatic-linux-kernel-backporting-with-coccinelle.html">previous blog post about that</a>. Towards the end of my presentation I hint at some further prospects in automation with the possibility of doing <i><b>self programming</b></i> of the shared backports layer targeting collateral evolutions, but I'll now review one feature some folks have pestered me about for a while to incorporate: <a href="https://backports.wiki.kernel.org/index.php/Documentation/integration"><b>direct kernel integration with backports</b></a>, which I recently completed during the <b>2014 SUSE Hackweek</b>.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://backports.wiki.kernel.org/images-backports/8/80/Integration-menuconfig-start-3.15.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="215" src="https://backports.wiki.kernel.org/images-backports/8/80/Integration-menuconfig-start-3.15.png" width="400" /></a></div>
<br />
We now have the framework to optimize backporting collateral evolutions with the use of patches, Coccinelle SmPL grammar patches, and a shared layer. The <a href="https://backports.wiki.kernel.org/index.php/Documentation/backports/hacking#Backports_development_flow">development flow we follow</a> helps track linux-next daily, and this reduces the amount of work when we're close to a release made by Torvalds or Greg KH. Although we make both daily linux-next based releases and stable releases, what we provide is a tarball, and users and system integrators have had no way of making what we provide non-modular. This is a problem for some ecosystems such as Android and ChromeOS which do not like to ship modules. You can technically take such releases, <b>modify them somehow</b>, and integrate them so that these drivers can be built as <b>built-in</b>, and although I know some folks have used this strategy before (ChromeOS was one, OpenWrt has been doing this for years) <b>it's not easy to keep up</b> and update, and when a new release is made <b>you have to re-do all the work</b>. As of backports-20141114 we now have <a href="https://backports.wiki.kernel.org/index.php/Documentation/integration">backports kernel integration support</a> merged. What this means is that folks who need to stick to older kernels as a base can use the <a href="https://backports.wiki.kernel.org/">backports project </a>to integrate drivers from future kernels onto their kernel, with full kconfig support. You get what you'd expect: a new submenu under 'make menuconfig' which lets you enable device drivers / subsystems from future kernels, either as modules or built-in, to replace your older kernel's drivers / subsystems. <i><b>The work to integrate a backports release is therefore now automated</b></i>.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://backports.wiki.kernel.org/images-backports/e/e3/Integration-menuconfig-drivers-3.15.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="215" src="https://backports.wiki.kernel.org/images-backports/e/e3/Integration-menuconfig-drivers-3.15.png" width="400" /></a></div>
<br />
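If you just want the shape of the workflow before digging into the wiki, here is a minimal sketch. Treat it as illustrative only: the paths are made up, and I'm quoting the clone URL and the --integrate flag from memory -- the wiki pages linked here have the authoritative invocation.<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;"># Illustrative paths; check the backports wiki for the exact flags<br />$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/backports/backports.git<br />$ cd backports<br />$ ./gentree.py --clean --integrate ~/linux-next ~/my-older-kernel<br />$ cd ~/my-older-kernel<br />$ make menuconfig # the new backports submenu should show up</span><br />
<br />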
As you'd expect, device drivers from future kernels can only be selected if the respective older driver is disabled. You can opt to compile backported drivers as modular or <b>built-in</b>. <i>The ability to compile device drivers as <b>built-in</b> also now makes it possible to backport features and components from the kernel which we were previously not able to backport</i>. Integration support enables a one-shot full integration from a future release onto an older release; upgrading then simply requires rebasing your kernel as you bump your base kernel and doing another kernel integration when needed. If you are not rebasing your kernel and only want to upgrade to a new set of backported drivers from a future kernel, you can just drop the old backports/ directory and attempt a new integration with the newer release. This means you should clearly document <b>non-upstream</b> cherry picks on top of a backport integration, cherry pick them out, and later merge them back in. This purposely <b>favours an upstream development work flow</b>: if your cherry picks are en route upstream when you bump to a new backport, you will likely drop most of the cherry picks you carry; in fact, if you have policies in place to ensure they are upstream by a future release integration, you'd always be striving towards <b>0 delta</b>, and of course, <b>0 delta would imply fully automated backport work</b>. I hope this alone might encourage some folks to reconsider their own development work flows a bit, in particular those with over 6 million lines of code delta, and umm, with it taking them over 6 months to complete a rebase ;) ... On a modern laptop the integration takes about 1-2 minutes to complete. More details are available on the backports wiki section on <a href="https://backports.wiki.kernel.org/index.php/Documentation/integration">backports kernel integration support</a>. If you have any questions poke on IRC #kernel-backports on freenode or join the <a href="https://backports.wiki.kernel.org/index.php/Mailing_list">backports mailing list</a>.mcgrofhttp://www.blogger.com/profile/06081818694231731816noreply@blogger.com0tag:blogger.com,1999:blog-29679292.post-89118192185724185792014-08-25T21:14:00.000-07:002014-08-25T21:14:24.594-07:00Hacking on systemd with OpenSUSE<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiaxlgr3tA3IHoe0nzI3FuatTdado3cpPbWIySWS-yF3j0j_vbn5hBDfw43zs6XY6XwsgeZP_MC1WGQYtCAZZuK6u1qVCNlVA7oOk2tSG0M5A1nZoswLQdELlfyhuBiYTjFUg1A6g/s1600/bleed.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiaxlgr3tA3IHoe0nzI3FuatTdado3cpPbWIySWS-yF3j0j_vbn5hBDfw43zs6XY6XwsgeZP_MC1WGQYtCAZZuK6u1qVCNlVA7oOk2tSG0M5A1nZoswLQdELlfyhuBiYTjFUg1A6g/s1600/bleed.png" height="318" width="320" /></a></div>
<br />
I recently had no other option but to hack on systemd <span style="color: yellow;">:*(</span> and found there wasn't any documentation on how to do this on OpenSUSE. Replacing your /sbin/init isn't as simple as it used to be back in the day; I eventually figured things out after a few <span style="color: red;">hiccups</span>, and apart from the actual ability to hack on and install systemd I also picked up some good <b><span style="color: #274e13;">best practices</span></b> you can use to help while testing, and dealt with installing <a href="http://d-bus.googlecode.com/git/kdbus.txt">kdbus</a> as I was tired of seeing those pesky warnings from systemd without it. My first assumption that things would just work if I installed over my base install proved incorrect, so avoid that ;) -- I'll cover doing this with containers instead. While I don't yet have access to edit the freedesktop wiki I figured I'd document my steps here and move that documentation over if and when granted access.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtTdJ9KuzRY1dDINPKMxDd7kTjqCe-9HfmVgqfu2Oo4uRyC5n1vp1EwZvfRy8fCiC266Mkvc0ARXQ1e5Q6kkBYf6Jc9J0CcO8c46DWG1JKFZeTccydK4RK20TvrfLvHXLTUjrpGg/s1600/chespi.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtTdJ9KuzRY1dDINPKMxDd7kTjqCe-9HfmVgqfu2Oo4uRyC5n1vp1EwZvfRy8fCiC266Mkvc0ARXQ1e5Q6kkBYf6Jc9J0CcO8c46DWG1JKFZeTccydK4RK20TvrfLvHXLTUjrpGg/s1600/chespi.png" height="320" width="319" /></a></div>
<br />
First you need the equivalent of a debootstrap a la OpenSUSE. Since <a href="http://en.opensuse.org/openSUSE:Factory_installation">OpenSUSE is now a rolling distribution</a> this documentation will focus on using those repositories. Since OpenSUSE embraces <a href="https://btrfs.wiki.kernel.org/">btrfs</a> fully, and btrfs has copy-on-write bells and whistles to help you save space, this little guide will also provide instructions on using the <a href="https://btrfs.wiki.kernel.org/index.php/FAQ#What_is_a_snapshot.3F">btrfs snapshot</a> capability to help you use a base OpenSUSE install for further "branch" type hacking. This will let your copies of the original install <b>share</b> the same base blocks on the hard drive and only diverge once you've modified the system. If you don't want to use the btrfs snapshot feature just ignore the btrfs commands and create a plain directory instead. This should let you hack without using up gobs of space. This should be considered a small supplement to <a href="http://www.freedesktop.org/wiki/Software/systemd/VirtualizedTesting/">hacking and testing systemd in a virtualized environment</a>. As of 2014-08-05 the instructions here will create a small container for you that will take about 333 MiB of space.<br />
<br />
First get your repos set up with the latest rolling distribution repo, if using btrfs might as well use the btrfs snapshot feature:<br />
<span style="font-size: small;"><br /></span>
<span style="font-family: "Courier New",Courier,monospace; font-size: small;">$ sudo btrfs sub create /opt/opensuse/<br /># If you don't want to use the snapshot just create the directory<br />$ sudo mkdir -p /opt/opensuse</span><br /> This will let you install package binaries with zypper install <br />
<span style="font-size: small;"><br /></span>
<span style="font-family: "Courier New",Courier,monospace; font-size: small;">$ sudo zypper --root /opt/opensuse/ ar http://download.opensuse.org/factory/repo/oss repo-oss</span><br />
<br />
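Depending on your zypper defaults you may also need to refresh the newly added repository before the install below will find anything -- a harmless extra step if it turns out to be unnecessary:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace; font-size: small;">$ sudo zypper --root /opt/opensuse/ refresh</span><br />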
Quite a few packages require /dev/zero to be available.<br />
<br />
<span style="font-family: "Courier New",Courier,monospace; font-size: small;">$ sudo mkdir /opt/opensuse/dev/</span><br />
<span style="font-family: "Courier New",Courier,monospace; font-size: small;">$ sudo mknod /opt/opensuse/dev/zero c 1 5 </span><br />
<span style="font-family: "Courier New",Courier,monospace; font-size: small;">$ sudo chmod 666 /opt/opensuse/dev/zero</span><br />
<br />
Then install a minimal set for hacking:<br />
<span style="font-size: small;"><br /></span>
<span style="font-family: "Courier New",Courier,monospace; font-size: small;">$ sudo zypper --root /opt/opensuse/ install rpm zypper wget vim sudo</span><br />
<br />
Now get qemu-kvm and then load the kvm module.<br />
<span style="font-size: small;"><br /></span>
<span style="font-family: "Courier New",Courier,monospace; font-size: small;">$ sudo zypper install qemu-kvm</span><br />
<span style="font-family: "Courier New",Courier,monospace; font-size: small;">$ sudo modprobe kvm-intel</span><br />
<br />
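If you are on an AMD box the module to load is kvm-amd instead:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace; font-size: small;">$ sudo modprobe kvm-amd</span><br />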
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi77mTVcA6BoDh1jXgCAS5EuNHOZZlOSb_tNnkOGomC_Nqv8L6IIEg91S60Bm7a4orfKatYSaS8aYyFVyXiWjCEIl2OUtJojo3EHl3rAmHtexx5licWndR0DJo72lpeX50Mj_hczg/s1600/thorne.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi77mTVcA6BoDh1jXgCAS5EuNHOZZlOSb_tNnkOGomC_Nqv8L6IIEg91S60Bm7a4orfKatYSaS8aYyFVyXiWjCEIl2OUtJojo3EHl3rAmHtexx5licWndR0DJo72lpeX50Mj_hczg/s1600/thorne.png" height="253" width="320" /></a></div>
<br />
Next launch systemd-nspawn (the systemd chroot equivalent), change your root password before booting into it, and enable root login on the console. <br />
<span style="font-family: "Courier New",Courier,monospace;"><span style="font-size: x-small;"><br /><span style="font-size: small;">$ sudo systemd-nspawn -D /opt/opensuse<br />Timezone America/New_York does not exist in container, not updating container timezone.<br />Directory: /root<br />Tue Aug 5 17:39:47 UTC 2014<br />opensuse:~ # passwd<br />New password: <br />Retype new password: <br />passwd: password updated successfully</span></span></span><br />
<span style="font-family: "Courier New",Courier,monospace;"><span style="font-size: x-small;"></span></span><br />
<span style="font-family: "Courier New",Courier,monospace;"><span style="font-size: x-small;"></span></span><br />
<span style="font-family: "Courier New",Courier,monospace;"><span style="font-size: x-small;"></span></span><br />
<span style="font-family: "Courier New",Courier,monospace;"><span style="font-size: x-small;"></span></span><br />
<br />By default OpenSUSE won't let you log in on the console as root; to enable that do:<br />
<span style="font-size: small;"><br /><span style="font-family: "Courier New",Courier,monospace;">opensuse:~ # echo console >> /etc/securetty</span></span><br />
<span style="font-family: "Courier New",Courier,monospace;"><span style="font-size: x-small;"><span style="font-size: small;">opensuse:~ # sed -i 's/session\s*required\s*pam_loginuid.so/#session required pam_loginuid.so/' /etc/pam.d/login </span></span></span><br />
<br />
<br />
To make hacking easier it'd be ideal to also enable root access without a password; this involves making some PAM changes and disabling the password for root. This still doesn't work for me, so it's incomplete for now -- ignore the next step unless you want to keep chugging down that route and figure out the remaining pieces; I leave it here for that purpose.<br />
<span style="font-size: small;"><span style="font-family: "Courier New",Courier,monospace;"><br /></span></span>
<span style="font-size: small;"><span style="font-family: "Courier New",Courier,monospace;">opensuse:~ # sed -i 's/root:.*:\([0-9]*\)::::::/root::\1::::::/' /etc/shadow </span></span><br />
<br />
Now you should be able to boot it as a full container; first shut down the container you were just in:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">opensuse:~ # systemctl halt</span><br />
<br />Now give your new container a fresh spin with -b<br />
<span style="font-family: "Courier New",Courier,monospace;">$ sudo systemd-nspawn -bD /opt/opensuse 3</span><br />
The -b tells systemd-nspawn to boot the container, that is, to run init in it, and the number 3 tells systemd to launch the various services required for the runlevel3.target. A target is a way to group required services. You should be able to log in as root.<br />
<br />
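If targets are new to you, you can poke at what a given target pulls in from within the container -- assuming your systemd is recent enough to have list-dependencies; note that runlevel3.target is just an alias for multi-user.target:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">opensuse:~ # systemctl list-dependencies multi-user.target</span><br />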
Eventually you'll want to list and manage deployed containers, including killing them. For that you can use machinectl from within your own system, not from within the container.<br />
<br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">$ machinectl <br />MACHINE CONTAINER SERVICE <br />opensuse container nspawn <br /><br />1 machines listed.</span></span><br />
<br />
To kill the one you just started for example:<br />
<span style="font-size: x-small;"><span style="font-family: Georgia,"Times New Roman",serif;"><br /></span></span>
<span style="font-family: "Courier New",Courier,monospace;"><span style="font-size: x-small;">$ sudo machinectl terminate opensuse</span></span><br />
<span style="font-family: "Courier New",Courier,monospace;"><span style="font-size: x-small;">$ machinectl <br />MACHINE CONTAINER SERVICE <br /><br />0 machines listed.</span></span><br />
<br />
To start hacking, create a new snapshot based on the original. This will let us easily create new OpenSUSE containers to hack on. Kill the base container with machinectl first before doing this though.<br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"><br /></span></span>
<span style="font-family: "Courier New",Courier,monospace;">$ sudo btrfs sub snap /opt/opensuse /opt/opensuse-hack1</span><br />
<br />
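You can sanity check your snapshots, and see that they live on the same filesystem as the base, with:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">$ sudo btrfs sub list /opt</span><br />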
And then go at it on /opt/opensuse-hack1 to hack on your stuff. You can now follow <a href="http://www.freedesktop.org/wiki/Software/systemd/VirtualizedTesting/">the instructions on the freedesktop wiki on hacking on systemd in a virtualized environment</a>, but they don't tell you to uninstall the distribution's version of systemd -- this is recommended; at least I ran into issues without doing it. To do that just remove the files the rpm installed. You can do this several ways:<br />
<br />
From within your system, targeting the new container path:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">$ rpm -ql --root /opt/opensuse-hack1/ systemd | sed -e 's|\(.*\)|/opt/opensuse-hack1\1|' | xargs rm -f </span><br />
Something a bit more safe if you don't trust the above:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">$ CONT="/opt/opensuse-hack1/"<br />$ for i in $(rpm -ql --root $CONT systemd); do if [[ -f $CONT/$i ]]; then sudo rm -f $CONT/$i ; fi ; done</span><br />
And finally, a simpler / safer way is to do this from within the container; your container will just become useless after this though, so you'll have to kill it from your system with machinectl afterwards.<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">linux:~ # rpm -ql systemd | xargs rm -f</span><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgiVgWXPk_ntX15Z3cXN5uud8NzmVkqdDVzoeN4WtlpKf3WZiJXv6h5pZo3CuzC0NqlP7pWiTIprbZXq8kuOm1j-WX2JYxeAAM3xIdjwq3SrwHhpcxuNmkKUpwKeFRSpEHYJIhLjQ/s1600/hippie.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgiVgWXPk_ntX15Z3cXN5uud8NzmVkqdDVzoeN4WtlpKf3WZiJXv6h5pZo3CuzC0NqlP7pWiTIprbZXq8kuOm1j-WX2JYxeAAM3xIdjwq3SrwHhpcxuNmkKUpwKeFRSpEHYJIhLjQ/s1600/hippie.png" height="280" width="320" /></a></div>
<br />
<br />
All you need now is to compile systemd from source locally on your system and then install it with DESTDIR=/opt/opensuse-hack1/, but be very sure to also pass the <span style="color: yellow;">--with-rootprefix=</span> option, as by default systemd will leave it blank.<br />
<span style="font-family: "Courier New",Courier,monospace;"><br /></span><span style="font-family: "Courier New",Courier,monospace;">$ ./autogen.sh</span><br />
<span style="font-family: "Courier New",Courier,monospace;">$ ./configure CFLAGS='-g -O0 -ftrapv' --enable-compat-libs --enable-kdbus --sysconfdir=/etc --localstatedir=/var --libdir=/usr/lib64 --enable-gtk-doc <span style="color: yellow;">--with-rootprefix=/usr/</span> --with-rootlibdir=/lib64 </span><br />
<span style="font-family: "Courier New",Courier,monospace;">$ sudo DESTDIR=/opt/opensuse-hack1/ make install</span><br />
<br />
As of 2014-08-05 systemd built from source will by default want the shiny new <a href="http://d-bus.googlecode.com/git/kdbus.txt">kdbus</a>. Go read up on the <a href="http://lwn.net/Articles/580194/">lwn kdbus article</a>; then, since kdbus is not yet in the kernel, you'll want to compile a fresh vanilla kernel (I don't provide instructions here obviously), install that, and later compile and install kdbus as a module from the external repo:<br />
<span style="font-size: small;"><br /></span>
<span style="font-size: small;"><span style="font-family: "Courier New",Courier,monospace;">git clone https://github.com/gregkh/kdbus</span></span><br />
<span style="font-size: small;"><span style="font-family: "Courier New",Courier,monospace;">cd kdbus</span></span><br />
<span style="font-size: small;"><span style="font-family: "Courier New",Courier,monospace;"># Use a known compilable version at least if you're on v3.16.0-rc7</span></span><br />
<span style="font-size: small;"><span style="font-family: "Courier New",Courier,monospace;">git reset --hard 1f63f96686f9398eedde86b4e08581d14c6e403a </span></span><br />
<span style="font-size: small;"><span style="font-family: "Courier New",Courier,monospace;">make</span></span><br />
<span style="font-size: small;"><span style="font-family: "Courier New",Courier,monospace;">sudo make install</span></span><br />
<br />
Finally you can now give your container a spin.<br />
<span style="font-family: "Courier New",Courier,monospace;"><br /></span>
<span style="font-family: "Courier New",Courier,monospace;">$ sudo systemd-nspawn -bD /opt/opensuse-hack1</span><br />
<br />
To be sure you are getting a new systemd you can check the version with systemd --version from within the container.mcgrofhttp://www.blogger.com/profile/06081818694231731816noreply@blogger.com0tag:blogger.com,1999:blog-29679292.post-67525568971586894732014-07-25T18:04:00.000-07:002014-07-25T18:04:14.496-07:00Colored diffs with mutt<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2NStwtWsXt9UBqI6u0V-lMFpXnmER9HZoN1ywx35dVuLKqDSX3m1aiByJ4ESIAYEAWwCR_xB-NSDBZ48mQGBa0mNa-XIQBn3U58JDWPJqQseiyeDSPRYbi2UTJNM_e47ZZ_-VLw/s1600/ron-mutt-diff.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2NStwtWsXt9UBqI6u0V-lMFpXnmER9HZoN1ywx35dVuLKqDSX3m1aiByJ4ESIAYEAWwCR_xB-NSDBZ48mQGBa0mNa-XIQBn3U58JDWPJqQseiyeDSPRYbi2UTJNM_e47ZZ_-VLw/s1600/ron-mutt-diff.png" height="240" width="400" /></a></div>
<br />
I cannot stand reviewing patches with gmail or any GUI e-mail client. <i><b>I use <a href="http://www.mutt.org/">mutt</a></b></i>. In my last post I explained how you can <a href="http://www.do-not-panic.com/2014/07/applying-patches-from-mutt-onto-git.html">apply patches directly from within mutt onto a git tree</a> with a few shortcuts, without leaving the terminal. This small post provides the next step to allow you to <span style="color: #f1c232;">grow a mustache...</span> I mean, get you to enjoy your <a href="http://www.mutt.org/">mutt</a> experience even more when reviewing patches by getting you <span style="color: #6aa84f;">colored</span> <span style="color: #6fa8dc;">diffs</span> to match the same colors provided by good ol' 'git diff'. Edit your .muttrc file and add these lines:<br />
<br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"># Patch syntax highlighting <br />color normal white default <br />color body brightwhite default ^[[:space:]].* <br />color body brightwhite default ^(diff).* <br />color body white default ^[\-\-\-].* <br />color body white default ^[\+\+\+].* <br />color body green default ^[\+].* <br />color body red default ^[\-].* <br />color body brightblue default [@@].* <br />color body brightwhite default ^(\s).* <br />color body brightwhite default ^(Signed-off-by).* <br />color body brightwhite default ^(Cc) </span></span><br />
<br />
<br />mcgrofhttp://www.blogger.com/profile/06081818694231731816noreply@blogger.com1tag:blogger.com,1999:blog-29679292.post-36110849417129400802014-07-17T17:56:00.000-07:002014-07-17T17:56:12.466-07:00Applying patches from mutt onto a git tree easily<br />
<br />
<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhL2tHhWMa7yCCnUKOOi8JuhzRDb3VqAWaXXh3gkRTqxbQNlah3Dgkm70925R1JXvEp6zeMFqOpoWsmQb7Z8J_oZjs28vDXPqwyHkYl-Z9ztRbZ4-pKAdvOLoWhM0LXnq1eBl9dwg/s1600/mutt-screen.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhL2tHhWMa7yCCnUKOOi8JuhzRDb3VqAWaXXh3gkRTqxbQNlah3Dgkm70925R1JXvEp6zeMFqOpoWsmQb7Z8J_oZjs28vDXPqwyHkYl-Z9ztRbZ4-pKAdvOLoWhM0LXnq1eBl9dwg/s1600/mutt-screen.png" height="138" width="400" /></a></div>
<br />
This post is for project maintainers using git who wish to merge patches easily into a project directly from mutt. Projects using git vary in size and there are many different ways to merge patches from contributors. What strategy you use can depend on whether you are expecting to merge hundreds of patches, or just a few. If you happen to be very unfortunate and are forced to use Gerrit, a mechanism has already been chosen for you for review and for how patches get merged / pushed. If you're just using raw git directly you can do whatever you like. For big projects <a href="http://git-scm.com/docs/git-request-pull">git pull requests</a> are commonly used. Small projects can instead live with manual patch application from an inbox. Even large projects can't realistically expect folks to submit every patch with a pull request, and so manual patch application applies to large projects too. How you get your patch out of your inbox and merged will vary depending on what software you use to read your mailbox. Tons of folks are using gmail these days and even there it's not that easy: you'd have to go to the right pane, go to the drop down menu and select "Show original", save that page as a text file, edit it to remove the top junk right before the <b>From:</b>, and finally git am that file.<b><br /></b><br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiAsKU60XSFwXeRG0gmM8iIZQKcmuhwBvIM5wGm2PyDABjcYImTjwUYbrc667_bgAdE3VtaRUxfGViJE3SsozcCGTMU1sAIxfOX0UUaG3Kzco1RlKu3CTiHJ1yyFp8kSmuMfiQqaA/s1600/gmail-show-original.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiAsKU60XSFwXeRG0gmM8iIZQKcmuhwBvIM5wGm2PyDABjcYImTjwUYbrc667_bgAdE3VtaRUxfGViJE3SsozcCGTMU1sAIxfOX0UUaG3Kzco1RlKu3CTiHJ1yyFp8kSmuMfiQqaA/s1600/gmail-show-original.png" height="193" width="400" /></a><br />
<br />
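For reference, once you have that cleaned up file, the manual flow boils down to something like this (the file name here is made up):<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">$ git am --signoff ~/patch-from-gmail.mbox<br /># and if it blows up half way through:<br />$ git am --abort</span><br />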
This doesn't scale well. A plugin could surely help but bleh, the command line is so much better. For that you can use <a href="http://www.mutt.org/">Mutt</a>. The typical approach with mutt is to use the default hooks to save a file onto disk and then go 'git am' it. It'd be much easier if we just had hooks to apply patches directly onto a git tree though. The following are configuration options you can use and a bit of shell that will allow that. <a href="http://flavioleitner.blogspot.com/2011/03/patch-workflow-with-mutt-and-git.html">Ben Hutchings's blog post on git and mutt</a> in 2011 described a way to extract patches into a directory from which you'd just git am them. Those instructions no longer work on newer versions of mutt, so I'll provide updated settings here; I've also extended these hooks to let you apply patches without even having to drop down to another shell, while still giving you the option to inspect them manually if you wish.<br />
<br />
Here's what I have in my .muttrc:<br />
<br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">macro index (t '<tag-prefix><pipe-entry>~/mailtogit/mail-to-mbox^M' "Dumps tagged patches into ~/incoming/*.mbox"<br />macro index (a '<pipe-entry>~/mailtogit/git-apply-incomming^M' "git am ~/incoming/*.mbox"<br />macro index (g '<tag-prefix><pipe-entry>~/mailtogit/git-apply^M' "git am tagged patches"<br />macro index (r '<pipe-entry>rm -f ~/incoming/*.mbox^M' "Nukes all ~/incoming/" <br />macro index (l '<pipe-entry>ls -ltr ~/incoming/^M' "ls -l ~/incoming/" <br />macro index ,t '<pipe-entry>~/mailtogit/mail-to-mbox^M' "Dumps currently viewed patch into ~/incoming/*.mbox"<br />macro index ,g '<pipe-entry>~/mailtogit/git-apply^M' "git am currently viewed patch"<br />macro index ,a '<pipe-entry>~/mailtogit/git-abort^M' "git am --abort" <br />macro index ,r '<pipe-entry>~/mailtogit/git-reset^M' "git-reset --hard origin" </pipe-entry></pipe-entry></pipe-entry></pipe-entry></pipe-entry></pipe-entry></pipe-entry></tag-prefix></pipe-entry></pipe-entry></tag-prefix></span></span><br /><br />
The first hook <span style="color: #bf9000;">(t</span> allows you to dump patches you tag into a ~/incoming/ directory; mutt will show you what those are. The <span style="color: #bf9000;">(a</span> will apply all the patches that you just dumped into that directory. The <span style="color: #bf9000;">(g</span> hook will merge the two steps into one and just dump the tagged patches and apply them immediately. If you have to clear the ~/incoming/ directory just use the <span style="color: #bf9000;">(r</span> hook. If you'd like to review what's in that directory you can use the <span style="color: #bf9000;">(l</span> hook. With <span style="color: #bf9000;">,t</span> you can dump the currently viewed patch into ~/incoming/; this lets you extract a patch without tagging it. The <span style="color: #bf9000;">,g</span> hook will also skip having to tag a patch and just apply it. If you want to abort a 'git am' operation you can use <span style="color: #bf9000;">,a</span>. Finally, to reset your tree to origin, just use the <span style="color: #bf9000;">,r</span> hook.<br />
<br />
This all depends on 5 small scripts; the ones that change directory obviously tie these scripts to one single project, so the question arises as to <span style="color: #38761d;">how to generalize this</span> so that mutt is aware of which project a patch was sent for and can apply it to the right tree, without having to stuff mutt with tons of project specific hooks. Two approaches come to mind. One is to have the shell script read the <span style="color: #38761d;">List-ID</span> header, for example <span style="color: #38761d;">List-ID</span>: &lt;backports.vger.kernel.org&gt;, and keep a mapping of those to git trees. The other is to trust the directory the e-mail went into under mutt, which assumes you already had filters for each <span style="color: #38761d;">List-ID</span>. The issue with both approaches is that at times a patch may go to multiple lists, though in Linux's case, where this does apply, it should still map to at least one git tree you care about, unless I guess you are maintaining multiple subsystems. Another possibility is to have git format-patch add yet another tag into the e-mails it spits out for submission, perhaps a Git-ID: naming the tree? This has its own issues for many reasons, so for now this is what I have and use; see the sketch after the scripts below for the List-ID idea. Let me know if you come up with something more generic.<br />
<br />
<span style="color: #38761d;"><span style="font-family: "Courier New",Courier,monospace;">mailtogit/mail-to-mbox </span></span><br />
<span style="font-family: "Courier New",Courier,monospace;"><br /><span style="color: white;">formail -cds ~/mailtogit/procmail -<br />ls -l ~/incoming/</span></span><br />
<br />
<span style="color: #38761d;"><span style="font-family: "Courier New",Courier,monospace;">mailtogit/git-apply-incomming</span></span><br />
<span style="font-family: "Courier New",Courier,monospace;"><br /><span style="color: white;">cd ~/backports<br />git am ~/incoming/*.mbox</span><br /></span><br />
<span style="font-family: "Courier New",Courier,monospace;"><span style="color: #38761d;">mailtogit/git-apply</span><br /> </span><br />
<span style="color: white;"><span style="font-family: "Courier New",Courier,monospace;">rm -f ~/incoming/*<br />~/mailtogit/mail-to-mbox<br />cd ~/backports<br />git am -s ~/incoming/*.mbox</span></span><br />
<span style="font-family: "Courier New",Courier,monospace;"><br /></span>
<span style="font-family: "Courier New",Courier,monospace;"><span style="color: #38761d;">mailtogit/git-abort </span><br /> </span><br />
<span style="color: white;"><span style="font-family: "Courier New",Courier,monospace;">cd ~/backports/<br />git am --abort<br />rm -f ~/incoming/*</span></span><br />
<span style="font-family: "Courier New",Courier,monospace;"><br /></span>
<span style="color: #38761d;"><span style="font-family: "Courier New",Courier,monospace;">mailtogit/git-reset </span></span><br />
<span style="font-family: "Courier New",Courier,monospace;"><br /><span style="color: white;">cd ~/backports/<br />git reset --hard origin<br />rm -f ~/incoming/*</span></span><br /><br />
<br />
<br />mcgrofhttp://www.blogger.com/profile/06081818694231731816noreply@blogger.com1tag:blogger.com,1999:blog-29679292.post-75608500965957155412014-05-27T20:13:00.002-07:002014-08-05T12:05:15.610-07:00Building and booting vanilla Xen on vanilla Linux with systemd<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-eT_UvtqfM6A/U4VPkbUSTcI/AAAAAAABnJc/WN600WEJqUI/s1600/Xen-Panda-Summit-500px.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-eT_UvtqfM6A/U4VPkbUSTcI/AAAAAAABnJc/WN600WEJqUI/s1600/Xen-Panda-Summit-500px.png" height="307" width="320" /></a></div>
<br />
<br />
If you want to do <a href="http://www.xenproject.org/">Xen</a> development you <b>should be</b> working with upstream sources, and you <b>should be</b> sending your patches upstream ASAP, that is, before they are even in production. There simply should be no ifs or doubts about this; doing it any other way is simply detrimental in the long run. I'm new to virtualization, but from the architectural look of it I consider <a href="http://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine">kvm</a> a good reaction to the evolution of virtualization, with a focus on a clean new architecture that pairs up best with only the latest hardware enhancements. The decision to not support in software bells and whistles that were instead designed for hardware support eliminates tons of code on the software side, but it obviously relies on the assumption that folks will upgrade hardware and that the hardware was designed properly. <a href="http://www.xenproject.org/">Xen</a> however is full of <i>rich history</i>, <i>experience</i>, and <i>flexibility</i>, and as such it's important to realize that there should be no easy decision to claim which is the better solution right now.<br />
<br />
One thing I'm sure of: both solutions at this point have a rich set of expertise and design goals to be learned from. The one thing I see kvm doing right is pushing <b><i>Upstream First (TM)</i></b> as a motto. Xen should <i>learn</i> from that strategy, as there are markets and innovative groups who appreciate it tremendously. With the rapid pace of evolution of the Linux kernel <b>there is simply no other way</b>, and because of this Xen development should change to a <b>must be working upstream</b> model and join the <b><i>Upstream First (TM)</i></b> bandwagon. In this post I will dive into the recipes required to get the latest Xen and vanilla Linux sources and get you started on the <i><b>Upstream First (TM)</b></i> bandwagon with Xen. I provide instructions for getting <u>both</u> <b>Xen</b> and the upstream <b>Linux kernel</b> configured properly. I will ignore anything not upstream on the Linux kernel, as what we need to do with that delta is just get it upstream. Additionally, since <a href="http://lwn.net/Articles/585363/">even Debian has cast votes on supporting systemd</a> as a Linux init replacement, I'll also provide instructions on how to get systemd support on xen with active socket support, as it seems that's the way of the future for all Linux distributions. Both Fedora 20 and OpenSUSE 13.1 have already jumped on systemd so you'll want proper systemd support for these. As it stands right now Xen does not have service unit files as part of its upstream sources; patches are in the works though, and this post also illustrates some corner cases found while implementing support, some <a href="https://github.com/mcgrof/funk-systemd/tree/master/src/m4">general systemd autotools library helpers</a> defined to make it easier for others to integrate support for systemd, and an <a href="https://github.com/mcgrof/funk-systemd">example code base which makes elaborate use of these helpers</a>.<br />
<br />
Please note that compiling xen with <b>systemd</b> support enables the binaries to be used <u>on systems using either legacy init or systemd</u> using the v5 series of integration patches documented here. The systemd support patches are not yet merged upstream, but to help provide wider coverage you should enable the support as per the instructions below and report any issues you find to me. Since I wish for as many folks as possible to jump on the upstream bandwagon I'll cover instructions only for getting the latest xen to run on the latest stable vanilla kernel over a slew of Linux distributions; this includes the Linux kernel as well as xen, and resolving all your dependencies. I'll recommend building and <a href="http://www.do-not-panic.com/2014/04/summary-of-gains-of-xen-oxenstored.html">embracing oxenstored for reasons I've stated before</a>; after all, if you run into issues with the latest systemd series of patches you can easily revert back to cxenstored with a simple flip in the configuration file, either /etc/sysconfig/xencommons (rpm based distributions) or /etc/default/xencommons (Debian based distributions). (Note: this last part still needs to be worked on; right now this requires a bit more work for systemd.)<br />
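For reference, the flip mentioned above amounts to a one line change in that file. Take the variable name and path here as illustrative only -- check the xencommons file your build actually installed for the exact spelling:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;"># /etc/sysconfig/xencommons (rpm) or /etc/default/xencommons (deb)<br />XENSTORED=/usr/local/sbin/oxenstored</span><br />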
<br />
I have build tested the below instructions on <a href="http://en.opensuse.org/Portal:Tumbleweed">OpenSUSE Tumbleweed</a>, Debian testing, and Fedora 20. I have only runtime tested this on <a href="http://en.opensuse.org/Portal:Tumbleweed">OpenSUSE Tumbleweed</a> and Debian testing. Reports of any runtime issues on Fedora 20 and Ubuntu are appreciated. Instructions for other Linux distributions are welcomed so I can extend the documentation here while the systemd support patches get baked upstream; after that I will move all documentation to the xen wiki.<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-XcvGkFRYOW0/U4VRBplGXDI/AAAAAAABnJs/_X_ubD4sRbY/s1600/Grub_logo.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-XcvGkFRYOW0/U4VRBplGXDI/AAAAAAABnJs/_X_ubD4sRbY/s1600/Grub_logo.png" /></a></div>
<br />
<br />
<h2>
Getting an updated /sbin/installkernel </h2>
Linux distributions shipping with grub2 will need to ensure that their /sbin/installkernel script, which has to be provided by each Linux distribution, copies the kernel configuration at custom kernel install time. The requirement for the config file comes from <a href="http://git.savannah.gnu.org/cgit/grub.git/tree/util/grub.d/20_linux_xen.in">upstream grub2 /etc/grub.d/20_linux_xen</a>, which will add xen as an instance to your grub.cfg <b>if and only if</b> it finds either of these in your config file:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">CONFIG_XEN_DOM0=y <br />CONFIG_XEN_PRIVILEGED_GUEST=y </span> <br />
<br />
Without this a user compiling and installing their own kernel with proper support for xen <b>and</b>
with the xen hypervisor present will not get their respective grub2
update script to pick up the xen hypervisor. Debian testing has proper
support for this, OpenSUSE required <a href="https://github.com/openSUSE/mkinitrd/commit/56f8a20e1bf3efa9c822a724cb33f5683818b7ec">this change upstream on mkinitrd</a>, so OpenSUSE folks will want to get the latest /sbin/installkernel hosted on the <a href="https://github.com/openSUSE/mkinitrd">OpenSUSE mkinitrd repository on github</a>.<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;"># If on OpenSUSE update your /sbin/installkernel</span><br />
<span style="font-family: "Courier New",Courier,monospace;">git clone https://github.com/openSUSE/mkinitrd.git</span><br />
<span style="font-family: "Courier New",Courier,monospace;">cd </span><span style="font-family: "Courier New",Courier,monospace;">mkinitrd</span><br />
<span style="font-family: "Courier New",Courier,monospace;">sudo cp sbin/installkernel /sbin/installkernel<span style="font-size: small;"><span style="font-family: inherit;"> </span></span></span><br />
<span style="font-family: "Courier New",Courier,monospace;"><span style="font-size: small;"><span style="font-family: inherit;"><br /></span></span></span>
Fedora might need a similar update. I welcome feedback on confirming this.<br />
<br />
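A quick sanity check, useful on any of these distributions, to confirm the config grub2's 20_linux_xen script looks for will actually be found once your kernel and its config are installed:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">$ grep -E 'CONFIG_XEN_(DOM0|PRIVILEGED_GUEST)=y' /boot/config-*</span><br />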
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-oiQLSXkVNnc/U4VRwa4r31I/AAAAAAABnJ8/ktaCeh-DYcg/s1600/open_suse.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-oiQLSXkVNnc/U4VRwa4r31I/AAAAAAABnJ8/ktaCeh-DYcg/s1600/open_suse.jpg" /></a></div>
<h2>
<span style="font-size: large;">Xen systemd build dependencies on OpenSUSE</span></h2>
<span style="font-family: "Courier New",Courier,monospace;"># If you're now on the latest OpenSUSE you'll note its now a</span><br />
<span style="font-family: "Courier New",Courier,monospace;"># a <a href="http://en.opensuse.org/openSUSE:Factory_installation">rolling distribution base for (and also called Factory)</a> </span><br />
<span style="font-family: "Courier New",Courier,monospace;"># The default instructions do not actually encourage you to</span><br />
<span style="font-family: "Courier New",Courier,monospace;"># install the source repositories, and even if you did</span><br />
<span style="font-family: "Courier New",Courier,monospace;"># install them the instructions disable them by default, so</span><br />
<span style="font-family: "Courier New",Courier,monospace;"># be sure to install them and enable them otherwise</span><br />
<span style="font-family: "Courier New",Courier,monospace;"># the command zypper source-install -d won't work.</span><br />
<span style="font-family: "Courier New",Courier,monospace;"># To enable the required repository if you already had it</span><br />
<span style="font-family: "Courier New",Courier,monospace;"># installed: </span><br />
<span style="font-family: "Courier New",Courier,monospace;">sudo zypper mr -e repo-src-oss </span><br />
<br />
<span style="font-family: "Courier New",Courier,monospace;"># Get the build dependencies for Xen</span><br />
<span style="font-family: "Courier New",Courier,monospace;">sudo zypper source-install -d xen</span><br />
<br />
<span style="font-family: "Courier New",Courier,monospace;"># Things not picked up by the build dependencies </span><br />
<span style="font-family: "Courier New",Courier,monospace;">sudo
zypper install systemd-devel gettext-tools\</span><br />
<span style="font-family: "Courier New",Courier,monospace;">ocaml ocaml-compiler-libs
ocaml-runtime \</span><br />
<span style="font-family: "Courier New",Courier,monospace;">ocaml-ocamldoc ocaml-findlib glibc-devel-32bit make patch</span><br />
<br />
<span style="font-family: "Courier New",Courier,monospace;"># Get build dependencies for Linux</span><br />
<span style="font-family: "Courier New",Courier,monospace;">sudo zypper source-install -d kernel-desktop</span><br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<span style="font-family: "Courier New",Courier,monospace;"><a href="http://3.bp.blogspot.com/-tGeNBhK4Vk4/U4VSBTZBFSI/AAAAAAABnKM/GhpsO5Vp7j8/s1600/openlogo-100.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-tGeNBhK4Vk4/U4VSBTZBFSI/AAAAAAABnKM/GhpsO5Vp7j8/s1600/openlogo-100.png" /></a></span></div>
<br />
<h2>
<span style="font-size: large;">Xen systemd build dependencies on Debian testing and maybe Ubuntu</span></h2>
<span style="font-family: "Courier New",Courier,monospace;"><span style="font-size: small;"><span style="font-family: Arial,Helvetica,sans-serif;">Note that these instructions are not to enable systemd as the init process on Debian, although there are some instructions <a href="https://wiki.debian.org/systemd">here</a> to help you with that if you wish to venture into that.</span> </span></span><br />
<br />
<span style="font-family: "Courier New",Courier,monospace;"><span style="font-size: small;">sudo apt-get build-dep xen linux</span></span><br />
<span style="font-family: "Courier New",Courier,monospace;"><span style="font-size: small;">sudo apt-get install git libsystemd-daemon-dev \</span></span><br />
<span style="font-family: "Courier New",Courier,monospace;"><span style="font-size: small;">libpixman-1-dev texinfo</span></span><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSklYL0TzUsycIh1ky0O0Km07gospHCVk02COagEj7sXBgm8auCgtKAmqYS0HWc_ycIwB_Y73CojOZXF8M0aQOPj11oEEzRdk-KeoPpc6Q1kG2y2WKFnkpeK5RacpbWUPNGMEKFg/s1600/Logo_fedoralogo.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSklYL0TzUsycIh1ky0O0Km07gospHCVk02COagEj7sXBgm8auCgtKAmqYS0HWc_ycIwB_Y73CojOZXF8M0aQOPj11oEEzRdk-KeoPpc6Q1kG2y2WKFnkpeK5RacpbWUPNGMEKFg/s1600/Logo_fedoralogo.png" height="97" width="320" /></a></div>
<br />
<h2>
<span style="font-size: large;">Xen systemd build dependencies on Fedora 20 </span></h2>
<span style="font-family: Arial,Helvetica,sans-serif;"><span style="font-size: small;">Fedora
may need an update to /sbin/installkernel as OpenSUSE did for grub2
support, see the notes above for more details on that. Verification on
this is appreciated.</span></span><br />
<br />
<span style="font-family: "Courier New",Courier,monospace;"><span style="font-size: small;"># Get build dependencies for xen </span></span><br />
<span style="font-family: "Courier New",Courier,monospace;"><span style="font-size: small;">sudo yum-builddep xen</span></span><br />
<span style="font-family: "Courier New",Courier,monospace;"><br /></span>
<span style="font-family: "Courier New",Courier,monospace;"># Things not picked up by the build dependencies</span><br />
<span style="font-family: "Courier New",Courier,monospace;"><span style="font-size: small;">sudo yum install glibc-devel.x86_64 systemd-devel.x86_64</span></span><br />
<br />
<span style="font-family: "Courier New",Courier,monospace;"><span style="font-size: small;"># Get build dependencies for Linux</span></span><br />
<span style="font-size: small;"><span style="font-family: "Courier New",Courier,monospace;">sudo yum-builddep kernel </span></span><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://upload.wikimedia.org/wikipedia/en/d/de/Ccpenguin,_the_ancestor_of_Tux.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://upload.wikimedia.org/wikipedia/en/d/de/Ccpenguin,_the_ancestor_of_Tux.jpg" height="320" width="229" /></a></div>
<br />
<h2>
<span style="font-size: small;"><span style="font-family: "Courier New",Courier,monospace;"><span style="font-family: inherit;"></span><span style="font-family: Verdana,sans-serif;"><span style="font-size: large;">Getting the code</span></span> </span></span></h2>
<br />
<br />
<span style="font-family: Arial,Helvetica,sans-serif;">Next go get Linux and Xen sources.</span><br />
<span style="font-family: "Courier New",Courier,monospace;"><br /></span>
<span style="font-family: "Courier New",Courier,monospace;">git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git</span><br />
<span style="font-family: "Courier New",Courier,monospace;">git clone git://xenbits.xen.org/xen.git</span><br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<span style="font-family: "Courier New",Courier,monospace;"><a href="http://4.bp.blogspot.com/-dTjB4kf3k9E/U4VS7GSpWpI/AAAAAAABnKs/BXxWLmSkXsY/s1600/Xen-Panda-Motorcycle-Red-500px.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-dTjB4kf3k9E/U4VS7GSpWpI/AAAAAAABnKs/BXxWLmSkXsY/s1600/Xen-Panda-Motorcycle-Red-500px.png" height="280" width="320" /></a></span></div>
<br />
<h2>
Configuring vanilla Linux with xen support</h2>
<span style="font-family: "Courier New",Courier,monospace;">cd linux</span><br />
<span style="font-family: "Courier New",Courier,monospace;">wget <a href="http://drvbp1.linux-foundation.org/%7Emcgrof/patches/2014/05/15/linux-xen-defconfig.patch">http://drvbp1.linux-foundation.org/~mcgrof/patches/2014/05/15/linux-xen-defconfig.patch</a></span><br />
<span style="font-family: "Courier New",Courier,monospace;">patch -p1 < linux-xen-defconfig.patch</span><br />
<span style="font-family: "Courier New",Courier,monospace;">cp /boot/config-your-distro-config .config</span><br />
<span style="font-family: "Courier New",Courier,monospace;">make xendom0config</span><br />
<span style="font-family: "Courier New",Courier,monospace;">make -j $(getconf _NPROCESSORS_ONLN)</span><br />
<span style="font-family: "Courier New",Courier,monospace;">sudo make install</span><br />
<h2>
Configuring xen with oxenstored and systemd support</h2>
<span style="font-family: "Courier New",Courier,monospace; font-size: x-small;">cd xen</span><br />
<span style="font-family: "Courier New",Courier,monospace; font-size: x-small;">wget <a href="http://drvbp1.linux-foundation.org/%7Emcgrof/patches/2014/05/27/all-v5-series-xen-systemd.patch">http://drvbp1.linux-foundation.org/~mcgrof/patches/2014/05/27/all-v5-series-xen-systemd.patch</a></span><br />
<span style="font-family: "Courier New",Courier,monospace; font-size: x-small;">git reset --hard 86216963fd1d89883bb8120535704fdc79fdad50</span><br />
<span style="font-family: "Courier New",Courier,monospace; font-size: x-small;">git am </span><span style="font-family: "Courier New",Courier,monospace; font-size: x-small;"><a href="http://drvbp1.linux-foundation.org/%7Emcgrof/patches/2014/05/27/all-v5-series-xen-systemd.patch">all-v5-series-xen-systemd.patch</a></span><br />
<span style="font-family: "Courier New",Courier,monospace; font-size: x-small;">./configure --with-xenstored=oxenstored --enable-systemd</span><br />
<span style="font-family: "Courier New",Courier,monospace; font-size: x-small;">make dist -j $(getconf _NPROCESSORS_ONLN)</span><br />
<span style="font-family: "Courier New",Courier,monospace; font-size: x-small;">sudo make install</span><br />
<span style="font-family: "Courier New",Courier,monospace; font-size: x-small;">sudo ldconfig</span><br />
<br />
<span style="font-family: "Courier New",Courier,monospace;"># If on systemd, that is, if you have /run/systemd/system/ </span><br />
<span style="font-family: "Courier New",Courier,monospace;">sudo systemctl daemon-reload</span><br />
<br />
The next step is to enable the systemd units you want. If you want to test the active socket stuff, just enable xenstored.socket; after reboot you can use netcat as root to tickle the socket as described below. If you just want to have the xenstored service already running, enable xenstored.service, which will also enable xenstored.socket as it's a dependency.<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">sudo systemctl enable xenstored.socket</span><br />
<span style="font-family: "Courier New",Courier,monospace;">sudo systemctl enable xenstored.service</span><br />
<br />
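Before rebooting you can also sanity check that the socket unit got wired up:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">sudo systemctl status xenstored.socket</span><br />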
The last step is to ensure the grub config is updated to pick up the xen hypervisor. This varies between Linux distributions; below we cover the distributions that I have tested booting on.<br />
<h2>
Updating grub for Xen on OpenSUSE</h2>
<span style="font-family: "Courier New",Courier,monospace;">sudo update-bootloader --refresh</span><br />
<h2>
Updating grub for Xen on Debian and maybe Ubuntu</h2>
<span style="font-family: "Courier New",Courier,monospace;">sudo update-grub</span><br />
<h2>
Reboot and test </h2>
That's all; reboot and make sure you pick the right grub entry. Typically grub2 will list regular kernel entries and hypervisor entries separately, with the option to go into advanced settings for each one. Entering the advanced settings for the hypervisor will enable you to pick the exact kernel you want to boot. If you have hardware with virtualization capabilities you'll want to enable them; this is done through the BIOS / UEFI menu. Below are some pictures of enabling the features on a Thinkpad T440p, and then the flow through grub2.<br />
<br />
<br />
Get into the virtualization menu on the system BIOS / UEFI menu.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-61bSRIK-vGY/U312jyBhb2I/AAAAAAABmsY/eLDDFprpBjE/s1600/IMG_20140515_014449.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-61bSRIK-vGY/U312jyBhb2I/AAAAAAABmsY/eLDDFprpBjE/s1600/IMG_20140515_014449.jpg" height="300" width="400" /></a></div>
<br />
On Intel hardware this will be labeled as Intel Virtualization Technology and Intel VT-d Feature. On AMD hardware look for similarly flashy names, like AMD-V (SVM) and the IOMMU.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-8CANQIHnkz4/U312iFH4thI/AAAAAAABmsQ/hNHsSpEXOG8/s1600/IMG_20140515_014457.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-8CANQIHnkz4/U312iFH4thI/AAAAAAABmsQ/hNHsSpEXOG8/s1600/IMG_20140515_014457.jpg" height="300" width="400" /> </a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Boot into grub and you should now see an option for your distribution with the Xen hypervisor. Pick that if you want to go with the defaults; if instead you want to browse each available hypervisor, pick the advanced options.</div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-5I4aSPFryh8/U312lRf9qQI/AAAAAAABmsg/W5aoeZ18oCs/s1600/IMG_20140515_013723.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-5I4aSPFryh8/U312lRf9qQI/AAAAAAABmsg/W5aoeZ18oCs/s1600/IMG_20140515_013723.jpg" height="300" width="400" /></a></div>
<br />
<br />
If you picked the default hypervisor option you should be booting into the Xen hypervisor, which in turn will boot your kernel / distribution. If you picked the advanced option you'll see the options for the hypervisor as below. In my case I only have the bleeding edge unstable version of the Xen hypervisor from git.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-WtBvwvS8HSE/U312qKIsbUI/AAAAAAABmsw/_JBceqXLEpY/s1600/IMG_20140515_013742.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-WtBvwvS8HSE/U312qKIsbUI/AAAAAAABmsw/_JBceqXLEpY/s1600/IMG_20140515_013742.jpg" height="300" width="400" /></a></div>
<br />
Next
it will let you pick the kernel you want to boot your hypervisor with.
All of the kernels with support for Xen will be displayed.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-ptlymda2LNk/U312oVnSgmI/AAAAAAABmso/-0sGJ-tq17s/s1600/IMG_20140515_013755.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-ptlymda2LNk/U312oVnSgmI/AAAAAAABmso/-0sGJ-tq17s/s1600/IMG_20140515_013755.jpg" height="300" width="400" /></a></div>
<br />
<br />
After this you should be booting into the Xen hypervisor and this in turn will boot Linux as dom0.<br />
<h2>
After bootup</h2>
<h2>
Starting Xen with old init</h2>
First verify that you booted into the Xen hypervisor, as follows:<br />
<br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">mcgrof@garbanzo ~ $ cat /sys/hypervisor/type</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">xen</span></span><br />
<br />
You're all set; the next step is to start Xen. On Linux distributions stuck on old init, like Debian right now, you just have to spawn the old init script. This is done as follows:<br />
<br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">mcgrof@garbanzo ~ $ sudo /etc/init.d/xencommons start</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">Starting /usr/local/sbin/oxenstored...</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">Setting domain 0 name and domid...</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">Starting xenconsoled...</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">Starting QEMU as disk backend for dom0</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"><br /></span></span>
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">mcgrof@garbanzo ~ $ echo $?</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">0</span></span><br />
<br />
You are ready to start creating guests!<br />
<br />
<h2>
Starting Xen with systemd</h2>
The first thing is to ensure your dom0 is now booted on the Xen hypervisor. If you have systemd you can check this easily with:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">mcgrof@ergon ~ $ sudo systemd-detect-virt </span><br />
<span style="font-family: "Courier New",Courier,monospace;">xen</span><br />
<br />
Under the hood this is the same as the following:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">mcgrof@garbanzo ~ $ cat /sys/hypervisor/type</span><br />
<span style="font-family: "Courier New",Courier,monospace;">xen </span><br />
<br />
If you only enabled xenstored.socket you can verify the sockets by:<br />
<br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">mcgrof@ergon ~ $ sudo netstat -lpn | grep xen</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">unix 2 [ ACC ] STREAM LISTENING 13976 1/init /var/run/xenstored/socket</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">unix 2 [ ACC ] STREAM LISTENING 13979 1/init /var/run/xenstored/socket_ro</span></span><br />
<br />
You can also use systemd: <br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"><br /></span></span>
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">mcgrof@ergon ~ $ sudo systemctl list-sockets| grep xen</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">/var/run/xenstored/socket xenstored.socket xenstored.service</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">/var/run/xenstored/socket_ro xenstored.socket xenstored.service</span></span><br />
<br />
You can also verify the socket unit:<br />
<br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">mcgrof@ergon ~ $ sudo systemctl status xenstored.socket</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">xenstored.socket - Xen xenstored / oxenstored Activation Socket</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"> Loaded: loaded (/usr/local/lib/systemd/system/xenstored.socket; enabled)</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"> Active: active (listening) since Thu 2014-05-15 01:12:53 PDT; 16min ago</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"> Listen: /var/run/xenstored/socket (Stream)</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"> /var/run/xenstored/socket_ro (Stream)</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"><br /></span></span>
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">May 15 01:12:53 ergon systemd[1]: Starting Xen xenstored / oxenstored Activation Socket.</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">May 15 01:12:53 ergon systemd[1]: Listening on Xen xenstored / oxenstored Activation Socket.</span></span><br />
<br />
Next, you can check to see if xenstored.service is running; it should not be running if you only enabled xenstored.socket:<br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"><br /></span></span>
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">mcgrof@ergon ~ $ sudo systemctl status xenstored.service</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">xenstored.service - Xenstored - daemon managing xenstore file system</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"> Loaded: loaded (/usr/local/lib/systemd/system/xenstored.service; disabled)</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"> Active: inactive (dead)</span></span><br />
<br />
Next, to see the active socket magic trigger, you can just use netcat to tickle any of the sockets. Since the permissions only grant access to the root user you'll need root to tickle the socket.<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">mcgrof@ergon ~ $ sudo nc -w 1 -U /var/run/xenstored/socket_ro</span><br />
<span style="font-family: "Courier New",Courier,monospace;">mcgrof@ergon ~ $ echo $?</span><br />
<span style="font-family: "Courier New",Courier,monospace;">0</span><br />
<br />
Now verify that xenstored.service got activated and is running:<br />
<br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">mcgrof@ergon ~ $ sudo systemctl status xenstored.service</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">xenstored.service - Xenstored - daemon managing xenstore file system</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"> Loaded: loaded (/usr/local/lib/systemd/system/xenstored.service; disabled)</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"> Active: active (running) since Tue 2014-05-20 04:33:09 PDT; 1 day 16h ago</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"> Main PID: 1621 (oxenstored)</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"> CGroup: /system.slice/xenstored.service</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"> └─1621 /usr/local/sbin/oxenstored --no-fork</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"><br /></span></span>
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">May 21 21:24:24 ergon oxenstored[1621]: xenstored is ready</span></span><br />
<h2>
Why you want active sockets</h2>
<a href="http://www.freedesktop.org/wiki/Software/systemd/">Systemd</a> has support for <i>"active sockets</i>" or <i>"socket based activation</i>", but this concept is not new, socket based activation was pioneered by Apple's <a href="http://launchd/">Launchd</a>,
and that software was released under the Apache 2.0 license, that
project got its first release in 2005, while systemd's initial release
dates 2010. Go and watch <a href="https://www.youtube.com/watch?v=cD_s6Fjdri8">Dave Zarzycki's talk at Google about Launchd</a>, there's tons of talks about systemd and, here's an old <a href="https://www.youtube.com/watch?v=TyMLi8QF6sw">introduction talk about systemd it by Lennart Poettering</a>,
and Lennart does give Apple proper kudos here. Systemd is simply über
optimized for Linux, it takes advantage of tons of special Linux kernel
enhancements. Socket based activation is ideal for local service,
AF_UNIX sockets, although support does exist for inet sockets as well.
There are two reasons why you want active sockets:<br />
<ol>
<li>On demand auto-spawning</li>
<li>Help with bootup parallelization</li>
</ol>
The on demand auto-spawning can only be taken advantage of by Xen if its tools are converted to try to open the Unix socket when they run, but they currently don't do this, and some communication uses the kernel ring interface, not the Unix domain sockets. If you use the stubdoms you also never end up using the Unix domain sockets. The gains from parallelization however are always welcome: you essentially let systemd figure out how to bring things up by associating dependencies rather than piling things up in a specific strict numbered order, all of which is controlled by the service unit files and the requirements specified in them. Udev lends a hand here as well -- it is now merged as part of systemd, but I'll have to cover udev in another post. If one had an ecosystem that one was sure did not require the service to be spawned all the time, and you didn't need the kernel ring interface immediately up, you could either enable only the xenstored.socket or remove this line from the [Install] section of the xenstored.service:<br />
<br />
WantedBy=multi-user.target<br />
<br />
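Since a picture of the actual unit helps, below is a minimal sketch of what a socket unit along these lines looks like. This is a reconstruction for illustration based on the systemctl status output above, not the exact file shipped by the patches, and the SocketMode is my assumption given the root-only permissions noted below:<br />
<br />
<pre><span style="font-family: "Courier New",Courier,monospace; font-size: x-small;">[Unit]
Description=Xen xenstored / oxenstored Activation Socket

[Socket]
ListenStream=/var/run/xenstored/socket
ListenStream=/var/run/xenstored/socket_ro
SocketMode=0600

[Install]
WantedBy=sockets.target</span></pre>
<br />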
A few things are worth noting for daemons and systemd that I do not see covered <b>clearly</b> in documentation: the exact expectations on the different service types. Systemd supports different types of daemons, and for those that don't fork you should declare in your service unit file a type of:<br />
<br />
Type=simple<br />
<br />
For daemons that do call fork() you should use the following:<br />
<br />
Type=forking<br />
<br />
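For a classic forking daemon the service unit also typically carries a PIDFile= so systemd can figure out which process is the main one; a minimal sketch with made up names:<br />
<br />
<pre><span style="font-family: "Courier New",Courier,monospace; font-size: x-small;">[Service]
Type=forking
PIDFile=/run/foodaemon.pid
ExecStart=/usr/local/sbin/foodaemon</span></pre>
<br />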
In the legacy init world this covers most of the daemons out there. There's a bit of a caveat here though: systemd expects you to behave in a certain way if you use Type=forking, <b>your first parent process should be the one to call sd_notify()</b>, you should not let child processes make the sd_notify() call. What daemons do varies, and the assumption in systemd that daemons spawn sockets on the parent rather than on children means daemons will need a bit of a change in order to work with systemd properly, as there is no way to tell systemd a child is going to be the main process, even if you try sd_notifyf() with the process ID of the child. Arguably there's a good reason for this though: you should consider using <b>Type=notify</b>, and when you use this type of service you <b>don't fork</b> as part of your daemonizing effort, instead you just tell systemd when your service is ready with sd_notify(). There are some curious architectural design principles worth elaborating on that come with this, as they highlight a mistake typically in place on some daemons that do fork. When daemonizing and forking, killing the parent immediately is the easy and fastest way from a programmer's perspective <b>but</b> should typically not be done, given that regular legacy init, which spawns daemons in order, will let processes make use of the daemon under the impression that the daemon is ready, <b>leaving a small amount of time for a race condition to trigger</b>. Typically this is addressed with nasty undocumented workarounds, for example retrying connections to the Unix domain sockets of daemons that are expected to create them after initialization. Mind you, <i>the race condition is small</i> but still very possible, especially if we want to boot up fast. This is one of the races that systemd services using sd_notify() avoid by design. This is pretty cool. <br />
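To make the contrast concrete, below is a minimal sketch of a Type=notify style daemon. The socket path and names are made up and no real work is done; the only point is the ordering, sd_notify() fires strictly after initialization, which is exactly what closes the race described above. Build with -lsystemd, or -lsystemd-daemon on older systems:<br />
<br />
<pre><span style="font-family: "Courier New",Courier,monospace; font-size: x-small;">/* Minimal sketch of a Type=notify daemon, not real xenstored code;
 * the socket path is made up for illustration. */
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;string.h&gt;
#include &lt;unistd.h&gt;
#include &lt;sys/socket.h&gt;
#include &lt;sys/un.h&gt;
#include &lt;systemd/sd-daemon.h&gt;

int main(void)
{
	struct sockaddr_un addr;
	int fd;

	/* Do all initialization before telling systemd we are ready;
	 * this ordering is what kills the legacy init race. */
	fd = socket(AF_UNIX, SOCK_STREAM, 0);
	if (fd &lt; 0) {
		perror("socket");
		return EXIT_FAILURE;
	}

	memset(&amp;addr, 0, sizeof(addr));
	addr.sun_family = AF_UNIX;
	strncpy(addr.sun_path, "/run/example/socket", sizeof(addr.sun_path) - 1);
	unlink(addr.sun_path);
	if (bind(fd, (struct sockaddr *)&amp;addr, sizeof(addr)) &lt; 0 ||
	    listen(fd, 5) &lt; 0) {
		perror("bind/listen");
		return EXIT_FAILURE;
	}

	/* Only now is it safe for clients to connect. */
	sd_notify(0, "READY=1");

	for (;;)
		pause(); /* a real daemon would serve requests here */
}</span></pre>
<br />
The matching service unit then just carries Type=notify and an ExecStart pointing at the binary -- no forking, no PID file, no readiness guessing.<br />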
<h2>
funk-systemd - example complex systemd daemon </h2>
Apart from corner cases there are also the complexities introduced by the different types of build systems / target systems, especially for projects which really want to support multiple Operating Systems and init systems, such as Xen. To address different build environments and targets a lot of projects use autotools; Xen follows this practice, so integrating support for systemd on Xen required proper autotools support. Autotools support with systemd can get complicated fast -- you see, systemd does not allow variable placements on the ExecStart setting for the binary you wish to run, which means that if your project uses configure to dynamically place the path of the binary you will also need proper replacement of the paths at configure time. With autotools this is accomplished with the AC_CONFIG_FILES() helper, but in order to make use of some paths with AC_CONFIG_FILES() you'll want to eval them and call AC_SUBST() on them. This is not only useful for ExecStart, also consider the different placements of the socket files. If using ${prefix} for any of the paths you will need to work with the not-so-well documented $ac_default_prefix. You also have to consider the different types of build environments and the different types of target systems that a project wishes to support with a single produced binary daemon. The build environments vary: a project may wish to force systemd to be present, some may wish to only use systemd if the development libraries are present, and others may wish to require you to specify that you want systemd explicitly. Target systems vary as well; in the worst case scenario a project may wish to support legacy init both with and without systemd libraries present, and then the case where systemd is the init process. In this situation, if it's desirable to support a single binary for all types of init systems, the dynamic link loader (using dlopen(), dlsym()) can be used, or an in-place replacement for sd_booted() can be implemented instead of relying on and calling the systemd helper sd_booted(). A project such as Xen that supports two daemons for the same type of service also needs to consider which route to take for supporting and maintaining service unit files for the different possible daemons. There are different strategies for this. A lot of this is not well documented, and good examples for projects as complex as Xen's build system are not readily available, let alone ones that cover all the cases I've described. Because of all this, and since I ended up doing the work for the systemd Xen integration, I made sure to try to generalize a solution and address all types of environments as described above. I have also stuffed in a sample daemon which also documents the legacy init corner case that sd_notify() explicitly addresses. You can find the sample code here; the autoconf helpers defined and documented there are also being submitted as part of the Xen systemd integration patches:<br />
<br />
<a href="https://github.com/mcgrof/funk-systemd">https://github.com/mcgrof/funk-systemd</a><br />
<br />
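To make the eval and AC_SUBST() dance described above concrete, here is a rough configure.ac fragment showing the idea -- the FUNK_SBINDIR variable and funkd.service file are made up for illustration, and the generalized version of this lives in the paths.m4 helper listed below:<br />
<br />
<pre><span style="font-family: "Courier New",Courier,monospace; font-size: x-small;"># Expand ${prefix} relative paths at configure time so the generated
# unit file gets an absolute ExecStart. Names here are made up.
test "x$prefix" = xNONE && prefix=$ac_default_prefix
test "x$exec_prefix" = xNONE && exec_prefix=$prefix
# eval twice: sbindir first expands to ${exec_prefix}/sbin
FUNK_SBINDIR=`eval echo $sbindir`
FUNK_SBINDIR=`eval echo $FUNK_SBINDIR`
AC_SUBST(FUNK_SBINDIR)
AC_CONFIG_FILES([funkd.service])</span></pre>
<br />
funkd.service.in would then simply carry ExecStart=@FUNK_SBINDIR@/funkd and configure fills in the absolute path.<br />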
To look at an example solution for the legacy init race condition, look at the usage of funk_wait_ready(), which is called on the parent process that forks. As for Xen, the legacy init daemon has a retry counter as part of its init script; we should be able to remove that code with a similar solution for the legacy socket implementation. In this tree you will also find a few helpers, which Xen's systemd integration patches make use of, if you want to get ramped up with systemd and autoconf:<br />
<ul>
<li><a href="https://github.com/mcgrof/funk-systemd/blob/master/src/m4/systemd.m4">src/m4/systemd.m4</a> - systemd autoconf library which enables easy build integration support for systemd. There are four build options supported</li>
<ul>
<li>AX_ENABLE_SYSTEMD() - enables systemd by default and requires an
explicit --disable-systemd option flag to configure if you want to
disable systemd support.</li>
<li>AX_ALLOW_SYSTEMD() - systemd will be disabled by default and
requires you to run configure with --enable-systemd to look for and
enable systemd</li>
<li>AX_AVAILABLE_SYSTEMD() - systemd will be disabled by default but if
your build system is detected to have systemd build libraries it will be
enabled. You can always force disable with --disable-systemd. This is
the option we have decided to use for Xen.</li>
<li>If you want to use the dynamic link loader you should use AX_AVAILABLE_SYSTEMD(), but you must then ensure -rdynamic -ldl is used when linking; if using automake, autotools will deal with this for you, otherwise you must ensure this is in place in your Makefile.</li>
</ul>
<li><a href="https://github.com/mcgrof/funk-systemd/blob/master/src/m4/paths.m4">src/m4/paths.m4</a>
- Implements AX_LOCAL_EXPAND_CONFIG() which you can use to replace meta
@VAR@ variables on files defined with AC_CONFIG_FILES(). You might want
to make use of this for example on systemd service unit file ExecStart,
on the socket definition file, and/or the code that connects to the
sockets.</li>
<li><a href="https://github.com/mcgrof/funk-systemd/blob/master/src/funk_dynamic_helpers.c">src/funk_dynamic_helpers.c</a>
- example systemd integration implementation support using the dynamic
link loader -- using dlopen() and dlsym() which can be used for the
one-binary-fits all solutions. Although a solution with this strategy
was tested for systemd, this is not the option we are going to support
on Xen.</li>
<li><a href="https://github.com/mcgrof/funk-systemd/tree/master/with-autoconf">funk daemon with-autoconf implementation</a> - example implementation with the above helpers with autoconf support alone</li>
<li><a href="https://github.com/mcgrof/funk-systemd/tree/master/with-automake">funk daemon with-automake implementation</a> - example implementation with the above helpers with automake support</li>
<li><a href="https://github.com/mcgrof/funk-systemdh">README</a> and <a href="https://github.com/mcgrof/funk-systemd/blob/master/INSTALL">INSTALL</a> - read these for more details on this example</li>
</ul>
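As promised above, here is a rough sketch of how a configure.ac could wire these helpers up. funkd is a made up daemon name, and the exact macro arguments may differ a bit from what the README describes, so treat this as a shape rather than a recipe:<br />
<br />
<pre><span style="font-family: "Courier New",Courier,monospace; font-size: x-small;">AC_INIT([funkd], [0.1])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
# Enable systemd only if its development libraries are detected,
# the same policy we picked for Xen; --disable-systemd still overrides.
AX_AVAILABLE_SYSTEMD
# Expand @VAR@ style paths in the files below at configure time.
AX_LOCAL_EXPAND_CONFIG
AC_CONFIG_FILES([Makefile funkd.service funkd.socket])
AC_OUTPUT</span></pre>
<br />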
<h2>
Systemd support for projects with multiple daemon replacements</h2>
Xen is a good example of a project that requires support for multiple alternative binaries that can run as the daemon. For these types of situations there are a few possible solutions; <a href="http://lists.freedesktop.org/archives/systemd-devel/2014-May/019427.html">this has been discussed only briefly on the systemd-devel list</a>. You can end up implementing:<br />
<ol>
<li>Define a service unit file for each daemon, and define one target which defines the overall service. Service unit files that require the service will require the target, not the actual service unit file. The service unit files are then mutually exclusive with each other, and the system administrator has to manually select which service unit to enable. The downside to this strategy is you end up with multiple service unit files which in the worst case are identical and only differ on the ExecStart path.</li>
<li>Define a service unit file for each daemon and define an
Alias=foo.service for the general service. Services that need to depend
on this service would then Require the alias, not the specific service
file for each binary. The same downside is present with this solution.</li>
<li>One service file plus environment variables used by a binary launcher, which will use getenv() and execve() to launch the preferred daemon (see the rough launcher sketch after this list). This option gives the flexibility to be easily compatible with legacy init daemons that typically require /etc/sysconfig/ or /etc/default/ configuration files. Although <a href="http://0pointer.de/blog/projects/on-etc-sysinit.htm">Lennart has clarified that ideally the systemd way would be to ignore /etc/sysconfig and /etc/default all together</a>, this solution would still enable ignoring /etc/sysconfig/ and /etc/default/ by requiring the default variable to be set via Environment=FOO_DEFAULT_DAEMON=/usr/local/sbin/bar. For support with legacy init systems, EnvironmentFile=-/etc/sysconfig/foodaemon and EnvironmentFile=-/etc/default/foodaemon can be used.</li>
</ol>
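Just to make option 3 concrete, here is a rough sketch of what such a launcher could look like, reusing the made up FOO_DEFAULT_DAEMON variable from above. This is purely illustrative and not necessarily what Xen will end up doing:<br />
<br />
<pre><span style="font-family: "Courier New",Courier,monospace; font-size: x-small;">/* Rough sketch of the option 3 launcher; names are made up. */
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;unistd.h&gt;

int main(int argc, char *argv[])
{
	/* Set via Environment= in the unit file, or overridden via
	 * EnvironmentFile=-/etc/default/foodaemon on legacy setups. */
	const char *daemon = getenv("FOO_DEFAULT_DAEMON");

	(void)argc;
	if (!daemon)
		daemon = "/usr/local/sbin/foodaemon";

	argv[0] = (char *)daemon;
	execv(daemon, argv);
	perror("execv"); /* only reached if the exec failed */
	return EXIT_FAILURE;
}</span></pre>
<br />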
Beyond that rough sketch, no real example code or service unit files are provided at this point; what we end up doing for Xen remains to be decided.<br />
<br />
<h2>
OCaml and systemd support</h2>
Xen has an OCaml implementation of the xenstore, so as you can imagine we also had to add some support for systemd with OCaml. I won't provide examples here, but just note that support has been provided using a C interface wrapper. For details please review the posted patches.mcgrofhttp://www.blogger.com/profile/06081818694231731816noreply@blogger.com0tag:blogger.com,1999:blog-29679292.post-13860480789270593812014-04-08T13:16:00.000-07:002014-04-09T00:28:30.795-07:00Open Research through collaborative development<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-ohK1_tgAqRI/U0DKO83la3I/AAAAAAABlLg/J8pOECAfKag/s1600/14+-+1" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-ohK1_tgAqRI/U0DKO83la3I/AAAAAAABlLg/J8pOECAfKag/s1600/14+-+1" height="400" width="300" /></a></div>
<br />
Academia helps shape our lives but it also helps with <b>economics</b>, whether privately or publicly funded, by exploring new markets for capital gain, general well being and progress. One aspect of both types of funded research efforts is the concern over getting your ideas taken (being "<i>scooped</i>" seems to be the term used) and not getting any funding at all, even if your ideas are very promising. If you follow my blog posts I hope it's clear by now that I am terribly <a href="http://www.do-not-panic.com/2014/03/free-software-patents-surveillance-and.html">concerned over rapid evolution</a> but am looking for <a href="http://www.do-not-panic.com/2013/08/evolving-capitalism-with-ethical.html">solutions</a>. As a collateral of the Internet and the efforts behind <a href="http://en.wikipedia.org/wiki/Free_software"><b>free software</b></a> and <a href="http://en.wikipedia.org/wiki/Open_source_software"><b>open source</b></a> software, we have spawned new mechanisms that can help research make rapid progress, one of them obviously being <a href="http://en.wikipedia.org/wiki/Collaborative_software_development_model">collaborative development</a> models. In this post I will explain and encourage folks to look into a few new areas of development in research and to consider a bit more seriously how they're spending their time and money.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9d0iv2siPNcoNDRrJrjNZjI76RpmGiLt3mR6Wp95dwBIZTTKGgJxV38bX4QIGnJjOMZwOZHpb2QKxoiBAl-KjJeACY1Q-mc4ITe5o0G0-8tCUh4ylnYHzt65xNXIiBOaGOgAjSA/s1600/photo.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9d0iv2siPNcoNDRrJrjNZjI76RpmGiLt3mR6Wp95dwBIZTTKGgJxV38bX4QIGnJjOMZwOZHpb2QKxoiBAl-KjJeACY1Q-mc4ITe5o0G0-8tCUh4ylnYHzt65xNXIiBOaGOgAjSA/s1600/photo.jpg" height="206" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
Things have changed quite a bit since the inception of the Internet, and one example of a prominent innovative pioneer who has been very vocal about preparing us for a series of new advances is <a href="http://en.wikipedia.org/wiki/Ray_Kurzweil">Ray Kurzweil</a>. For example he's very vocal about expressing concerns over <b>legacy roadmaps</b> at currently established and well known schools such as <a href="http://en.wikipedia.org/wiki/Massachusetts_Institute_of_Technology">MIT</a>; it's in fact one of the reasons why we have the spawning of <a href="http://en.wikipedia.org/wiki/Singularity_University">Singularity University</a> backed by NASA, Google and <a href="https://singularityu.org/community/partners/">other partners</a>. If we are going to start trying to even consider addressing "<b>humanity's greatest challenges</b>", which actually was a requirement by Google to back <a href="http://en.wikipedia.org/wiki/Singularity_University">Singularity University</a> <a href="https://www.youtube.com/watch?v=HMYVH-hBGWg%20">(6:54)</a>, we need research to be transparent, embracing and shepherding <a href="http://en.wikipedia.org/wiki/Collaborative_software_development_model">collaborative development</a> models, and even addressing fair use of "Intellectual Property". Fortunately at a <a href="https://www.youtube.com/watch?v=HMYVH-hBGWg%20">Ted talk where Ray announced Singularity University he stated</a> (7:30) that "these projects [which started as intensive group summer sessions in 2009 to address <b>humanity's greatest challenges</b>] will continue past these sessions using collaborative development methods and all the <b>Intellectual Property</b> that is created will be online and available and developed online in a collaborative development fashion". I'm hoping that Google will ensure that <a href="http://en.wikipedia.org/wiki/Singularity_University">Singularity University</a> lives up to its promise and that any concerns over <b>Intellectual Property</b> will be addressed.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9yC7u5igBsTHCgRKfWRtR5_6JvAFdGkiPRmF_ZxwaxQBTeAZ2KMtV_tK6OX5loGA2xEMR2dWQbfZELErPsRwTkN3EZfyjwpFqeJ0TJPa6gne5OQ-dujUmEkeZKRSdIGHPRCN5yw/s1600/IMG_20120715_063233.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9yC7u5igBsTHCgRKfWRtR5_6JvAFdGkiPRmF_ZxwaxQBTeAZ2KMtV_tK6OX5loGA2xEMR2dWQbfZELErPsRwTkN3EZfyjwpFqeJ0TJPa6gne5OQ-dujUmEkeZKRSdIGHPRCN5yw/s1600/IMG_20120715_063233.jpg" height="320" width="320" /></a></div>
<br />
<br />
Another new research effort announced recently was the <a href="https://www.newschallenge.org/challenge/2014/brief.html">Knight News Challenge</a>. In June 2014 they will award $2.75
million, including $250,000 from the Ford Foundation, to support the
most compelling ideas and projects that make the Internet better. A recent entry into the competition <a href="https://www.newschallenge.org/challenge/2014/submissions/opening-up-research-proposals#_=_">addresses the concerns of funding and folks taking your ideas (scooping)</a>:<br />
<blockquote class="tr_bq">
"<i>If everyone knows you were the first to propose (and actually pursue)
that idea, anyone who tries to sell it as their own will risk loosing
reputation</i>" </blockquote>
Also there are two carrots: 1) for contributors to the <a href="https://www.newschallenge.org/challenge/2014/submissions/opening-up-research-proposals">casino fund</a> (1% funding to this pool by different parties funding legacy research) there are research studies expected to take place on increasing the efficiency of research funding, and 2) for researchers there are new incentives provided by the slew of changes an open strategy incurs, such as news coverage, public documentation, the ability to socialize ideas for more funding and of course... the gains from public <a href="http://en.wikipedia.org/wiki/Collaborative_software_development_model">collaborative development</a>. Folks who already get established grants through legacy research could also help contribute 1% to the fund for research that is promising but unfundable by traditional means. I'm pretty <b>confident</b> <a href="http://en.wikipedia.org/wiki/Bradley_M._Kuhn">Bradley M. Kuhn</a> would <u>cringe</u> at the idea that this research effort seems to be underselling itself by targeting research only in the current category of "<a href="https://www.newschallenge.org/challenge/2014/submissions/opening-up-research-proposals">promising but unfundable by traditional means</a>", as he recently posted about <a href="http://ebb.org/bkuhn/blog/2014/04/03/last-resort.html">Open Source as a last resort</a>. He'd be right, and the fact that tons of money and interest is pouring into <a href="http://en.wikipedia.org/wiki/Singularity_University">Singularity University</a> through a different approach should be proof that new research using <a href="http://en.wikipedia.org/wiki/Collaborative_software_development_model">collaborative development</a> models <u><b>should not be undersold</b></u> as only for the "<a href="https://www.newschallenge.org/challenge/2014/submissions/opening-up-research-proposals">promising but unfundable by traditional means</a>". With that said, it <b>doesn't mean</b> that they are restricting the ideas submitted to only that category... so any daring researcher with an idea to help spawn "<a href="https://www.newschallenge.org/challenge/2014/brief.html">projects that make the Internet better</a>" who is confident or <b>curious</b> about the gains of <a href="http://en.wikipedia.org/wiki/Collaborative_software_development_model">collaborative development</a> should seriously consider submitting their proposal for evaluation. Two biophysicists have signed up for the competition already, who's next?<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-Zp-eYYh7g1w/T8OUaDssH0I/AAAAAAAAyn4/ATO4WSJ7Cnk/s1600/photo.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-Zp-eYYh7g1w/T8OUaDssH0I/AAAAAAAAyn4/ATO4WSJ7Cnk/s1600/photo.jpg" height="185" width="400" /></a></div>
<br />
<br />
The prospects can set great precedents; it's the type of stuff that I think we ultimately need to avoid the next big "race", the last one being the atomic race, the next one, in my opinion, likely being the Artificial Intelligence race, or collateral from it in light of other research. Another curious thing is that there seems to be an intersection between the folks at <a href="http://en.wikipedia.org/wiki/Singularity_University">Singularity University</a> and the <a href="https://www.newschallenge.org/challenge/2014/brief.html">Knight News Challenge</a> and I'd be curious to know if they have considered... you know, collaborating together. Just a thought ;)mcgrofhttp://www.blogger.com/profile/06081818694231731816noreply@blogger.com0tag:blogger.com,1999:blog-29679292.post-86575252335452303412014-04-07T16:50:00.000-07:002014-04-29T01:11:21.925-07:00Summary of the gains of Xen oxenstored over cxenstored<div class="separator" style="clear: both; text-align: center;">
<a href="http://downloads.xen.org/Branding/Images/Mascot/Xen-Panda-Motorcycle-Green-500px.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://downloads.xen.org/Branding/Images/Mascot/Xen-Panda-Motorcycle-Green-500px.png" height="320" width="255" /></a></div>
Apart from keeping up the ongoing FOSS projects I help maintain and push forward, one of the first things I've been asked to help with at <a href="http://en.wikipedia.org/wiki/SUSE_Linux_distributions">SUSE</a> is <a href="http://wiki.xen.org/">Xen</a>, specifically helping address the huge delta in place with upstream. Before you give me the <a href="http://www.linux-kvm.org/page/Main_Page">kvm</a> lecture, realize that I'm very well aware of <a href="http://www.linux-kvm.org/page/Main_Page">kvm</a> now, and while architecturally I think it's beautiful, tons of folks are still investing a lot into Xen and even new industries are considering it. As an example, at the Linux Collaboration summit there was a talk by <a href="http://collaborationsummit2014.sched.org/speaker/alex.agizim#.U0M4SGfqmfg">Alex Agizim</a> about using <a href="http://collaborationsummit2014.sched.org/event/ce73e0d2e8a421761a156062ab27c349?iframe=no&w=100&sidebar=yes&bg=no#.U0M31Wfqmfg">Xen in the automotive industry</a> by the folks at <a href="https://www.globallogic.com/">Global Logic</a>. They prefixed their talk with a great video, the <a href="https://www.youtube.com/watch?v=wf1xR2kX3rU">Steeri - Driverless car parody</a>; the hope is that's not what things will be like. As we move forward with <a href="http://wiki.xen.org/">Xen</a> my goal will also be to see what folks are doing on <a href="http://www.linux-kvm.org/page/Main_Page">kvm</a> and see if there might be anything to share or learn from. Before starting at <a href="http://en.wikipedia.org/wiki/SUSE_Linux_distributions">SUSE</a> I knew squat about <a href="http://wiki.xen.org/">Xen</a> so I figure as I ramp up I can help with the documentation as well. Learning about <a href="http://wiki.xen.org/">Xen</a> has been fun as it involves tons of areas of the kernel, and the history is very rich. As I ramp up I intend on helping with the documentation on its <a href="http://wiki.xen.org/">wiki</a>. As a collateral of dealing with the delta for upstream and documentation, at times I may look for a better way to do things, especially if it reduces our delta or if it can help the project, or at the very least I'll socialize the ideas for a future feature enhancement. Apart from helping on the wiki, which I think is critical, I'll try to post things every now and then about parts of its architecture which perhaps don't yet belong on the wiki, or I may post things first here on my blog and then curate them over into the wiki. I've now sent a few people my brain dump summarizing Thomas Gazagnaire and Vincent Hanquez's paper (both at Citrix) on their <a href="http://en.wikipedia.org/wiki/OCaml">OCaml</a> implementation of the <a href="http://wiki.xen.org/wiki/XenStore">xenstore</a>, called oxenstored. I will likely want to point more folks to this summary later given I'm actually also interested in alternatives, and I don't expect folks to read a full paper to evaluate alternatives. I'm not going to get into the specifics of what I hope to see in alternatives now, though, other than mentioning that this came about in discussions at the Linux Collaboration summit in Napa and that it involves <b>git</b>. In this post I'll just cover the general basics of the <a href="http://wiki.xen.org/wiki/XenStore">xenstore</a>, a review of the first implementation and a summary of oxenstored.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://downloads.xen.org/Branding/Images/Mascot/Xen-Panda-Running-500px.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://downloads.xen.org/Branding/Images/Mascot/Xen-Panda-Running-500px.png" height="320" width="226" /></a></div>
<br />
The paper: <a href="http://gazagnaire.org/pub/GH09.pdf">OXenstored - An Efficient Hierarchical and Transactional Database using Functional Programming with Reference Cell Comparisons</a><br />
<br />
First a general description of the <a href="http://wiki.xen.org/wiki/XenStore">xenstore</a> and its first implementation. The <a href="http://wiki.xen.org/wiki/XenStore">xenstore</a> is where Xen stores information about its systems. It covers dom0 and guests and it uses a filesystem type of layout, kind of like how we keep the layout of a system on the Linux kernel in <b>sysfs</b>. The original xenstored, which the paper refers to as Cxenstored, was written in C. Since all information needs to be stored in a filesystem layout, any library or tool that supports designing a tree based key <--> value store of information should suffice to implement the <a href="http://wiki.xen.org/wiki/XenStore">xenstore</a>. The <a href="http://wiki.xen.org/">Xen</a> folks decided to use the <a href="http://tdb.samba.org/">Trivial Database, tdb</a>, which as it turns out was designed and implemented by the <a href="http://www.samba.org/">Samba</a> folks for their own database. <a href="http://wiki.xen.org/">Xen</a> then has a daemon sitting in the background which listens to read / write requests onto this database; that's what you see running in the background if you 'ps -ef | grep xen' on dom0. dom0 is the first host, the rest are guests. dom0 uses Unix domain sockets to talk to the <a href="http://wiki.xen.org/wiki/XenStore">xenstore</a> while guests talk to it using the kernel through the xenbus. The code for opening up a connection onto the C version of the xenstore is in tools/xenstore/xs.c and the call is xs_open(). The first attempt by the code will be to open the Unix domain socket with get_handle(xs_daemon_socket()) and if that fails it will try get_handle(xs_domain_dev()); the latter will vary depending on your Operating System, and you can override the first by setting the environment variable XENSTORED_PATH. On Linux this is at /proc/xen/xenbus. All the <a href="http://wiki.xen.org/wiki/XenStore">xenstore</a> is doing is brokering access to the database. The <a href="http://wiki.xen.org/wiki/XenStore">xenstore</a> represents all data known to Xen: we build it upon bootup and can throw it out the window when shutting down, which is why we should just use a tmpfs for it (Debian does, <a href="http://www.opensuse.org/en/">OpenSUSE</a> should be changed to do so). The actual database for the C implementation is by default stored under the directory /var/lib/xenstored; the file that has the database there is called tdb. On <a href="http://www.opensuse.org/en/">OpenSUSE</a> that's /var/lib/xenstored/tdb, on Debian (as of xen-utils-4.3) that's /run/xenstored/tdb. The C version of the <a href="http://wiki.xen.org/wiki/XenStore">xenstore</a> therefore puts out a database file that can actually be used with tdb-tools (the actual package name on Debian and SUSE). xenstored does not use libtdb, which is GPLv3+; <a href="http://wiki.xen.org/">Xen</a> in-takes the tdb implementation, which is licensed under the LGPL, and carries a copy under tools/xenstore/tdb.c. Although you shouldn't be using tdb-tools to poke at the database you can still read from it using these tools; you can read the entire database as follows:<br />
<blockquote class="tr_bq">
tdbtool /run/xenstored/tdb dump </blockquote>
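Programs talk to the xenstore through libxenstore, whose xs_open() fallback logic I described above. Here is a minimal hedged sketch of a client -- build with -lxenstore, and note that on older Xen trees the header is named xs.h rather than xenstore.h:<br />
<br />
<pre><span style="font-family: "Courier New",Courier,monospace; font-size: x-small;">/* Minimal xenstore client sketch; build with -lxenstore. */
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;xenstore.h&gt;

int main(void)
{
	struct xs_handle *xsh;
	unsigned int len;
	char *val;

	/* Tries the Unix domain socket first, then the kernel
	 * xenbus device, as described above. */
	xsh = xs_open(0);
	if (!xsh) {
		perror("xs_open");
		return EXIT_FAILURE;
	}

	/* Same as running: xenstore-read domid */
	val = xs_read(xsh, XBT_NULL, "domid", &amp;len);
	if (val) {
		printf("domid: %s\n", val);
		free(val);
	}

	xs_close(xsh);
	return EXIT_SUCCESS;
}</span></pre>
<br />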
The biggest issue with the C implementation and its reliance on tdb is that you can live lock it if you have a guest or any entity doing short quick accesses onto the <a href="http://wiki.xen.org/wiki/XenStore">xenstore</a>. We need <a href="http://wiki.xen.org/">Xen</a> to scale though, and the research and development behind oxenstored was an effort to help with that. What follows next is my brain dump of the paper. I don't get into the details of the implementation because, as can be expected, I don't want to read OCaml code. Keep in mind that if I look for a replacement I'm also looking for something that the Samba folks might want to consider.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://downloads.xen.org/Branding/Images/Mascot/Xen-Panda-Security-500px.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://downloads.xen.org/Branding/Images/Mascot/Xen-Panda-Security-500px.png" height="320" width="225" /></a></div>
<br />
<br />
OXenstored has the following observed gains:<br />
<ul>
<li>1/5th the size in terms of lines of code in comparison to the C xenstored</li>
<li>better performance as the number of guests increases; it supports 3 times the number of guests, with an upper limit of 160 guests</li>
</ul>
The performance gains come from two things:<br />
<ul>
<li>how it deals with transactions through an <b>immutable</b> prefix tree. Each transaction is associated with a triplet (T1, T2, p) where T1 is the root of the database just before a transaction, T2 is the local copy of the database with all updates made by the transaction up to that point, and p is the path to the furthest node from the root of T2 whose subtree contains all the updates made by the transaction up to that point.</li>
<li>how it deals with sharing <b>immutable</b> subtrees and uses 'reference cell equality', a limited form of pointer equality, which compares the location of values instead of the values themselves. Two values are shared if they share the same location. Functional programming languages enforce that multiple copies of immutable structures share the same location in memory. oxenstored takes advantage of this functional programming feature to design a trie library which enforces sharing of subtrees as much as possible. This lets them simplify how to determine and merge / coalesce concurrent transactions.</li>
</ul>
The complexity of the algorithms used by oxenstored is confined only to the<br />
length of the path, which is rarely over 10. This gives predictable performance<br />
regardless of the number of guests present.mcgrofhttp://www.blogger.com/profile/06081818694231731816noreply@blogger.com1tag:blogger.com,1999:blog-29679292.post-34063555276773851672014-04-05T05:41:00.000-07:002014-05-11T09:38:45.132-07:00Automatic kernel backporting with Coccinelle<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-_fFlmDEmVXg/Uqs4c1oYUTI/AAAAAAABhbc/xA50Bjk4Yfg/s1600/IMG_20131213_143529.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-_fFlmDEmVXg/Uqs4c1oYUTI/AAAAAAABhbc/xA50Bjk4Yfg/s1600/IMG_20131213_143529.jpg" height="300" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
Exponential growth does not only apply to technology and biology, it applies to software too. We need to adapt and scale better. I've written before on the implications of <a href="http://www.do-not-panic.com/2014/03/free-software-patents-surveillance-and.html">rapid software growth and freedoms</a>; this post addresses the purely technical implications of accelerated software growth, specifically on the <a href="https://www.kernel.org/">Linux kernel</a> in consideration of backporting. Tools to help with rapid software growth are huge assets to large software projects. <a href="http://en.wikipedia.org/wiki/Git_%28software%29">git</a> is one example, and let's recall that Linus refused to use anything other than <a href="http://en.wikipedia.org/wiki/BitKeeper">BitKeeper</a> to maintain the <a href="https://www.kernel.org/">Linux kernel</a> until he replaced it with <a href="http://en.wikipedia.org/wiki/Git_%28software%29">git</a>. It's not only that these tools are important, they are <b>required</b>. It's best if you already have Free Software tools available to help software evolve though. Another good example, but one which <b>has</b> been Free Software since the start, is <a href="http://coccinelle.lip6.fr/">Coccinelle</a>, which came about through the research and development spearheaded by <a href="http://www.diku.dk/hjemmesider/ansatte/julia/">Julia Lawall</a> and her team in trying to address the concept of <a href="http://coccinelle.lip6.fr/ce.php">collateral evolutions</a> to help the <a href="https://www.kernel.org/">Linux</a> kernel grow faster, safely, and to make us lazier. I've written before on <a href="http://www.do-not-panic.com/2012/08/optimizing-backporting-collateral.html">my conjecture on the usage of Coccinelle for automatically backporting Linux</a>. I looked at this as the Linux kernel and the <a href="https://backports.wiki.kernel.org/">Linux kernel backports</a> project grew, to the point I could not stop it, even if I wanted to -- <i>I've tried twice now</i>. I've tested my conjecture recently and <a href="http://coccinelle.lip6.fr/">Coccinelle</a> has <b>exceeded</b> my expectations, both on what it is already capable of and also on what <b>might be possible in the future</b>. We have been grooming a slew of techniques within the <a href="https://backports.wiki.kernel.org/">Linux backports</a> project to do backporting automatically for a series of years now, and I'm happy to report that <a href="http://coccinelle.lip6.fr/">Coccinelle</a> will be one of those technologies that we embrace to help us scale faster. In light of the high pace of evolution of hardware, <a href="https://www.kernel.org/">Linux</a> and <a href="https://backports.wiki.kernel.org/">Linux backports</a>, I'm convinced now that striving to backport the Linux kernel automatically is the only reasonable way to do backporting.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-8tg1q4pet8Q/UqcR27jB56I/AAAAAAABhO8/iqDmM1H99Eg/s1600/IMG_20131017_091535.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-8tg1q4pet8Q/UqcR27jB56I/AAAAAAABhO8/iqDmM1H99Eg/s1600/IMG_20131017_091535.jpg" height="300" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
Before I dive into details I'll have to dedicate one paragraph to provide context under which evaluation of <a href="http://coccinelle.lip6.fr/">Coccinelle</a> took place. One of the <b>many</b> reasons <a href="http://www.do-not-panic.com/2013/11/i-quit-qualcomm-today-whoohoo.html">I left my job at Qualcomm</a> was that I found out some folks were using backports, a project <a href="http://www.winlab.rutgers.edu/~mcgrof/accomplishments-2006-2007.pdf">I started <b>with the community</b> since way before</a> I joined <a href="http://en.wikipedia.org/wiki/Qualcomm_Atheros">Atheros</a>, with a proprietary driver. It gave me great reason to focus more on strengthening backports' core architecture, and on making it crystal clear that since our project's inception we are nothing more than a <i>derivative work of the Linux kernel</i>, and that proprietary drivers are not allowed when you use backports. In case you haven't gotten any of the memos: <b>proprietary drivers on backports are not allowed</b>. Our documentation cannot be any clearer, I think. Since I <u>strongly suspected</u> we could help <b>evolve backports more efficiently with Coccinelle</b> I decided to reach out to <a href="http://www.diku.dk/hjemmesider/ansatte/julia/">Julia Lawall</a>. I had met Julia at the 2011 Linux Plumbers conference during my talk on <a href="http://www.linuxplumbersconf.org/2011/ocw/sessions/771">Backporting the Linux kernel for good</a>, where she gave me high level architectural hope on the ability to further automatically backport the Linux kernel. Julia happens to be the main research scientist leading <a href="http://coccinelle.lip6.fr/">Coccinelle</a> research and development. I'm forever grateful that Julia managed to secure funding for me to do a 2-month research collaboration effort with her team and folks at <a href="http://www.lip6.fr/">LIP6</a> and <a href="http://en.wikipedia.org/wiki/IRILL">IRILL</a>, in Paris. Part of the work was seeing how we can better integrate and grow <a href="http://coccinelle.lip6.fr/">Coccinelle</a> with the backports project and also collaborate on ideas at both <a href="http://www.lip6.fr/">LIP6</a> and <a href="http://en.wikipedia.org/wiki/IRILL">IRILL</a>. Given the lack of good faith I saw at Qualcomm, like dragging their feet on my own legal employee agreement for over one year and even being told to my face that they claimed to own copyright even on my <b>personal public blog posts</b> about <a href="https://backports.wiki.kernel.org/">backports</a>, I also knew I couldn't continue there and had to eventually quit. I also couldn't let any work I do fall into the cracks of a questionable gray area, especially if it was going to be of use to the community -- I didn't want my contributions to be used by proprietary drivers on Linux. I decided to spend one month learning as much as I could with the <a href="http://coccinelle.lip6.fr/">Coccinelle</a> folks and pushing out a slew of enhancements for work on <a href="http://wireless.kernel.org/en/developers/Regulatory/CRDA">CRDA</a>. I decided then it would also be a great idea to relicense <a href="http://wireless.kernel.org/en/developers/Regulatory/CRDA">CRDA</a> to <a href="https://gitorious.org/copyleft-next/">copyleft-next</a>, of course. I knew I had to quit though so that I could do some work without it being skewed or used in some other proprietary <a href="http://www.do-not-panic.com/2014/03/the-dangers-of-free-software.html">PHB project</a>.
I was prepared to quit without a job at hand; if anything I could just go to Costa Rica and live by the beach for a while and sell bananas, or something. I was very fortunate that by the time I took a trip to Germany to visit some friends I was negotiating with two companies which seemed to respect everything I cared about. I decided to join <a href="http://en.wikipedia.org/wiki/SUSE_Linux_distributions">SUSE</a>; I'd start with them in January -- this was a huge relief -- and I could quit my job early in my trip, which gave me the perfect chance to focus on solid research and development my last month in Paris. None of this was planned. Even though I was doing research and development in Paris, the first month was tough as I still had my other job and I was working late nights to be able to focus my attention on both. This also meant I didn't have much time to even keep <a href="https://backports.wiki.kernel.org/">backports</a> up to date... I'm forever grateful <a href="http://www.hauke-m.de/">Hauke Mehrtens</a> was willing to step up and take on maintainership of the project while I was in <a href="http://en.wikipedia.org/wiki/Limbo">Limbo</a>. Things turned out well.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-JzGA0mHEaSY/Uoj9bwYKlAI/AAAAAAABgVM/HN-xIBPs9tY/s1600/IMG_20131111_123149.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-JzGA0mHEaSY/Uoj9bwYKlAI/AAAAAAABgVM/HN-xIBPs9tY/s1600/IMG_20131111_123149.jpg" height="300" width="400" /></a></div>
<br />
I'd like to prefix the technical details that I will provide below by also thanking everyone in France -- the Atheros France office, which despite the circumstances was simply awesome, and folks at <a href="http://www.lip6.fr/">LIP6</a>, <a href="http://en.wikipedia.org/wiki/IRILL">IRILL</a>, and especially Julia, for the warm welcome and an amazing time, on research and development and beyond. Specifically at <a href="http://en.wikipedia.org/wiki/IRILL">IRILL</a> -- <a href="http://www.dicosmo.org/index.en.html">Roberto Di Cosmo</a> and <a href="https://identi.ca/zack">Stefano Zacchiroli</a>. At <a href="http://www.lip6.fr/">LIP6</a> -- <a href="http://pagesperso-systeme.lip6.fr/Gilles.Muller/">Gilles Muller</a>, <a href="http://www.mysmu.edu/faculty/davidlo/">David Lo</a>. And then my strong French coffee buddies <a href="https://plus.google.com/+PeterSenna/posts">Peter Senna Tschudin</a> (<a href="http://blog.parahard.com/">blog</a>) and <a href="http://www.lip6.fr/actualite/personnes-fiche.php?ident=D1334">Lisong Guo</a>. I hope to be back some time for more of that good strong French coffee. Recruiters, researchers, and kernel developers, pay close attention to what these folks are doing and will be doing, they're <b>bad ass</b>. Now that the context is all fleshed out, let's dive into the technical details: I'll provide a status update of where we stand with regards to <a href="http://coccinelle.lip6.fr/">Coccinelle</a> integration on backports and what you can expect in the near future.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEimHIM9urTXxdOjc_kg8zS5TtVYd5TPlA0TVW6cECKldMPeaTJlNOBvMvG9xJ1VuIPh0iEPMWzpKyD6fN9t5IwFxnfE3rQqI5NNaFh6hjijXAHns9rAYP7RY_vKk3fH0ug-6NyTbw/s1600/IMG_20131215_022139.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEimHIM9urTXxdOjc_kg8zS5TtVYd5TPlA0TVW6cECKldMPeaTJlNOBvMvG9xJ1VuIPh0iEPMWzpKyD6fN9t5IwFxnfE3rQqI5NNaFh6hjijXAHns9rAYP7RY_vKk3fH0ug-6NyTbw/s1600/IMG_20131215_022139.jpg" height="400" width="300" /></a></div>
<br />
Let's start by reviewing the development flow of <a href="http://coccinelle.lip6.fr/">Coccinelle</a> -- it's a research project and as such the work flow of ideas is pretty flexible. Although <a href="http://www.diku.dk/hjemmesider/ansatte/julia/">Julia Lawall </a>leads its research and development she's also incredibly busy giving students feedback, reading / evaluating papers, giving talks, or traveling, and of course, hacking on the Linux kernel. I'm surprised she manages to get so much done. All this means that the way code flows in <a href="http://coccinelle.lip6.fr/">Coccinelle</a> is to do most of the evolution loosely within an internal tree, and full releases get pushed out to <a href="https://github.com/coccinelle/coccinelle">coccinelle's repo on github</a>. My hope is that with a bit of volunteering we can strive to change this to follow a more linear work flow, as with the Linux kernel, but we'd have to prioritize not disrupting any existing evolutions by its researchers. It'd also require learning <a href="http://en.wikipedia.org/wiki/OCaml">OCaml</a> -- eek. This is only required if you don't have access to the internal repository, which I do now, and if you're expected to test patches as features or issues are addressed. If you're in that situation you might as well create an account and request read access to the <a href="https://gforge.inria.fr/projects/coccinelle/">Coccinelle INRIA gforge repository</a>. As I'll explain below, we might be able to live without a linear tree on <a href="http://coccinelle.lip6.fr/">Coccinelle</a>.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-NG-EQoFYm0E/UmdzYUe9ulI/AAAAAAABiww/mvAX25QWkDM/s1600/13+-+1" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-NG-EQoFYm0E/UmdzYUe9ulI/AAAAAAABiww/mvAX25QWkDM/s1600/13+-+1" height="400" width="400" /></a></div>
<br />
Now go get the <a href="https://github.com/coccinelle/coccinelle">Coccinelle code on github</a>. At this point you likely want the latest and greatest from github, and you'll want to compile it yourself, as any distro release right now is probably too old. If you're like me and use vim, go get the <a href="https://github.com/ahf/cocci-syntax">Coccinelle vim syntax highlighting</a>.<a href="http://coccinelle.lip6.fr/"> Coccinelle</a> is written in <a href="http://en.wikipedia.org/wiki/OCaml">OCaml</a>, a functional programming language. Although I tried to pick it up, <a href="http://www.diku.dk/hjemmesider/ansatte/julia/">Julia</a> expressed that kernel developers should not have to; she'd be happy to receive feedback as problems or new ideas are found and address them herself. That made me happy :) Fortunately <a href="https://plus.google.com/101449330804536230754/posts">Johannes Berg</a> had added the first SmPL (Semantic Patch Language) patch to backports, and it was merged by the time I was in Paris. The code generation recipe backports follows is to copy a target set of code listed in a copy-list, copy over our backports module and its header files, and throw in some Kconfig logic. The last step involves patching the code. Any patch in the patches/ directory will be applied linearly in alphabetical order. If a patch ends with a .cocci file extension we treat it as a <a href="http://coccinelle.lip6.fr/">Coccinelle</a> SmPL patch and call out to spatch to apply it. The first performance concern was spotted by <a href="https://plus.google.com/101449330804536230754/posts">Johannes</a>: once you use <a href="http://coccinelle.lip6.fr/">Coccinelle</a> within backports you use it at the very least once a day, and the Intel folks were already using backports in production deployments and relying on it. Johannes noted that the <a href="http://coccinelle.lip6.fr/">Coccinelle</a> spatch --dir option, which we use to apply a cocci file to the entire kernel, triggers a '<i>sh -c egrep -q</i>' call for every file. In short, Johannes' feedback was that calling out to the shell just to call another binary is pretty expensive. The documented alternatives for <a href="http://coccinelle.lip6.fr/">Coccinelle</a> are software indexing utilities: one is Glimpse, the other idutils. Glimpse unfortunately is not free software, so I'm not even sure why it is supported, and one issue with using idutils in a kernel development workflow is that you are always updating your trees, and not everyone wants to keep updating indexes with additional software. I personally never use idutils or regular grep on git trees; I use '<i>git grep</i>' as it skips tons of files you don't care about. The issue spotted by Johannes can be seen with an example:<span style="font-family: "Courier New",Courier,monospace; font-size: x-small;"> </span><br />
<br />
<pre><span style="font-family: "Courier New",Courier,monospace; font-size: x-small;">sys/wireless$ time (find . -name *.c | xargs <span class="il">egrep</span> -q '(\bgenl_ops\b)')
real 0m0.239s
user 0m0.208s
sys 0m0.088s
sys/wireless$ time (find . -name *.c | xargs -n1 sh -c "<span class="il">egrep</span> -q '(\bgenl_ops\b)'")
real 0m20.120s
user 0m0.216s
sys 0m1.072s</span></pre>
<br />
Julia experimented with a few things, even '<i>git grep</i>', but that resulted in slower code evaluation. In the end an internal Cocci_grep was used, and that ended up speeding things up quite significantly, both when you are indexing code and when you are not. This change made it into the <a href="http://coccinelle.lip6.fr/">Coccinelle</a> 1.0.0-rc19 release, though right now you want at least 1.0.0-rc20. Hat tip to Johannes Berg for reporting this with great detail and to Julia for fixing it like a speed daemon.<br />
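<br />
To give an idea of the workflow this feeds into, here is roughly how one SmPL patch gets applied to an entire tree the way backports does it. This is a minimal sketch: the paths are made up, and flag spellings may vary between Coccinelle versions.<br />
<pre>
# Hypothetical paths; you want a recent Coccinelle (>= 1.0.0-rc20) so
# the fast internal Cocci_grep is used instead of shelling out per file.
spatch --sp-file patches/collateral-evolutions/network/09-threaded-irq.cocci \
       --in-place --dir ~/build/backports-next
</pre>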
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-JnpCqQUXabY/UpnhkB7C4sI/AAAAAAABg4g/r6URMcLWv30/s1600/IMG_20131129_122946.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-JnpCqQUXabY/UpnhkB7C4sI/AAAAAAABg4g/r6URMcLWv30/s1600/IMG_20131129_122946.jpg" height="300" width="400" /></a></div>
<br />
The <a href="https://backports.wiki.kernel.org/">Linux backports</a> project backports automatically through a few strategies. Its main objective is to keep the code it takes from upstream as-is as much as possible, and only as a last resort does it use legacy patches and now SmPL patches. Patches are typically used to address core data structure changes which we cannot carry over, or to address complex functionality which we simply cannot tuck away under headers or newly exported symbols. These changes are typically addressed with #ifdefs, but as it turns out those pesky things are quite problematic if you want to ensure you keep style using grammar rules. I remember at least one old mailing list thread (which I can't find now) that had actually encouraged folks not to consider using <a href="http://coccinelle.lip6.fr/">Coccinelle</a> for #ifdef'ing code. This is the next area I explored with Julia, mainly because #ifdef'ing is a huge part of what backports does in its patches, and keeping code usable and sane to read is of utmost importance to us. As for style -- you can end up with functional code that compiles and works, but if it looks odd, can you easily debug it? Helping with <a href="http://coccinelle.lip6.fr/">Coccinelle</a> in this area has consisted of transforming legacy patches into SmPL form and providing feedback where I don't see <a href="http://coccinelle.lip6.fr/">Coccinelle</a> respecting the expected style / form. This work is <b>ongoing</b> on <a href="http://coccinelle.lip6.fr/">Coccinelle</a> but <b>a lot of fixes have been done already</b>.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjRR1tcAP6rG0ZlBmpklHRfY5IoLnbmZvicVCKPTqE7gdf1_aGRDhpTAQ_farplPECZJ0hNK4fkdg3mJTg8Q-TZWdHQi312hHPGRV6LFPlAtOhUhm9o3kXn7eetMAEHqEx8dd1mwA/s1600/IMG_20131201_230302.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjRR1tcAP6rG0ZlBmpklHRfY5IoLnbmZvicVCKPTqE7gdf1_aGRDhpTAQ_farplPECZJ0hNK4fkdg3mJTg8Q-TZWdHQi312hHPGRV6LFPlAtOhUhm9o3kXn7eetMAEHqEx8dd1mwA/s1600/IMG_20131201_230302.jpg" height="300" width="400" /></a></div>
<br />
Once we have #ifdef style being respected perfectly, for all corner cases, it should be feasible to prove that an SmPL patch can replace a series of legacy patches. This can also be useful for folks who want to convert an upstream patch into SmPL form as an exercise, or to become an even lazier developer: do a <a href="http://coccinelle.lip6.fr/ce.php">collateral evolution</a> for only one driver, test an SmPL grammar replacement and prove that it yields the same change, and then kick it off to generate the full patch for the entire Linux kernel. Maintainers who do not trust SmPL patches from people can use this to prove that the provided SmPL patch matches the full patch supplied alongside it. Doing an SmPL proof means you have to start reformatting legacy patches into a consistent form, as that is how you will need to write your SmPL rules and how <a href="http://coccinelle.lip6.fr/">Coccinelle</a> will generate patches. Remember, we're now trying to work with computers doing code modifications for us, not just patches; we need to be precise. There are several strategies that can be used to prove SmPL patch replacement correctness. We could evaluate code compilation, but that would take long. Another strategy is to see if we can infer the SmPL grammar from the patches and check for isomorphism using <a href="http://coccinelle.lip6.fr/">Coccinelle</a> somehow, but if we had the ability to do that we wouldn't really need to write rules, right? Even so, if we're replacing legacy patches we may wish not to trust SmPL patch inference, or at least want proof of correctness. For backports' sake, once we have all the #ifdef spacing / style respected we can use the following strategy to prove an SmPL patch can replace a series of legacy patches. If you don't need to work with #ifdefs then you likely can already use this general approach. <br />
<ol>
<li>Use two trees: g1, tracked with git, and a base tree, b1, without git -- and import all the code into each before applying any patches</li>
<li>Apply all legacy patches to g1 and commit</li>
<li>Apply SmPL patches to b1 </li>
<li>rm -rf all code on g1 and replace it with all code in b1</li>
<li>git diff --stat</li>
</ol>
I have this all implemented in Python and will be sending it soon for integration into the backports project. If a generalized tool is desired we could consider doing that and merging it into Coccinelle. The 'git diff --stat' should yield zero results if you have a proper SmPL replacement; it will show extra code additions if the SmPL patch added more code than you had originally, and will show code removal in areas that the SmPL patch perhaps did not manage to address. It turns out, though, that you typically end up with more code changes, but in files you hadn't addressed before. The reason is that you have generalized a backport for a <a href="http://coccinelle.lip6.fr/ce.php">collateral evolution</a> and are using <a href="http://coccinelle.lip6.fr/">Coccinelle</a> to perform the backport for <b>all</b> code, not just the target code you had originally addressed the <a href="http://coccinelle.lip6.fr/ce.php">collateral evolution</a> for. This is exactly why I have been cleaning up patches on the backports project over time into <a href="http://coccinelle.lip6.fr/ce.php">collateral evolutions</a>. It makes you think of backports as atomic pieces, each addressing one <a href="http://coccinelle.lip6.fr/ce.php">collateral evolution</a>, one or a few series of commits upstream. Since <a href="http://coccinelle.lip6.fr/">Coccinelle</a> lets you peg grammar rules to a structure granularity, you can strictly localize a change, thereby avoiding similar structural changes in other parts of the code for data structures that merely resemble the one you are trying to change. Because of this there is no harm in having the additional changes applied: if a driver didn't have the change in place it meant that the driver was not enabled to be compiled for an older kernel, so <a href="http://coccinelle.lip6.fr/">Coccinelle</a> is just extending the <a href="http://coccinelle.lip6.fr/ce.php">collateral evolution</a> backport to more code, and that code simply won't be used where it wasn't enabled before. In the case that a driver was not enabled for older kernels the additional code will simply never run, although it will mildly increase the binary size. This could however mean enabling drivers for older kernels. Getting <a href="http://coccinelle.lip6.fr/">Coccinelle</a> to generate more backport code is a great example of taking automatic backporting further. It also means that you have to do less work. Since we don't yet have SmPL grammar patch inference, or support for the inverse of an SmPL patch, it means we have to translate <a href="http://coccinelle.lip6.fr/ce.php">collateral evolutions</a> to SmPL form manually today. In the future this will change, and my hope is that we also get developers to write new <a href="http://coccinelle.lip6.fr/ce.php">collateral evolutions</a> with SmPL, so that we can simply backport them with SmPL. A rough sketch of the proof procedure is shown below.<br />
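<br />
Here is a minimal shell sketch of the five steps above; the paths and file names are made up, flag spellings may vary by Coccinelle version, and the real implementation is the Python code mentioned above.<br />
<pre>
#!/bin/bash
# Sketch of the SmPL replacement proof (hypothetical paths).
set -e
cp -a linux-src g1        # tree for the legacy patches, tracked with git
cp -a linux-src b1        # base tree for the SmPL patch, no git needed

cd g1
git init -q && git add -A && git commit -q -m "base import"
for p in ../patches/09-threaded-irq/*.patch; do
        patch -p1 < "$p"
done
git add -A && git commit -q -m "legacy patches"

cd ../b1
spatch --sp-file ../patches/09-threaded-irq.cocci --in-place --dir .

cd ../g1
find . -mindepth 1 -maxdepth 1 ! -name .git -exec rm -rf {} +
cp -a ../b1/. .
git diff --stat   # empty output means the SmPL patch is a proven replacement
</pre>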
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigzfFqG0DfmPrlgK6m7b31ZGhYT2r-7U2y4ig1fbFb7ZrWIxSrPZMDEZC8aTSlgM_wMRzJ8pCSQlG1cqa1wjXJc9ZwYPnO5Mqn7SttYXh-hOySgVmtUKjk9ENJ5NLg9LMHvpeyXg/s1600/IMG_20131123_160828.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigzfFqG0DfmPrlgK6m7b31ZGhYT2r-7U2y4ig1fbFb7ZrWIxSrPZMDEZC8aTSlgM_wMRzJ8pCSQlG1cqa1wjXJc9ZwYPnO5Mqn7SttYXh-hOySgVmtUKjk9ENJ5NLg9LMHvpeyXg/s1600/IMG_20131123_160828.jpg" height="400" width="300" /></a></div>
<br />
To what extent can we use <a href="http://coccinelle.lip6.fr/">Coccinelle</a>? Can we backport everything with <a href="http://coccinelle.lip6.fr/">Coccinelle</a>? I wanted to explore the limits of what we could do with it. My original goal was to categorize <a href="http://coccinelle.lip6.fr/ce.php">collateral evolutions</a> by type, and try to see which ones <a href="http://coccinelle.lip6.fr/">Coccinelle</a> could not address. I started by reviewing the list of <a href="http://coccinelle.lip6.fr/ce.php">collateral evolution</a> backports we had, looking for the <a href="http://coccinelle.lip6.fr/ce.php">collateral evolution</a> patch series with the largest amount of code, or those I thought were the most complex to address. Turns out that the more complex <a href="http://coccinelle.lip6.fr/ce.php">collateral evolutions</a>, the ones that do have #ifdefs, are exactly what <a href="http://coccinelle.lip6.fr/">Coccinelle</a> can be used to backport. Remember that backports already does most of its backport work through a series of header files and exported symbols that implement backport functionality not available on older kernels. The series of legacy patches simply address the corner cases, things that we cannot backport through helpers or exported symbols, and these typically require #ifdef'ing code. There are really only a few types of changes left then. Patches for <a href="http://coccinelle.lip6.fr/ce.php">collateral evolutions</a> are split by the subsystem they affect, and each specific <a href="http://coccinelle.lip6.fr/ce.php">collateral evolution</a> is then labeled to describe the change backported. For networking we currently have 76 collateral evolutions. Of those only about 10 have extensive series of changes; the rest are really small. Using <a href="http://coccinelle.lip6.fr/">Coccinelle</a> for small changes doesn't make sense -- especially if all we're doing is carrying the patch over as the kernel evolves and simply updating its hunks. <b>It's the long patches, the complex patches, the ones that are error prone, that are a bitch to backport, and that are applicable to a lot of drivers, which we want to address with <a href="http://coccinelle.lip6.fr/">Coccinelle</a></b>. This lets us simplify maintenance with a simple set of grammar rules and removes the nightmare of complex patch management. I decided to test the limits of <a href="http://coccinelle.lip6.fr/">Coccinelle</a> by working on the craziest patch I could find, one that I was sure could not be expressed in SmPL. Obviously I was wrong. Here's what this looks like.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi54vLSg16-yNDVNpRsLzv2B5uA_WDSbzoFiP8pEPrfZ4Tyw68NfZGAn3q9j61omqn258ehKLJQQcLeTtYDscxmB-ws9EYQYIqQJfFQC5QHJCaeRslaGPJEF7L2aSk3KJ26SuLXUQ/s1600/IMG_20131019_194922.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi54vLSg16-yNDVNpRsLzv2B5uA_WDSbzoFiP8pEPrfZ4Tyw68NfZGAn3q9j61omqn258ehKLJQQcLeTtYDscxmB-ws9EYQYIqQJfFQC5QHJCaeRslaGPJEF7L2aSk3KJ26SuLXUQ/s1600/IMG_20131019_194922.jpg" height="400" width="300" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEimHIM9urTXxdOjc_kg8zS5TtVYd5TPlA0TVW6cECKldMPeaTJlNOBvMvG9xJ1VuIPh0iEPMWzpKyD6fN9t5IwFxnfE3rQqI5NNaFh6hjijXAHns9rAYP7RY_vKk3fH0ug-6NyTbw/s1600/IMG_20131215_022139.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"></a></div>
<br />
Threaded IRQ support is an option that lets drivers run their IRQ handlers in a thread, which lets them sleep. In order to backport threaded IRQ support, Michael Buesch decided in 2009 to build our own struct compat_threaded_irq that older kernels can use to queue_work() onto, as the kernel thread will be running in process context; all the magic is dealt with behind the scenes through a header file that older kernels use, go check out the <a href="https://git.kernel.org/cgit/linux/kernel/git/backports/backports.git/tree/backport/backport-include/linux/interrupt.h">interrupt.h</a> file. For now it's required that each driver that wants to use this helper extend one of its own private data structures with a struct compat_threaded_irq, to be used later on older kernels: when a driver uses <span style="font-family: "Courier New", Courier, monospace; font-size: x-small;">request_threaded_irq()</span>, on older kernels we call <span style="font-family: "Courier New", Courier, monospace; font-size: x-small;">compat_request_threaded_irq()</span> instead. Other IRQ calls on older kernels must then use the respective backported IRQ version, so where synchronize_irq(dev->dev->irq) is called on newer kernels, older kernels must call compat_synchronize_threaded_irq(&dev->irq_compat). Likewise when free_irq() is called, two helpers are now needed, compat_free_threaded_irq() and compat_destroy_threaded_irq(). All this is addressed in the <a href="https://git.kernel.org/cgit/linux/kernel/git/backports/backports.git/tree/patches/collateral-evolutions/network/09-threaded-irq?h=linux-3.13.y">09-thread-irq</a> series of legacy patches. I'm providing links to the code in place on the linux-3.13.y branch of backports as this code will soon disappear from the master branch.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiE6bufabjPQ2lF_vSQC2T6PpgqcM2VXwLlcJduSSHB-VpZGhKf-8nV093aTN4164zXjkOZagO8isxGit8b0xQFCCGJsHOx2ChK8TVJldSZofxhVo3B0tK2x1TEuSMIpQQcAUmfOA/s1600/IMG_20131111_155549-ERASER.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiE6bufabjPQ2lF_vSQC2T6PpgqcM2VXwLlcJduSSHB-VpZGhKf-8nV093aTN4164zXjkOZagO8isxGit8b0xQFCCGJsHOx2ChK8TVJldSZofxhVo3B0tK2x1TEuSMIpQQcAUmfOA/s1600/IMG_20131111_155549-ERASER.jpg" height="300" width="400" /></a></div>
<br />
To replace this series we end up with 4 SmPL rules. Each rule has a name, pegged in between @ symbols, so @ rule_name @ declares a rule named rule_name. For complex examples we may want to tell <a href="http://coccinelle.lip6.fr/">Coccinelle</a> to only proceed if a series of rules match on the expressions or types declared. Let's review each rule separately.<br />
<br />
Let's take a look at one change to a driver through the legacy patch approach. We'll first review the extension which I figured would be hard to generalize through grammar.<br />
<br />
<pre><span style="background-color: black; font-size: small;"><span style="font-family: "Courier New", Courier, monospace;"><span style="color: lime;">--- a/drivers/net/wireless/b43/b43.h
+++ b/drivers/net/wireless/b43/b43.h</span>
<span style="color: white;"><span style="color: yellow;">@@ -805,6 +805,9 @@</span> <span style="color: blue;">enum {</span>
/* Data structure for one wireless device (802.11 core) */
struct b43_wldev {
<span style="color: cyan;">+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,31)
+ struct compat_threaded_irq irq_compat;
+#endif</span>
struct b43_bus_dev *dev;
struct b43_wl *wl;
/* a completion event structure needed if this call is asynchronous */ </span></span></span>
</pre>
This change should be pretty straightforward, but consider figuring out how to tell <a href="http://coccinelle.lip6.fr/">Coccinelle</a> that what we need to do is modify this specific data structure. How can we give <a href="http://coccinelle.lip6.fr/">Coccinelle</a> a hint that this is the exact data structure? Remember that we don't want to make a grammar rule only for this driver, we want to generalize this change; this is the purpose of breaking things out into <a href="http://coccinelle.lip6.fr/ce.php">collateral evolutions</a>: they are not specific to a driver, rather a change was made to drivers as a collateral of an evolution in the Linux kernel. To get a better idea of what we should tell <a href="http://coccinelle.lip6.fr/">Coccinelle</a> to change, we must look at where this data structure is used in the series of patches.<span style="font-family: "Courier New", Courier, monospace; font-size: x-small;"> </span><br />
<pre><span style="color: white;"><span style="background-color: black;"><span style="font-family: "Courier New", Courier, monospace; font-size: x-small;"><span style="color: lime;">--- a/drivers/net/wireless/b43/main.c
+++ b/drivers/net/wireless/b43/main.c</span>
<span style="color: yellow;">@@ -4290,9 +4299,17 @@</span> <span style="color: blue;">static int b43_wireless_core_start(struc</span>
goto out;
}
} else {
<span style="color: cyan;">+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,31)</span>
err = request_threaded_irq(dev->dev->irq,</span>
<span style="font-family: "Courier New", Courier, monospace; font-size: x-small;"> b43_interrupt_handler,
b43_interrupt_thread_handler,
IRQF_SHARED, KBUILD_MODNAME, dev);</span> <span style="font-family: "Courier New", Courier, monospace; font-size: x-small;"> </span>
<span style="font-family: "Courier New", Courier, monospace; font-size: x-small;"><span style="color: cyan;">+#else
+ err = compat_request_threaded_irq(&dev->irq_compat,
+ dev->dev->irq,
+ b43_interrupt_handler,
+ b43_interrupt_thread_handler,
+ IRQF_SHARED, KBUILD_MODNAME, dev);
+#endif</span>
if (err) {
b43err(dev->wl, "Cannot request IRQ-%d\n",
dev->dev->irq);</span></span></span>
</pre>
<span style="font-family: inherit;">The struct b43_wldev is used for the dev variable above, its declared in its entry routine as an argument as struct b43_wldev *dev. So we know we can tell <a href="http://coccinelle.lip6.fr/">Coccinelle</a> somehow that the target data structure we want to modify is used on request_threaded_irq(), but we also have to add our own #ifdef there for the older kernel's case as well.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">The trick to getting this change in is using the generic type, and referencing it as a pointer, and later using that<span style="font-size: small;"> to tell <a href="http://coccinelle.lip6.fr/">Coccinelle</a> which data structure you want modified. Pay close attention to the type T and T *private declarations. </span></span><br />
<span style="color: #3d85c6;"><br /></span>
<br />
<pre><span style="color: white;"><span style="background-color: black;"><span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"><span style="color: #3d85c6;">@ threaded_irq @</span>
<span style="color: yellow;">identifier</span> ret;
<span style="color: yellow;">expression</span> irq, irq_handler, irq_thread_handler, flags, name;
<span style="color: yellow;">type</span> T;
T *private;
<span style="color: #3d85c6;">@@</span>
<span style="color: cyan;">+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,31)</span>
ret = request_threaded_irq(irq,
irq_handler,
irq_thread_handler,
flags,
name,
private);
<span style="color: cyan;">+#else
+ret = compat_request_threaded_irq(&private->irq_compat,
+ irq,
+ irq_handler,
+ irq_thread_handler,
+ flags,
+ name,
+ private);
+#endif</span></span></span></span></span></pre>
<br />
By using type T and then T *private we are telling <span style="font-family: inherit;"><a href="http://coccinelle.lip6.fr/">Coccinelle</a> that private is a pointer and that we want its type bound to T. This lets us reference that type later and extend it if we want, which is exactly what we want to do! Let's look at the last piece that does this magic.<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;"> </span></span></span><br />
<br />
<span style="color: white;"><span style="background-color: black;"><span style="font-family: inherit;"><span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;"><span style="color: #3d85c6;">@ modify_private_header depends on threaded_irq @</span><br /><span style="color: yellow;">type</span> threaded_irq.T;<br /><span style="color: #3d85c6;">@@</span><br /><br />T {<br /><span style="color: cyan;">+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,31)<br />+ struct compat_threaded_irq irq_compat;<br />+#endif</span><br /><span style="color: red;">...</span><br />};</span></span></span></span></span><br />
<br />
<span style="font-family: inherit;">The key is to notice the usage of threaded_irq on the type on this rule, that is actually the name of the first rule, the name is put in between the two @ symbols. Delcaring it as type threaded_irq.T means that we want to use whatever Coccinelle picked up on the threaded_irq rule for T. This is actually the last rule, and the first one I showed is the first in the series. I decided that if we're going to extend driver data structure it'd be best to do that in the beginning as the end of a data structure at times can be used for <a href="http://en.wikipedia.org/wiki/Dynamic_array">dynamic arrays</a> of variable size. The full thing looks as follow:<span style="font-family: "Courier New", Courier, monospace;"><span style="font-size: xx-small;"> </span></span></span><br />
<br />
<pre><span style="color: white;"><span style="background-color: black;"><span style="font-family: inherit;"><span style="font-family: "Courier New", Courier, monospace;"><span style="font-size: xx-small;">/*
Backports threaded IRQ support
The 2.6.31 kernel introduced threaded IRQ support, in order to
backport threaded IRSs on older kernels we built our own struct
compat_threaded_irq to queue_work() onto it as the kernel thread
will be running the thread in process context as well.
For now each driver's private data structure is modified to add
the their own struct compat_threaded_irq, and that is used by
the backports module to queue_work() onto it. We can likely avoid
having to backport this feature by requiring to modify the private
driver's data structure by relying on an internal worker thread
within the backports module, this should be revised later.
*/
<span style="color: #3d85c6;"><span style="background-color: black;">@ threaded_irq @</span></span>
<span style="color: yellow;">identifier</span> ret;
<span style="color: yellow;">expression</span> irq, irq_handler, irq_thread_handler, flags, name;
<span style="color: yellow;">type</span> T;
T *private;
<span style="color: #3d85c6;">@@</span>
<span style="color: cyan;">+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,31)</span>
ret = request_threaded_irq(irq,
irq_handler,
irq_thread_handler,
flags,
name,
private);
<span style="color: cyan;">+#else
+ret = compat_request_threaded_irq(&private->irq_compat,
+ irq,
+ irq_handler,
+ irq_thread_handler,
+ flags,
+ name,
+ private);
+#endif</span>
<span style="color: #3d85c6;">@ sync_irq depends on threaded_irq @</span>
<span style="color: lime;">expression</span> irq;
<span style="color: lime;">type</span> threaded_irq.T;
T *threaded_irq.private;
<span style="color: #3d85c6;">@@</span>
<span style="color: cyan;">+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,31)</span>
synchronize_irq(irq);
<span style="color: cyan;">+#else
+compat_synchronize_threaded_irq(&private->irq_compat);
+#endif</span>
<span style="color: #3d85c6;">@ free depends on threaded_irq @</span>
<span style="color: lime;">expression</span> irq, dev;
<span style="color: lime;">type</span> threaded_irq.T;
T *threaded_irq.private;
<span style="color: #3d85c6;">@@</span>
<span style="color: cyan;">+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,31)</span>
free_irq(irq, dev);
<span style="color: cyan;">+#else
+compat_free_threaded_irq(&private->irq_compat);
+compat_destroy_threaded_irq(&dev->irq_compat);
+#endif</span><span style="color: #3d85c6;">
@ modify_private_header depends on threaded_irq @</span>
<span style="color: lime;">type</span> threaded_irq.T;
<span style="color: #3d85c6;">@@</span>
T {
<span style="color: cyan;">+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,31)
+ struct compat_threaded_irq irq_compat;
+#endif</span>
...
};</span></span></span></span></span></pre>
<br />
The beautiful thing about this is that the <a href="https://git.kernel.org/cgit/linux/kernel/git/backports/backports.git/tree/patches/collateral-evolutions/network/09-threaded-irq?h=linux-3.13.y">09-thread-irq</a> collateral evolution backport only addressed 3 drivers, and using the above SmPL patch in backports means we can transpose this backport onto all of the drivers that we carry: 13 of them use request_threaded_irq(), so 10 new drivers get this collateral evolution backported, <b>automatically</b>. This is simply <b>one</b> of the architectural gains of backporting collateral evolutions with <a href="http://coccinelle.lip6.fr/">Coccinelle</a>. There are a few other interesting things worth mentioning that converting the <a href="https://git.kernel.org/cgit/linux/kernel/git/backports/backports.git/tree/patches/collateral-evolutions/network/09-threaded-irq?h=linux-3.13.y">09-thread-irq</a> legacy patches to SmPL revealed:<br />
<ol>
<li>The backport was inconsistently programmed. This was revealed when <a href="http://coccinelle.lip6.fr/">Coccinelle</a> ended up adding the <span style="font-family: inherit;"><span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">struct compat_threaded_irq</span></span></span> to the struct iwl_trans data structure rather than struct iwl_trans_pcie! The reason this worked, however, is that it didn't matter which data structure we used in the driver, so long as it was consistent. <a href="http://coccinelle.lip6.fr/">Coccinelle</a> deserves a brownie point here; security programmers should be excited about the prospects of this kind of precision (also check out the <a href="http://blog.parahard.com/2012/12/httpcoccinelleryorg.html">Coccinelle library</a> maintained by Peter).</li>
<li>The <a href="https://git.kernel.org/cgit/linux/kernel/git/backports/backports.git/tree/patches/collateral-evolutions/network/09-threaded-irq?h=linux-3.13.y">09-thread-irq</a> backport was mildly sloppy -- it backported two <a href="http://coccinelle.lip6.fr/ce.php">collateral evolutions</a>, not one! We tuck this away under a new series, in case other drivers wish to backport this. It's important to do this to document and track why things are done. This new series backports commit b25c340c1, added by Thomas Gleixner in kernel v2.6.32, which added support for IRQF_ONESHOT. This lets drivers that use threaded IRQ support request that the IRQ line stay masked after the hard interrupt handler completes, until the threaded handler has run; this matters when acknowledging the interrupt requires device access that cannot happen in hard IRQ context, which for buses such as i2c and spi is at times not possible. The TI driver uses this when a platform quirk with WL12XX_PLATFORM_QUIRK_EDGE_IRQ is detected. In retrospect this quirk does not seem backportable unless IRQF_ONESHOT is really not a requirement, but merely desired. If WL12XX_PLATFORM_QUIRK_EDGE_IRQ is indeed a requirement for IRQF_ONESHOT then we should not complete probe. It's unclear if this is universal or not.</li>
<li>The amount of time to generate the backports target code is reduced by embracing this new SmPL patch, even though it automatically backported the collateral evolution to 10 more drivers. The run time used to be 1 minute and 33 seconds, so somehow 27 seconds were magically shaved off. The new run time after this patch:</li>
</ol>
<blockquote class="tr_bq">
<span style="font-family: "Courier New",Courier,monospace;">real 1m6.023s</span><br />
<span style="font-family: "Courier New",Courier,monospace;">user 10m0.276s</span><br />
<span style="font-family: "Courier New",Courier,monospace;">sys 0m26.196s</span></blockquote>
Now how wicked cool would it be if you didn't have to write SmPL grammar patches in the first place, and instead got <a href="http://coccinelle.lip6.fr/">Coccinelle</a> to <b>infer</b> the SmPL grammar patch for you from a series of legacy patches? Theoretically this is possible, and I have hinted in the past that this is what drove me to consider <a href="http://coccinelle.lip6.fr/">Coccinelle</a> more seriously, as I found it hard to believe we could get a lot of developers to pick up a grammar language -- which at this point seems rather important to embrace anyway, much like learning C. Another interesting prospect is getting <a href="http://coccinelle.lip6.fr/">Coccinelle</a> to grok an -R option similar to patch -R, which would tell <a href="http://coccinelle.lip6.fr/">Coccinelle</a> to apply the inverse of a patch. This way, if we could get some kernel developers to write collateral evolutions in SmPL, we wouldn't even have to infer the SmPL patch; we could instead extract it from the git commit log, which could serve as the record of the SmPL grammar rule. This is the ideal situation we should strive for to perform automatic Linux kernel backporting. During my trip to Paris I found out that these two sets of objectives require a bit more research and development, but there is hope that they are achievable. Computer science students: if the above sounds interesting to you, reach out to Julia; you might be able to help advance Linux by helping with research and development in this area.<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRknU2qLBsv3RzesLvk6J4uqRrf8T9M3Msu1OpbeO0sWbBAv0bsmXk-RHAaQUCkgV8EoKFrjL9pQfwfzrWnq6FGslXBMbmjOcW7XKh3x5cINQIioPli0QtWW4dvOFl7EaiWVseJw/s1600/IMG_20131111_142417-MOTION.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRknU2qLBsv3RzesLvk6J4uqRrf8T9M3Msu1OpbeO0sWbBAv0bsmXk-RHAaQUCkgV8EoKFrjL9pQfwfzrWnq6FGslXBMbmjOcW7XKh3x5cINQIioPli0QtWW4dvOFl7EaiWVseJw/s1600/IMG_20131111_142417-MOTION.gif" height="240" width="320" /></a></div>
<br />
Parallelism was the last aspect I tried to address with <a href="http://coccinelle.lip6.fr/">Coccinelle</a>. <a href="http://coccinelle.lip6.fr/">Coccinelle</a> already has support for parallelism: it splits up all the target files it needs to address into a series of buckets, and each process addresses one bucket. Right now, though, if you want j-way parallelism <a href="http://coccinelle.lip6.fr/">Coccinelle</a> requires you to kick off spatch j times yourself, each time passing j as the max number of jobs and bumping an index on each run. In short, it does not spawn the children / threads automatically for you. As with the first performance observation made by Johannes, it should be clear that this incurs a performance hit, requiring you to hit the shell for each spawned process. A change to do this all internally in <a href="http://coccinelle.lip6.fr/">Coccinelle</a> is welcome, but Julia notes making this change poses major problems for the <a href="http://coccinelle.lip6.fr/">Coccinelle</a> internal structure; it can be addressed but will require a student to tackle it, and the hope is that it can be tackled some time in 2014, unless a daring <a href="http://en.wikipedia.org/wiki/OCaml">OCaml</a> guru with spare time wishes to jump in and help. In the meantime we have to work with the solution in place. For shell this looks as follows:
<br />
<pre><span style="font-family: "Courier New",Courier,monospace;"> </span>
<span style="font-size: small;">
<span style="color: white;"><span style="background-color: black;"><span style="font-family: "Courier New",Courier,monospace;"><span style="color: #45818e;">#!/bin/bash
# By Kees Cook
# http://comments.gmane.org/gmane.comp.version-control.coccinelle/680</span>
<span style="color: yellow;">set</span> <span style="color: #c27ba0;">-e</span>
<span style="color: cyan;">MAX</span>=<span style="color: #3d85c6;">$(</span><span style="color: #c27ba0;">getconf _NPROCESSORS_ONLN</span><span style="color: #3d85c6;">)</span>
<span style="color: cyan;">dir</span>=<span style="color: #3d85c6;">$(</span><span style="color: #c27ba0;">mktemp -d</span><span style="background-color: #3d85c6;">)</span>
<span style="color: yellow;">for</span> <b>i</b> <span style="color: yellow;">in</span> $(seq 0 $(( MAX - 1 )) ); do
spatch <span style="color: #c27ba0;">-max</span> <span style="color: #3d85c6;">$MAX</span> <span style="color: #c27ba0;">-index</span> <span style="color: #3d85c6;">$i</span> <span style="color: #c27ba0;">-very_quiet</span> <span style="color: red;">"</span><span style="color: #3d85c6;"><span style="background-color: black;">$@</span></span><span style="color: red;">"</span><b> > </b><span style="color: #3d85c6;">$dir</span>/<span style="color: #3d85c6;">$i.out</span> <b>&</b></span></span></span>
<span style="color: white;"><span style="background-color: black;"><span style="font-family: "Courier New",Courier,monospace;"><b>done</b>
<span style="color: yellow;">wait</span>
cat <span style="color: #3d85c6;">$dir</span>/*.out
<span style="color: yellow;">rm</span> <span style="color: #c27ba0;">-f</span> <span style="color: #3d85c6;">$dir</span>/*.out
<span style="color: yellow;">rmdir</span> <span style="color: #3d85c6;">$dir</span></span></span></span></span></pre>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-b8cVdr7qM5s/Uz3CvZbJtII/AAAAAAABlIg/LKjBZyIGWCQ/s1600/before-threaded-cocci.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-b8cVdr7qM5s/Uz3CvZbJtII/AAAAAAABlIg/LKjBZyIGWCQ/s1600/before-threaded-cocci.png" height="52" width="400" /> </a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjXdGKoHw95bhx2oPD_dKYpJ56J7sbHCl3KItGTaLcwU_h2COQKdFrj7jC3PR66N_zFAvhMeJUuRqH0j1PuMPckGZlRo1BAPCB2oXpT7VHtjd6TkLwJG-CYfBZNBz5GrGXtpmfGWQ/s1600/cocci-jobless-processes.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjXdGKoHw95bhx2oPD_dKYpJ56J7sbHCl3KItGTaLcwU_h2COQKdFrj7jC3PR66N_zFAvhMeJUuRqH0j1PuMPckGZlRo1BAPCB2oXpT7VHtjd6TkLwJG-CYfBZNBz5GrGXtpmfGWQ/s1600/cocci-jobless-processes.png" height="57" width="400" /> </a> </div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgYMiIgWj4B4hRYmzvpcmknbK9qvBZPLjGpLXMMkXrDo7GYcFRrOypQznwZUpYkjBY01vvQCz2f9cYwCOrUTCW0ePnSOFs9ZM5paogQNm1jTfiOYIG_QLvvBptXPWMu800V1WNXNQ/s1600/after-threaded-cocci.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgYMiIgWj4B4hRYmzvpcmknbK9qvBZPLjGpLXMMkXrDo7GYcFRrOypQznwZUpYkjBY01vvQCz2f9cYwCOrUTCW0ePnSOFs9ZM5paogQNm1jTfiOYIG_QLvvBptXPWMu800V1WNXNQ/s1600/after-threaded-cocci.png" height="55" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
For now I've created a Linux backports Python library that takes care of this for us, though it still incurs the overhead of creating j processes if it's going to use j threads. (<b>Update</b>: you can now download <a href="http://drvbp1.linux-foundation.org/~mcgrof/coccinelle/pycocci">pycocci</a>, a standalone Python wrapper with sensible defaults for Coccinelle and multithreading support as implemented in backports; this has been submitted for inclusion into upstream Coccinelle, and a usage sketch follows below.) The overhead isn't as significant as with grep, though, as grep was used for every file during inspection; the current overhead should therefore be <b>extremely minimal</b>. In practice I've observed the best performance with Linux as the target source when using 3 times the number of CPUs available on the <a href="http://www.do-not-panic.com/2013/03/machine-slavery-compat-drivers-build-box.html">monster build box</a> donated by SUSE, HP, and the Linux Foundation. The explanation for this is that <a href="http://coccinelle.lip6.fr/">Coccinelle</a> threads that have nothing to do bail out, leaving the CPU idle. When using <a href="http://coccinelle.lip6.fr/">Coccinelle</a> on the Linux kernel this happens quite a bit, as an SmPL patch only has rules that should apply to certain files, particularly if you are not using indexing. On this system, embracing parallelism in <a href="http://coccinelle.lip6.fr/">Coccinelle</a> as of backports-20140311 <b>yields a gain of 94.38% in run time at code generation time</b>! It meant reducing code generation time from 19 minutes 34 seconds down to 1 minute and 6 seconds, after the new threaded IRQ SmPL patch! For those further interested in performance it is worth mentioning that all of the statistics provided, as well as the statistics reported in the backports commit log, are from the <a href="http://www.do-not-panic.com/2013/03/machine-slavery-compat-drivers-build-box.html">monster build box</a> that we use, and that we keep all of our code and git trees in RAM. I suspect we can do better, maybe by using RCU. I've socialized a few other ideas as well, but Julia recently expressed interest in exploring usage of <a href="http://www.do-not-panic.com/2012/02/gnu-things-are-gnu-simplicity-of-gnu.html">GNU make's jobserver</a> which I have written about before. Let's see!<br />
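<br />
Here is what using pycocci looks like, as a sketch: I believe it just takes the SmPL patch and a target directory and picks a sensible thread count for you, but check its --help for the actual interface; paths here are examples.<br />
<pre>
# Hypothetical paths; pycocci wraps spatch with multithreading defaults.
pycocci patches/collateral-evolutions/network/09-threaded-irq.cocci \
    ~/build/backports-next
</pre>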
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwEW7hDuclc9PsE-IkbJ1UYz6jm6ZWEQvPvaX_n3P3YmuTu-FSsKBWwoFjBnZ5BcNigeZvFdoCBtDxr82Uw6L8Tdea5ytXNo1JBuzE61OIqOus86IwNsHAX1q847T9OrWkTgs5Bg/s1600/IMG_20131112_010326.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwEW7hDuclc9PsE-IkbJ1UYz6jm6ZWEQvPvaX_n3P3YmuTu-FSsKBWwoFjBnZ5BcNigeZvFdoCBtDxr82Uw6L8Tdea5ytXNo1JBuzE61OIqOus86IwNsHAX1q847T9OrWkTgs5Bg/s1600/IMG_20131112_010326.jpg" height="300" width="400" /></a></div>
<br />
Can using <a href="http://coccinelle.lip6.fr/">Coccinelle</a> scale? The answer is yes, especially with parallelism in place. And let's remember that the current performance improvement is without software indexing. Recall that we've now determined that you won't be using <a href="http://coccinelle.lip6.fr/">Coccinelle</a> for every single backport; you only want to use <a href="http://coccinelle.lip6.fr/">Coccinelle</a> to backport the <a href="http://coccinelle.lip6.fr/ce.php">collateral evolutions</a> with long series of patches. My focus in Paris was to use my time efficiently: review these, and only bug Julia about the series which I deemed likely impossible for <a href="http://coccinelle.lip6.fr/">Coccinelle</a> to address. The only patches <a href="http://coccinelle.lip6.fr/">Coccinelle</a> can't address are the obvious ones, where there is no form that can be expressed. It's also not worth using <a href="http://coccinelle.lip6.fr/">Coccinelle</a> for tiny changes.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9a62IXshg3-ntrkQUJ1OwwRaeMhXABCQhSwpZdYG4rn97mss7VopxzFY8Wm5uT-hZL5TU990gGL6nIB3r9quuNy_5zDDaZc3qd7bnY6898luZ2QL60aL6XRqdVQkXh84DH4OVhw/s1600/IMG_20131111_144914.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9a62IXshg3-ntrkQUJ1OwwRaeMhXABCQhSwpZdYG4rn97mss7VopxzFY8Wm5uT-hZL5TU990gGL6nIB3r9quuNy_5zDDaZc3qd7bnY6898luZ2QL60aL6XRqdVQkXh84DH4OVhw/s1600/IMG_20131111_144914.jpg" height="400" width="300" /></a></div>
<br />
Do you want to test a <a href="http://coccinelle.lip6.fr/">Coccinelle</a> patch? You can either use spatch directly with the above bash script for parallelism (note I use 3 times the number of CPUs as the thread count; the script above just uses the number of CPUs). If using backports you can use gentree.py with --test-cocci. Johannes added git support to backports a while ago, whereby if you specify --gitdebug a git tree will be generated for the generated code: a commit is made after the initial code import, then one commit per patch, and one commit per SmPL patch. --test-cocci takes advantage of this feature; it skips all legacy patches and only applies the <a href="http://coccinelle.lip6.fr/">Coccinelle</a> SmPL patch specified. You can then just change into the directory where the code was generated and issue '<i>git show</i>', as sketched below.<br />
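<br />
A sketch of what that looks like, modeled on the profiling invocation shown later in this post; the paths are examples, and --test-cocci may already imply --gitdebug, so I pass it explicitly just to be safe.<br />
<pre>
# Generate the tree applying only one SmPL patch, with a git history:
./gentree.py --clean --gitdebug \
    --test-cocci patches/collateral-evolutions/network/09-threaded-irq.cocci \
    ~/linux-next ~/build/backports-test
cd ~/build/backports-test
git show    # review exactly what the SmPL patch changed
</pre>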
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7upVXUXmDHSiJ9wImH-T_Ir2_Ebke37H_Tk6WD31NSpD9vB9uQz3c7egA-PoATyYA-877H1YoUpop096vSiLeJOwc8YpXpZMaF-Cp4pBCHJn6z26HV6vnJv-9t8CzTMUxtbn00Q/s1600/IMG_20131111_150441.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7upVXUXmDHSiJ9wImH-T_Ir2_Ebke37H_Tk6WD31NSpD9vB9uQz3c7egA-PoATyYA-877H1YoUpop096vSiLeJOwc8YpXpZMaF-Cp4pBCHJn6z26HV6vnJv-9t8CzTMUxtbn00Q/s1600/IMG_20131111_150441.jpg" height="240" width="320" /></a></div>
<br />
What about profiling <a href="http://coccinelle.lip6.fr/">Coccinelle</a> within backports? Sure -- just use --profile-cocci for gentree.py and specify the <a href="http://coccinelle.lip6.fr/">Coccinelle</a> patch you want to test and profile. <a href="http://coccinelle.lip6.fr/">Coccinelle</a> will generate a profile report of the time spent in each routine within <a href="http://coccinelle.lip6.fr/">Coccinelle</a>, for each process that ran. Remember that each process will only work on one bucket of files, so at times a process will yield no interesting profile results, which likely explains why I get the best results with 3 * the number of CPUs as the thread count -- some processes likely just die out fast. Here is an example output of a profile run on the 11-dev-pm-ops.cocci collateral evolution. First the SmPL rule; this one should be pretty straightforward.<br />
<br />
<span style="color: white;"><span style="background-color: black;"><span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">// The 2.6.29 kernel has new struct dev_pm_ops [1] which are used<br />// on the pci device to distinguish power management hooks for suspend<br />// to RAM and hibernation. Older kernels don't have these so we need<br />// to resort back to the good ol' suspend/resume. Fortunately the calls<br />// are not so different so it should be possible to resuse the same<br />// calls on compat code with only slight modifications.<br />//<br />// [1] http://lxr.linux.no/#linux+v2.6.29/include/linux/pm.h#L170<br /><br /><span style="color: #3d85c6;">@ module_pci @</span><br /><span style="color: yellow;">declarer</span> name MODULE_DEVICE_TABLE;<br /><span style="color: yellow;">identifier</span> pci_ids;<br /><span style="color: #3d85c6;">@@</span><br /><br />MODULE_DEVICE_TABLE(pci, pci_ids);<br /><span style="color: #3d85c6;"><br />@ simple_dev_pm depends on module_pci @</span><br /><span style="color: yellow;">identifier</span> ops, pci_suspend, pci_resume;<br /><span style="color: yellow;">declarer</span> name SIMPLE_DEV_PM_OPS;<br /><span style="color: yellow;">declarer</span> name compat_pci_suspend;<br /><span style="color: yellow;">declarer</span> name compat_pci_resume;<br /><span style="color: #3d85c6;">@@</span><br /><br /><span style="color: cyan;">+compat_pci_suspend(pci_suspend);<br />+compat_pci_resume(pci_resume);</span><br />SIMPLE_DEV_PM_OPS(ops, pci_suspend, pci_resume);<br /><br /><span style="color: #3d85c6;">@@</span><br /><span style="color: yellow;">identifier</span> backport_driver;<br /><span style="color: yellow;">expression</span> pm_ops;<br /><span style="color: yellow;">fresh identifier</span> backports_pci_suspend = simple_dev_pm.pci_suspend ## "_compat";<br /><span style="color: yellow;">fresh identifier</span> backports_pci_resume = simple_dev_pm.pci_resume ## "_compat";<br /><span style="color: #3d85c6;">@@</span><br /><br />struct pci_driver backport_driver = {<br /><span style="color: cyan;">+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,29))</span><br /> .driver.pm = pm_ops,<br /><span style="color: cyan;">+#elif defined(CONFIG_PM_SLEEP)<br />+ .suspend = backports_pci_suspend,<br />+ .resume = backports_pci_resume,<br />+#endif</span><br />};</span></span></span></span><br />
<br />
Although the profile support currently uses 3 * the number of CPUs as the thread count on <a href="http://coccinelle.lip6.fr/">Coccinelle</a>, here is one example output with only 1 process. This is with an older version of <a href="http://coccinelle.lip6.fr/">Coccinelle</a>.<br />
<br />
<pre><span style="font-size: xx-small;"><span style="font-family: "Courier New", Courier, monospace;">$ ./gentree.py --clean --verbose --profile-cocci \
patches/collateral-evolutions/network/11-dev-pm-ops.cocci \
/home/mcgrof/linux-next/ \
/home/mcgrof/build/backports-20131206
On big iron backports server:
---------------------
profiling result
---------------------
Main total : 275.327 sec 1 count
Main.outfiles computation : 274.761 sec 1 count
full_engine : 272.034 sec 291 count
C parsing : 158.143 sec 346 count
TOTAL : 158.141 sec 346 count
HACK : 69.680 sec 696 count
C parsing.tokens : 54.000 sec 693 count
Parsing: 1st pass : 43.870 sec 22728 count
YACC : 42.921 sec 22615 count
C parsing.fix_cpp : 36.855 sec 349 count
MACRO mgmt prep 2 : 32.787 sec 346 count
TAC.annotate_program : 32.779 sec 318 count
flow : 31.251 sec 21004 count
LEXING : 26.415 sec 346 count
bigloop : 15.298 sec 291 count
process_a_ctl_a_env_a_toplevel : 14.840 sec 46661 count
C parsing.lookahead : 13.254 sec 1965389 count
mysat : 12.777 sec 46661 count
show_xxx : 11.933 sec 49645 count
C parsing.fix_define : 6.957 sec 693 count
Type_c.type_of_s : 6.729 sec 135896 count
module_pci : 6.653 sec 291 count
rule starting on line 28 : 6.630 sec 291 count
fix_flow : 6.498 sec 20038 count
C parsing.lex_ident : 6.388 sec 1695857 count
C consistencycheck : 6.213 sec 346 count
Pattern3.match_re_node : 6.110 sec 969235 count
Common.full_charpos_to_pos_large : 5.684 sec 693 count
C parsing.mk_info_item : 3.556 sec 22728 count
worth_trying : 2.683 sec 1902 count
Parsing: multi pass : 2.574 sec 206 count
simple_dev_pm : 2.011 sec 291 count
TAC.unwrap_unfold_env : 2.007 sec 171026 count
TAC.typedef_fix : 1.948 sec 273295 count
TAC.lookup_env : 1.583 sec 236568 count
TAC.add_binding : 0.896 sec 57620 count
MACRO managment : 0.439 sec 118 count
Main.result analysis : 0.418 sec 1 count
Common.=~ : 0.305 sec 80558 count
C unparsing : 0.168 sec 41 count
MACRO mgmt prep 1 : 0.148 sec 346 count
parse cocci : 0.115 sec 1 count
pre_engine : 0.115 sec 1 count
Common.info_from_charpos : 0.102 sec 54 count
Main.infiles computation : 0.033 sec 1 count
ctl : 0.019 sec 94 count
Transformation3.transform : 0.006 sec 27 count
TAC.lookup_typedef : 0.004 sec 332 count
check_duplicate : 0.003 sec 1 count
Common.group_assoc_bykey_eff : 0.003 sec 1 count
merge_env : 0.003 sec 649 count
post_engine : 0.000 sec 1 count
get_glimpse_constants : 0.000 sec 1 count
Common.full_charpos_to_pos : 0.000 sec 2 count
asttoctl2 : 0.000 sec 1 count
On a Chromebook Pixel:
---------------------
profiling result
---------------------
Main total : 379.349 sec 1 count
Main.outfiles computation : 379.139 sec 1 count
full_engine : 372.905 sec 1902 count
HACK : 96.134 sec 2785 count
C parsing.tokens : 76.053 sec 2769 count
Parsing: 1st pass : 57.708 sec 82287 count
YACC : 56.033 sec 81918 count
C parsing.fix_cpp : 52.167 sec 1400 count
TAC.annotate_program : 46.560 sec 1356 count
MACRO mgmt prep 2 : 43.976 sec 1384 count
flow : 41.027 sec 80563 count
LEXING : 38.111 sec 1384 count
bigloop : 17.631 sec 1329 count
process_a_ctl_a_env_a_toplevel : 17.174 sec 161123 count
C parsing.lookahead : 15.138 sec 6737589 count
mysat : 14.537 sec 161123 count
Type_c.type_of_s : 11.815 sec 675989 count
Common.full_charpos_to_pos_large : 9.507 sec 2769 count
fix_flow : 8.685 sec 77269 count
module_pci : 8.459 sec 1329 count
rule starting on line 28 : 8.400 sec 1329 count
C consistencycheck : 8.137 sec 1384 count
C parsing.lex_ident : 7.810 sec 5665983 count
C parsing.fix_define : 7.343 sec 2769 count
Pattern3.match_re_node : 6.827 sec 3046889 count
C parsing.mk_info_item : 5.787 sec 82287 count
show_xxx : 4.825 sec 174022 count
Parsing: multi pass : 2.802 sec 679 count
TAC.lookup_env : 2.556 sec 858456 count
TAC.typedef_fix : 2.540 sec 976842 count
TAC.unwrap_unfold_env : 2.278 sec 587709 count
TAC.add_binding : 1.157 sec 204618 count
simple_dev_pm : 0.762 sec 1329 count
MACRO managment : 0.610 sec 394 count
Common.=~ : 0.377 sec 253422 count
MACRO mgmt prep 1 : 0.299 sec 1384 count
Main.result analysis : 0.200 sec 1 count
Common.info_from_charpos : 0.135 sec 248 count
C unparsing : 0.132 sec 41 count
pre_engine : 0.050 sec 1 count
parse cocci : 0.050 sec 1 count
C parsing : 0.016 sec 1384 count
check_duplicate : 0.012 sec 1 count
Common.group_assoc_bykey_eff : 0.012 sec 1 count
TAC.lookup_typedef : 0.011 sec 1314 count
Main.infiles computation : 0.010 sec 1 count
ctl : 0.009 sec 94 count
merge_env : 0.005 sec 2725 count
TOTAL : 0.004 sec 1384 count
Transformation3.transform : 0.003 sec 27 count
Common.full_charpos_to_pos : 0.002 sec 2 count
C unparsing.new_tabbing : 0.000 sec 149 count
get_glimpse_constants : 0.000 sec 1 count
asttoctl2 : 0.000 sec 1 count
post_engine : 0.000 sec 1 count</span></span></pre>
<br />
<br />
OK, one last SmPL example; this one was added by Johannes:<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;"> </span></span><br />
<br />
<span style="color: white;"><span style="background-color: black;"><span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;"><span style="color: #3d85c6;">@@</span><br /><span style="color: yellow;">expression</span> dev;<br /><span style="color: yellow;">expression</span> ops;<br /><span style="color: #3d85c6;">@@</span><br /><span style="color: #a64d79;">-dev->netdev_ops = ops;</span><br /><span style="color: cyan;">+netdev_attach_ops(dev, ops);</span></span></span></span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;"> </span></span> <br />
We do all the dirty trickery of backporting the net_device data structure changes for device operation callbacks inside a helper; this is an example of how we minimize code changes. I documented how and why we do this in my last post on <a href="http://www.do-not-panic.com/2012/08/automatically-backporting-linux-kernel.html">automatically backporting the Linux kernel</a>, go read that to learn how we backport this. I stated above that you can use data structures to ask <a href="http://coccinelle.lip6.fr/">Coccinelle</a> to only make changes specific to the data structure you are interested in modifying, thereby reducing the namespace for changes. This SmPL patch can be modified to be data structure specific.<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;"> </span></span><br />
<br />
<span style="background-color: black;"><span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;"><span style="color: #3d85c6;">@@</span><br /><span style="color: white;"><span style="color: yellow;">struct</span> net_device *dev;<br /><span style="color: yellow;">struct</span> net_device_ops ops;<br /><span style="color: #3d85c6;">@@</span><br /><span style="color: #c27ba0;">-dev->netdev_ops = &ops;</span><br /><span style="color: cyan;">+netdev_attach_ops(dev, &ops);</span></span></span></span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;"> </span></span> <br />
To see what I mean, let me give you a simple example code snippet we can use to test this on; I've also put this on github in a <a href="https://github.com/mcgrof/netdev-ops">netdev-ops git tree</a>. After installing Coccinelle, you just want to run:<br />
<ol>
<li>make test1</li>
<li>git checkout -f </li>
<li>make test2</li>
</ol>
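Under the hood those make targets boil down to running spatch against the sample file; a rough equivalent (I'm using a placeholder name for whichever of the two SmPL rules files above you want to try) is:<br />
<br />
<span style="color: white;"><span style="background-color: black;"><span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">spatch --sp-file netdev_ops.cocci --in-place netdev.c</span></span></span></span><br />
<br />
Leave out --in-place and spatch will just print the resulting diff to stdout, which is handy for a dry run.<br />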
The difference in output should be enough to provide clarity for what I'm about to explain. Below is the netdev.c code snippet that we use to test the transformations.<br />
<br />
<span style="color: white;"><span style="background-color: black;"><span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;"><span style="color: #3d85c6;">#include</span> <span style="color: #c27ba0;"><stdlib .h=""></stdlib></span><stdlib .h=""><br /><br /><span style="color: lime;">struct</span> net_device_ops {<br />};</stdlib></span></span></span></span><br />
<span style="color: white;"><span style="background-color: black;"><span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;"><stdlib .h=""><br /><span style="color: lime;">struct</span> net_device {<br /> struct net_device_ops *netdev_ops;<br />};<br /><br /><span style="color: lime;">struct</span> bubble_ops {<br />};<br /><br /><span style="color: lime;">struct</span> bubbles {<br /> struct bubble_ops *netdev_ops;<br />};<br /><br /><span style="color: lime;">static</span> struct net_device_ops my_netdev_ops = {<br />};<br /><br /><span style="color: lime;">static</span> struct bubble_ops my_bubble_ops = {<br />};<br /><br /><span style="color: lime;">static struct</span> parent {<br /> struct net_device *dev;<br /> int b;<br />};<br /><br /><span style="color: lime;">static struct</span> parent_usb {<br /> struct net_device *net;<br /> int b;<br />};<br /><br /><span style="color: lime;">int</span> main(<span style="color: lime;">void</span>)<br />{<br /> <span style="color: lime;">struct</span> parent *p = malloc(<span style="color: red;">sizeof</span>(<span style="color: lime;">struct</span> parent));<br /> <span style="color: lime;">struct</span> parent_usb *p_usb = malloc(<span style="color: red;">sizeof</span>(<span style="color: lime;">struct</span> parent));<br /> <span style="color: lime;">struct</span> net_device *dev = malloc(<span style="color: red;">sizeof</span>(<span style="color: lime;">struct</span> net_device));<br /> <span style="color: lime;">struct</span> bubbles *bubble = malloc(<span style="color: red;">sizeof</span>(<span style="color: lime;">struct</span> bubbles));<br /><br /> dev->netdev_ops = &my_netdev_ops;<br /> bubble->netdev_ops = &my_bubble_ops;<br /><br /> free(dev);<br /> free(bubble);<br /> free(p);<br /> free(p_usb);<br /><br /> p->dev = dev;<br /> p->dev->netdev_ops = &my_netdev_ops;<br /> p_usb->net->netdev_ops = &my_netdev_ops;<br /><br /> <span style="color: yellow;">return</span> <span style="color: #c27ba0;">0</span>;<br />}</stdlib></span></span></span></span><br />
<br />
Using the second version of the SmPL rules file will ensure <b>we do not modify</b> bubble->netdev_ops, and that we only change the dev->netdev_ops lines. The p_usb case was added as an example where the variable is not named dev but the rule still works, since the match is on the type. Notice that for the Linux kernel source this does mean you should use --recursive-includes on Coccinelle, so that the required type information is visible. Security folks should be excited about this though: the code generation and inspection always happens against the latest upstream code. The above change to the SmPL rule actually incurs an added 50 second penalty, but that is because of the --recursive-includes flag. Now, the interesting thing about this specific collateral evolution, and why I've dedicated so much attention to it in previous posts and in this post, is that if we added a static inline upstream in the Linux kernel we could get rid of this SmPL file completely, and the backport would just be done automatically through the backports module. I'm trying to do as much homework as I can to ensure that, before I send a patch upstream, it's well understood and documented exactly why I believe an upstream change should be considered in light of the gains of automatically backporting Linux. Even though we'd end up removing that SmPL patch, using SmPL helps formalize this entire process, and we likely wouldn't end up removing all SmPL patches anyway.<br />
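To make that idea concrete, here is a minimal sketch of what such a static inline could look like upstream. The name comes straight from the SmPL patch above, but the exact signature is my assumption, not a final upstream proposal (in the kernel proper the netdev_ops member is const, hence the const here):<br />
<br />
<span style="color: white;"><span style="background-color: black;"><span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">/* possible upstream helper: if this existed, backports would not need the SmPL rule above */<br /><span style="color: lime;">static inline void</span> netdev_attach_ops(<span style="color: lime;">struct</span> net_device *dev,<br />                                     <span style="color: lime;">const struct</span> net_device_ops *ops)<br />{<br /> dev->netdev_ops = ops;<br />}</span></span></span></span><br />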
<br />
Our SmPL patch count per backports release so far:<br />
<ul>
<li>linux-3.13.y - 3 SmPL patches (all thanks to Johannes!):</li>
<ul>
<li><span style="color: cyan;">25-multicast.cocci</span></li>
<li><span style="color: cyan;">0005-netlink-portid.cocci</span></li>
<li><span style="color: cyan;">0001-netdev_ops.cocci</span></li>
</ul>
<li> linux-3.14.y - 5 SmPL patches:</li>
<ul>
<li><span style="color: #3d85c6;">25-multicast.cocci</span></li>
<li><span style="color: #3d85c6;">0005-netlink-portid.cocci</span></li>
<li><span style="color: #3d85c6;">0001-netdev_ops.cocci</span></li>
<li><span style="color: cyan;">11-dev-pm-ops.cocci</span></li>
<li><span style="color: cyan;">62-usb_driver_lpm.cocci</span></li>
</ul>
<li>master - 6 SmPL patches:</li>
<ul>
<li><span style="color: #3d85c6;">25-multicast.cocci</span></li>
<li><span style="color: #3d85c6;">0005-netlink-portid.cocci</span></li>
<li><span style="color: #3d85c6;">0001-netdev_ops.cocci</span></li>
<li><span style="color: #3d85c6;">11-dev-pm-ops.cocci</span></li>
<li><span style="color: #3d85c6;">62-usb_driver_lpm.cocci</span></li>
<li><span style="color: cyan;">0015-threaded-irq.cocci </span></li>
</ul>
</ul>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1mftoFDN-ZCbpaF0QkJGx9wOlvRPU8QZ4gNkceH8_BVyjwTtJ5F0PjtCzhVn6Orryc78E3X6tIaKKCwf7JpXR64xn62iiUVT9XBzjXFLSTSdKE6dNh-1S1qOhQNb_20wg_-i5hg/s1600/IMG_20131111_150748.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1mftoFDN-ZCbpaF0QkJGx9wOlvRPU8QZ4gNkceH8_BVyjwTtJ5F0PjtCzhVn6Orryc78E3X6tIaKKCwf7JpXR64xn62iiUVT9XBzjXFLSTSdKE6dNh-1S1qOhQNb_20wg_-i5hg/s1600/IMG_20131111_150748.jpg" height="320" width="240" /></a></div>
<br />
The last thing we'll need to address somehow is to ensure that we don't provide support for a ton of kernels. That just doesn't scale. We currently provide backports down to the last 30 kernels, and before we make a release we test compile it against all 30 of them. We need to shave that down, but the reason we haven't done much of this yet is that most of the silicon industry has customers using tons of ancient random kernels, and they never upgrade. It should be clear this is a <b>security concern</b>, and I hope that with education, and perhaps using backports as a carrot, we can get the entire industry to work on and embrace only the kernels listed on kernel.org. <b>Folks -- if we want to scale, we gotta do this</b>.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-a5h_cKS_7TA/Up4GetRULwI/AAAAAAABhDw/2goG6JALV3I/s1600/13+-+1" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-a5h_cKS_7TA/Up4GetRULwI/AAAAAAABhDw/2goG6JALV3I/s1600/13+-+1" height="300" width="400" /></a></div>
<br />
Insofar as my new role at SUSE is concerned, we'll be focusing primarily on kernels greater than 3.0. We will also need to start being pickier about which drivers we backport. We should not backport, compile, test, or carry anything ancient. There is simply no point. You should expect a good cleaning of backports soon to address all this. We should strive to ensure kernel developers never have to do backporting -- and to help with this, maintainers can be considerate of software architecture strategies that make backporting efficient. <i>We're still figuring out what those things are</i>, but you will soon see some example patches posted by me upstream that try to help with this. One example will be to introduce static inlines for data structure assignments. We should be conservative about which changes make sense though. We're still learning the ropes as to what can help, but we're on our way.mcgrofhttp://www.blogger.com/profile/06081818694231731816noreply@blogger.com0tag:blogger.com,1999:blog-29679292.post-48746095185099991482014-03-24T03:39:00.002-07:002014-03-24T03:41:03.989-07:00Free Software, Patents, Surveillance and Cosmos<br />
<iframe src="https://www.flickr.com/photos/masondan/8904114591/player/ff567082bd" height="375" width="500" frameborder="0" allowfullscreen webkitallowfullscreen mozallowfullscreen oallowfullscreen msallowfullscreen></iframe>
<div style="text-align: center;">
<br /></div>
<br />
I've had a few sets of concepts brewing for a while now which I finally had time to jot down this weekend. I split this into 3 pieces, but they're all related. Together they integrate free software, legal innovations, patents, freedoms, surveillance, singularity and even the cosmos.<br />
<ul>
<li><a href="http://www.do-not-panic.com/2014/03/the-dangers-of-free-software.html">The dangers of free software</a></li>
<li><a href="http://www.do-not-panic.com/2014/03/the-free-software-patent-paradox.html">The Free Software patent paradox</a></li>
<li><a href="http://www.do-not-panic.com/2014/03/cosmic-evolution-of-free-software.html">Cosmic evolution of free software</a></li>
</ul>
When I call out a company on having an archaic business model, I hope it's clear now why. Our challenge in the community is to simply not tolerate stupidity; to help advance, support and evolve business models that will help us innovate ethically; but also to ensure we educate folks so as to avoid friction and collateral damage as things evolve.mcgrofhttp://www.blogger.com/profile/06081818694231731816noreply@blogger.com0San Francisco, CA 94108, USA37.7909427 -122.4084993999999837.7783947 -122.42866939999998 37.803490700000005 -122.38832939999999tag:blogger.com,1999:blog-29679292.post-36094653192517623902014-03-24T03:39:00.001-07:002017-05-23T14:44:41.246-07:00Cosmic evolution of free software<div style="text-align: left;">
This is part of a 3 piece entry, the cover of which is: <a href="http://www.do-not-panic.com/2014/03/free-software-patents-surveillance-and.html">free software patents and cosmos.</a></div>
<div style="text-align: center;">
</div>
<div style="text-align: center;">
<iframe allowfullscreen="" frameborder="0" height="271" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" src="https://www.flickr.com/photos/lgnome/5394618070/player/f502468a54" webkitallowfullscreen="" width="500"></iframe>
</div>
<div style="text-align: center;">
<br /></div>
There is a relationship between cosmology, singularity, <a href="http://en.wikipedia.org/wiki/Free_software"><b>free software</b></a> and even <a href="http://en.wikipedia.org/wiki/NSA_warrantless_surveillance_%282001%E2%80%9307%29">surveillance</a>, which this post dares to consider in a <u>very condensed</u> way. I will use it to try to remove any <a href="http://en.wikipedia.org/wiki/Fear,_uncertainty_and_doubt">fear, uncertainty and doubt</a> over the <b>evolution</b> of <a href="http://en.wikipedia.org/wiki/Free_software"><b>free software</b></a> and <a href="http://en.wikipedia.org/wiki/Open_source_software"><b>open source software</b></a>, but also to clarify why business models <b>must become dynamic and adaptable</b>.<br />
<br />
<div style="text-align: center;">
<iframe allowfullscreen="" frameborder="0" height="448" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" src="https://www.flickr.com/photos/aleiex/2123333438/player/6c891b22b4" webkitallowfullscreen="" width="328"></iframe>
</div>
<div style="text-align: center;">
<br /></div>
There's a concept which Carl Sagan popularized in his 1980 series <a href="http://en.wikipedia.org/wiki/Cosmos:_A_Personal_Voyage">Cosmos: A Personal Voyage</a> which I'd like us to recall -- the <a href="https://en.wikipedia.org/wiki/Cosmic_Calendar">Cosmic Calendar</a>. <br />
<br />
<div style="text-align: center;">
<iframe allowfullscreen="" frameborder="0" height="500" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" src="https://www.flickr.com/photos/h-l-n/6759599217/player/c17de4432f" webkitallowfullscreen="" width="500"></iframe>
</div>
<br />
Singularity is a new craze these days; people are not only getting really excited about the prospects, some are also <b>investing a lot of money into this</b>. If you are not familiar with the concepts of Singularity I recommend checking out <a href="http://www.youtube.com/watch?v=zihTWh5i2C4">Ray Kurzweil's talk at Google about the Singularity</a>; Kurzweil has since been hired at Google as Director of Engineering to help bring natural language understanding to Google. I'd recommend following this up with the <a href="http://en.wikipedia.org/wiki/Transcendent_Man">Transcendent Man</a> at least twice. Then watch <a href="http://www.youtube.com/watch?v=rB7VkrUYCAg">IBM's revelations</a> of their new <a href="http://en.wikipedia.org/wiki/Watson_%28computer%29#IBM_Watson_Group">$1 billion investment into expanding their Watson Group</a>. Good 'ol Watson beat humans at Jeopardy in 2011; it ended up using <a href="http://en.wikipedia.org/wiki/Main_Page">Wikipedia</a> to learn tons of human information. Finally watch <a href="http://en.wikipedia.org/wiki/Ben_Goertzel">Benjamin Goertzel</a>'s <a href="https://www.youtube.com/watch?v=i6ctsWLi_G4">interview regarding the Singularity</a>. In short, the concept of the singularity is that at one point in time computers will surpass the intelligence of humans. Ray has popularized the term even more by attaching a firm prediction of when that will happen. He believes that a machine will pass the Turing test by <b>2029</b>, and that around <b>2045</b>, "the pace of change will be so astonishingly quick that we won't be able to keep up, unless we enhance our own intelligence by merging with the intelligent machines we are creating". The foundation of Ray's predictions is the concept that technology keeps evolving exponentially. If this sounds fuzzy, consider <a href="http://en.wikipedia.org/wiki/Moore%27s_law">Moore's law</a> and extend it both backwards and beyond. He mentions in the <a href="http://en.wikipedia.org/wiki/Transcendent_Man">Transcendent Man</a> that he has <b>10</b> folks who work for him specifically on gathering data and building projections of technological growth. He uses this to help build more accurate <b>business models</b>. Ray clarifies:<br />
<blockquote class="tr_bq">
"<i>Business plans set out to only have an outlook for the next 3-4 years are pretty short sighted. You only need to look at the last 3 or 4 years to see that that's not correct.</i>"</blockquote>
<div style="text-align: center;">
<br /></div>
<div style="text-align: center;">
<iframe allowfullscreen="" frameborder="0" height="334" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" src="https://www.flickr.com/photos/jonathanmcintosh/3744953433/player/e3b523e24d" webkitallowfullscreen="" width="500"></iframe>
</div>
<div style="text-align: center;">
<br /></div>
We're at a point in history where some folks are gathering together random materials of their long past relatives, with the expectation that even if we can't bring their long lost relative back today, some new <a href="http://en.wikipedia.org/wiki/Artificial_intelligence">Artificial Intelligence</a> (AI) systems in the future will be able to somehow, these AIs being far more advanced in intelligence than humans. There are a lot of concerns over <a href="http://en.wikipedia.org/wiki/NSA_warrantless_surveillance_%282001%E2%80%9307%29">surveillance</a> lately; what will that look like in the future? In <a href="http://en.wikipedia.org/wiki/Ben_Goertzel">Benjamin Goertzel</a>'s <a href="https://www.youtube.com/watch?v=i6ctsWLi_G4">interview regarding the Singularity</a> he postulates a possible <b>"surveillance"</b> situation under which it's accepted that a certain amount of, if not most of, our private information is surveilled by advanced AIs. The concept and question of a <i>Nanny AI</i> comes up: if we had an advanced AI, could we trust it to surveil us fairly? More importantly -- how would a Nanny AI be able to <b>ethically surveil us</b>?<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://upload.wikimedia.org/wikipedia/commons/9/99/Cosmic_Calendar.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://upload.wikimedia.org/wikipedia/commons/9/99/Cosmic_Calendar.png" height="300" width="400" /></a></div>
<br />
<br />
The last point I need to make to wrap up is that Ray's own appreciation of the exponential doesn't do enough justice to give us a good picture of what we should dream of, consider, and appreciate in the grand scheme of things. <a href="http://en.wikipedia.org/wiki/Cosmic_Calendar">Carl Sagan's cosmic calendar</a> is perhaps the most dramatic, crystal clear illustration of exponential growth. It's not attached to technology or biology; it goes way beyond that to incorporate the entire cosmos.<br />
<br />
<div style="text-align: center;">
<iframe allowfullscreen="" frameborder="0" height="240" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" src="https://www.flickr.com/photos/58782395@N03/5519582796/player/81b0068316" webkitallowfullscreen="" width="500"></iframe>
</div>
<br />
When you take into consideration <a href="http://en.wikipedia.org/wiki/Ray_Kurzweil">Ray Kurzweil</a>'s exponential growth, some folks are starting to believe that Singularity is near. When you take into consideration <a href="http://en.wikipedia.org/wiki/Cosmic_Calendar">Carl Sagan's cosmic calendar</a>, singularity should seem easier to swallow. Abuses of big data might be unavoidable, but in order to help mitigate abuses we should consider introducing ethics into <a href="http://en.wikipedia.org/wiki/Artificial_intelligence">Artificial Intelligence</a> (AI) systems. <a href="http://www.do-not-panic.com/2013/08/evolving-capitalism-with-ethical.html">Ethical attributes</a> <b>should help not only</b> shape our appreciation of new evolutions of <a href="http://en.wikipedia.org/wiki/Free_software"><b>free software</b></a> in light of new freedoms, but can obviously also be used to help appreciate new <b>business models</b>, and perhaps even one day to teach AIs to be ethical as we explore the cosmos.mcgrofhttp://www.blogger.com/profile/06081818694231731816noreply@blogger.com0San Francisco, CA 94108, USA37.7909427 -122.4084993999999837.7783947 -122.42866939999998 37.803490700000005 -122.38832939999999tag:blogger.com,1999:blog-29679292.post-27504041274762966632014-03-24T03:39:00.000-07:002014-12-10T11:15:28.593-08:00The Free Software patent paradox<br />
This is part of a 3 piece entry, the cover of which is: <a href="http://www.do-not-panic.com/2014/03/free-software-patents-surveillance-and.html">free software patents and cosmos</a><br />
<div style="text-align: center;">
<iframe allowfullscreen="" frameborder="0" height="432" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" src="https://www.flickr.com/photos/angelsgate/549231098/player/569e8aefab" webkitallowfullscreen="" width="324"></iframe>
</div>
<div style="text-align: center;">
<br /></div>
<div style="text-align: center;">
<br /></div>
In my last post I reviewed the rapid pace of evolution of <a href="http://en.wikipedia.org/wiki/Free_software"><b>free software</b></a>, <a href="http://en.wikipedia.org/wiki/Open_source_software"><b>open source software</b></a> and legal strategy, and the implications for business models. There is an interesting set of problems which arises in light of all these evolutions, in consideration of personal freedoms, existing business models and patents, which I'd like to review now, and which in the worst case scenario makes <a href="http://en.wikipedia.org/wiki/Free_software"><b>free software</b></a> and <a href="http://en.wikipedia.org/wiki/Open_source_software"><b>open source software</b></a> targets. I'll refer to this as the free software patent paradox. A paradox, as per my lazy search on google, is defined as:<br />
<blockquote class="tr_bq">
paradox: <i>a statement or proposition that, despite sound (or apparently
sound) reasoning from acceptable premises, leads to a conclusion that
seems senseless, logically unacceptable, or self-contradictory.</i></blockquote>
I'll define the free software patent paradox as follows:<br />
<blockquote class="tr_bq">
Due to the explosion, market use and leadership of <a href="http://en.wikipedia.org/wiki/Free_software"><b>free software</b></a> and <a href="http://en.wikipedia.org/wiki/Open_source_software"><b>open source software</b></a> we have created a great set of safe havens for <a href="http://en.wikipedia.org/wiki/Free_software"><b>free software</b></a> and <a href="http://en.wikipedia.org/wiki/Open_source_software"><b>open source software</b></a> developers to work in.</blockquote>
You might think that this paradox would <b><u>only</u></b> make sense to those who understand and appreciate <a href="http://en.wikipedia.org/wiki/Copyleft">copyleft</a> in the industry -- that is, that BSD hackers would not be affected by all this -- but that's not the case. Even if you don't care about or understand the reasons for the advancements of <a href="http://en.wikipedia.org/wiki/Copyleft">copyleft</a> today, you can still easily get burned by the rules put in place at companies that have patents. The issue I'm trying to highlight then does not only affect Linux / GNU hackers, but also BSD and permissively licensed hackers. This might even hold true, although I'm certain to a lesser degree, for companies that provide devices with trusted boot images (addressed by the <a href="http://en.wikipedia.org/wiki/GNU_General_Public_License#Version_3">GPLv3</a>), or web services code (addressed by the <a href="http://en.wikipedia.org/wiki/Affero_General_Public_License">AGPL</a>). It must be less of an issue for these companies, as the licenses that could create a conflict for them, such as the <a href="http://en.wikipedia.org/wiki/GNU_General_Public_License#Version_3">GPLv3</a> and <a href="http://en.wikipedia.org/wiki/Affero_General_Public_License">AGPL</a>, are naturally less common than <a href="http://en.wikipedia.org/wiki/GNU_General_Public_License#Version_3">GPLv3</a>, <a href="http://en.wikipedia.org/wiki/Affero_General_Public_License">AGPL</a>, and <a href="http://en.wikipedia.org/wiki/Apache_License">Apache 2.0 licensed</a> projects put together, all of which address patents. There are tons of important projects licensed under the <a href="http://en.wikipedia.org/wiki/Apache_License">Apache 2.0 license</a>. Don't take my word for the issues I'm saying exist for <a href="http://en.wikipedia.org/wiki/Free_software"><b>free software</b></a> and <a href="http://en.wikipedia.org/wiki/Open_source_software"><b>open source software</b></a> developers -- if you don't know what this is like, go ask someone who works at a company with a large patent portfolio what's required to contribute to any <a href="http://en.wikipedia.org/wiki/Free_software"><b>free software</b></a> and <a href="http://en.wikipedia.org/wiki/Open_source_software"><b>open source software</b></a> project, to get a better sense of what it's like; heck, while at it, ask what the rules are to contribute to <a href="http://en.wikipedia.org/wiki/Main_Page">Wikipedia</a>.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-8kq-aXTc8t8/US2V3DBYQ0I/AAAAAAABPVw/WrLzp8XNUQI/s1600/20130203_114329.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-8kq-aXTc8t8/US2V3DBYQ0I/AAAAAAABPVw/WrLzp8XNUQI/s1600/20130203_114329.jpg" height="240" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
The explanation of this paradox comes from the fact that the rate at which corporations adopt new freedoms will be slower than the rate at which you may wish to embrace those freedoms, while the corporation at the same time protects its own interests. Unfortunately, in the worst case scenario, for some developers it's like being in a glacially moving prison.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4kVQwXwyM0YazVDfOQ45rRAsovo7TZ5jD2kbSo3ZCIR12zqywgSD7jiFoElC1K4ptft7jdcCcym1hxRrTgYsS1s-xDMGQL3np1TNBPs-qX5Wafd-48_ESsf72fzsIiFuOD4afXQ/s1600/patent-paradox.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4kVQwXwyM0YazVDfOQ45rRAsovo7TZ5jD2kbSo3ZCIR12zqywgSD7jiFoElC1K4ptft7jdcCcym1hxRrTgYsS1s-xDMGQL3np1TNBPs-qX5Wafd-48_ESsf72fzsIiFuOD4afXQ/s1600/patent-paradox.png" height="311" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
The motivation for reviewing this paradox comes from considering the unfortunate implications for any <a href="http://en.wikipedia.org/wiki/Free_software"><b>free software</b></a> developer who wishes to work at, or is already working at, any modern company that has put in place rules the engineer must follow to participate in <a href="http://en.wikipedia.org/wiki/Free_software"><b>free software</b></a> and <a href="http://en.wikipedia.org/wiki/Open_source_software"><b>open source software</b></a> projects, either at work or at home. Let us consider the situation of a developer, let's call him Joe Hacker, who started <a href="http://en.wikipedia.org/wiki/Free_software"><b>free software</b></a> projects prior to joining a corporation, let's call it Yoyodyne, Inc. Let's assume Joe Hacker only gets hired to work on <a href="http://en.wikipedia.org/wiki/Free_software"><b>free software</b></a> and <a href="http://en.wikipedia.org/wiki/Open_source_software"><b>open source software</b></a>. There are copyright considerations for:<br />
<ol>
<li>Who owns the copyrights to the software that Joe Hacker wrote prior to joining Yoyodyne, Inc?</li>
<li>Who owns the copyrights to the software that Joe Hacker will write for Yoyodyne Inc?</li>
<li>What software projects can Joe Hacker contribute to while at Yoyodyne Inc? </li>
</ol>
At least the state of California clarifies that what you do on your own time without office equipment is your own, but a lot of corporate legal agreements contain language that could try to abuse this in other jurisdictions, so the above questions need to be considered separately for the cases where Joe Hacker is on the clock and where he is at home. I think it's reasonable to say that, in terms of copyright, one could assume that any company hiring any <a href="http://en.wikipedia.org/wiki/Free_software"><b>free software</b></a> developer would let them contribute to any <a href="http://en.wikipedia.org/wiki/Free_software"><b>free software</b></a> project when at home, and at the very least to <a href="http://en.wikipedia.org/wiki/Main_Page">Wikipedia</a>. You'd be surprised; unfortunately this is not the case, and I'm afraid you might as well consider a different profession other than software engineering if you want to contribute freely to any <a href="http://en.wikipedia.org/wiki/Free_software"><b>free software</b></a> project.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-MY_h_UaNPHs/US2V3Oz7xzI/AAAAAAABPVw/0KnkSeAJWpc/s1600/20130218_102357.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-MY_h_UaNPHs/US2V3Oz7xzI/AAAAAAABPVw/0KnkSeAJWpc/s1600/20130218_102357.jpg" height="240" width="320" /></a></div>
<br />
<br />
The situation is already a bit complex; now throw in patent considerations and things get even worse. I'm not going to get into the details, and will leave the considerations as an exercise for the reader. Remember that money talks and that companies and governments can toy around with our freedoms as they see legally fit; ethics can very easily simply be thrown out the fucking window. I will however provide a clear conclusion and some advice for <a href="http://en.wikipedia.org/wiki/Free_software"><b>free software</b></a> developers. The free software patent paradox <b>does not need to be a paradox</b>; you are a free person by nature! Don't set yourself up: don't go work for a prison, as you may easily get yourself locked up in there. The paradox does however present a <b>serious</b> problem for evolution -- the problem of creating more highly evolved business models that respect and take into serious consideration the evolution of freedoms in a more timely manner. In my next post I will explain why this is necessary.mcgrofhttp://www.blogger.com/profile/06081818694231731816noreply@blogger.com8San Francisco, CA 94108, USA37.7909427 -122.4084993999999837.7783947 -122.42866939999998 37.803490700000005 -122.38832939999999tag:blogger.com,1999:blog-29679292.post-39645149972126818092014-03-24T03:38:00.004-07:002014-12-10T11:15:36.057-08:00The dangers of free software<br />
This is part of a 3 piece entry, the cover of which is: <a href="http://www.do-not-panic.com/2014/03/free-software-patents-surveillance-and.html">free software patents and cosmos</a><br />
<br />
<iframe allowfullscreen="" frameborder="0" height="375" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" src="https://www.flickr.com/photos/activesteve/5041066177/player/ee1f83619a" webkitallowfullscreen="" width="500"></iframe>
<br />
<div style="text-align: center;">
<br /></div>
<br />
There are serious dangers with <a href="http://en.wikipedia.org/wiki/Free_software"><b>free software</b></a> and <a href="http://en.wikipedia.org/wiki/Open_source_software"><b>open source</b></a> software which I'd like to review. Some zealots may tell you that there is nothing to worry about when embracing <a href="http://en.wikipedia.org/wiki/Free_software"><b>free software</b></a> and <a href="http://en.wikipedia.org/wiki/Open_source_software"><b>open source</b></a> software, but I'm here to tell you that this is all wrong, to explain where all this badness comes from, and to give some advice about what you can do about it. I will do this by providing a brief summary of the efforts to transform a company that was not Linux upstream focused, but I would also like to put emphasis on the root of all this evil.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-0mO64Z0ZY9o/US2V3PjV2DI/AAAAAAABPVw/nVelK4hQSJQ/s1600/20130206_091134.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-0mO64Z0ZY9o/US2V3PjV2DI/AAAAAAABPVw/nVelK4hQSJQ/s1600/20130206_091134.jpg" height="240" width="320" /></a></div>
<br />
Once upon a time, not so long ago, I recall being <b>seriously</b> frustrated about progress -- progress on pretty much everything related to helping what was once a small company, and later a large one, <b>evolve</b> to properly understand and participate in the Linux upstream community. Before I go on any further I should note that my role at Atheros was clarified before I got hired and accepted; I firmly believe that providing a loose interpretation of what happened, without getting into specifics, is within the rights of my role, which was defined as below.<br />
<blockquote class="tr_bq">
<i>We are committed to having a driver in upstream kernel. Your role is to help us find that path to make it happen. We have ideas, if they do not work then we shall try other plans. You shall be the driving force to get in the kernel. The entirety of your role is to do this and continue to work with FOSS and Linux wireless. I am a little surprised that you feel uncertainty in that; which makes me feel that we/I have not communicated effectively. We can talk more live if that would help. - </i>March 20, 2008<i><br /></i></blockquote>
That made things crystal clear, so I accepted! I packed my bags and left dirty Jersey behind. With confidence I can say <a href="http://www.do-not-panic.com/2012/03/what-have-you-done-for-me-lately.html">I gave it my very fucking best</a>, and am happy to say that Atheros <b>even became a leader</b>; we even put out <a href="https://www.fsf.org/news/ryf-certification-thinkpenguin-usb-with-atheros-chip">open firmware for 802.11 device drivers</a>. But after <i>some big changes...</i> and even though I also gave it my best after these changes... <a href="http://www.do-not-panic.com/2013/11/i-quit-qualcomm-today-whoohoo.html">I had to throw in the towel</a> -- specially once my own freedoms and others' freedoms came to be questioned. There are some lessons to be learned here, but more important, without getting into the specifics, are the realizations of <b>why</b> the tensions arose, understanding <b>where they came from</b>, and <i>how to better prepare</i> for the future.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://upload.wikimedia.org/wikipedia/en/f/fb/Pointy-Haired_Boss.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://upload.wikimedia.org/wikipedia/en/f/fb/Pointy-Haired_Boss.jpg" /></a></div>
<br />
There is nothing more disturbing to a free software hacker than a <a href="http://en.wikipedia.org/wiki/Pointy-haired_Boss">PHB</a>, a term popularized through <a href="http://en.wikipedia.org/wiki/Dilbert">Dilbert</a> in 1989, and within our <a href="http://en.wikipedia.org/wiki/Atheros">Atheros</a> group's hacker culture specifically by <a href="http://msujith.net/">sujith</a>. The term caught on within the group, to the point that we even used it publicly to predict, <b>for them</b>, when Linus Torvalds would decide to put out the next kernel release, through the <a href="http://phb-crystal-ball.org/">phb-crystal-ball</a>. You see, in any <b>evolving</b> company, as you try to get work done you could eventually hit a brick wall put up by a <a href="http://en.wikipedia.org/wiki/Pointy-haired_Boss">PHB</a>. <a href="http://en.wikipedia.org/wiki/Pointy-haired_Boss">PHB</a>s don't necessarily need to exist though; I'm happy to report that I haven't run into a single <a href="http://en.wikipedia.org/wiki/Pointy-haired_Boss">PHB</a> at SUSE. My new manager is a direct contributor to the Linux kernel and I wouldn't be surprised if he surpasses me on upstream contributions. But the corporate world is <b>plagued</b> with <a href="http://en.wikipedia.org/wiki/Pointy-haired_Boss">PHB</a>s, and this is specially true for silicon valley companies making hardware. It's an epidemic. One of the disturbing aspects of <a href="http://en.wikipedia.org/wiki/Pointy-haired_Boss">PHBs</a> is that <a href="http://en.wikipedia.org/wiki/Pointy-haired_Boss">PHBs</a> need more <a href="http://en.wikipedia.org/wiki/Pointy-haired_Boss">PHB</a>s around, and they speak the same language. They have no fucking clue what they are talking about, so they obviously need others to ramble off on tangents that make no fucking sense whatsoever. You are <b>not</b> a <a href="http://en.wikipedia.org/wiki/Pointy-haired_Boss">PHB</a> when you are comfortable accepting that <i>you just fucking don't know</i>, when you stop making stupid fucking assumptions and getting in the way of engineers' work, or when you at the very least show effort and care in analyzing proposals. I became increasingly concerned over <a href="http://en.wikipedia.org/wiki/Pointy-haired_Boss">PHBs</a> though, not because they were getting in my way, although that was disturbing, but because they actually existed and because of the unnecessary tensions that arose because of them. It was also frightening, as I was working in Silicon Valley with brilliant fuckers, and I still struggled to understand the nature of the <a href="http://en.wikipedia.org/wiki/Pointy-haired_Boss">PHB</a>; you see, a <a href="http://en.wikipedia.org/wiki/Pointy-haired_Boss">PHB</a> can even be a rocket scientist, I kid you not. It didn't make any fucking sense.<br />
<br />
<div style="text-align: center;">
<iframe allowfullscreen="" frameborder="0" height="320" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" src="https://www.flickr.com/photos/mongreloid/2347236377/player/17c8fd44a8" webkitallowfullscreen="" width="500"></iframe>
</div>
<br />
WTF was going on, how could this be? There are two possible sides to the answer to this question. In the end either you are the one who is nuts, or your <a href="http://en.wikipedia.org/wiki/Pointy-haired_Boss">PHB</a> is nuts. But this goes beyond <a href="http://en.wikipedia.org/wiki/Pointy-haired_Boss">PHBs</a>; you see, who do you think ends up in upper middle management at corporations? Upper echelons at corporations breed <a href="http://en.wikipedia.org/wiki/Pointy-haired_Boss">PHBs</a>, and for a lot of folks in the corporate world this could mean a ticket back to your original country and putting down a <a href="http://en.wikipedia.org/wiki/Pointy-haired_Boss">PHB</a> fort there too, maybe even helping your peers and your own country's economy. There's a slew of collateral damage incurred by a company that is not used to doing free software development when it all of a sudden changes to do so, and even more so if it decides to become a <b>leader</b> in free software. The collateral damage makes <b>market capital sense</b> in light of the gains of the <a href="http://en.wikipedia.org/wiki/Collaborative_software_development_model">open collaborative development models</a>, specifically of working upstream on the Linux kernel, and in consideration of the <b>other</b> business collateral damage incurred if competitors embrace upstream development <b>better</b>. <a href="http://en.wikipedia.org/wiki/Pointy-haired_Boss">PHB</a>s are only a small part of the issues you could run into though. Upon a free software corporate culture change everyone needs to become educated <i>somehow</i> about free software, and for those in the <a href="http://en.wikipedia.org/wiki/German_Reich">Nazi German Reich</a> who worked only on proprietary software it meant you had to go change your outfit and either join the allied forces, go to one of those remaining cobweb-filled corners with some room left for proprietary software development, or go rediscover yourself and quit.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://upload.wikimedia.org/wikipedia/commons/6/6f/Editorial_cartoon_depicting_Charles_Darwin_as_an_ape_(1871).jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://upload.wikimedia.org/wikipedia/commons/6/6f/Editorial_cartoon_depicting_Charles_Darwin_as_an_ape_(1871).jpg" height="320" width="237" /></a></div>
<br />
Markets and business models <b>evolve</b>; employment is simply an attribute of how folks fit into the current economic landscape.<br />
<blockquote class="tr_bq">
The nature of the <a href="http://en.wikipedia.org/wiki/Pointy-haired_Boss">PHB</a> is simply an artifact of <b>accelerated evolution</b> of markets and business models, and the <i>inability</i> or <i>lack of true will</i> to embrace the changes that they bring.</blockquote>
This is perhaps hard to understand, and definitely hard to swallow, as it has other implications. When I tell people that a current business model they embrace is archaic, their typical reaction is to cringe in disbelief and oblivious dismissal. As an example, I don't think the web services business model is a strong one, specially if user freedoms are being compromised, but I don't actually expect anyone to believe me, specially as we're in an ooze of comfort for <a href="http://en.wikipedia.org/wiki/Open_source_software"><b>open source</b></a>. You see, the concept of <a href="http://en.wikipedia.org/wiki/Open_source_software"><b>open source</b></a> software is simply an apologetic excuse for adopting only those <a href="http://en.wikipedia.org/wiki/Free_software"><b>free software</b></a> licenses which were <b>compatible</b> with the business models existing at the time these two terms started differentiating. The result is a boom in adoption of <a href="http://en.wikipedia.org/wiki/Free_and_open-source_software">Free and Open Source Software</a>, but only within the realms of the freedoms granted by the GPLv2, and even the GPLv2 is still looked at with careful eyes by a few silly corporations like Apple and Microsoft. <a href="http://en.wikipedia.org/wiki/Copyleft">Copyleft</a> is an innovative legal strategy used by more modern, evolved software copyright licenses, starting with the GPL, which requires the same rights to be transferred to modified versions of the software, recursively, precluding the usage of these types of licenses in proprietary software. Licenses that do not use <a href="http://en.wikipedia.org/wiki/Copyleft">copyleft</a> are considered <a href="http://en.wikipedia.org/wiki/Permissive_free_software_licence">permissive licenses</a> -- you can mix and use them with proprietary software. Markets always <b>evolve</b> though, and in this particular case the software market has <b>evolved</b> best through collaborative development models. Markets <b>must keep evolving</b> though, and software is not the only thing that will evolve: <u>law and legal strategies to protect</u> <a href="http://en.wikipedia.org/wiki/Collaborative_software_development_model">open collaborative development models</a> will evolve as well, and those that are not ready when the next big swing comes will simply be put out of the way by the combination of the fast pace of both the <b>innovation</b> of the <a href="http://en.wikipedia.org/wiki/Collaborative_software_development_model">open collaborative development models</a> and the <b>new legal strategies</b> that protect these ecosystems.<br />
<br />
<div style="text-align: center;">
<img src="https://docs.google.com/drawings/d/1-LnC6065LlOILu0AnkwXnbAvkR_oTUdJSHMtm8a45MU/pub?w=331&h=230" />
</div>
<br />
Let me give you a small example of how the markets have <b>evolved</b> slightly <u>over the last few years</u> since the popularization of <a href="http://en.wikipedia.org/wiki/Open_source_software">open source</a>. With regards to <a href="http://en.wikipedia.org/wiki/Open_source_software">open source</a> software development in current markets, what is the biggest pain in the ass that engineers in companies can run into? It's fucking patents. A <a href="http://en.wikipedia.org/wiki/Pointy-haired_Boss">PHB</a> is to open source software engineers as a <a href="http://en.wikipedia.org/wiki/Patent_troll">patent troll</a> is to an <a href="http://en.wikipedia.org/wiki/Open_source">open source</a> corporate leader. This is exactly why large projects like <a href="http://en.wikipedia.org/wiki/Apache">Apache</a>, <a href="http://en.wikipedia.org/wiki/Apache_Hadoop">Hadoop</a>, <a href="http://en.wikipedia.org/wiki/OpenStack">OpenStack</a> and <a href="http://en.wikipedia.org/wiki/Android_%28operating_system%29">Android</a> are licensed under the <a href="http://en.wikipedia.org/wiki/Apache_License">Apache 2.0 license</a>, which ensures that no contributor can ever become a patent troll for the ecosystem. The <a href="http://en.wikipedia.org/wiki/Apache_License">Apache 2.0 license</a> is therefore an epic <b>legal evolution</b> with regards to freedoms, <i>in the marketplace</i>. It's important to highlight however that the <a href="http://en.wikipedia.org/wiki/Apache_License">Apache 2.0 license</a> is not the only software license to include patent protection provisions; the <a href="http://en.wikipedia.org/wiki/GNU_General_Public_License#Version_3">GPLv3</a> license includes these as well, but due to what I now believe to be <b>unnecessary</b> tensions that arose from the brutal way in which consideration of the <a href="http://en.wikipedia.org/wiki/GNU_General_Public_License#Version_3">GPLv3</a> for the Linux kernel was handled (<a href="http://www.do-not-panic.com/2012/07/gay-boring-gay-google-and-copyleft-next.html">which I also briefly hinted at here</a>), the <a href="http://en.wikipedia.org/wiki/GNU_General_Public_License#Version_3">GPLv3</a> unfortunately didn't catch the next immediate big wave in the Linux market place. <a href="http://en.wikipedia.org/wiki/GNU_General_Public_License#Version_3">GPLv3</a> fans could argue that had the Linux kernel embraced the <a href="http://en.wikipedia.org/wiki/GNU_General_Public_License#Version_3">GPLv3</a> we would not be in the situation we are in now with patents in the Linux ecosystem, and although one could argue against this as well, we have to accept that the patent issues certainly were left untouched, and one has to accept the ramifications of that. The embracing of the <a href="http://en.wikipedia.org/wiki/Apache_License">Apache 2.0 license</a> in major Linux ecosystem projects could be a good aftermath result of that.<br />
<blockquote class="tr_bq">
The <a href="http://en.wikipedia.org/wiki/Apache_License">Apache 2.0 license</a> provides an innovative legal protection strategy to the <a href="http://en.wikipedia.org/wiki/Patent_troll">patent troll</a> problem that <i>apologetic</i> <a href="http://en.wikipedia.org/wiki/Open_source_software"><b>open source</b></a> enthusiast corporations had, that still wished to strive for <a href="http://en.wikipedia.org/wiki/Collaborative_software_development_model">open collaborative development models</a> through <a href="http://en.wikipedia.org/wiki/Permissive_free_software_licence">permissive licenses</a>.</blockquote>
The <a href="http://en.wikipedia.org/wiki/Apache_License">Apache 2.0 license</a> is compatible with the <a href="http://en.wikipedia.org/wiki/GNU_General_Public_License#Version_3">GPLv3</a> license because they both have patent provisions. The difference of course is that the <a href="http://en.wikipedia.org/wiki/Apache_License">Apache 2.0 license</a> is a <a href="http://en.wikipedia.org/wiki/Permissive_free_software_licence">permissive license</a> while the <a href="http://en.wikipedia.org/wiki/GNU_General_Public_License#Version_3">GPLv3</a> is a <a href="http://en.wikipedia.org/wiki/Copyleft">copyleft</a> license. Strategically, although I believe not planned in any way, they both were trying to address the same series of patent problems we had and expected we'd have more of today.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9yC7u5igBsTHCgRKfWRtR5_6JvAFdGkiPRmF_ZxwaxQBTeAZ2KMtV_tK6OX5loGA2xEMR2dWQbfZELErPsRwTkN3EZfyjwpFqeJ0TJPa6gne5OQ-dujUmEkeZKRSdIGHPRCN5yw/s1600/IMG_20120715_063233.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9yC7u5igBsTHCgRKfWRtR5_6JvAFdGkiPRmF_ZxwaxQBTeAZ2KMtV_tK6OX5loGA2xEMR2dWQbfZELErPsRwTkN3EZfyjwpFqeJ0TJPa6gne5OQ-dujUmEkeZKRSdIGHPRCN5yw/s1600/IMG_20120715_063233.jpg" height="320" width="320" /></a></div>
<br />
The dangers of <a href="http://en.wikipedia.org/wiki/Free_software"><b>free software</b></a> and <a href="http://en.wikipedia.org/wiki/Open_source_software"><b>open source</b></a> are that they are simply part of the <b>evolutionary process</b>, which is the <b>greater evil</b> (which obviously isn't evil), and there is always collateral with evolution: PHBs, legal PHBs, patent trolls, supply and demand for good engineers, non-PHBs, savvy attorneys, savvy marketing folks, you name it. The list goes on and on. Without properly accepting and predicting <b>business model evolutions</b>, fans of advanced copyleft licenses such as the <a href="http://en.wikipedia.org/wiki/Affero_General_Public_License">AGPL</a> will today face the same <a href="http://en.wikipedia.org/wiki/Persona_non_grata">persona non grata</a> looks and experience the same tensions as I did with <a href="http://en.wikipedia.org/wiki/Pointy-haired_Boss">PHB</a>s when moving towards a Linux upstream model. The difficulty in all this lies in that <a href="http://en.wikipedia.org/wiki/Free_software"><b>free software</b></a> and <a href="http://en.wikipedia.org/wiki/Open_source_software"><b>open source</b></a> are not popularly taught as part of evolution when teaching or training folks on software engineering principles. This will obviously change, but it has to happen sooner rather than later. Software is also highly evolutionary, and changing a company to move away from proprietary software development to an <a href="http://en.wikipedia.org/wiki/Open_source_software"><b>open source</b></a> model requires a business model consideration, a business model evolution. I have previously explained how I believe <a href="http://www.do-not-panic.com/2013/08/evolving-capitalism-with-ethical.html">capitalism can be evolved by taking into consideration ethical attributes</a>; this is the landscape in which I believe advanced copyleft can be appreciated, and folks should prepare for the <b>evolutions in business models</b> that this can bring.mcgrofhttp://www.blogger.com/profile/06081818694231731816noreply@blogger.com0San Francisco, CA 94108, USA37.7909427 -122.4084993999999837.7783947 -122.42866939999998 37.803490700000005 -122.38832939999999tag:blogger.com,1999:blog-29679292.post-68515251467627330832014-02-20T17:26:00.001-08:002014-03-07T14:13:19.073-08:00Embracing the Developer Certificate of Origin<div style="text-align: center;">
<a href="http://www.flickr.com/photos/vblibrary/8699690415/" title="APPROVED Rubber Stamp by Enokson, on Flickr"><img alt="APPROVED Rubber Stamp" height="89" src="https://farm9.staticflickr.com/8130/8699690415_f275e6fd0c.jpg" width="500" /></a> </div>
<br />
Streamlining and embracing the <a href="http://developercertificate.org/">Developer Certificate of Origin (DCO)</a> in the <a href="https://en.wikipedia.org/wiki/Free_and_open-source_software">Free and Open Source (FOSS)</a> community has huge value, and in order to help any <a href="https://en.wikipedia.org/wiki/Free_and_open-source_software">FOSS</a> project embrace the DCO more easily, regardless of its license, the <a href="http://www.linuxfoundation.org/">Linux Foundation</a> has placed the <a href="http://developercertificate.org/">DCO</a> on a <a href="http://developercertificate.org/">standalone project page</a> which you can use as a reference or take to embrace in your own project. If you are a maintainer of a <a href="https://en.wikipedia.org/wiki/Free_and_open-source_software">FOSS</a> project I highly encourage you to consider evaluating the use of the <a href="http://developercertificate.org/">DCO</a> in your own project. Embracing it can consist of simply referring to it in your documentation on how people should contribute, or simply copying and pasting it into your own project and using it. I provide a simple example through the <a href="https://git.kernel.org/cgit/linux/kernel/git/mcgrof/crda.git/tree/CONTRIBUTING">CRDA project</a>. Through the rest of this post I will review why this document is important and how we ended up making the <a href="http://developercertificate.org/">DCO</a> a standalone project that the rest of the community can benefit from.
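To give a feel for how little this takes, a project's contribution documentation can boil down to something like the following sketch (a hypothetical example, not the CRDA file verbatim):<br />
<br />
<span style="color: white;"><span style="background-color: black;"><span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">All contributions must be certified against the Developer<br />Certificate of Origin: http://developercertificate.org/<br /><br />Certify it by adding a Signed-off-by line to each patch:<br /><br /> Signed-off-by: Random J Developer &lt;random@developer.example.org&gt;</span></span></span></span><br />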
<br />
<br />
Good FOSS software projects receive tons of contributions from developers all over the world. As FOSS evolves, we at times face tricky questions, speculation, and at times silly claims over how we evolve software. For very large projects this can get even more complex. The <a href="https://www.fsf.org/">Free Software Foundation (FSF)</a> thought of this long ago, and their preferred practice was to request that developers provide <a href="https://www.gnu.org/licenses/why-assign.html">copyright assignment to the FSF</a>. Linux ended up creating a document and lightweight process which has also proven to be highly appreciated by the industry for use when evaluating inbound contributions to a <a href="https://en.wikipedia.org/wiki/Free_and_open-source_software">FOSS</a> project. The document is called the <a href="http://developercertificate.org/">Developer Certificate of Origin (DCO)</a>; it was written and evolved to give developers and maintainers explicit guidelines on the requirements to contribute, and to give consumers of Linux some form of legal assurance of the provenance and integrity of contributions. The <a href="http://developercertificate.org/">DCO</a> is now a document <span style="color: #274e13;">cherished and appreciated</span> by many attorneys, and it has slowly been embraced by more large <a href="https://en.wikipedia.org/wiki/Free_and_open-source_software">FOSS</a> projects. If a project has seen issues with getting its developers to jump on board with a <a href="https://en.wikipedia.org/wiki/Contributor_License_Agreement">Contributor License Agreement (CLA)</a> it should consider this more lightweight process.
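For developers the process really is that lightweight: git itself will append the required Signed-off-by trailer for you, based on your configured user.name and user.email (the commit message and identity below are made up):<br />
<br />
<span style="color: white;"><span style="background-color: black;"><span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">git commit -s -m "fix the frobnicator"<br /># the -s flag appends a trailer such as:<br />#<br /># Signed-off-by: Joe Hacker &lt;joe.hacker@example.com&gt;</span></span></span></span><br />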
<br />
<br />
Let's review some of the tough questions a <a href="https://en.wikipedia.org/wiki/Free_and_open-source_software">FOSS</a> project may face. How do we get some form of assurance that the folks contributing were the ones doing the development? Are we getting any written consent that folks contributing are legally entitled to do so? In a distributed development environment, what is a subsystem maintainer assuring me of when they send me all the contributions they've collected? <a href="https://en.wikipedia.org/wiki/Free_and_open-source_software">FOSS</a> contributions create public records with names attached; as laws over privacy evolve, are we sure contributors are aware that they are waiving any privacy concerns over such contributions and that a public record will be made? We may not necessarily have to prove all of the above, but if we could embrace a lightweight procedure in order to provide some assurances, would we be willing to embrace it? How valuable would it be, and what would it look like?<br />
<br />
Linux has had to face some of these questions, mainly because of the claims <a href="https://en.wikipedia.org/wiki/SCO_Group">SCO</a> was making in 2003, now known as the <a href="https://en.wikipedia.org/wiki/SCO-Linux_controversies">SCO-Linux controversies</a>, and on May 23, 2004 Linus Torvalds decided to send out to the Linux community a <a href="http://marc.info/?l=linux-kernel&m=108529494402563">Request For Discussion for considering embracing a Developer Certificate of Origin</a>. The gist of it can be summarized in the following two paragraphs:
<br />
<blockquote>
Some of you may have heard of this crazy company called SCO (aka "Smoking Crack Organization") who seem to have a hard time believing that open source works better than their five engineers do. They've apparently made a couple of outlandish claims about where our source code comes from, including claiming to own code that was clearly written by me over a decade ago.
</blockquote>
<blockquote>
So, to avoid these kinds of issues <span style="color: #274e13;">ten years from now</span>, I'm suggesting that we put in more of a process to explicitly document not only where a patch comes from (which we do actually already document pretty well in the changelogs), but the path it came through.</blockquote>
<br />
The <a href="http://developercertificate.org/">DCO</a> started to be embraced by large projects, examples of that are any project embracing Gerrit, Android, OpenStack, LibreOffice, QT, but they all typically just referred to the Documentation in Linux. The reason the <a href="http://developercertificate.org/">DCO</a> became a standalone project was that referring to the Documentation in Linux is not exactly ideal, and some non GPLv2 projects wanted to embrace it and they expressed copyright concerns over it. On November 2012 I sent a request out to lkml to review if we could <a href="https://lkml.org/lkml/2012/11/20/636">make the DCO a standalone document</a>. That discussion lead to <a href="http://blog.tremily.us/">W. Trevor King</a> to create a github tree with the original contributions to it alone through a <a href="https://github.com/wking/signed-off-by">signed-off-by git tree</a> and a <a href="http://collaborationsummit2013.sched.org/event/e1676be130b4ca8ecb32aa07ef04071e">talk about the DCO at the April 2013 Linux Collaboration Summit</a> in San Francisco (<a href="https://docs.google.com/presentation/d/1J4VTSTuiJ88xcvOrYg7HRvwUoxOW9qvnBqOfoPPuc64/edit">slides here</a>), thanks to <a href="http://www.ebb.org/bkuhn/blog/">Bradley M. Kuhn</a> for help with coordination. After followup advice from volunteered attorneys, in particular <a href="https://twitter.com/richardfontana">Richard Fontana</a>, and a few folks from the <a href="http://www.linuxfoundation.org/programs/advisory-councils/tab">Linux Foundation Technology Advisory Board (TAB)</a> at the <a href="http://www.linuxplumbersconf.org/2013/">2013 Linux Plumbers conference</a>, it was decided that the community would surely stand to gain from this and we'd follow up with a release. The new standalone <a href="http://developercertificate.org/">DCO</a> project page closes these discussions and serves as a single point of reference that any <a href="https://en.wikipedia.org/wiki/Free_and_open-source_software">FOSS</a> project can use now. Thanks to the <a href="http://www.linuxfoundation.org/">Linux Foundation</a> for listening, and <a href="http://www.kroah.com/">Greg KH</a> for the last push.<br />
<br />
If you've embraced the <a href="http://developercertificate.org/">DCO</a> or know of projects that have embraced it, please send me a note at mcgrof@do-not-panic.com and I'll extend the list below. If you can send me a link to the project's documentation that refers to it, that'd be great too.<br />
<br />
Projects which embrace the <a href="http://developercertificate.org/">DCO</a>:<br />
<ul>
<li>Linux</li>
<li><a href="http://subsurface.hohndel.org/documentation/contributing/">Subsurface</a></li>
<li>Android</li>
<li>OpenStack</li>
<li>LibreOffice</li>
<li><a href="https://github.com/autotest/autotest/blob/master/DCO">Autotest</a></li>
<li>QT</li>
<li><a href="https://dev.openwrt.org/wiki/SubmittingPatches">OpenWrt</a></li>
<li><a href="http://elinux.org/Developer_Certificate_Of_Origin">elinux</a></li>
<li><a href="https://git.kernel.org/cgit/linux/kernel/git/mcgrof/crda.git/tree/CONTRIBUTING">CRDA</a></li>
<li><a href="http://hostap.epitest.fi/cgit/hostap/tree/CONTRIBUTIONS#n68">hostapd / wpa_supplicant</a></li>
<li><a href="http://criu.org/How_to_submit_patches#Signing_your_work">CRIU</a></li>
</ul>
mcgrofhttp://www.blogger.com/profile/06081818694231731816noreply@blogger.com0