YaK:: WebLog #535 Topic : 2006-10-02 20.00.55 matt : quoted in an article on Mono (updated)

quoted in an article on Mono (updated)

I was quoted in an article about Mono usage in the enterprise. Here are some additional thoughts and clarifications that weren't printed.

UPDATE: Due to a last-minute editing error, the print edition contains a truncated quote attributed to me that is not true. A retraction and correction by the SDTimes editor(s) should appear in the next print issue of SDTimes. I sincerely apologize to Novell and the mono team, who work hard to do a good job in these areas. It was, of course, not my intention to so utterly misrepresent the current state of a project I have put so much work into promoting and enhancing. That said, I stand by my correctly represented statements in the article itself. Some developers are now making more of an effort to write unit tests for bug fixes in Windows.Forms, which is great. Now back to our regularly scheduled blog post, already in progress...

The article I'm quoted in, printed in this month's Software Development Times, is up on the web. It is about why mono's uptake in the enterprise has seen slow to no growth. Our BugScan 2.0 product was written 100% in C# (a commandline analysis program wrapped in an ASP.NET UI and Web Services), and we shipped it on an appliance that ran the latest mono 1.1.x SVN head at the time on a customized Knoppix 3.x bootable CD distro. Phew.

We found we had to use mono SVN head because, at that time, serious memory leaks, compiler wrong-code issues, and JIT bugs were being fixed nearly every week. These days (late 2006), those issues are a thing of the past. Most of the compiler and JIT issues have automated tests associated with them. I've still found memory corruption issues as recently as 6 months ago by running a mono program under valgrind. After a bit of finagling, Massi (a mono developer) figured out the issue and fixed it. He even added an automated test after some additional harassment from me.

At that time, we only had a couple of major issues with mono. The foremost was a memory leak when uploading large (over 1 megabyte) binaries via our ASP.NET web UI. We were doing this via a simple file input HTML control -- nothing fancy. To avoid issues in mono's ASP.NET, which the mono team told us at the time was not ready for the ASP.NET controls we wanted to use, we did almost everything in HTML and XSLT. Even so, uploading the file and converting it from base64 encoding to binary caused a memory leak in the range of tens to hundreds of megabytes. Every upload would incrementally leak more memory. Since this was the primary use case for our product, it had to be fixed before we could ship.

The process of getting it fixed was a nightmare, politically. We had paid to be a Novell Silver Support partner, but that seemingly didn't give us any leverage in getting support for mono. Miguel swore up and down that it would require a rewrite of the GC system to be compacting. It took several phone calls with Erik Dasque, who was (but no longer is) project manager for mono, to get an engineer on the problem. The engineer, Gonzalo, was amazingly helpful and was able to fix the majority of the memory leak with a 3 line patch. That's right -- what Miguel swore would require a total rewrite took a 3 line patch. (This is a pattern that has repeated itself several times since.) There were a couple of fixes beyond that to incrementally improve things, which involved moving the base64 to binary conversion into unmanaged code to avoid the need to allocate large buffers in managed space. Unit tests were added for that now-unmanaged functionality -- and they found some bugs before users reported them. I was happy, we were able to ship, and everybody won -- including other mono users.

Note that this is a great advantage that mono (and .NET in general) has over Java-based application frameworks: it's an integrated package, so optimizing from the API level down to the compiler and JIT is easy. With something like JBoss or BEA, so many of their libraries are third party (myfaces, hibernate, etc.) -- never mind the actual Java compiler and runtime itself. This is a major advantage of using mono with a support contract versus other Java-based vendors of similar technologies.
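To illustrate the managed-side principle behind that fix (this is not the actual mono patch, which moved the conversion into unmanaged code), here's a sketch of decoding base64 in fixed-size chunks so that no single managed buffer ever has to hold the whole multi-megabyte payload. The class name and chunk size are my own inventions:

```csharp
using System;
using System.IO;

// Decode base64 text in fixed-size chunks, streaming the result out,
// instead of materializing the entire payload in one managed buffer.
static class ChunkedBase64
{
    // Reads base64 text from `reader` and writes decoded bytes to `output`.
    // Assumes the input contains no embedded whitespace or line breaks.
    public static void Decode(TextReader reader, Stream output)
    {
        // Chunk size is a multiple of 4, so each chunk (except possibly
        // a short final one) is independently valid base64.
        char[] buf = new char[4096];
        int read;
        while ((read = reader.ReadBlock(buf, 0, buf.Length)) > 0)
        {
            byte[] decoded = Convert.FromBase64CharArray(buf, 0, read);
            output.Write(decoded, 0, decoded.Length);
        }
    }
}
```

Because base64 maps each group of 4 characters to 3 bytes independently, concatenating the per-chunk decodes reproduces the original binary exactly.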

Beyond that product, I wanted to keep programming in C# in my Linux environment for projects, books, and articles. I'm cool with using vim and the commandline, but I'm addicted to nunit-gui's green bar. So, I worked with the mono Windows Forms developers at Novell to file bugs and write a test plan. As the SDTimes article states, fixed bugs regressed so often that I was encouraging the mono developers to add unit tests for bugs, if nothing else, to make sure they didn't regress. This happened not just with day-to-day SVN head, but also between "official" releases. I even added some unit tests myself, and added nunit.mocks to mono's internal nunit libraries to enable more of this kind of thing. Sebastien, who focuses on System.Drawing and libgdiplus, really jumped on this and did a great job of unit testing. Another developer, Jackson, picked up the torch and ran with it to make sure that focus issues that got fixed didn't regress.

Unfortunately, as you can read in Miguel's response in the SDTimes article, unit testing WinForms just isn't a priority for them. He appears to be convinced that screenshot-comparison technology is required to do this. Upon hearing that, a co-worker said, "That's the most backward bit of 1970s thinking I've heard in recent years. I was unit testing UI widget systems more than 20 years ago!" There are some legitimate roadblocks to implementing some of these regression tests. The most important one, the lack of a mock framework, is one I fixed in mono 1.1.15. The second issue is that the objects whose interactions need to be tested are internal classes in System.Windows.Forms.dll. They aren't public in that library because a) those internal objects live in the System.Windows.Forms namespace itself, and b) exposing them would pollute the API of the library. The first point can be refactored away, but the second is a legitimate concern for MS.NET compatibility.
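For what it's worth, the co-worker's point is easy to demonstrate without screenshots. The sketch below uses invented names and no dependency on mono's actual internals; it shows the style of test that a mock framework like nunit.mocks enables -- record the calls a widget makes on a collaborator and assert on the interaction, not on pixels:

```csharp
using System.Collections;

// Hypothetical collaborator interface: what a widget asks the focus
// machinery to do. All names here are invented for illustration.
public interface IFocusManager
{
    void SetFocus(string controlName);
}

// A tiny hand-rolled mock that records calls made on it, in the
// spirit of what nunit.mocks' DynamicMock generates at runtime.
public class RecordingFocusManager : IFocusManager
{
    public ArrayList Calls = new ArrayList();
    public void SetFocus(string controlName) { Calls.Add(controlName); }
}

// The "code under test": a dialog that should focus its OK button
// exactly once when shown.
public class Dialog
{
    private readonly IFocusManager focus;
    public Dialog(IFocusManager focus) { this.focus = focus; }
    public void Show() { focus.SetFocus("okButton"); }
}
```

A test then constructs the dialog with the recording mock, calls Show(), and asserts that exactly one SetFocus("okButton") call was made -- no display, no pixels, no screenshot comparison required.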

The solution I proposed is to break those mono-specific objects out into a separate library (say, Mono.Windows.Forms) that System.Windows.Forms then depends on. I did this refactoring over a weekend just by compiling a few specific .cs files into a separate assembly and referencing it from the original assembly, but the lead mono WinForms developer was concerned it would destabilize the code. So, out of fear, they are choosing to fly blind with no unit tests in those areas, as well as to waste developer time with unreliable manual point-and-click testing. This position of paralysis and suboptimal use of developer time is not good for any business, especially when consumers of your product or service have this kind of insight into the process.
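The mechanics of that weekend refactoring are mundane. With hypothetical file names standing in for the real internal classes (these are not the actual mono build rules), it amounted to something like:

```shell
# Sketch of the proposed split -- file and assembly names are
# illustrative, not the actual mono makefile targets.

# 1. Compile the mono-specific internals into their own assembly.
mcs -target:library -out:Mono.Windows.Forms.dll \
    InternalDriver.cs InternalHelpers.cs

# 2. Build System.Windows.Forms.dll against it. Test assemblies can
#    now reference Mono.Windows.Forms.dll and exercise those classes
#    directly, without making them public in System.Windows.Forms.
mcs -target:library -out:System.Windows.Forms.dll \
    -r:Mono.Windows.Forms.dll \
    *.cs
```

The public API of System.Windows.Forms.dll stays MS.NET-compatible; only the new, clearly mono-branded assembly exposes the internals to tests.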

This isn't specific to mono. I'm working on a JBoss project right now, and we're running into critical bugs that need to be fixed before we ship -- and we're being told to "try current CVS, someone might have fixed something similar". This is after paying tens of thousands of dollars for support. Now, I might feel more comfortable about this if I could run a suite of tests (including regression tests for bugs I've reported) before I deploy. If I could see that suite pass, and know the coverage is 80%+, then that might be a decent reply to a support request. Since that isn't the case, the reply is totally unacceptable.

I'm not buying into open source because I want to deal with bullshit that puts my business at risk. I'm buying into it because I think it can be better than closed source alternatives, it is cheaper in the long run, and I can hire people in-house to maintain it if need be. Now, if open source application stacks like mono and JBoss don't get their shit together and *prove* they are quantifiably better than closed source alternatives with unit tests and high code coverage numbers, I will be reticent to use them again in a business context until they do. I want it in my support contract: each incident that results in a modification to the source code will have a test added that fails when the cause(s) of the incident are present and passes once the incident is resolved. That is definitely worth $20k per year in support costs.

Unless enterprise customers get a support contract that specifies these things, as I said in the article, I wouldn't recommend that enterprises do large-scale deployments of Mono. It's a chicken-and-egg problem: once enterprise customers start demanding these checks and balances around automated tests, and are willing to pay for them, the economic incentive for these projects and companies will be difficult to argue around. This assumes, of course, that the closed source companies don't start doing this first. There's a great window of opportunity here for open source application stacks to differentiate themselves in a way that matters to enterprise customers, and I sincerely hope they take advantage of it.


(unless otherwise marked) Copyright 2002-2014 YakPeople. All rights reserved.
(last modified 2006-10-11)