Fascinating, especially in the light of the Google Java issues right now- the performance graphs are just the icing on the cake.
Mono has traditionally had a reputation as being a lot slower than .NET- perhaps that isn't the case any more either? It would be great to see some independent third party figures on this.
It's funny that the MS-created C# is one of very few languages that lets you make native apps on iOS, Android and WP.
No one has made this specific point yet, which I've found surprising: it does not matter whether Mono is faster than .NET, it only matters whether Mono is faster than Dalvik.
Missing the point: you can write a blazingly fast Android app in C++ with OpenGL, but it can't do anything useful because it can't use UI components, and will be stuck with sucky custom in-game UI components which, I think it's fair to say, are universally terrible (the Qt port being the one exception to this).
If you can in _any way_ achieve a method of using the existing android ui components and making them faster, this is a massive win.
Actually the trend on both mobile platforms seems to be away from stock components to either heavily re-skinned components or completely bespoke components. Games in particular tend to go their own way on this, probably partly because they need to be x-platform.
Games (and often multimedia creation apps) have been rolling their own UI for a long time.
I think you still miss his point - games have been rolling their own UIs out of necessity for a long time, in large part because on most OSes (mobile or otherwise) native UI widgets simply do not mix with a big real-time rendered DirectX/OpenGL surface.
So games have always been the fastest, most optimized performers on any platform.
This isn't about games though - this is about regular ol' apps, many of which are incredibly slow and could really use the 8x performance improvement that these folks have been able to achieve.
As a mobile dev myself, I don't think mobile platforms are moving away from Obj-C or Java. Sure, we have a lot of custom-built widgets to overcome shortcomings in what Google/Apple provides, but ultimately they're subject to the same performance limitations that plague the stock widgets. Nobody out there is digging into low-level OpenGL optimizations to write a custom navigation bar, for example.
Sure. Game devs don't build UIs in C++ and OpenGL because it's faster. It's because it's the lowest common denominator on the platforms they care about, which is why I was observing that it's funny that C++ is in a very pragmatic sense more portable than Java now.
Most devs use some kind of wrapper library like Cocos2d to take the pain out of raw OpenGL, of course, and they can usually get by with much simpler library of widgets than the OS itself has to provide.
That is useful for this discussion globally, but is irrelevant for my comment: with that in mind the performance of Mono vs. anything doesn't matter at all, much less specifically vs. .NET (which is the contention of the user I was responding to).
Google TV has to support the NDK if they want games for it and I think they do.
IMHO, the biggest problem with Windows Phone 7 (one that wasn't emphasized much in all the comparisons to iOS or Android) is the lack of support for native code. Whether this was done for legitimate reasons (like portability between ARM and x86) is not really important. What's important is that C++ is the language most games are written in, and if you lack support for it, then most games will only get ported if your platform is a clear winner, which is not yet the case for Windows Phone or Google TV. Therefore, from what I know, Windows Phone 8 will have support for native code, because it badly needs it.
Well it doesn't support the Intel Atom ones, but they are going to use ARM soon for all the new ones, and it should work then. If they move fast, it could become quite a nice mini-console platform for set top boxes that you can get for $100, allowing you to play all the native and emulator games from Android on your TV.
In general, HotSpot is a more advanced VM than Mono is. It supports dynamic recompilation at a higher optimization level, which we do not.
This is part of the debate in Google vs Oracle. Google could have used HotSpot had they figured out some agreement, but instead ended up with Dalvik which lacks many of the advanced optimizations of HotSpot.
That said, comparing java -server against out-of-the-box Mono 2.10.8 is not an apples-to-apples comparison; it is by no means "regular Java". That is Java tuned with specific parameters, and it comes at the expense of interactive startup time. Mono's default is a fast JIT and fast startup.
Java -server uses a fixed heap, which means you must preallocate how much memory the application will use during its entire lifetime. This is good for performance because the GC knows it only has to scan, for example, memory between points A and B; two comparisons are all you need during a GC scan. Mono, meanwhile, uses a dynamic heap, which means it can grow its memory usage on demand (no need to fine-tune the maximum heap every time your app crashes due to a low setting) and can return released memory to the OS. This complicates GC, because now we can't just compare against two values; we need to consider every object across a series of differently sized heaps.
With Mono 2.11, you can force your app to use a fixed heap. The only people that really have a use for this are HPC people and a handful of server people. To be honest, most people don't want to configure the max heap of their server apps by crashing, trial, and error.
The second component is that Mono's defaults are aimed at desktop/mobile configurations, not server loads. If you want to run server loads, run Mono with the --llvm flag: you will have very slow startup times, but you will get essentially the same code quality that you get from an optimizing C compiler nowadays.
As for Mono vs Dalvik memory use, in practice people run Mono with Dalvik (Mono for Android), not in standalone mode, so we end up paying for Dalvik's memory footprint. But I agree, it would be nice to find out how much memory it actually uses.
Considering that we get the benefits of a more advanced GC, value types and generics while Java does not, it just seems like we would use less memory for comparable loads.
> java -server compared against out of the box Mono 2.10.8 is not an apple's to apple's comparison, it is by no means "regular Java"
Since at least Java 5 the default on machines with 2 or more processors and 2GB or more memory has been -server. That's what regular out-of-the-box Java is on those machines.
Do many new laptops not have 2 processors and have less than 2GB?
I've been successful in using C++ (with libSDL) to write apps for Android, iOS, QNX (PlayBook) and WebOS (RIP).
What's more bizarre is that Windows Phone 7 does not have a development kit for using C/C++, only C#/XNA. I've heard on HN that this may change for WP8. I hope so.
WP8 and Windows 8 are definitely very C++ friendly. I actually think that C++ may be the language to rule them all as the next generation of web applications goes beyond what Javascript can handle. Well, that and C#, given Miguel's efforts.
I can develop ONE piece of core logic, and have it run on iOS, Android, Windows, Mac, even web (via NativeClient). I posted an AskHN about this earlier today (though not much interest). That sounds like a win.
It's ironic, but C++ does seem to be the best language to write portable code in right now. C is just too low level and everything else is either too slow or tied to one platform.
I don't get much of what you are saying here. Any examples? Why would C code be less portable than C++? What does C++ bring that is more portable than C?
I would expect the opposite - some runtimes do not allow exceptions, RTTI, dynamic_cast, or simple ABI interworking, due to name mangling and calling conventions for methods. Also virtual inheritance and its inner workings (where the this pointer is stored, and how multiple virtual inheritance is done).
For example: I currently develop my own musical software instruments. This requires a lot of custom DSP code that has to be fast and portable. My only real choice here is C/C++. C is definitely even more portable than C++ but it's too low level (IMO) for application development.
I like C, but you have to ask yourself why it's so rarely chosen for large-scale development. For instance, the Chrome team is certainly smart enough to write Chrome in C, but they chose C++. Why? Surely they understand the tradeoffs and the warts of C++ well enough to make an informed choice.
Firefox was written in C++, and WebKit too (coming from KDE). It could be that C++ is better fit for frameworks, while C for libraries.
V8 is also C++, but lots of other successful VMs are written in "C".
I like "C" because it's more limiting, hence I won't see templates overused, or the latest trick (SFINAE) used. It's also easier for me to read.
Also, at the binary level things are simpler - here is your function and its name; it takes this and this as parameters, returns this and that, and follows this convention (stdcall, pascal, fortran, etc.).
With C++ my biggest pain has been name mangling. I'm not sure why this was never standardized. It's actually quite useful that the mangled name of a function encodes the types it takes, which would have made dynamic binding much easier, but GNU, MSVC and others differ totally in how they mangle names (and it changes a lot from version to version). Exception handling is another pain (mingw/cygwin/msvc: there is big confusion there, and at the binary compatibility level it's hard to combine one with another).
My last show-stopper for C++ was: on this platform, you can't use this feature. For example, on certain game consoles exceptions are not allowed. So you go with longjmp/setjmp, but then this does not unwind the stack and call destructors.
Most of all, I got bitten by a heavily templated math library - with optimizations on it was much faster than the one before it, but without them (debug) it was several times slower than its predecessor. Why? Because it relied on inlining everything, and back then gcc for PlayStation 2 was not really inlining everything, even when forced. So every overloaded operation (matrix by matrix, or matrix by vector) was actually a function call.
There is another gotcha - overloaded C++ operators sometimes lose their ability to short-circuit. For example, overloaded && and || no longer short-circuit, among many other gotchas.
Most of all, I can't stand boost - it's just too huge.
But I'm totally fine with C++ the simple way (whatever this is) - best example for me is ZeroMQ lib - it's C++ internally, they limit themselves to not use exceptions, and provide "C" interface by default. This makes it very easy to use from other languages - python, lua, ruby, etc.
Out of curiosity, which consoles don't support C++ exceptions? PS2? I would say it's a deficiency in the kernel rather than a language problem... but an understandable one, because the runtime support is tricky (see http://sourcery.mentor.com/public/cxx-abi/abi-eh.html).
You're right, a standardized mangling scheme would have been nice... but there's always 'extern "C"' if you need a C++ function from asm or a linker script.
Compilers really should short-circuit &&/||... were you using some kind of experimental compiler?
Yeah sure there are plenty of problems with C++. But I don't think writing in raw C is such a great answer either. What I really want is something in between but no such thing exists.
Same here... There is Vala, but I'm just too lazy to get into it.
My choice right now is Lua + C - for fun & experimentation. I also dabbled with Common Lisp, but haven't touched code in it in months, might get back at it later...
C++ is a superset of C, so being smart enough to write something in C++ usually implies the ability to write it in C, except with less verbosity and much easier-to-read code, without glyphic ampersands everywhere meaning different things @.@
Then again, Linus said C++ developers were dumb and C was better.
C++ is not quite a superset of C, though most reasonable C code is valid C++. However, C++ includes a lot of facilities that make programming easier. They're easy to abuse, sure, but in most cases they help make things clearer and simpler.
Boost is almost a second standard library (it is one reference source used in the standards process), and it demonstrates a lot of what's cool and awful about C++. Boost invented a lot of the smart pointer classes that are now standard and save a lot of boilerplate that makes memory management in C annoying. Boost has things like Asio, which lets you write portable synchronous/asynchronous network code. On the other hand, it has Spirit, which is a massive abuse of operator overloading which is at once both impossibly complex and nifty/convenient.
Linus's opinion is worth noting, but he's hardly the best source.
I was about to say "Java!!!" then realized how misplaced that statement was given the circumstances. Oracle really is shooting themselves in the foot here.
I like C#, although Mono always failed to be nearly as good as .NET, and that hindered C# a lot.
If Microsoft had realized it and made .NET multiplatform as well, and open source, we might live in a different world today. Heck, Microsoft could be leading.
But then again, rewriting the past is easier said than done.
You are right that details are not out. What I said was purely conjecture on the part of WP8. However, given the roadmap, rumors, hints, discussions, the kind of people in charge (Sinofsky, known .Net hater), etc... I am pretty sure that WP8 will be like W8 in many regards. There were already some rumors (+ non-denial from MS) that WP8 may not run WP7.5 code, so this seems a strong indication that it will be more like W8.
And that's not a bad thing. I love C#/.Net (one of my projects, Tagxedo, was built with Silverlight), but given Microsoft doesn't have even the slightest will to push it, we may as well move on, or find some way to ride Mono. C++ is currently on my mind since it covers all ground (sans UI, which you still have to do anyway), probably in a way more straightforward than Mono.
Lack of C++ code is a significant limitation for high end mobile games- both from performance and porting costs. MS absolutely knows this (and hears it regularly from game dev houses). Dunno if it'll be solved in WP8, but they have something in the works.
Derived from Win8 != WinRT- could just be the kernel and not app model.
How do you bootstrap an SDL app on iOS? I've been considering taking this route myself because I really hate multi-language development and all my DSP code is in C++.
Back when .NET was still in beta, I was flown up to Redmond to provide some feedback on Managed C++. I specifically asked them why their Socket API was "crippled" in comparison to WinSock (which has some really beautiful mechanisms for handling things like cross-protocol addressing), and the answer I got was "portability to non-Windows systems".
On the other hand, the asynchronous parts of the Socket API were modeled after the Windows kernel, which has true support for asynchronous sockets.
That's one reason why the Mono implementation was always suboptimal: implementing that API on top of poll, epoll, kqueue and the like is not really straightforward. It's also the reason why the attempts to build alternative web servers use bindings straight to libevent, bypassing the socket API.
The original plan was indeed for .NET to be portable to non-Windows platforms, but Windows-specific details have leaked in nonetheless.
Not to detract from the Mono team's awesome achievements but:
- Microsoft has released a cross-platform CLR ("Rotor"). Win/Mac/BSD I believe.
- C# and the CLR are ECMA specifications.
- A lot of the class libs are definitely not Windows only.
- Microsoft also has Silverlight (CLR) running on Mac, if that counts.
The whole .NET stack, apart from some Windows specific libraries (some COM+ management stuff, Win Forms) were definitely made to be cross-platform.
IIRC Rotor was more of a proof of concept, only supporting .Net 1, and later 2.0. It could not be built that easily either, and could not be bootstrapped.
Early C# (1.1 and 2.0) and the CLR specs were submitted to ECMA, then it stopped. I think they published some more of it to ECMA much later, and some of the class libraries too, but it's still too incomplete for .Net to be even remotely called an ECMA standard.
Silverlight does work on Mac, but on Linux you have to use Moonlight.
All in all, the Mono team went much, much farther than Microsoft, but it is impressive for Microsoft to have shown so much effort and such clear intent towards cross-platform support.
It costs $400 for a license, though. But boy I am tempted... I know, I know- you need to go with the native languages for the best experience. But keeping a huge chunk of my codebase between platforms is a very, very interesting idea to me.
> you need to go with the native languages for the best experience.
With MonoTouch, you still write against the native iOS UI. Therefore, from the user's perspective, your app is native.
Since the UI is the same, the only thing that could differ between an ObjC and MonoTouch app is performance. In my experience there's no noticeable performance hit with MonoTouch except for app startup time (my MonoTouch app takes about 2 seconds to start up on a 3GS).
MonoTouch does give you a native experience. It's basically a C# binding for native APIs. It also has a few nice extras like MonoTouch.Dialog which provides a much nicer API for doing iPhone dialogs than using UITableView (although you can definitely use the UITableView API directly if you prefer).
Kind of surprised we haven't seen JetBrains step up with a Mono IDE yet. They've already got the IDEA platform and ReSharper, combining the two seems a logical next step.
Interesting! Going from VS 2005 to 2008 to 2010, Resharper has gone from must-have to nice to not needed anymore for me, since most of the features I relied on have been implemented natively in VS.
I would bet they are bumping up against a developer bandwidth issue. Don't forget, they have about 6 commercial products, plus their own language now (and IDE, natch).
The good news is that the community edition of IJ is Open Source (Apache licensed, IIRC) and there are quite a few existing language editors built on top of it. It isn't a rich client platform à la Eclipse or NetBeans, but I doubt such a thing would be required to win the affections of those who are unhappy using MonoDevelop.
It would be an interesting idea to them if they have an Android phone and I wouldn't have the time to code for two platforms otherwise.
I get that there is a tradeoff involved, but the Mono-x products are in an interesting space. They aren't webviews (like PhoneGap), they aren't a weird JS hybrid (like Titanium)- they're full native experiences. You'll get some newly released features later (I imagine), but the user experience really shouldn't be affected that much.
Lately I've been thinking that Haskell might be a really good language to write cross platform native apps in. It's super fast but also high level so you can be really productive. I know there are some libs for iOS but if some sort of cross platform framework existed I think it could be really powerful.
> We matured Sharpen a lot, and the result is a much-improved Java-to-C# translation tool for everyone. We are releasing this new version of Sharpen today along with the code for XobotOS and we hope that many more people will benefit from it and contribute to it.
Totally! If Google would actually dump Java in favor of C#, there would be the huge problem of all the apps that need to be ported. With a tool that has already proven to be capable of doing that, the required efforts are already massively reduced.
I find it interesting that there was such an effort to reduce memory usage because I've always had severe memory issues on the two Android phones I've had.
The first, the myTouch 3G had about 100MB of RAM, with only about 25MB free from a cold boot, meaning the phone is really really slow due to memory pressure.
My current phone, a G2, has about 350MB of RAM total and about 70MB free on a cold boot (ICS).
A full Ubuntu Desktop install can use only about 250MB of RAM on cold boot and even Windows XP would run in 256MB of RAM.
So with so much emphasis on saving RAM, why is 350MB of RAM insufficient to run basically one app at a time on a phone (plus the various background processes the system is running)?
"Free memory" in a garbage-collected managed language is difficult to measure. If you don't have to GC, the performance-minded choice would be to not GC. Every computation costs battery life, so I would suppose Android's GC is tuned to not run if it doesn't have to.
If you look at the specifications of system.gc(), it can be implemented as a placebo, like the "push to walk" button at the street corner. So even explicitly GC'ing isn't guaranteed to do anything.
Android also has a strategy for "swapping" components within a process, and whole processes. This is, effectively, Android's GC strategy across processes. An Android system's memory, especially on small-memory devices, should always look full-ish, but you can almost always launch a new task, since GC plus component and process "swapping" (the "destroy" phase of the component lifecycle) can almost always make room for more.
> like the "push to walk" button at the street corner.
Thanks for using that comparison - oddly enough it provided some nice entertainment for me :D
Since I wasn't aware whether "push to walk" was a cultural reference that I might not know, I started some quick research that led me to a couple of interesting articles.
I think the biggest problem with embedded GC is that you generally don't want to pay the memory overhead that comes with generational algorithms. It's a pity that most Java systems are layered on top of some C-based OS, so that each process effectively has to have its own heap, with all the memory overprovisioning that entails.
Managed operating systems such as JNode (Java based) or Singularity (.NET based) have managed kernels - everything is written in a memory-safe language. The OS can verify the memory-safety of a program in the loader (bytecode verification), so that there is no need for an MMU to implement process isolation; different processes can share the same heap. There is also a big performance advantage as process switching is essentially as fast as thread switching. IPC is typically done using message passing, but Singularity for example also allows processes to set up a shared memory space.
Thanks for this information. I am curious to see in which domain such Operating Systems will be used first. I am wondering actually why Google did not use such an OS architecture for Android (or Apple for their phones).
I have an older device with 256MB installed that runs 2.2, and while it can slow down at times, it does pretty well running Opera, a mailer, navigation and Google Music streaming, all in RAM, all on top of a bytecode VM. Let's not forget there's no swap space.
As a lark I just booted Ubuntu 12.04 with mem=256M. It got to the greeter reasonably quickly, but it's been 10 minutes since I hit enter and I still haven't seen the desktop. I planned to launch Chromium and Thunderbird and report the swap used, but you get the idea. It's true that a lighter DE would be more usable, but I think you're looking at desktop RAM usage through rose-colored memories.
All Android apps are subject to component lifecycle. Native code is no shield against a process getting reaped. The Android NDK provides lifecycle support even for all-native code apps.
The G2 (assuming you mean the T-Mobile G2/HTC Desire Z) has 512MB of RAM. It's actually surprising to me that it has that little free on a boot, though Android often sees the most memory pressure immediately after boot. For example, lots of applications listen to boot events and make sure that various alarms are still scheduled.
Are you saying Dalvik uses too little memory? Because I disagree from a consumer point of view. If they doubled the memory, Android phones would probably need 2 GB of RAM right now. So how can you say that?
Here's Dan Bornstein's talk at Google I/O 2008 about the dex format and the original Dalvik interpreter: http://www.youtube.com/watch?v=ptjedOZEXPM I'm not sure how useful it is in comparing the Dalvik JIT to MonoTouch though.
Isn't this like huge, though? If you can actually make it run faster, I am guessing it would be possible to spin it off and get serious funding.
I merely watch from the sidelines, but what the Mono/Xamarin guys accomplish year after year could be a good lesson for any founder. Having spent enough time in the .Net world, I can tell that any attempt at creating a compatible .Net framework is a daunting, enormous undertaking. I guess the key thing is, FOCUS.
I guess Google did make the wrong choice back then. I certainly prefer C# over Java any day, even by the most basic test like "language features". Using Mono would've certainly resulted in _less_ fragmentation than they have with Dalvik today.
> Using Mono would've certainly resulted in _less_ fragmentation than they have with Dalvik today
Why would using Mono result in less fragmentation? Most of their "fragmentation problem" comes from developers having to develop for multiple devices (screen size, performance, GPU, Android versions, etc.), and changing the language won't magically fix it.
Ah, that makes much more sense. Having said that, where do you see fragmentation in the Java ecosystem? I don't use Java to write anything but Android apps, but I've used a few pure Java libs with Android and they all seem to work great.
That isn't to say it isn't happening, I just don't experience and would be interested in knowing where it does occur.
Blackberry is more J2ME-ish and Android is more J2SE-ish. It's possible to share engine code (I've done it) -- you'd still have to do custom UI code for each.
Given that the phenomenon runs rampant in the Android world, I should have been clearer.
It has become abundantly clear that 9 out of 10 times the word "fragmentation" is uttered on HN or any comment board regarding Android, it is by people who have never written a line of Android code and whose knowledge of the platform is what they picked up on advocacy sites.
Dalvik isn't fragmented at all. It is, in fact, a bloody marvel. Implementations of hardware specific APIs of course differ, exactly as expected.
But really, it is astonishing seeing the claims that we see. Microsoft struggles to run WP on a single reference hardware platform with the most trivial of variances. Android runs on friggin' everything with an overwhelmingly high level of compatibility that is so refined that those few outliers become a really big deal.
As someone who admittedly isn't an Android developer, isn't the point that Dalvik's very existence has fragmented the Java ecosystem? That's how I read the GP comment.
> isn't the point that Dalvik's very existence has fragmented the Java ecosystem
Yes and no.
Dalvik is not Java, being a totally different VM, just as .NET's CLR is. And just as .NET's CLR, you can transform Java's bytecode to Dalvik bytecode by means of a compiler. The equivalent project in the .NET world would be IKVM, which allows you to transform Jars into .NET dlls.
It is true that Google relies on the Java ecosystem for fueling Android ... they rely on Eclipse for IDE support, they rely on Harmony for the base classes and so on. But everybody working with Android knows that Dalvik is not Java and that Java libraries that are doing bytecode manipulations do not work on Dalvik out of the box, because Dalvik is not a JVM.
So really, Dalvik is fragmenting Java in the same way .NET did.
The .NET ecosystem is already far more fragmented; it's gone through four incompatible revisions in about half the time Java's been around, and that's before you talk about things like the Compact Framework (which, to be fair, Java has an equivalent of).
In fact, depending on the language features you use, it may even be possible to compile down to an older CLR version, as the new language features are often simply syntactic sugar for some code generation that happens behind the scenes.
That's exactly what's happening. All versions of .NET since version 2.0 are primarily library updates. They can all safely run on the same CLR, and there haven't had to be any changes to the CLR to enable new language features. A computer really only needs to have a second version of the CLR installed in order to run code that targets .NET 1.x
That's pretty much how Java works too. In general, you can't take code compiled with a newer version and run it on an older VM; you will get class version mismatch errors. They update the class version every time there's an incompatible change, such as when 'enum' became a keyword.
The compatibility that Sun, err.. Oracle, strives for is to be able to mix code compiled with old & new versions without issues.
Exactly! I don't think language choice has anything to do with Android fragmentation. It's because of the hardware manufacturers and the carriers who never push any updates.
I think it's business model. Because of the incentives (branding, revenue streams, etc.) an android handset maker makes its money when the phone is sold, and ceases to care. Because the phone is branded "Verizon" first, "Android" second, and "HTC" (say) third they don't care about the user once the phone is sold. Arguably, Verizon and Google should care more, but Verizon clearly doesn't and Google has too little influence. In essence it's a tragedy of the commons where the commons is Android (as a platform/brand).
Android was definitely written with multiple hardware configurations in mind. What I don't understand is why phone carriers and manufacturers don't simply provide the drivers for Google (and consumers) to work with, essentially how one would install drivers on Windows. I don't understand the need for the drivers to be proprietary.
I also prefer C# and I think the .NET CLR is a solid platform.
However, I feel that Google did not make the "wrong choice" because they were likely trying to leverage two (interdependent) advantages: existing Java programmers and the overwhelming number of Java libraries.
I haven't made it through all the threads here, but I'm fully expecting to find a post containing "yeah, and we can just run the Java-to-C# translator on all those libraries", and then I'm hoping to find a reply that says that machine translation is not the same as porting a library.
I am curious about Sharpen, the automated translation tool used to convert from Java to C#. It seems like every project that uses it winds up with its own tweaks that have no centralized location to be upstreamed to.
This is a really interesting project, and after taking a little bit of time to look it over, there are a couple of things that I have noticed:
1. It's not entirely clear how to use this. Is XobotOS a replacement for Android, or is it something that can be shipped as a standard application? In my limited time reading the documentation on the GitHub page, this was not clear to me.
2. It looks like it is a terminated research project; to quote the README:
This code is provided as-is, and we do not offer support for any bits of code here, nor does Xamarin plan on continuing evolving XobotOS at this point.
From the blog post, it sounds like they are integrating some of this technology in their products, but XobotOS is otherwise just a code dump. As such, unless someone is really interested in this, it doesn't seem like it's going to go anywhere.
Do those benchmarks suggest that XobotOS is faster on those metrics than vanilla Android? That seems like a very large performance gain, and it makes me wonder what the performance on an actual device would be like.
Yeah, but this was one synthetic benchmark, which heavily uses two features they made a point of being significantly faster. The real question is what the difference is between Dalvik and Mono on more balanced applications. I expect we are talking closer to a 10% difference than the order of magnitude their benchmark shows.
It really depends on the program load. In general, Mono is just a more mature VM than Dalvik, so it is rare for Dalvik to match Mono (although there are a couple of cases where it does).
The other problem is that Dalvik uses a model that limits the amount of memory your app can use, so most of the heavy-duty tests could not be run to compare the apps, since Dalvik would crash with an out-of-memory condition while Mono does not impose this limit.
Dalvik also has other ugly limitations, like its GC being suspended whenever a JNI invocation is taking place.
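The hard heap cap described above is visible from inside any Java program. A minimal sketch (the 16 MB chunk size is arbitrary; on Dalvik the ceiling is historically tens of MB per app, while a desktop JVM's is much larger but equally hard):

```java
import java.util.ArrayList;
import java.util.List;

public class HeapLimitDemo {
    public static void main(String[] args) {
        // The VM reports its heap ceiling up front; allocation past it
        // throws OutOfMemoryError instead of growing the heap.
        System.out.println("Max heap: "
                + Runtime.getRuntime().maxMemory() / (1024 * 1024) + " MB");

        List<byte[]> chunks = new ArrayList<>();
        try {
            while (true) {
                chunks.add(new byte[16 * 1024 * 1024]); // hold 16 MB chunks
            }
        } catch (OutOfMemoryError e) {
            int held = chunks.size();
            chunks.clear(); // release memory so we can safely print
            System.out.println("Hit the heap limit after ~" + (held * 16) + " MB");
        }
    }
}
```

On Dalvik this loop dies after a handful of iterations, which is why heap-hungry benchmarks simply cannot be run there for comparison.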
Myriad has been trying to sell their own VM as a "turbo" Dalvik, but so far there have been no takers among handset OEMs that I know of.
The thing is that Dalvik balances performance and battery life by aiming for fast interpretation plus minimal JIT compilation.
One could easily show that the Oracle VM would trounce Dalvik at computation benchmarks, since it is aiming for best possible performance using a very sophisticated JIT compiler. But I suspect getting the Oracle VM to not suck your battery dry would be a challenge.
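The performance/battery tradeoff comes down to how much work the VM does per bytecode. A toy stack interpreter (purely illustrative; nothing like Dalvik's actual register-based instruction set) shows the per-instruction dispatch cost that a sophisticated JIT compiler eliminates by compiling the whole program down to straight-line machine code:

```java
public class TinyInterp {
    // Toy bytecodes: 0 = PUSH next operand, 1 = ADD top two, 2 = HALT
    static int run(int[] code) {
        int[] stack = new int[16];
        int sp = 0, pc = 0;
        while (true) {
            // This switch is the dispatch cost an interpreter pays
            // on every single instruction.
            switch (code[pc++]) {
                case 0: stack[sp++] = code[pc++]; break;
                case 1: sp--; stack[sp - 1] += stack[sp]; break;
                case 2: return stack[--sp];
            }
        }
    }

    public static void main(String[] args) {
        // Program: 2 + 3 + 4
        int[] prog = {0, 2, 0, 3, 1, 0, 4, 1, 2};
        // A JIT would collapse all this dispatching into "return 9";
        // the interpreter instead spends battery cycling the loop.
        System.out.println(run(prog));
    }
}
```

Dalvik's bet was that a fast interpreter plus a minimal trace JIT burns less power than keeping an aggressive optimizing compiler busy.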
The other thing that's holding me back on Mono for mobile is that it really only addresses the non-UI parts of an app, and in most apps the UI code is 80-90% of the entire codebase. So rewriting some core logic and network code in a portable language isn't really enough of a win to justify working with an unsupported (by Apple) and niche tool.
> in most apps the UI code is 80-90% of the entire codebase
Not in my experience. I write software for large enterprises, mostly .NET glue between systems like SAP, e-commerce engines, search engines, databases, etc. The UI is a thin little layer on top of complex systems integrations. I'd say the UI code is 10 to 20% in most systems I've worked on.
I think both of you are right. To each his own. I have a few applications that are algorithmically complicated and it is impossible (or very expensive) to implement multiple versions of the same thing on different platforms. Even though the core stuff is say 60% of the codebase (vs 40% in UI), say measuring in number of lines, the benefit of doing the 60% in a single productive platform is very very important. I'm talking about correctness, maintainability, consistency between different versions, etc. The remaining 40% UI code can be ugly, platform-dependent, inconsistent, etc, and it won't bother me as much. After all, an error in the presentation is far less lethal than an error in the core logic. In this case, a common platform is strongly favored.
That said, indeed many apps are not like that. Going forward though, I would conjecture that there will be more and more sophisticated apps.
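The 60/40 split described above is usually realized as a shared, well-tested core behind a thin per-platform presentation interface. A hypothetical sketch (all names here are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class SharedCoreDemo {
    // Per-platform presentation layer (the "40%"): each platform
    // supplies its own implementation in its own UI toolkit.
    interface ResultView {
        void show(List<Double> series);
    }

    // Shared, correctness-critical core (the "60%"): written once,
    // tested once, identical on every platform.
    static List<Double> movingAverage(List<Double> data, int window) {
        List<Double> out = new ArrayList<>();
        double sum = 0;
        for (int i = 0; i < data.size(); i++) {
            sum += data.get(i);
            if (i >= window) sum -= data.get(i - window);
            if (i >= window - 1) out.add(sum / window);
        }
        return out;
    }

    public static void main(String[] args) {
        // A console "UI" standing in for the per-platform layer.
        ResultView view = series -> System.out.println(series);
        view.show(movingAverage(List.of(1.0, 2.0, 3.0, 4.0), 2));
    }
}
```

An error in `movingAverage` would be identical (and fixable) everywhere at once, while each `ResultView` implementation is free to be as platform-specific and ugly as it needs to be.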
Unlike Sun with Java, Microsoft submitted C# and the .NET VM to ECMA for standardization and saw those standards graduate all the way to ISO, with strong patent commitments. The .NET framework is also covered by Microsoft's legally binding community promise.
It's strange: with the work Xamarin has been doing, C# is one of the few language choices you have to go cross-platform and native. MonoTouch does iOS, MonoDroid does Android, and MS themselves do Windows Phone. I've not used any of these extensively so can't vouch for them, but it's an interesting development.
Standards take time, the ISO spec is typically 2-3 years behind the current product.
But cheer up, the ECMA team just updated the VM spec a couple of months ago and the committee is resuming work on new features.
We are all pretty psyched about the next steps for the ECMA standards.
As for Mono, we already have C# 5 and many of the new class library features. Having two independent implementations helps in coming up with a better standard.
The current version of Mono4Android and MonoTouch do not support async/await yet, but that is on the roadmap (once Mono 2.12 [currently in beta] is released).
C# the language is open, it's an ECMA standard. The only proprietary bits of C# are the .NET Framework and the Windows implementation of the VM (CLR), though even those are freely redistributable.
Java was supposed to be open, too, perhaps much more than C#, and yet here we are today. Given Microsoft's recent patent history against Android and even ChromeOS, if I were Google I'd stay as far away as possible from C#, no matter how "open" they say it is.
Heck, at this point Google probably can't even trust Python anymore, seeing how there are patent and copyright leeches everywhere. Their best bet is to use a language they own, be that Go or another one they made or bought.
I believe the concern is that without ECMA/ISO standardization, there's nothing to stop MS from suing Xamarin/Mono should they decide to back away from an open language specification.
I don't think MS can retroactively back away from existing language specifications. So worst-case scenario would be that Mono stagnates, but even then it would not be particularly easy target for litigation for MS.
There are valid concerns about the implementations of .NET and the possible ramifications that could come if Microsoft decided to enforce their patents surrounding them.
Having an open standard can be moot if effective implementations of it are patent-encumbered, forcing you to either use a licenced runtime, a second-class one, or risk getting sued.
"RMS: You shouldn't write software to use .NET. No exceptions.
The basic point is that Microsoft has patents over features in .NET, and its patent promise regarding free software implementations of those is inadequate. It may someday attack the free implementations of these features.
This is no reason not to write and distribute free implementations such as Mono and DotGNU. But we have to keep in mind that using and distributing these programs might become dangerous in certain countries. Therefore, we should minimize our dependence on them – we should not write programs that use those features.
Mono implements them, so if you develop software on Mono, you are liable to use those features without thinking about the issue. It is probably the same with DotGNU, except that I don't know whether DotGNU has these features yet.
The way to avoid this danger is not to write programs in C#. If you already have a program in C#, by all means use a free platform to run it. But don't increase your exposure to the danger – don't write additional code in C#, and don't encourage people to make more use of C# programs. We need to guide our community away from dependence on an interface we know Microsoft is in a position to attack.
It is like the situation with MP3 format, which is also patented. When people manage to release and distribute free players and free encoders for MP3, more power to them. But don't ever use MP3 format to encode audio!"
They reserve the right to sue, and they carefully patented every aspect they could. I see a lot of merit in the language itself, but the legal problems Google is having with Oracle over Java it could also have with Microsoft over C#/.NET, and worse, considering Microsoft is a competitor in several areas.
I like .NET and I like Mono, but I was disappointed that the article showed no hard evidence of overall performance improvement for XobotOS vs Android. I understand that they probably have more important things to work on, but I hoped that if they could show concrete benefits even at this early stage then their work might pick up more traction. It was a fun idea to read about, at least.
I would really love for Mono to take off in a big way, I like the language features and syntax of C# more than Java, but when I have to work with other developers it is nice not to have to worry about major memory leaks across the system.
And the patent wars regarding Java just make me not want to support that development ecosystem anymore.
Should we bail out of a language as soon as its vendor loses its first dollar? And MS is not losing any dollars yet.
The dev ecosystem and market are quite healthy, even if I'd like to see more small (web, startup) projects out there.