The catch in the question is that history has shown there is no business case for optimizing software beyond the point where human perception stops noticing the difference, regardless of how much computational power is available. If C# is used mainly for business applications, then in 2016 the most stringent performance limit is set by mobile phones and tablets.
Scientific and technical software (simulations, control systems, medical equipment, etc.) tends to have stricter requirements for robustness, efficiency, and freedom from flaws than business software does, but if the parties who finance the development of C# use it mainly for business applications, then there is no motivation to be any more stringent with C# requirements than the typical business application use case demands. By stringent I mean RAM usage, application start-up time (Java apps were slow to start), speed optimizations, and the amount of thought put into stdlib API design: keeping the API as small as possible while minimizing data copying, RAM-allocation requests, and the execution time spent on constructors and object initialization in general, while still allowing succinct application code for as many application types as possible and maximizing data locality in RAM, etc.
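To make the API-design point concrete, here is a minimal sketch of my own (not taken from any actual stdlib design discussion) contrasting a convenient-but-allocating way of counting words with an index-based scan that performs no per-call heap allocations:

```csharp
using System;

static class WordCount
{
    // Convenient, but allocates a string[] plus one substring per word
    // on every call -- fine for occasional use, wasteful in hot loops.
    public static int CountWordsAllocating(string text)
    {
        return text.Split(new[] { ' ', '\t', '\n' },
                          StringSplitOptions.RemoveEmptyEntries).Length;
    }

    // Index-based scan: same result, no intermediate copies, no
    // per-call heap allocations, and the string is walked sequentially,
    // which keeps the access pattern cache-friendly.
    public static int CountWordsScanning(string text)
    {
        int count = 0;
        bool inWord = false;
        for (int i = 0; i < text.Length; i++)
        {
            bool separator = text[i] == ' ' || text[i] == '\t' || text[i] == '\n';
            if (!separator && !inWord) { count++; inWord = true; }
            else if (separator) { inWord = false; }
        }
        return count;
    }
}
```

Both versions are correct; the difference only matters when the call sits in a hot path, which is exactly the kind of trade-off a stdlib designer has to think about up front.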
A counter-example is the Java VM and the Java stdlib, which had one nice feature: a GUI library in the stdlib with the property that whenever GUI code ran without crashing on one operating system, it actually STARTED AND WORKED on another operating system instead of crashing, unlike wxWidgets, Qt, and GTK GUI apps. Unfortunately, the Java VM was and is clearly a failure, starting from the utter nonsense of how the console API of its command-line tools (java, javac) was defined. Combined with the slow start-up, the initial sluggishness was acceptable for business apps, but it could have been avoided by more careful software design (which they did eventually fix, but with a delay of at least several years).
All in all, the pattern with Java, Windows, and business software development in general is that technical quality has lower priority than shipping features, and the only real constraint on the technical quality of business software seems to be human psychology, not what is technically available to software developers.
I understand that I may just be letting off steam here and getting a bit off-topic from my original question. But honestly, as a person who loves automation and loves tools that check for developer flaws and prevent me from accidentally making stupid mistakes that I myself recognize as mistakes (not the cases that some “best practice” “guru” declares to be a mistake in some book, which I deliberately do not consider mistakes), I still have not understood the efforts to encourage amateurs to write software applications when they have not spent at least a few weeks studying the basics of software development: algorithmic complexity, memory-allocation-related issues, how to modularize one’s work, basic OO, some basics about threads, the unreliability of internet connections and the resulting issues with application state and data consistency, a little bit of security, regular expressions, etc.

I’m all for the idea that, just as I do not have to be a cook to make myself a sandwich, and being able to make sandwiches helps me a lot in life, people with no IT background should learn to write simple scripts and know some basics about software development. Even secretaries benefit from writing their own libraries/macros for spreadsheet applications. But the idea that amateurs could be made to create even remotely decent applications is just beyond me. Scripting a game, fine, but anything application-like has too many aspects for an amateur to consider.
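To make “algorithmic complexity” and “memory allocation related issues” concrete, here is a hypothetical example of mine of the kind of mistake I mean: it compiles cleanly, works on small inputs, and no tool flags it, so an amateur has no reason to suspect anything is wrong:

```csharp
using System.Text;

static class ReportBuilder
{
    // Looks harmless, but each += copies the entire string built so far,
    // so producing n lines costs O(n^2) time and allocations.
    public static string BuildNaive(string[] lines)
    {
        string report = "";
        foreach (string line in lines)
            report += line + "\n";
        return report;
    }

    // The fix is trivial once you know WHY the first version is wrong:
    // StringBuilder appends into a growing buffer, giving O(n) total work.
    public static string Build(string[] lines)
    {
        var sb = new StringBuilder();
        foreach (string line in lines)
            sb.Append(line).Append('\n');
        return sb.ToString();
    }
}
```

Spotting that difference requires exactly the few weeks of basics I am talking about, not a “best practice” checklist.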
That is to say, anything that is purely business-oriented seems to have a very low demand for technical excellence and is therefore a bad investment, in terms of library development, for technically more skilled people. What is the plan for C#?
In the past the plan was to compete with Java, because the people at Sun were stupid enough to stop Microsoft from using Microsoft Java, but the main audience of Microsoft has always been business software users, not scientists and engineers. The idea that scientists and engineers will stick to Fortran and C++ for speed does not necessarily hold, because a lot of data nowadays is in text form, and the “Big Data” movement requires text processing, which is nasty in C/C++/Fortran, to say the least. GNU R, Scilab, etc. are clearly not the fastest possible choices, and, as demonstrated by the vast variety of scientific Python libraries, a proper programming language is required to create more complex data processing and analysis routines. Java is out of the game thanks to Oracle, but if C# does not pick up some of the 2016 scientific Python users/developers, then C# will become another COBOL, not another Fortran. (Fortran is terribly archaic, but thanks to its scientific users its libraries are of such high quality that Fortran is still relevant and will probably stay relevant in numeric computation.) Even C and C++ have survived mainly because engineers, i.e. non-business-software developers, find those languages useful. Pascal and Delphi, as mostly business-software-oriented languages, have practically died, with only small exceptions.
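To illustrate the text-processing point: something like a word-frequency count is a few safe lines in C# (a sketch of my own, using only standard BCL and LINQ calls), whereas in C you would be hand-managing buffers, a hash table, and string lifetimes before getting anywhere:

```csharp
using System;
using System.IO;
using System.Linq;

class WordFreq
{
    static void Main(string[] args)
    {
        // Split the file into whitespace-separated words, group them,
        // and print the ten most frequent -- no manual memory
        // management anywhere in sight.
        var top = File.ReadAllText(args[0])
            .Split((char[])null, StringSplitOptions.RemoveEmptyEntries)
            .GroupBy(w => w.ToLowerInvariant())
            .OrderByDescending(g => g.Count())
            .Take(10);

        foreach (var g in top)
            Console.WriteLine($"{g.Key}: {g.Count()}");
    }
}
```

This is exactly the kind of everyday data-wrangling task that pulls scientists toward Python today, and that C# could serve just as well.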
To put my post into another perspective: in addition to vendor-independent funding and safety from patent trolls, a project also needs TECHNICAL and SCIENTIFIC users, or it will die off like the Delphi programming language did.
Thank you for reading my post. I hope to read comments telling me that I am all wrong and mistaken and that my initial question (the title of this thread) rests on wrong presumptions.