Is C# only for business apps, or is it also for something else? If yes, then what?


#1

The catch in the question is that history has shown that there is no business case for optimizing software to be faster than human psychology requires, regardless of the computational power of the computer. If C# is mainly used for business applications, then in 2016 the most stringent limit is set by mobile phones and tablets.

Scientific and technical software (simulations, control, medical equipment, etc.) tends to have higher requirements for robustness, efficiency, and freedom from flaws than business software does. But if the parties who finance the development of C# use it mainly for business applications, then there is no motivation to be any more stringent with C# requirements than the typical business application use case requires. By stringent I mean RAM usage; application start-up time (Java apps were slow to start); speed optimizations; the amount of thought put into stdlib API design to keep the API as small as possible while minimizing data copying, RAM-allocation requests, and the execution time spent on constructors and object initialization in general, while also allowing succinct application code for as many application types as possible; maximizing data locality (in RAM); etc.
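To make the data-locality point concrete, here is a minimal, hypothetical C# sketch: an array of structs occupies one contiguous block of RAM, while an array of class instances holds references to objects scattered across the garbage-collected heap, so iterating over the former is far more cache-friendly.

```csharp
using System;

// A minimal sketch of the data-locality point: an array of structs is
// one contiguous block of memory that the CPU can stream through,
// while an array of class instances is an array of references to
// objects scattered across the garbage-collected heap.
struct PointStruct { public double X, Y; }
class PointClass { public double X, Y; }

class LocalityDemo
{
    static void Main()
    {
        const int n = 1000000;
        var structs = new PointStruct[n];  // one allocation, contiguous
        var classes = new PointClass[n];   // n + 1 allocations in total
        for (int i = 0; i < n; i++)
        {
            structs[i].X = i;
            classes[i] = new PointClass { X = i };
        }

        double sum = 0;
        for (int i = 0; i < n; i++) sum += structs[i].X; // sequential reads
        for (int i = 0; i < n; i++) sum += classes[i].X; // pointer chasing
        Console.WriteLine(sum);
    }
}
```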

A counter-example is the Java VM and Java stdlib, which had the nice property that the GUI library in its stdlib, whenever the GUI code worked without crashing on one operating system, actually STARTED AND WORKED on another operating system instead of crashing, unlike wxWidgets, Qt, and GTK GUI apps. Unfortunately the Java VM was and is clearly a failure, starting from the utter nonsense of how the console API of its console applications (java, javac) was defined. Combined with the slow start-up, the initial sluggishness was OK for business apps, but it could have been avoided by more careful software design (which they eventually fixed, but with a delay of at least several years).

All in all, the pattern with Java, Windows, and business software development in general is that technical quality has lower priority than shipping features, and the only actual requirement on the technical quality of business software seems to be human psychology, not what is technically available to software developers.

I understand that I may just be letting off steam here and getting a bit off-topic from my original question. But honestly, as a person who loves automation and loves tools that check for developer flaws and prevent me from accidentally making stupid mistakes that I myself recognize as mistakes (not the cases that some “best practice” “guru” declares to be mistakes in some book and that I intentionally consider not to be mistakes), I still have not understood the efforts where amateurs, who do not spend at least a few weeks studying the basics of software development (algorithmic complexity, memory allocation issues, how to modularize one’s work, basic OO, some basics about threads, the unreliability of internet connections and the resulting issues with application state and data consistency, a little bit of security, regular expressions, etc.), are encouraged to write software applications.

I’m all for the idea that, just as I do not have to be a cook to make myself a sandwich and it helps me a lot in life to be able to make sandwiches, people with no IT background should learn to write simple scripts and know some basics about software development. Even secretaries benefit from writing their own libraries/macros for spreadsheet applications. But the idea that amateurs could be made to create even remotely decent applications is just beyond me. Scripting a game, fine, but anything application-like has just too many aspects for an amateur to consider.

That is to say, anything that is purely business-oriented seems to have very low demand for technical excellence and is therefore a bad investment, in terms of library development, for technically more skillful people. What is the plan for C#?

In the past the plan was to compete with Java, because the people at Sun were stupid enough to stop Microsoft from using Microsoft Java, but the main audience of Microsoft has always been business software users, not scientists and engineers. The idea that scientists and engineers will stick to Fortran and C++ for speed does not necessarily hold, because a lot of data nowadays is in text form and the “Big Data” movement requires text processing, which is nasty in C/C++/Fortran, to say the least (a small illustration follows below). GNU R, Scilab, etc. are clearly not the fastest possible choices, and, as demonstrated by the vast variety of scientific Python libraries, a proper programming language is required to create more complex data processing/analysis routines. Java is out of the game thanks to Oracle, but if C# does not pick up some of the 2016 scientific Python users/developers, then C# will become another COBOL, not another Fortran. (Fortran is terribly archaic, but thanks to its scientific users its libraries have such high quality that Fortran is still relevant and will probably stay relevant in numeric computation.) Even C and C++ have survived mainly because engineers, non-business-software developers, find those languages useful. Pascal and Delphi, as mostly business-software-oriented languages, have practically died, with the small exception of

http://www.freepascal.org/
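To illustrate the text-processing point with a minimal sketch (“input.txt” is just a placeholder file name), a word-frequency count in C# takes only a few lines with LINQ, while the equivalent C would require careful manual buffer and memory management:

```csharp
using System;
using System.IO;
using System.Linq;

// A minimal sketch of the text-processing point: a word-frequency
// count in a few lines of C# with LINQ. "input.txt" is a placeholder.
class WordCount
{
    static void Main()
    {
        var counts = File.ReadLines("input.txt")
            .SelectMany(line => line.Split(
                new[] { ' ', '\t' }, StringSplitOptions.RemoveEmptyEntries))
            .GroupBy(word => word)
            .OrderByDescending(g => g.Count());

        // Print the ten most frequent words.
        foreach (var g in counts.Take(10))
            Console.WriteLine($"{g.Key}: {g.Count()}");
    }
}
```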

To put my post into another perspective: in addition to vendor-independent funding and safety from patent trolls, a project also needs TECHNICAL and SCIENTIFIC users, or it will die off like the Delphi programming language did.

Thank You for reading my post. I hope to read comments telling me that I’m all wrong and mistaken and that my initial question (the title of this thread) rests on wrong presumptions. :smiling_imp:


#2

> The catch in the question is that history has shown that there is no business case for optimizing software to be faster than human psychology requires, regardless of the computational power of the computer. If C# is mainly used for business applications, then in 2016 the most stringent limit is set by mobile phones and tablets.

First, I really don’t understand what “history has shown” means, but there are many business cases that encourage companies to optimize their software to be faster and to utilize the full potential and power of computers.

Languages aren’t really limited to specific hardware or operating systems, so I think that in this case you’re actually referring to the .NET Framework, as in the tooling and libraries, and not really to C#, the language.

> Scientific and technical software (simulations, control, medical equipment, etc.) tends to have higher requirements for robustness, efficiency, and freedom from flaws than business software does. But if the parties who finance the development of C# use it mainly for business applications, then there is no motivation to be any more stringent with C# requirements than the typical business application use case requires. By stringent I mean RAM usage; application start-up time (Java apps were slow to start); speed optimizations; the amount of thought put into stdlib API design to keep the API as small as possible while minimizing data copying, RAM-allocation requests, and the execution time spent on constructors and object initialization in general, while also allowing succinct application code for as many application types as possible; maximizing data locality (in RAM); etc.

You’re mixing at least three things and putting them all in the same bucket! So let’s clarify these things first:

  1. There is C#, the language, which lets you express logic and intent in source code.

  2. There is the C# compiler, which takes the C# source code and generates IL that the platform knows how to execute.

  3. There is the CLR, which handles the actual execution of managed code.

Now, I didn’t list these three things to educate you (I’m sure you know them fairly well), but you can’t take everything, call it C#, and start throwing words around, because not everything depends on the language itself. In fact, most of the optimization is done by the JIT at the CLR level, at run time. Not to mention that .NET languages are constrained by the CLR, much like Java (and Scala, Groovy, Kotlin) is constrained by the JVM, unlike C++, where the language is generally constrained only by the compiler itself and nothing more!

Sometimes, depending on the feature, an improvement, especially when it comes to performance and more specifically to hardware resources and efficiency, requires changes to the CLR and/or adding or updating APIs at the framework level to provide access to the feature itself. Many times this will happen before the feature becomes a first-class citizen in the language, in this case C#.
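To make the three layers concrete, here is a trivial method together with, roughly, the IL that the C# compiler emits for it (a sketch; the exact IL depends on compiler version and optimization settings). The CLR’s JIT then turns this IL into native machine code at run time.

```csharp
// Layer 1: C# source. Layer 2: the compiler lowers it to IL.
// Layer 3: the CLR JIT-compiles the IL to native code at run time.
static int Add(int a, int b)
{
    return a + b;
}

// Roughly the IL the compiler emits (viewable with ildasm or ILSpy):
//
// .method private hidebysig static int32 Add(int32 a, int32 b) cil managed
// {
//     ldarg.0    // push argument a onto the evaluation stack
//     ldarg.1    // push argument b
//     add        // pop both operands, push their sum
//     ret        // return the value on top of the stack
// }
```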

> A counter-example is the Java VM and Java stdlib, which had the nice property that the GUI library in its stdlib, whenever the GUI code worked without crashing on one operating system, actually STARTED AND WORKED on another operating system instead of crashing, unlike wxWidgets, Qt, and GTK GUI apps. Unfortunately the Java VM was and is clearly a failure, starting from the utter nonsense of how the console API of its console applications (java, javac) was defined. Combined with the slow start-up, the initial sluggishness was OK for business apps, but it could have been avoided by more careful software design (which they eventually fixed, but with a delay of at least several years).

I don’t know what your point was here, but if you elaborate, maybe I can clarify it or at least share my point of view.

> All in all, the pattern with Java, Windows, and business software development in general is that technical quality has lower priority than shipping features, and the only actual requirement on the technical quality of business software seems to be human psychology, not what is technically available to software developers.

Again, can you be more specific? You seem to say things but don’t provide any data to really support your arguments.

> I understand that I may just be letting off steam here and getting a bit off-topic from my original question. But honestly, as a person who loves automation and loves tools that check for developer flaws and prevent me from accidentally making stupid mistakes that I myself recognize as mistakes (not the cases that some “best practice” “guru” declares to be mistakes in some book and that I intentionally consider not to be mistakes), I still have not understood the efforts where amateurs, who do not spend at least a few weeks studying the basics of software development (algorithmic complexity, memory allocation issues, how to modularize one’s work, basic OO, some basics about threads, the unreliability of internet connections and the resulting issues with application state and data consistency, a little bit of security, regular expressions, etc.), are encouraged to write software applications.
>
> I’m all for the idea that, just as I do not have to be a cook to make myself a sandwich and it helps me a lot in life to be able to make sandwiches, people with no IT background should learn to write simple scripts and know some basics about software development. Even secretaries benefit from writing their own libraries/macros for spreadsheet applications. But the idea that amateurs could be made to create even remotely decent applications is just beyond me. Scripting a game, fine, but anything application-like has just too many aspects for an amateur to consider.

What’s your point here?

I mean, this seems like a rant about amateurs who are capable of writing code and are probably doing it for fun, and you seem to roll your eyes because they can? Even if they aren’t doing it for fun, why does this need to be your business?

> That is to say, anything that is purely business-oriented seems to have very low demand for technical excellence and is therefore a bad investment, in terms of library development, for technically more skillful people. What is the plan for C#?

That’s not a contradiction; that’s an assumption, probably based on your own experience and observations, but it doesn’t have to be like this, and I really fail to see the relation to C#, the language.

To find out what the plan is, check GitHub! Maybe raise an issue and ask them about it, or look at the docs.

> In the past the plan was to compete with Java, because the people at Sun were stupid enough to stop Microsoft from using Microsoft Java, but the main audience of Microsoft has always been business software users, not scientists and engineers. The idea that scientists and engineers will stick to Fortran and C++ for speed does not necessarily hold, because a lot of data nowadays is in text form and the “Big Data” movement requires text processing, which is nasty in C/C++/Fortran, to say the least. GNU R, Scilab, etc. are clearly not the fastest possible choices, and, as demonstrated by the vast variety of scientific Python libraries, a proper programming language is required to create more complex data processing/analysis routines. Java is out of the game thanks to Oracle, but if C# does not pick up some of the 2016 scientific Python users/developers, then C# will become another COBOL, not another Fortran. (Fortran is terribly archaic, but thanks to its scientific users its libraries have such high quality that Fortran is still relevant and will probably stay relevant in numeric computation.) Even C and C++ have survived mainly because engineers, non-business-software developers, find those languages useful. Pascal and Delphi, as mostly business-software-oriented languages, have practically died, with the small exception of http://www.freepascal.org/

Where do you get these assumptions from?

It’s funny that you speak about scientists and engineers while having no data or real analysis to support your arguments, but anyway: the language, be it C#, C++, Java, Python, Lua, or JavaScript, doesn’t define your expertise in the software industry.

Just because you use C++ doesn’t mean you’re an engineer, and just because you use Python or Haskell doesn’t mean you’re a scientist or a researcher.

To really answer your question, I invite you to have a look at the following issue at the Roslyn project on GitHub: “State / Direction of C# as a High-Performance Language”.

Another thing to look forward to is .NET Native. I don’t know, but maybe at some point they will expand its support and allow us to use it for anything beyond UWP.


#3

Thank You for Your answer.

The reason why I threw the C# language, the C# compiler, and the CLR into a “single pot” is that for Java, Python, Ruby, and Perl the end result seems to be that the virtual machine is optimized for running only one programming language, and the other programming languages, for example Scala or JRuby on the Java VM, are attached to it like Frankenstein limbs. OK, I understand that in the case of the CLR one might argue that historically it was designed with C++ in mind as well, but, to put it in a slightly exaggerated manner, C++ is not as stack-based as the Forth programming language is. So You were right, I did mean the whole triple when I used the expression “C#”. Maybe I should have been more precise and pointed that out myself. Thank You for the clarification.

As regards my statements about different professions preferring different programming languages: I did not claim that using programming language X makes somebody a member of profession Y. However, I do maintain that, for historical reasons, different schools of specialists tend to prefer different programming languages. People who run scientific computations on clusters tend to prefer something faster than my 2016 favorite scripting language, Ruby, and mathematicians from the branch of statistics prefer tools that have better statistics libraries, often GNU R. That means that if I, as an applications developer, want to use the libraries developed by the people with the best domain-specific knowledge in the domain that interests me, then I have to use the libraries that those domain experts have created. Applications often involve multiple domains, meaning that if I want to use the best components available on Planet Earth, I have to architect my application so that it can use different programming languages simultaneously.
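As a minimal sketch of what I mean by such a multi-language architecture (the names “libdomain” and “domain_solve” are hypothetical, used only for illustration), C# can call a native library, e.g. one written in C or wrapping Fortran code, through P/Invoke:

```csharp
using System;
using System.Runtime.InteropServices;

// A minimal sketch of mixing languages in one application: P/Invoke
// lets C# call into a native library, e.g. one written in C or
// wrapping Fortran code. "libdomain" and "domain_solve" are
// hypothetical names used only for illustration.
static class NativeDomainLibrary
{
    [DllImport("libdomain", CallingConvention = CallingConvention.Cdecl)]
    private static extern double domain_solve(double input);

    public static double Solve(double input)
    {
        return domain_solve(input);
    }
}
```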

That leads to the question: what domain-specific library in C# beats all the libraries of other programming languages in the same domain? The examples in my initial post illustrated that old, archaic programming languages like Fortran and C++ can survive thanks to having attracted good domain experts who have written really high-quality libraries in those languages, while programming languages like Delphi and Visual Basic have died because they lacked timeless, high-quality libraries. C++ has the added argument of GPUs, which are essential for games and high-volume number crunching.

Timeless, high-quality libraries tend to be missing from programming languages that are used mainly for business software, and the reason might be that tax law, business models, business data formats, etc. become obsolete much faster than humanity’s understanding of the laws of nature does. Add to that the “minimum viable product” attitude, something that no proper scientist can tolerate in their research and no proper engineer can tolerate due to pride of craftsmanship, and the disparity in the level of polish between business applications and scientific/engineering applications gets even greater.

As of 2016_07 I suspect that libraries written in C# will probably have the best access to the various cloud platforms, but cloud APIs are, again, something that tends to change rapidly and get out of date rapidly. Thank You for the link to the C# high-performance discussion. It looks very interesting, especially in the light of .NET Native. As of 2016_07 I have not tried to build the CLR and the C# compiler, but if all of the essential C#-related tools can be built on Linux as easily as GCC and LLVM can be built, then things seem to get really interesting indeed, especially if the set of C# libraries, not necessarily the C# “stdlib”, starts to include some best-on-planet-Earth-in-its-domain libraries. However, the question of what that domain might be still remains. The cloud part is problematic due to privacy and centralization issues, but maybe that can be countered by storing blobs that have been encrypted at the client side (a sketch of the idea follows below).
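A minimal sketch of that idea, assuming AES from System.Security.Cryptography and leaving key management entirely out of scope:

```csharp
using System.IO;
using System.Security.Cryptography;

// A minimal sketch of the "encrypt at the client side" idea: the blob
// is encrypted with AES before upload, so the cloud only ever stores
// ciphertext. Key management is deliberately left out of the sketch.
static class ClientSideEncryption
{
    public static byte[] EncryptBlob(byte[] plaintext, byte[] key, out byte[] iv)
    {
        using (var aes = Aes.Create())
        {
            aes.Key = key;   // e.g. a 256-bit key that never leaves the client
            aes.GenerateIV();
            iv = aes.IV;     // stored alongside the ciphertext
            using (var encryptor = aes.CreateEncryptor())
            using (var ms = new MemoryStream())
            {
                using (var cs = new CryptoStream(ms, encryptor, CryptoStreamMode.Write))
                {
                    cs.Write(plaintext, 0, plaintext.Length);
                }
                return ms.ToArray();
            }
        }
    }
}
```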

Well, one way or the other, thank You for Your answer. My takeaway from Your answer is that C# might actually have a chance on the natural sciences side, and if it does, then that makes C# relevant in the future. I guess time will tell. Sounds good.


#4

Does it really have to beat those libraries in performance? I won’t go into the “language X beats language Y” argument, because it’s silly and futile; you pick a technology, for better and worse, for a reason!

I love the .NET stack and what it represents. I’m coming from C++ and Java, and that’s why I picked C#. I still use C++ and love it; I dropped Java because I dislike the language syntax, the lack of operator overloading, the lack of properties, and the lack of innovation.
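To illustrate two of those points, a toy sketch (the “Money” type here is made up):

```csharp
// A toy sketch of two of the C# features mentioned above that Java
// lacks: properties and operator overloading.
struct Money
{
    // A read-only auto-property; no getter/setter boilerplate.
    public decimal Amount { get; }

    public Money(decimal amount)
    {
        Amount = amount;
    }

    // Operator overloading: '+' works directly on Money values.
    public static Money operator +(Money a, Money b)
    {
        return new Money(a.Amount + b.Amount);
    }
}
```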

Scala is okay, but again, I don’t like its black magic; I dislike its aggressive type inference, and I dislike the fact that it pretends to know what I want it to do.

Indeed, the .NET Framework has a long way to go in giving you more control over performance, but the language I chose to express code in that ecosystem is definitely C#, because it’s awesome! And it’s getting more love with each release: more functional features and more succinct syntax. You can’t go wrong with it.

I would really love to see more of C# in the gaming industry and in the sciences. If you have ideas, you can always post them on GitHub and be part of this community! :smiley:

> Timeless, high-quality libraries tend to be missing from programming languages that are used mainly for business software, and the reason might be that tax law, business models, business data formats, etc. become obsolete much faster than humanity’s understanding of the laws of nature does. Add to that the “minimum viable product” attitude, something that no proper scientist can tolerate in their research and no proper engineer can tolerate due to pride of craftsmanship, and the disparity in the level of polish between business applications and scientific/engineering applications gets even greater.

If they are missing, why don’t you start working on them? I mean, if you miss a high-performance library for a specific domain or field and you need it, then if it doesn’t exist you should work on it. And when you hit a bottleneck or find that something works inefficiently, whether it’s the IL the compiler generates, some silly implementation at the API level of the framework that can be improved, or just a CLR constraint, you can go to the CoreCLR repo and file a bug or make a suggestion!

I mean, if we don’t do anything to improve things, we certainly can’t expect anyone else to do it for us! That’s just the way things are. :slight_smile:


#5

C# is used in the scientific community, e.g. https://en.m.wikipedia.org/wiki/BioMA, mostly when a framework provides services and minimizes the need for coding via code generation. The point is that scientists often still think in “Fortran terms”, so why bother with OOP, component-oriented programming, design patterns, etc.? That is why languages like Python and R come across as attractive.


#6

Who said Delphi is dead? Embarcadero (which acquired all of Borland’s software development tools) has released a 64-bit compiler for it, as well as compilers for Android and iOS. Now they are preparing a Linux version of the Delphi IDE too.


#7

By the way, Unity 3D, which supports C#, doesn’t look like a tool for making business apps (although you could do that too); it is used in many games.

