https://en.wikipedia.org/wiki/Occam%27s_razor#Software_Development
Anyone have any thoughts on this topic, feel free to share.
Well, it's interesting that they first tried to gauge complexity objectively, then made it a subjective matter. Of course, it's really a practical matter of parsimony: you use the programming language you understand best and can use most efficiently.
That being said, in a microcosm of that, it's why I got a bit addicted to list (and dictionary) comprehensions in Python. They let you reduce a lot of logic into as few lines of code as possible, such as:
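For illustration, here is a hypothetical sketch of the kind of reduction I mean: the same filter-and-transform written three ways (the data and variable names are made up):

```python
# Build the squares of the even numbers, three ways.
nums = [1, 2, 3, 4, 5, 6]

# 1. Explicit loop: verbose, but step by step.
squares_loop = []
for n in nums:
    if n % 2 == 0:
        squares_loop.append(n * n)

# 2. map/filter: functional style, one (long) line.
squares_map = list(map(lambda n: n * n, filter(lambda n: n % 2 == 0, nums)))

# 3. List comprehension: the same logic, compact and readable.
squares_comp = [n * n for n in nums if n % 2 == 0]

# A dictionary comprehension works the same way.
squares_dict = {n: n * n for n in nums if n % 2 == 0}

print(squares_loop, squares_map, squares_comp, squares_dict)
```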
They all generally give the same result, but you can tighten the code up. You can get pretty wild with it.
Although, one other issue crops up that might not be immediately obvious outside of coding: readability. For instance, Python has PEP 8, which serves as the standard style guide. It lays out formatting conventions that people follow (and recommend) to promote readability. Maintaining readability matters when others read your code, and even for keeping track of your own code.
So, while it's neat that you can really pare down code with an eye toward how long it takes to process, you also have to keep an eye on how readable it will be for those who debug it later (which, I guess, somewhat relates to complexity as a function of the reader's understanding/subjectivity).
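To make that trade-off concrete, here is a hypothetical example: a nested comprehension crammed into one line versus the same logic spelled out, which a later debugger may find easier to follow:

```python
# Flatten a matrix and keep the odd entries -- the "clever" one-liner.
matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
flat_odds = [x for row in matrix for x in row if x % 2 == 1]

# The same logic as an explicit nested loop.
flat_odds_loop = []
for row in matrix:
    for x in row:
        if x % 2 == 1:
            flat_odds_loop.append(x)

print(flat_odds, flat_odds_loop)
```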
As a total aside: a co-worker and I were talking about processing time and parsimony of code, and wondered whether comments or docstrings affect processing time in any significant way. It's a good habit to put comments and docstrings in your code, for the same reasons as readability: to keep track of the work being done and what's going on, for yourself or whoever is going to be looking at the code.
While commenting won't affect processing time in a normal case, the number of lines of code will. This is why all major corporations (Microsoft, the NSA, Adobe) write their code in a single line.
That sounds hideous to have to read. Do they format/reformat to work on it?
For your stache, a regular razor from Gillette is enough.
I do. I've been talking about Occam's razor on this forum for maybe around 5 years, and every time my thoughts have gone unchallenged my confidence in Occam's razor grows.
Occam's razor is a probabilistic statement which is usually misunderstood by a bunch of people who quote William Ockham. It's more likely that, given no additional information, the next person you meet has blue eyes (simple hypothesis) rather than green eyes, red hair, and a broken jaw (complicated hypothesis). In some cases, it's obvious which hypothesis is more complicated or, more accurately, less probable. However, there are also times when it's not so obvious; that is where a lot of people misunderstand what Occam's razor actually means.
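A toy numeric way to see the point about conjunctions (the base rates below are made up purely for illustration, not real population statistics):

```python
# Made-up base rates, for illustration only.
p_blue_eyes = 0.10    # simple hypothesis: a single trait
p_green_eyes = 0.02
p_red_hair = 0.02
p_broken_jaw = 0.001

# Assuming the traits are independent, the probability of the conjunction
# is the product -- so piling on traits can only drive the probability down.
p_complicated = p_green_eyes * p_red_hair * p_broken_jaw

print(p_complicated, "<", p_blue_eyes, "->", p_complicated < p_blue_eyes)
```

Even without independence, a conjunction of events can never be more probable than any one of its parts, which is the probabilistic core of the razor.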
But software development? How do you determine what the "correct programming language" to use is? "Correct" implies that there's a standard, but none is defined on that Wikipedia page. What's the standard we use to determine how correct something is? I'm not sure what problem they want to solve if there's no problem formulation.
Nevertheless, I do follow the "KISS" paradigm, which they mention on that page alongside the rule of least power. All my projects start with no rules (the simplest possible setup), and rules get added only as they become necessary, as evidenced by mistakes or other problems. Presuming that unnecessary rules slow down progress, this is usually a close-to-optimal method given a number of unknown variables, and I've found it to work more efficiently in practice than alternative methodologies which establish a bunch of paradigms, stylistic notions, and rules right off the bat to pre-empt mistakes that may or may not happen. The philosophy is vaguely similar to Good's philosophy of "make it work first and then make it better," which builds incremental changes upon incremental changes.
However, this is the first time I'm hearing a discussion of choosing the programming language based on Occam's razor. The rationale above is closer to emulating the scientific process of evidence-gathering and Bayesian philosophy, which naturally encodes Occam's razor, in order to minimize wasted time.
For the most popular programming languages, it does not. If it did, it would take two minutes to write a script that first strips all the docstrings and comments and then compiles/runs the result. Modern C compilers, for example, have been under development since the 1970s and can optimize C code in many ways. There's no way the army of professional programmers who built those compilers couldn't figure out how to automatically discard docstrings/comments over several decades of compiler development, when I, as an amateur coder, can do it in five minutes with a bash script. The same argument applies to most other programming languages.
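One quick way to check this in CPython (my own illustrative snippet): compile the same statements with and without comments and compare the resulting bytecode. Comments are discarded by the compiler and never reach the runtime; docstrings are kept (as `__doc__`), but only as stored data with no per-call cost.

```python
src_plain = "x = 1\ny = x + 1\n"
src_commented = "# a comment\nx = 1  # inline comment\ny = x + 1\n"

code_plain = compile(src_plain, "<plain>", "exec")
code_commented = compile(src_commented, "<commented>", "exec")

# The compiled instruction bytes are identical: the comments were
# discarded at compile time, before any code runs.
print(code_plain.co_code == code_commented.co_code)
```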
"the correct programming language to use is the one that is simplest while also solving the targeted software problem."
Such a general statement... It seems generally correct, but of course there are always exceptions. Let's say the team knows Java but the "best" language for the task is C++. Sacrificing some runtime performance for a faster time to market may be acceptable. It's a give and take.