I swear, if hardcore Python advocates were in the construction industry, they'd insist that the fastest way to build a skyscraper is to engineer and build it out of wood first - because wood goes up faster than steel - and then tear it down and redo the engineering and construction in steel.
You don't write the same program twice if you can avoid it. You just don't. It's nothing more than extra work. I've been down this path way too many times.
And don't get me wrong - I don't dislike Python, I use it all the time. It's one of my three go-to languages (bash for very simple problems, Python for problems of moderate complexity / performance / memory needs, C++ for problems of great complexity with significant performance / memory needs). But discovering that you chose the wrong language and have to switch languages is the absolute worst. And it's all too easy to gloss over a language's shortcomings until you yourself hit them.
BTW, to anyone who wants to pretend that Python isn't a hog....
>>> import sys
>>> a=1
>>> sys.getsizeof(a)
28
>>> a=[]
>>> sys.getsizeof(a)
72
>>> a=[1]
>>> sys.getsizeof(a)
80
>>> a={}
>>> sys.getsizeof(a)
248
Numpy helps, but it's still insufficient for many tasks.
I could do a performance benchmark for you if I wanted, too. C++ is generally about 10x faster, give or take half an order of magnitude depending on the task. There are a lot of things where even C (let alone Python) can't keep up with C++, because C++ has a greater ability to inline code in algorithms (for example, try benchmarking qsort vs. std::sort).
One of the worst aspects of Python in particular is that you have no granular control over your data structures. Python lists are roughly equivalent to std::vector, and you have no ability to choose between, say, std::vector, std::array, std::deque, std::list, and std::forward_list (NumPy arrays are like std::array, but for a very limited functionality subset). These not only have different memory footprints, but outright different big-O costs for different operations! And here we're not even going into the fact that you can template C++ data structures over different allocators or extend them with new functionality. The same applies to dictionaries vs. std::map, std::unordered_map, or std::multimap - very different memory and performance footprints - and if Python doesn't have a builtin means for doing something (for example, "find the nearest entry to a particular key", i.e. std::lower_bound / std::upper_bound), you're utterly screwed from a performance standpoint.
When it comes to performance and memory, Python is like, "Hey, want to drive in a nail? Here's a hammer! Hey, want to drive in a screw? Here's a hammer! Hey, want to tighten a bolt? Here's a hammer!"
The whole thing about Python being "rapid prototyping" vs. C++ is IMHO an oversold sales pitch. And remember, I'm a person who likes Python here! Python is great until it utterly screws you over. And even when that doesn't happen, Python has a lot of things which tend to make it more likely to bite you the more complicated the program gets. The lack of variable typing, for example, makes things simpler early on but can come back to bite you later - the longer and more complicated the program, the more likely that's going to happen. A complex program should not only be strictly typed, but have all variables outright sanity checked at the start of each function.
Mind you, I'd definitely recommend Python as a "language for beginners" over C++, and as a language for people whose primary job isn't programming.
(I also often find that people bashing C++ in favour of Python are unfamiliar with the features of
modern C++ standards)