In this article I give a simple example of how data types help to write better code.
Imagine you're developing a 2D game. It's a super simple game: squares appear on the screen and you have to click on them to gain points. It's just for the sake of the explanation, but you can imagine it's an implementation of some kind of Mogura Tataki (whack-a-mole) if you please. The squares are all canonical (from [-1,-1] to [1,1]) to make things simpler, and come in various sizes and locations.
To detect a hit and display the squares on the screen, you'll probably need to convert between the screen coordinate system and the square coordinate system.
And to test for a hit, the canonical shape makes the check easy in the square coordinate system.
Now every time the player hits the screen, convert from screen coordinates to square coordinates and test for a hit.
It compiles on the first try, not even a warning, you rock! Run it, hit it, and... OK, smashed the first one. Eh, I was pretty sure I hit that second one too. Oh wait, what the hell is happening, sometimes it works and sometimes it doesn't?!
Obviously, all my eagle-eyed readers will have spotted the problem five minutes ago. We want to convert from *screen* coordinates to *square* coordinates. Sorry, my mistake:
Recompile it, retry it. Hmm, still not good. Did I swap coordHit and coordSquare when calling FromScreenToSquareCoordSys()? I've already forgotten, let me check the function definition again, a double*, yeah but... Or maybe I got the conversion wrong, so let's check the body of the function too... No, looks good. Hmm, search, search, debug, search, and... ah, what a dumbass I am, I'm not using the correct variable when testing for the hit!
Ok, now it's working.
Of course, you would never have made such simple mistakes, right? Nobody ever makes mistakes, only others do! And of course, the same way you immediately saw the problem in that deliberately over-simplified example, you will immediately see the same problem in a real-world application where parameters fly here and there through tons of transformations, hundreds of functions and thousands of lines of code. Then, fine, you can keep on coding in languages with no type checking and not bother your mighty brain with such trivial questions.
For the others, who see coding as thinking rather than type racing, let's take a minute to see how we could have avoided the mistakes above. A better use of types provides a solution. The mistake was to use coordinates from one coordinate system instead of another. It clearly shows that, even if in both cases the coordinates are a pair of real values that can be implemented as doubles, they are indeed two different concepts. Expressing this by creating a type for each of them:
allows us to clearly and unambiguously specify in the interfaces of the functions what they expect:
Now if you make a mistake when using them:
The type checking will immediately tell you where you're wrong and what you should have used:
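Something like this (the exact wording varies by compiler; this sketch assumes a GCC-style diagnostic and the hypothetical `ScreenCoord`/`SquareCoord` types):

```
error: could not convert 'coordHit' from 'ScreenCoord' to 'SquareCoord'
note:  in call to 'bool IsHit(SquareCoord)'
```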
I hope this simple and easy-to-understand example will encourage you to see languages without type checking as unsafe and inefficient: they let you write stupid things and make them hard to debug. I hope it will also encourage those who disregard the fields of code design and code architecture to change their minds.
Edit on 2023/06/23: another video on the subject here.