lxe 2 days ago

I think the author makes a good point about understanding structure over symbol manipulation, but there's a slippery slope here that bothers me.

In practice, I find it much more productive to start with a computational solution - write the algorithm, make it work, understand the procedure. Then, if there's elegant mathematical structure hiding in there, it reveals itself naturally. You optimize where it matters.

The problem is math purists will look at this approach and dismiss it as "inelegant" or "brute force" thinking. But that's backwards. A closed-form solution you've memorized but don't deeply understand is worse than an iterative algorithm you've built from scratch and can reason about clearly.

Most real problems have perfectly good computational solutions. The computational perspective often forces you to think through edge cases, termination conditions, and the actual mechanics of what's happening - which builds genuine intuition. The "elegant" closed-form solution often obscures that structure.

I'm not against finding mathematical elegance. I'm against the cultural bias that treats computation as second-class thinking. Start with what works. Optimize when the structure becomes obvious. That's how you actually solve problems.

  • godelski 2 days ago

      Mathematics is not the study of numbers, but the relationships between them
      - Henri Poincaré
     
    I want to stress this because I think you have too rigid of a definition of math. Your talk about optimization sounds odd to me as someone who starts with math first. Optimization is done with a profiler. Sure, I'll also use math to find that solution but I don't start with optimization nor do I optimize by big O.

    Elegance is not first. First is rough. Solving by math sounds much like what you describe. I find my structures, put them together, and find the interactions. Elegance comes after cleaning things up. It's towards the end of the process, not the beginning. We don't divine math just as you don't divine code. I'm just not sure how you get elegance from the get go.

    So I find it weird that you criticize a math first approach because your description of a math approach doesn't feel all that accurate to me.

    Edit: I do also want to mention that there's a correspondence between math and code. They aren't completely isomorphic because math can do a lot more and can be much more arbitrarily constructed, but the correspondence is key to understanding how these techniques are not so different.

  • saulpw 2 days ago

    Some people like Peter Norvig prefer top-down, hackers like me and you prefer bottom-up. Many problems can be solved either way. But for some problems, if you use the wrong approach, you're gonna have a bad time. See Ron Jeffries' attempt to solve sudoku.

    The top-down (mathematical) approach can also fail, in cases where there's not an existing math solution, or when a perfectly spherical cow isn't an adequate representation of reality. See Minix vs Linux, or OSI vs TCP/IP.

    • lxe 2 days ago

      Fair point about problem-fit - some problems do naturally lend themselves to one approach over the other.

      But I think the Sudoku example is less about top-down vs bottom-up and more about dogmatic adherence to abstractions (OOP in that case). Jeffries wasn't just using a 'hacker' approach - he was forcing everything through an OOP lens that fundamentally didn't fit the problem structure.

      But yes, same issue can happen with the 'mathematical' approach - forcing "elegant" closed-form thinking onto problems that are inherently messy or iterative.

      • senderista 2 days ago

        I don’t think his failure had anything to do with OOP. He failed because he deliberately refused to think systematically about the problem before writing code, per the dictates of TDD.

    • kenjackson 2 days ago

      I'd argue that everyone solves problems bottom-up. It's just that some people have done the problem before (or a variant of it), so they have already constructed a top-down schema for it.

      • js8 2 days ago

        No, there's a difference. The difference is whether you work in constraint space (top down) or in solution space (bottom up). Top down is effectively adding constraints until there is a single solution.

    • liquid_bluing 2 days ago

      The hacker’s mentality is like that of the painter who spends months on a portrait in order to produce a beautiful but imperfect likeness, marked with his own personal style, which few can replicate and people pay a lot for. The mathematical approach is to take a photo because someone figured out how to perfectly reproduce images on paper over a hundred years ago and I just want a picture, dammit, but the camera’s manual is in Lojban.

      IMO, the mathematical approach is essentially always better for software; nearly every problem that the industry didn’t inflict upon itself was solved by some egghead last century. But there is a kind of joy in creating pointless difficulties at enormous cost in order to experience the satisfaction of overcoming them without recourse to others, I suppose.

  • orforforof 2 days ago

    I really enjoyed the book Mathematica by David Bessis, who writes about his creative process as a mathematician. He makes a case that formal math is usually the last step to refine/optimize an idea, not the starting point as is often assumed. His point is to push against the cultural idea that math == symbols. Sounds similar to some of what you're describing.

    • whilenot-dev 2 days ago

      I really didn't like that book. Its basic premise was that we should separate the idea of mathematics from the formalities of mathematics, and that we should aim to imagine mathematical problems visually. The later chapters then consist of an elephant drawing that isn't true to scale and an account of why David Bessis thought it would be best to create an AI startup; that just put the final nail in the coffin for me. There's some historical note here and there, but that's it - it really could've been a blog post.

      Every single YouTube video from tom7[0] or 3blue1brown[1] does way more to convey the fascination of mathematics.

      [0]: https://www.youtube.com/@tom7

      [1]: https://www.youtube.com/@3blue1brown

    • krikou 2 days ago

      Indeed, this is a fantastic book.

      I could relate his description of the mathematical experience to what I feel happening in my head/brain when I program.

    • thenobsta 2 days ago

      Amazing book. I love how he brings math into something tacit and internal.

  • vatsachak 2 days ago

    I have math papers in top journals and that's exactly how I did math:

    Just get a proof of the open problem no matter how sketchy. Then iterate and refine.

    But people love to reinvent the wheel without caring about abstractions, resulting in languages like Python being the de facto standard for machine learning.

    • wiz21c 2 days ago

      There's engineering and there's math. Engineering uses math to solve problems: when writing programs, you usually tinker with your data until the right math tool pops into your mind (e.g. first look at your data, then conclude that a normal distribution is the way to think about it). Basically, one uses existing math tools. In math it's more about proving something new, building new tools, I guess.

      Sidenote: I code fluid dynamics stuff (I'm trained in computer science, not at all in physics). It's funny to see how the math and physics deeply affect the way I code (and not the other way around). The laws of math and physics feel inescapable, and my code usually has to be extremely accurate to handle them correctly. When debugging that code, thinking math/physics first is usually the way to go, as it lets you narrow down the (code) bug more quickly. And if all else fails, it's usually back to the math/physics drawing board :-)

  • ViscountPenguin 2 days ago

    3Blue1Brown has a great video which frames this as a cultural problem that also exists in mathematics pedagogy:

    https://www.youtube.com/watch?v=ltLUadnCyi0

    Personally, I find a mix of all three approaches (programming, pen and paper, and "pure" mathematical structural thought) to be best.

  • zdkaster 2 days ago

    I completely agree. Start with what works, however rough, understand it a bit more deeply, and develop better solutions from there. Trial and error, brute force, or inelegant approaches feel more natural to a practitioner. I think this aligns with George Pólya's book https://en.wikipedia.org/wiki/How_to_Solve_It. Brute force is more productive and builds better intuition; once you recognize the pattern, the elegant solution will come.

  • MITSardine 2 days ago

    Math isn't about memorizing closed-form solutions, but analyzing the behavior of mathematical objects.

    That said, I mostly agree with you, and I thought I'd share an anecdote where a math result came from a premature implementation.

    I was working on maximizing the minimum value of a set of functions f_i that depend on variables X. I.e., solve max_X min_i f_i(X).

    The f_i were each cubic, so F(X) = min_i f_i(X) was piecewise cubic. X was dimension 3xN, N arbitrarily large. This is intractable to solve as, F being non-smooth (derivatives are discontinuous), you can't well throw it at Newton's method or a gradient descent. Non-differentiable optimization was out of the question due to cost.

    To solve this, I'd implemented an optimizer that moved one variable at a time x, such that F(x) was now a 1d piecewise cubic function that I could globally maximize with analytical methods.

    This was a simple algorithm where I intersected graphs of the f_i to figure out where they're minimal, then maximize the whole thing analytically section by section.

    In debugging this, something jumped out: coefficients corresponding to second and third derivative were always zero. What the hell was wrong with my implementation?? Did I compute the coefficients wrong?

    After a lot of head scratching and code back and forth, I went back to the scratchpad, looked at these functions more closely, and realized they're cubic in the variables jointly, but linear in any single variable. This should have been obvious, as it was a determinant of a matrix whose columns or rows depended linearly on the variables. Noticing this would have been 1st year math curriculum.

    This changed things radically as I could now recast my maxmin problem as a Linear Program, which has very efficient numerical solvers (e.g. Dantzig's simplex algorithm). These give you the global optimum to machine precision, and are very fast on small problems. As a bonus, I could actually move three variables at once --- not just one ---, as those were separate rows of the matrix. Or I could even move N at once, as those were separate columns. This could beat all the differentiable optimization based approaches that people had been doing on all counts (quality of the extrema and speed), using regularizations of F.
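
    (For the curious, the reformulation is just the standard epigraph trick; below is a toy sketch in Python/SciPy, not my actual code: max_x min_i f_i(x) with f_i(x) = a_i . x + b_i becomes "maximize t subject to t <= a_i . x + b_i", which is a plain LP.)

      import numpy as np
      from scipy.optimize import linprog

      def maxmin_linear(A, b):
          """A is (m, n) with rows a_i, b is (m,), so f_i(x) = A[i] @ x + b[i]."""
          m, n = A.shape
          c = np.zeros(n + 1)
          c[-1] = -1.0                             # minimize -t, i.e. maximize t
          A_ub = np.hstack([-A, np.ones((m, 1))])  # rows encode t - a_i . x <= b_i
          res = linprog(c, A_ub=A_ub, b_ub=b, bounds=[(None, None)] * (n + 1))
          return res.x[:n], res.x[-1]              # optimal x and the achieved min

      # e.g. f_1(x) = x and f_2(x) = 1 - x: the min is maximized at x = 0.5
      print(maxmin_linear(np.array([[1.0], [-1.0]]), np.array([0.0, 1.0])))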

    The end result is what I'd consider one of the few things not busy work in my PhD thesis, an actual novel result that brings something useful to the table. Whether this has been adopted at all is a different matter, but I'm satisfied with my result which, in the end, is mathematical in nature. It still baffles me that no-one had stumbled on this simple property despite the compute cycles wasted on solving this problem, which coincidentally is often stated as one of the main reasons the overarching field is still not as popular as it could be.

    From this episode, I deduced two things. Firstly, the right a priori mathematical insight can save a lot of time that would otherwise go into designing misfit algorithms, and then implementing and debugging them. I don't recall exactly, but this took me about two months or so, as I tried different approaches. Secondly, the right mathematical insight can be easy to miss. I had been blinded by the fact no-one had solved this problem before, so I assumed it must have had a hard solution. Something as trivial as this was not even imaginable to me.

    Now I try to be a little more careful and not jump into code right away when meeting a novel problem, and at least consider if there isn't a way it can be recast to a simpler problem. Recasting things to simpler or known problems is basically the essence of mathematics, isn't it?

sfpotter 2 days ago

I agree with the thrust of the article but my conclusion is slightly different.

In my experience the issue is sometimes that Step 1 doesn't even take place in a clear cut way. A lot of what I see is:

  1. Design algorithms and data structures
  2. Implement and test them
Or even:

  1. Program algorithms and data structures
  2. Implement and test them
Or even:

  1. Implement
  2. Test
Or even:

  1. Test
  2. Implement
:-(

IMO, this last popular approach gets things completely backwards. It assumes there is no need to think about the problem beforehand, to identify it, to spend any amount of time thinking about what needs to happen on a computer for that problem to be solved... you just write down some observable behaviors and begin reactively trying to implement them. Huge waste of time.

The point also about "C-style languages being more appealing" is well taken. It's not so much about the language in particular. If you are able to sit down and clearly articulate what you're trying to do, understand the design tradeoffs, which algorithms and data structures are available, which need to be invented... you could do it in assembly if it was necessary, it's just a matter of how much time and energy you're willing to spend. The goal becomes clear and you just go there.

I have an extensive mathematical background and find this training invaluable. On the other hand, I rarely need to go so far as carefully putting down theorems and definitions to understand what I'm doing. Most of this happens subliminally somewhere in my mind during the design phase. But there's no doubt that without this training I'd be much worse at my job.

  • josteink 2 days ago

    > Or even: 1. Test 2. Implement. IMO, this last popular approach gets things completely backwards. It assumes there is no need to think about the problem beforehand

    I think you misunderstand this approach.

    The point of writing the tests is to think about the desired behaviour of the system/module you are implementing, before your mind gets lost in all the complexities that necessarily arise during the implementation.

    When you write code, and hit a wall, it’s super easy to get hyper-focused on solving that one problem, and while doing so: lose the big picture.

    Writing tests first can be a way to avoid this, by thinking of the tests as a specification you think you should adhere to later, without having to worry about how you get there.

    For some problems, this works really well. For others, it might not. Just don’t dismiss the idea completely :)
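
    As a toy illustration (hypothetical names, not from the article), the "specification first" flavour might look like the tests below, written before slugify exists, so they fail until it's implemented:

      # Spec first: pin down the desired behaviour before worrying about "how".
      from mymodule import slugify   # hypothetical module, not yet written

      def test_lowercases_and_joins_words():
          assert slugify("Hello World") == "hello-world"

      def test_strips_punctuation():
          assert slugify("Rock & Roll!") == "rock-roll"

      def test_empty_input_gives_empty_slug():
          assert slugify("") == ""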

    • sfpotter 2 days ago

      I'm being a little rhetorically over the top. Of course, sometimes it's what you have to do.

      In fact, right now I'm doing exactly Test/Implement because I don't know how else to solve the problem. But this is a last resort. Only because the first few attempts failed and I must solve this problem have I resorted to grinding out individual cases. The issue is that I have my back against the wall and have to solve a problem I don't understand. But as I progress, eventually I will understand the problem, and then my many cases are going to get dramatically simplified or even rewritten.

      But all the tests I've created along the way will stay...

  • taeric 2 days ago

    Reminds me of the attempt to TDD one's way to a sudoku solver. Agreed that it is a bit of a crazy path.

    Not that Implement/Test can't work. As frustrating as it is, "just do something" works far better than many alternatives. In particular, with enough places doing it, somebody may succeed.

pedromsrocha 2 days ago

I disagree with the author’s claim that there are no black boxes in mathematics. In fact, this is exactly what lemmas and theorems serve as: a statement (like a typing interface or function signature) together with a proof (a “program”) that satisfies that interface. In large-scale mathematics we rarely unfold every proof; we use those results as black boxes — otherwise the work would be unsustainable.

I also disagree with the broader implication that the languages of programming and mathematics (i.e., logic) are inherently distant. On the contrary, they share deep structural isomorphisms as evidenced by the Curry–Howard correspondence.
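
To make that concrete, here's a minimal Lean sketch (my own illustration, not from the article): the theorem statement plays the role of a type signature, and the proof term is a program inhabiting it; downstream uses apply it without ever unfolding the proof.

  -- The statement is the "interface"; the proof term is the "program".
  theorem swap_sum (a b : Nat) : a + b = b + a :=
    Nat.add_comm a b

  -- Used as a black box, exactly like calling a function against its signature.
  example (x : Nat) : x + 1 = 1 + x := swap_sum x 1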

amarant 2 days ago

I already regret reading this article. Don't get me wrong, it's well written, and I agree with most of it. But every time I read an article like this I get stuck in analysis paralysis for any code I need to write afterwards and that's just not very productive.

Here's hoping my recognising the issue will soften the blow this time! Mayhaps this comment might save someone else from a similar fate

  • geekologist 2 days ago

    Don't know how far along your career you are, but as a youngin', the occasional thought piece like this that introduces interesting new ideas and challenges me to reevaluate how I approach things has proven to be quite formative in retrospect

godelski 2 days ago

Are people not reading the article, or are they so primed into thinking that math is a certain way that the author's words are missed?

  > The natural language which has been effectively used for thinking about computation, for thousands of years, is mathematics. Most people don’t think of math as free or flexible. They think of scary symbols and memorizing steps to regurgitate on tests. Others hear math and think category theory, lambda calculus, or other methods of formalizing computation itself, but these are hardly necessary for programming itself.
I very much agree with the author here. It's also just a fact that this was the primary language for centuries. We didn't have programming languages in the time of Newton, but we did have computation.

  > It’s not that programming languages aren’t good enough yet. It’s that no formal language could be good at it. Our brains just don’t think that way. When problems get hard, we draw diagrams and discuss them with collaborators.
This is math. It's not always about symbols and numbers. It's about the relationships. It's not about elegance, even if that's the end goal. Math always starts very messy. But the results you see are usually polished and cleaned up.

I think if you carefully read the author then many of you might be surprised you're using math as your frame of thinking. The symbols and rigor can help, but mathematical thinking is all about abstraction. It is an incredibly creative process. But I think sometimes we're so afraid of abstraction that we just call it by different names. Everything we do in math or programming is abstract. It's not like the code is real. There are different levels of abstraction and different types of abstraction, but all these things have their uses and advantages in different situations.

woopsn 2 days ago

I would submit that once you obtain a certain level of experience, it becomes IDEAL to begin with implementation, since a mathematical analysis may be either trivial or impossibly non-trivial... Of course, if you're dealing in exchange rates and risk management, understand the math!

Twey a day ago

This article comes across as rather defeatist:

> Another limitation of programming languages is that they are poor abstraction tools

> Programming languages are implementation tools for instructing machines, not thinking tools for expressing ideas

Machine code is an implementation tool for instructing machines (and even then there's a discussion to be had about designing machines with instruction sets that map more neatly to the problems we want to solve with them). Everything we've built on top of that, from assembly on up, is an attempt to bridge the gap toward ‘thinking tools for expressing ideas’.

The holy grail of programming languages is a language that seamlessly supports expressing algorithms at any level of abstraction, including or omitting lower-level details as necessary. Are we there yet? Definitely not. But to give up on the entire problem and declare that programming languages are inherently unsuitable for idea expression is really throwing the baby out with the bathwater.

As others in the comments have noted, it's a popular and successful approach to programming today to just start writing code and seeing where the nice structure emerges. The feasibility of that approach is entirely thanks to the increasing ability of programming languages to support top-down programming. If you look at programming practice in the past, when the available implementation languages were much lower-level, software engineering was dominated by high-level algorithm design tools like flowcharts, DRAKON, Nassi–Shneiderman diagrams, or UML, which were then painstakingly compiled by hand (in what was considered purely menial work, especially in the earlier days) into computer instructions. Our modern programming languages, even the ‘low-level’ ones, are already capable of higher levels of abstraction than the ‘high-level’ algorithm design tools of the '50s.

loourr 2 days ago

I've felt for a long time that math notation is just a really bad programming language.

I think a corollary to this is that we should teach math with code.
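
For example, something as basic as sigma notation clicks for many people once it's also shown as a loop; a toy sketch of what "teach math with code" could look like:

  # The sum 1 + 2 + ... + n, three ways: the Σ spelled out as a loop,
  # as a one-liner, and as Gauss's closed form. All three agree.
  n = 100
  total = 0
  for k in range(1, n + 1):   # Σ_{k=1}^{n} k
      total += k
  assert total == sum(range(1, n + 1)) == n * (n + 1) // 2 == 5050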

mewpmewp2 2 days ago

I wish there were a better explanation of what exact problem they were trying to solve. I couldn't understand it - if I had, I would have proposed my own solution and then compared it to the proposed thinking process to see whether that would have worked better for me. But I can't be bothered to follow a thought process laid out in symbols like this without even knowing what we're solving for.

Was it about how to design a profitable algorithm? Was it about how to design the bot? Was it about understanding whether the results from the bot were beneficial?

If it were that, I would just backtest the algorithm to see the profit changes on real historical data.

> Definition: We say that the merchant rate is favorable iff the earnings are non-negative for most sets of typical purchases and sales. r'(t) is favorable iff e(P, S) >= 0.

If I understand the definition correctly, I would say that this is likely even wrong, because you could have an algorithm that is slightly profitable 90% of the time, but 10% of the time it loses everything.

A correct solution to me is to simulate large numbers of trades based on as realistic data as you can possibly get and then consider the overall sum of the results, not positive vs negative trades ratio.
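
To make that concrete, here's a toy simulation with made-up numbers: a strategy that gains 1% on 90% of trades but loses 20% on the rest is "favorable" by that definition, yet it ruins you.

  # Win rate vs. expectation: 90% small wins, 10% large losses.
  import random

  random.seed(0)
  capital = 1.0
  for _ in range(100_000):
      capital *= 1.01 if random.random() < 0.9 else 0.80
  print(capital)   # collapses toward zero despite the 90% win rate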

zkmon 2 days ago

That's still a chaotic composition of thoughts, not driven by any identified structure or symmetry of the situation.

Why is a program needed? What constraints lead to the existence of that need? Why didn't human interactions need a program or thinking in math? Why do computers use 0s and 1s? You need to start there and systematically derive the other concepts that are tightly linked and have a purpose driven by the pre-existing context.

bigger_cheese 2 days ago

I'm an engineer. I think there are definitely some pain points in translating math to code.

I've written some nasty numerical integration code (in C, using for loops), for example. I'm not proud of it, but it solved my issue. I remember at the time thinking surely there must be a better way for computers to solve integrals.
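
(These days I'd probably reach for a library's adaptive quadrature rather than hand-rolled loops - a rough sketch, assuming SciPy is an option:)

  # The integral of sin(x) from 0 to pi is 2; adaptive quadrature instead of a manual Riemann sum.
  import numpy as np
  from scipy.integrate import quad

  value, _ = quad(np.sin, 0.0, np.pi)
  print(value)   # ≈ 2.0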

  • godelski 2 days ago

    I struggled with this originally and it took years for it to click. But when it did I became both a much better programmer and mathematician for it.

    I think what helps is to take the time to sit down and practice going back and forth. Remember, math and code are interchangeable. All the computer can do is math. Take some code and translate it to math, take some math and translate it to code. There's easy things to see like how variables are variables, but do you always see what the loops represent? Sums and products are easy, but there's also permutations and sometimes they're there due to lack of an operator. Like how loops can be matrix multiplication, dot products, or even integration.

    I highly suggest working with a language like C or Fortran to begin with and code that's more obviously math. But then move into things that aren't so obvious. Databases are a great example. When you get somewhat comfortable try code that isn't obviously math.

    The reason I wouldn't suggest a language like Python is because it abstracts too much. While it's my primary language now, it's harder to make that translation because you have to understand what's happening underneath or be working with a different mathematical system, and in my experience not many engineers (or many outside math majors) are familiar with abstract algebra and beyond, so these formulations are more challenging at first.

    For motivation, the benefits are that you can switch modes for when a problem is easier to solve in a different context. It happens much more than you'd think. So you end up speaking like Spanglish, or some other mixture of languages. I also find it beneficial that I can formulate ideas when out and about without a computer to type code. I also find that my code can often be cleaner and more flexible as it's clearer to me what I'm doing. So it helps a lot with debugging too

    Side note: with computers don't forget about statistics and things like Monte Carlo integration. We have GPUs these days and that massive parallelism can often make slower algorithms faster :). When looking at lots of computational code it's not written for the modern massively parallel environment we have today. Just some food for thought. You might find some fun solutions but also be careful of rabbit holes lol
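
    A toy sketch of the Monte Carlo point (Python/NumPy for brevity): the estimate is embarrassingly parallel, which is exactly what today's hardware rewards.

      # Estimate the integral of sin(x) over [0, pi] (= 2) by sampling;
      # the error shrinks like 1/sqrt(n), independent of dimension.
      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.uniform(0.0, np.pi, 1_000_000)
      print(np.pi * np.sin(x).mean())   # ≈ 2.0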

aryehof 2 days ago

But many computer applications are models of systems real or imagined. Those systems are not mathematical models. That everything is an “algorithm” is the mantra of programmers that haven’t been exposed to different types of software.

  • godelski 2 days ago

    Math is just a language. We use math in physics a lot, so I find it a weird claim to say that math doesn't apply to the real world.

  • groundzeros2015 2 days ago

    Real world systems are chaos. Math is the most successful method for conceptualizing them into a model we can reason about - such as in a computer.

jongjong 2 days ago

I have a love-hate relationship with Math.

I love the logical aspect and the visualization aspect like writing down a formula and then visualizing/imagining a graph of all possible curves which that formula represents given all possible values of x or z. You can visualize things that you cannot draw or even render on a computer.

I also enjoy visualizing powers and logarithms. Math doesn't have to be abstract. To me, it feels concrete.

My problem with math is all to do with syntax, syntax reuse in different contexts, and even the language mathematicians use to describe problems, which seems ambiguous to me... IMO, the way engineers describe problems is clearer.

Sometimes I feel like those who are good at math are kind of biased towards certain assumptions. Their bias makes it easier for them to fill in gaps in mathematical language and symbolism... But I would question whether this bias, this approach to thinking is actually a universally good thing in the grand scheme of things. Wouldn't math benefit from more neurodiversity?

I remember at school, I struggled in maths at some points because I could see multiple interpretations of certain statements and as the teacher kept going deeper, I felt like I had to execute a tree search algorithm in my mind to figure out what was the most plausible interpretation of the lesson. I did much better at university because I was learning from books and so I could pause and research every time I encountered an ambiguous statement. I went from failing school math to getting distinction at university level maths.

begueradj 2 days ago

Effort was made to write this article. Deep insight in several statements.

shevy-java 2 days ago

> Programming languages are implementation tools for instructing machines, not thinking tools for expressing ideas.

I completely disagree with that assumption.

Any function call that proceeds to capture logic - e.g. data from real-life systems, drones, or robots in logistics - will often be part of a logical chain. Sometimes these use a DSL, be it in Rails or older DSLs such as the Sierra game logic.

If you have a good programming language it is basically like "thinking" in that language too. You can also see this in languages such as C, and the creation of git. Now I don't think C is a particularly great language for higher abstractions, but the assumption that "only math is valid and any other instruction to a machine is pointless" is simply flat-out wrong. Both are perfectly valid and fine; they just operate on a different level usually. My brain is more comfortable with Ruby than with C, for instance. I'd rather want languages to be like Ruby AND fast, than have to adjust down towards C or assembly.

Also the author neglects that you can bootstrap in language xyz to see if a specific idea is feasible. That's what happened in many languages.

  • rramadass a day ago

    You have misunderstood what the author meant.

    It is that Mathematics is far more general and uses a myriad of notations developed over hundreds of years and adapted to various sub-fields/domains/models as necessary. This makes it far more flexible and powerful than any programming language. That is why Multi-Paradigm languages became a thing i.e. there is a need for programming languages to provide a larger set of computation models which can then be exploited by the programmer to map his domain models (mathematical or not).

    For example; why do many(most?) programmers have difficulty in transcribing algorithms given in pseudocode to their favourite language? Simply because they have not understood the algorithm at the fundamental mathematical level but have only picked up the patterns through which it is expressed in their language. Note that this is the default way our brain works and how we manage real-world complexity without really understanding everything (satirically phrased as "monkey see, monkey do"). But we can use mathematical methods and reasoning to minimize going off the rails because it forces us to make explicit our assumptions, definitions and proofs which is at the heart of problem-solving. So we use all the mathematical tools we have at hand to structure and solve a problem and only later map it to our programming language. But note that as we gain more experience this mapping becomes intuitive and we can directly think and express it in our favourite programming language.

    See also my comment here - https://news.ycombinator.com/item?id=45934301

    Some References:

    Notation as a tool of thought by Kenneth Iverson - https://dl.acm.org/doi/10.1145/358896.358899

    Predicate Logic as Programming Language by Robert Kowalski - https://www.researchgate.net/publication/221330242_Predicate...

almostgotcaught 2 days ago

what compels software people to write opinion pieces. like you don't see bakers, mechanics, dentists, accountants writing blog posts like this...

Edit: to everyone responding that there are trade mags - yes SWE has those too (they're called developer conferences). In both categories, someone has to invite you to speak. I'm asking what compels Joe Shmoe SWE to pontificate on things they haven't been asked by anyone to pontificate on.

  • constantcrying 2 days ago

    >I'm asking what compels Joe Shmoe SWE to pontificate on things they haven't been asked by anyone to pontificate on.

    The Internet is absolutely full of this. This is purely your own bias; for any of the trades you mentioned, try looking. You will find videos, podcasts, and blogs within minutes.

    People love talking about their work, no matter their trade. They love giving their opinions.

  • dinkleberg 2 days ago

    What an insane statement. What compels anyone to write an opinion piece? They have an opinion and want to share it! Why in god's name should someone have to be invited to share their opinion, on their own website no less.

    • almostgotcaught 2 days ago

      Have you really never heard the expression "opinions are like ...." or did you not understand the point of the expression?

  • WorldMaker 2 days ago

    Mathematicians certainly write volumes of opinion pieces. The article you are complaining about starts from the presumption that software could benefit from more mathematical thinking, even if that doesn't explain broader general trends.

    (But I think it does apply more generally. We refer to it as Computer Science, it is often a branch of Mathematics both historically and today with some Universities still considering it a part of their Math department. Some of the industry's biggest role models/luminaries often considered themselves mathematicians first or second, such as Turing, Church, Dijkstra, Knuth, and more.)

    • almostgotcaught 2 days ago

      > Computer Science

      do we really have to retread this? unless you are employed by a university to perform research (or another research organization), you are not a computer scientist or a mathematician or anything else of that sort. no more so than an accountant is an economist or a carpenter is an architect.

      > The article you are complaining about starts from the presumption that software

      reread my comment - at no point did i complain about the article. i'm complaining that SWEs have overinflated senses of self which compel them to write such articles.

      • tkfoss 2 days ago

            > what compels software people to write opinion pieces. like you don't see bakers, mechanics, dentists, accountants writing blog posts like this...
        
            >> Computer Science
            > do we really have to retread this? unless you are employed by a university to perform research (or another research organization), you are not a computer scientist or a mathematician or anything else of that sort. no more so than an accountant is an economist or a carpenter is an architect.
        
            >> The article you are complaining about starts from the presumption that software
            > reread my comment - at no point did i complain about the article. i'm complaining that SWEs have overinflated senses of self which compel them to write such articles.
        
        
        You're an incredibly tiring commenter, to the point that I already recognize your nickname. What compels YOU to be this way?

        I wish there was a block button for the "overinflated senses of self".

        • almostgotcaught 2 days ago

          what compels me to be which way? to call out obnoxious things in the people i'm surrounded by? dunno probably my natural revulsion to noxious things?

        • yowlingcat 2 days ago

          I observe this compulsion a lot and in my opinion, it's almost always coming from resentment driving ego in an attempt to compensate for insecurity and self-loathing, which ultimately ends up misdirected towards others. It's almost always accidentally entertaining, but sadly ends up diminishing rather than elevating discourse.

          To refute GP's point more broadly -- there is a lot in /applied/ computer science (which is what I think the harder aspects of software engineering really are) that was and is done by individuals in software just building in a vacuum, open source holding tons of examples.

          And to answer GP's somewhat rhetorical question more directly - none of those professions are paid to do open-ended knowledge work, so the analogy is extremely strained. You don't necessarily see them post on blogs (as opposed to LinkedIn/X, for example), but: investors, management consultants, lawyers, traders, and corporate executives all write a ton of this kind of long-form content that is blog post flavored all the time. And I think it comes from the same place -- they're paid to do open-ended knowledge work of some kind, and that leads people to write to reflect on what they think seems to work and what doesn't.

          Some of it is interesting, some of it is pretty banal (for what it's worth, I don't really disagree that this blog post is uninteresting), but I find it odd to throw out the entire category even if a lot of it is noise.

      • JadeNB 2 days ago

        > do we really have to retread this? unless you are employed by a university to perform research (or another research organization), you are not a computer scientist or a mathematician or anything else of that sort. no more so than an accountant is an economist or a carpenter is an architect.

        Doing math or science is the criterion for being a mathematician or scientist, not who employs you how.

        • almostgotcaught 2 days ago

          I would love for you to show me any novel theorems you've proven or publishable experiments you've run during your career as a swe.

          • JadeNB 2 days ago

            I haven't, because I'm not an SWE. I'm sure some SWE has, but I can't point to them as an example. But, even in the extremely unlikely case that that's never happened, the reason such a person isn't a mathematician or scientist is because they didn't do math or science, not directly because of their job.

      • WorldMaker 2 days ago

        Those who do not learn history are doomed to repeat it (poorly). Same for anyone doing software that entirely ignores Computer Science. You are missing core skills and reinventing well known wheels when you could be busy building smarter things.

        > no more so than an accountant is an economist or a carpenter is an architect

        I know many accountants who would claim you can't be a good accountant without being an economist. Arguably that's most of the course load of an MBA in a nutshell. I don't know a carpenter who would claim to be an architect (usually carpentry happens after the architecture has been done), but I know plenty of carpenters who claim to be artists and/or artisans (depending on how you see the difference), who take pride in their craft and understand the aesthetic underpinnings.

        > reread my comment - at no point did i complain about the article. i'm complaining that SWEs have overinflated senses of self which compel them to write such articles.

        You chose which article to post your complaint to. The context of your comment is most directly complaining about this specific article. That's how HN works. If you didn't read the article and feel like just generically complaining about the "over-inflated senses of self" in the software industry, perhaps you should be reading some forum that isn't HN?

  • exe34 2 days ago

    Bakers certainly write books and magazines[0] on baking, as well as interminable stories about their childhood. Mechanics: [1]. I could only find one obvious one for dentists: [2]. Somebody else did accountants in the thread. I think it's a human thing, to want to share our opinions, whether or not they are well supported by evidence. I suspect software people write blogs because the tech is easier for them given their day job.

    [0] https://www.google.com/search?q=blaking+magazine [1] https://www.google.com/search?q=mechanics+magazines [2] https://dentistry.co.uk/dentistry-magazine-january-2023-digi...

  • lxe 2 days ago

    > you don't see bakers, mechanics, dentists, accountants writing things like this...

    There are literally industry publications full of these.

    • almostgotcaught 2 days ago

      Yes and you have to be invited to publish in a place. Meaning at least one other person has to believe your opinion is significant........

      • JadeNB 2 days ago

        > Yes and you have to be invited to publish in a place. Meaning at least one other person has to believe your opinion is significant........

        I don't think that this is true. The vast majority of technical math publications, for example, are reviewed, but not invited. And expository, and even technical, math is widely available in fora without any refereeing process (and consequent lack of guarantee of quality).

  • SatvikBeri 2 days ago

    Accountants certainly do. They've had trade magazines with opinion pieces since well before the internet.

  • TrackerFF 2 days ago

    You're getting a lot of flak for this, but I think it is a legitimate question to ask. I have many different hobbies, and have worked in different industries, but software development / programming is sort of unique in how much people discuss it online.

    My takes are:

    1) There are a lot of IT workers in the world, and they're all online natives. So naturally they will discuss ideas, problems, etc. online. It is simply a huge community, compared to other professions.

    2) Programming specifically is for many both a hobby and a profession. So being passionate about it compels many people to discuss it, just like others will do about their own hobbies.

    3) Software is a very fast-moving area, and very diverse, so you will get many different takes on the same problems.

    4) Posting is cheap. The second you've learned about something, like static vs dynamic typing, you can voice your opinion. And the opinions can range from beginners to CS experts, both with legit takes on the topic.

    5) It is incredibly easy to reach out to other developers, with the various platforms and aggregators. In some fields it is almost impossible to connect / interact with other professionals in your field, unless you can get past the gatekeepers.

    And the list goes on.

  • 1-more 2 days ago

    If they'd had opinion pages at the time, the inventors of nixtamalization would have and should have written something like this.

    • tickerticker 2 days ago

      LOL. Have not seen the "nixta" word since 10 years ago when I was researching how to make grits.

  • alfiedotwtf 2 days ago

    lol, your original comment is you pontificating

  • pissmeself 2 days ago

    [dead]

    • goatlover 2 days ago

      Some do get their egos stroked on their shows.

rramadass a day ago

The author is obviously inspired by Alexander Stepanov's book From Mathematics to Generic Programming (which he links to at https://www.fm2gp.com/). Stepanov's basic thesis is that all problems can be modeled in (abstract) Algebra i.e. by defining "objects and operations on objects" following structures like Groups/Rings/Fields/etc. and it is from this vantage point that we should start problem-solving.

He explained this in his first book Elements of Programming (now freely available at https://www.elementsofprogramming.com/) and then simplified the basic ideas into the above book. In his interviews he often mentions George Chrystal's Algebra books as foundational. These are the ideas that he used to implement STL in C++.

Also related (maybe?) is Paul Halmos and Steven Givant's book Logic as Algebra. MAA review at https://old.maa.org/press/maa-reviews/logic-as-algebra
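
A rough Python rendition of the book's running idea (this sketch is mine, not Stepanov's code): the O(log n) power algorithm relies only on associativity, so one implementation serves many algebraic structures.

  def power(x, n, op):
      """Combine x with itself n times (n >= 1) under an associative op."""
      result = None
      while n > 0:
          if n & 1:
              result = x if result is None else op(result, x)
          x = op(x, x)
          n >>= 1
      return result

  print(power(3, 13, lambda a, b: a * b))    # 1594323 == 3**13
  print(power("ab", 3, lambda a, b: a + b))  # 'ababab': same code, different monoid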

samdoesnothing 2 days ago

I mean, maybe if your background is mathematics this would make sense. But for a lot of us it isn't; we're more linguistically oriented, and we're certainly not going to come up with some pure mathematical formula that describes a problem, but we might describe the problem, break it down into steps, and then implement those steps.

  • Mikhail_Edoshin 2 days ago

    Step-by-step computation is very close to step-by-step reasoning, which is math.

  • never_inline 2 days ago

    What does "linguistically oriented" even mean?

    Can you give an example of how you "linguistically" approach a problem?

    I mean, even in math, description of the problems are written in natural language, but they have to be precise.

gpjanik 2 days ago

"Think in math, write in code" is the possibly worst programming paradigm for most tasks. Math notations, conventions and concepts usually operate under the principles of minimum description lenght. Good programming actively fights that in favor of extensibility, readability, and generally caters to human nature, not maximum density of notation.

If you want to put this to test, try formulating a React component with autocomplete as a "math problem". Good luck.

(I studied maths, if anyone is questioning where my beliefs come from, that's because I actually used to think in maths while programming for a long time.)

  • Zacharias030 2 days ago

    The people I know who „think in math“ don’t think in the syntax of the written notation, including myself.

    • gpjanik 2 days ago

      If you're adding computational / problem-breakdown / heuristic steps on top of (or instead of) mathematical concepts, then you're doing the opposite of what the author proposes.

      Scientific consensus in math is Occam's Razor, or the principle of parsimony. In algebra, topology, logic, and many other domains, this means that rather than having many computational steps (or a "simple mental model") to arrive at an answer, you introduce a concept that captures a class of problems and use that. Very beneficial for dealing with purely mathematical problems, an absolute disaster for quick problem solving IMO.

      • ccortes 2 days ago

        > then you're doing the opposite of what the author proposes

        No, it’s exactly what the author is writing about. Just check his example, it’s pretty clear what he means by “thinking in math”

        > Scientific conensus in math is Occam's Razor, or the principle of parsimony. In algebra, topology, logic and many other domains, this means that rather than having many computational steps (or a "simple mental model") to arrive to an answer, you introduce a concept that captures a class of problems and use that.

        I don’t even know what you mean by this.

  • godelski 2 days ago

      > Math notations, conventions and concepts usually operate under the principles of minimum description lenght.
    
    I think you're conflating math itself with the fact that it's typically written by hand on paper or a board. That optimization is not due to math; it is due to the communication medium.

    If we look at older code we'll actually see a similar thing. People were limited in line length, so the exact same thing happened. That's why there are still conventions like 80-char text width, though now those serve as a style guide rather than a hard rule. It also helps that we can autocomplete variables.

    Your criticism is misplaced.