There are subfields of computer science dedicated to this question. A good starting point for the theory is "Pessimal Algorithms and Simplexity Analysis" (Broder & Stolfi, 1984), which lays out two concepts:
- The time & space simplexity of an algorithm give lower bounds on its best-case resource usage, and
- An algorithm is pessimal if no equivalent algorithm takes more time/space/etc. without resorting to outright waste.
For example, common folklore holds that sorting has O(n lg n) time complexity, under the usual comparison-model assumptions. The paper shows that sorting has Ω(n^(lg n / 2)) time simplexity; any algorithm that takes more time, like bogosort, must do so through some sort of trickery, such as non-determinism or wasting time via do-nothing operations.
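To make the bound concrete, here is a minimal Python sketch of slowsort, the "multiply and surrender" algorithm the paper uses to attain that simplexity (the rendering in Python is mine; the paper presents it in pseudocode):

```python
def slowsort(a, i=0, j=None):
    """Sort a[i..j] in place by multiply and surrender:
    sort each half, move the larger of the two halves' maxima
    to the end, then slowsort everything before it."""
    if j is None:
        j = len(a) - 1
    if i >= j:
        return
    m = (i + j) // 2
    slowsort(a, i, m)        # sort the first half
    slowsort(a, m + 1, j)    # sort the second half
    if a[m] > a[j]:          # place the maximum of a[i..j] at a[j]
        a[m], a[j] = a[j], a[m]
    slowsort(a, i, j - 1)    # recurse on everything but the last element
```

Its recurrence is T(n) = 2T(n/2) + T(n-1) + 1, whose solution grows like the n^(lg n / 2) bound quoted above, yet every comparison either sorts a sub-array or places a maximum, so none of the work counts as waste. Bogosort takes longer only by cheating: its repeated shuffles throw away already-completed work, which the paper's ground rules forbid.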