One answer is that lying is usually wrong because it usually has bad consequences, but sometimes lying isn't wrong because any bad consequences are outweighed by good consequences. This makes sense of the intuition that it's wrong to lie on tax forms, but not wrong to lie about hiding Jews in the attic.
This view of moral judgments is called consequentialism. While many philosophers and non-philosophers are attracted to the basic idea of consequences determining right and wrong, the devil is in the details.
What counts as a good consequence (or bad consequence)?
If we use consequences to determine moral judgments, the consequences themselves must not ultimately call out for moral evaluation. Otherwise, we'd be stuck in a loop! There are two ways out of the loop:
- The consequences which ultimately determine the morality of our actions must be so unquestionably morally good (or bad) that there's no need to justify them. Pleasure and pain have long been put forward as moral basics. From Plato's Protagoras: "Then you think that pain is an evil and pleasure is a good: and even pleasure you deem an evil, when it robs you of greater pleasures than it gives, or causes pains greater than the pleasure." In other words, the only time we do question the goodness of pleasure is when its consequence is less pleasure or more pain overall.
- The determining consequences could be morally neutral in themselves. It strikes me as a little weird to call pain "morally bad" or pleasure "morally good." Sure, we all have strong motivation to seek pleasure and avoid pain for ourselves, but is this a moral motivation? I don't avoid pain for myself because pain is morally bad! It's possible to characterize morally good acts as those which bring about more pleasure than pain overall (and morally bad acts as predominantly causing pain) without putting a moral label on pleasure and pain themselves.
Consequences for whom?
Suppose we do settle on pleasure versus pain as the consequences by which true moral judgments are determined. One particularly self-centered way of judging actions is whether they ultimately bring about mostly pleasure or mostly pain for one's self: ethical egoism. As flimsy as that sounds, it is enough to evaluate some desired actions as morally bad and some undesired actions as morally good. For example, I may want to drink whiskey at every opportunity, but if this would bring about more pain than pleasure for me in the long run, it's wrong for me to drink whiskey at every opportunity. Or maybe I find exercise completely unpleasant, but if regular exercise would bring me more pleasure than pain in the long run, it's right for me to exercise.
Morality that's purely self-regarding strikes a lot of people as a contradiction. The Golden Rule isn't, "Do unto others...if it helps you out." Most forms of consequence-based morality take into account consequences to other people. The trouble is figuring out how to do this without extremely counter-intuitive results. Jeremy Bentham's classical utilitarianism draws an analogy between the way an individual seeks to increase his happiness over his pain and the way a community of individuals might increase its group happiness over its group pain. This sounds reasonable at first, but it has troublesome implications, such as its being better for a few people to suffer extreme pain if that means a small increase in happiness for enough other people. Modern utilitarians try to find new ways of counting up consequences that don't have such immoral-seeming results, or in some cases they'll question whether our gut reactions are justified.
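The aggregation worry above can be made concrete with a toy calculation. This is only an illustrative sketch of simple summation, with utility numbers invented for the example, not a claim about how any actual utilitarian calculus must be carried out:

```python
# Toy sketch of Bentham-style aggregation by simple summation.
# Positive numbers stand for pleasure, negative for pain;
# all values here are invented for illustration.

def total_utility(changes):
    """Sum the utility changes across everyone affected by an action."""
    return sum(changes)

# Hypothetical scenario: 3 people suffer extreme pain (-100 each)
# so that 1000 others each gain a tiny bit of happiness (+1 each).
scenario = [-100] * 3 + [1] * 1000

# Net total: -300 + 1000 = +700, so simple summation rates the
# action as good overall, despite the intuitive objection.
print(total_utility(scenario))  # 700
```

Because the small gains are multiplied across enough people, the sum comes out positive no matter how severe the concentrated suffering is, which is exactly the kind of result modern utilitarians try to avoid by changing how the counting is done.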
What we should and shouldn't expect.
Consequence-based morality has a hard time justifying natural rights, as opposed to legally granted rights. Jeremy Bentham famously called natural rights "nonsense upon stilts" (insult standards have fallen in the modern age!). This is because, for a consequentialist, rights are only good so long as respecting them has good consequences, and we can always make up a special circumstance in which respecting a right has bad consequences.
Another basic concept in other views of morality which isn't handled as easily by consequentialism is giving people what they deserve. If a particular action will benefit a habitually bad person at the expense of a habitually good person — and the benefit is slightly greater than the expense — then it would be a good action, all else being equal.
It may normally lead to better consequences if we respect rights and give people what they deserve, but these can't be fundamental moral elements if consequences are the only things that ultimately determine right and wrong.