In today's world, computers can readily be programmed to assist people medically. With some work, they could probably even be programmed with decision-making processes that generate high-quality aggregate outcomes in real-life medical situations. All we'd need is a set of algorithms, a tentative set of values and desirable outcomes, and data on the variables and parameters present in any given scenario we want to assess. The computer would then calculate which outcomes have the highest value, and would subsequently perform the actions necessary to bring those outcomes about. So, for example, if someone with an injury requested to be attended to, but voiced his plan to massacre a school full of people, the computer would be able to conclude that the most suitable action would be to NOT dress the man's wounds -- that is, if said wounds rendered him unable to carry out his plans.
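The decision procedure described above can be sketched in a few lines of code. Everything here is hypothetical and invented for illustration: the action names, the numeric values, and the `choose_action` helper are assumptions, not part of any real medical system; the point is only to show the shape of "score each outcome, pick the highest."

```python
# A minimal sketch of the decision procedure described above, using
# hypothetical actions and made-up outcome values -- not a real system.

def choose_action(actions, value_of):
    """Pick the action whose predicted outcome has the highest total value."""
    return max(actions, key=value_of)

# Hypothetical scenario from the text: treating the injured man
# would enable a massacre, so its aggregate outcome scores far lower.
outcome_values = {
    "dress_wounds": -100,      # patient recovers, but many lives are lost
    "withhold_treatment": -1,  # one person suffers, many lives are saved
}

best = choose_action(outcome_values, outcome_values.get)
print(best)  # -> withhold_treatment
```

The interesting work, of course, is hidden inside the numbers: assigning those values is exactly where the "tentative set of values and desirable outcomes" comes in.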
Does the computer feel anything for those in danger of being massacred while performing this calculation? No, but it knows that the people involved DO feel, and that this is valuable. The computer could therefore maximize the value of a situation's outcomes without any of the distortions of the exclusive preferences that emotions foster. Incidentally, you don't have to feel pain in order to understand that it's negative, and thus something to rid the world of (for the same tautological reasons that the consequents of wetness and solidity are inherent in the definitions of water and solids, respectively).
Empathy is no good for the same reasons that racism, preferring the taste of unhealthy foods to healthy ones, and being sexually attracted to people we know aren't very mentally worthwhile are all no good. Even when we can't help feeling a certain way, we should still double-check the feeling against a mental algorithm before deciding to act on it.