In a nutshell: I'm currently against it. Feel free to try to convince me of its worth, but keep in mind the high probability that I've already heard your argument in some form, somewhere.
Voluntary human extinction implicitly assumes that the rather logical notion of reducing suffering in the absence of consent is just as valid as the subjective notion that one's own life is not worth living. To me, the idea that everyone must be convinced that their own lives are horrible is just as repugnant and idiotic as the idea that children should simply accept having emerged involuntarily. This is only the start of my contention, however, as I don't even think that the choice to continue living, once born, is entirely psychological, or an individual choice to be made at all.
Proponents of voluntary extinction appeal to either (1) the amount of suffering our existence introduces at the expense of other life (the resources you consume that could go to a deer or a cat instead, for example), or (2) the amount of potential suffering we could unknowingly introduce by accident, via sensation and deprivation, simply by existing. Neither argument makes sense to me: (1) implies that we are currently capable of defining every variable involved in determining the outcome of the equation, and (2) ignores all of the suffering we might be able to prevent by existing, given the possibility of eternity and of sentience existing in multiple locales.
In the case of (1), it is certainly possible that automated, technological means of redesigning the natural world will emerge at some point, capable of removing negative sensation from the environment. In both cases, given that we can't yet predict future suffering with any degree of accuracy, it makes more sense to exist voluntarily, to the end of learning more about our predicament, than to voluntarily disappear from the universe outright. How irresponsible the alternative would be if it indeed turns out that trillions of planets contain, or will contain, mass-energy configurations similar in content and substance to whales and buffalo, and that we can do something about it!
We may suffer as a result, but we will have chosen to -- rationally, based on a thorough assessment of our circumstances and of the need to withhold judgment in the absence of a more all-encompassing value equation. We may also accidentally inflict harm on other sentient creatures as a consequence of our existence, but this is necessary if we are ever to determine the scope of reality as we know it and, thus, the suffering contained therein.
Note, also, that artificial intelligence, and the eventual replacement of the central nervous system with a superior, more efficient bodily alert system, may be possible, meaning that, in the future, humans (or, more accurately, intelligences) may become physically incapable of suffering. The fundamentals of life are probably already understood in our time, but, again, that says nothing about the scope of the problem, so why shouldn't we augment our bodies while pursuing a working picture and understanding of what, elsewhere, warrants solutions?
But what if everyone decides that they, personally, cannot handle the horrors of life in the meantime? What if, eventually, there are no volunteers for the job at all? This is why I said above that whether someone should kill himself is not a decision to be made individually. In our present time, this is true because of friends and relatives, who may suffer greatly as a consequence of a person's suicide; eventually, it may be true in the face of sentience -- and, thus, value -- emerging over and over again in a state of ignorance (even if only in different iterations of the universe, given that possibility as proposed by M-theory).