Acceptable, but I think some kind of Gaussian blurring combined with edge enhancement would be a reasonable choice, at least for natural photographs. Ultimately we DO know that, in the real world, the image is almost certainly not composed of tiny squares of color perfectly aligned to the camera's pixel grid, after all.
So it is my belief that we can do better than that without even trying very hard; the trouble may be in the corner cases where it would appear that we do worse. For instance, over/undershoot into over- or undersaturation is a risk for the edge enhancement that is tangentially involved in algorithms like this. How would you represent a color with negative luminosity? You can't, so in some cases the algorithm would have to degrade gracefully.
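Concretely, the blur-plus-edge-enhance idea amounts to something like an unsharp mask. Here is a minimal NumPy sketch (function names, sigma, and amount are my own illustrative choices, not any particular implementation); note the final clip, which is exactly the graceful-degradation point: the sharpening term can push values below zero near dark edges, and negative luminosity simply isn't representable.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    # 1-D Gaussian, normalized to sum to 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma=1.0):
    # Separable Gaussian blur on a 2-D grayscale image in [0, 1];
    # reflect-pad so the borders are not darkened.
    k = gaussian_kernel(sigma, radius=int(3 * sigma) + 1)
    pad = len(k) // 2
    out = np.pad(img, pad, mode="reflect")
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, out)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out[pad:-pad, pad:-pad]

def unsharp_mask(img, sigma=1.0, amount=0.7):
    # Add back the high-frequency detail (img - blurred). Near strong
    # edges this overshoots below 0 / above 1, so clip: there is no
    # such thing as negative luminosity.
    sharpened = img + amount * (img - blur(img, sigma))
    return np.clip(sharpened, 0.0, 1.0)
```

On a hard step edge the unclipped result really does go negative on the dark side and above full-scale on the bright side, which is why the clip (or some softer roll-off) is unavoidable rather than optional.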