Franky is not an amputee; he's a robot.
He becomes more of a robot as the story progresses, and this is universally painted in a good light.
Imo the humanization of robots/androids is problematic.
Still not sure what to think about the Pacifista, and about a character willingly agreeing to be turned into one and then 'multiplied' by the scientist.
And then there is that flashback character and his sentient androids who travel to the moon, only to be found by Enel later.
Implying that robots, human-made mechanical creatures, are capable of critical thinking and of feeling affection is, again, dangerous.
It opens the door to all sorts of amoral things that are not necessarily science fiction anymore.
These inventions are being made as we speak and they threaten our perception of what is really human, and what merely appears as such.
It's nice to find someone in this forum with similar thoughts on this matter; I also find the things you mentioned problematic.
The thought frightens me that in the future it may become more and more common for people to deliberately cut off their limbs or remove their natural internal organs and exchange them for machines, all while believing that they are improving themselves by doing it.
This is intrinsically disgusting because it means the rejection of human nature. And the moment people begin to disregard the standard of human nature that has been accepted as such for millennia (like the fact that it's normal for humans to have two eyes, one nose, one mouth, two legs, two arms, to have flesh, to feel pain when hurt, etc.), it becomes easier for them to accept the destruction of humanity.

The idea of having two robotic arms with ten times the strength of human arms may sound cool to some people. But then it leads to the thoughts "why stop there?" and "who said we should have arms to begin with?". If someone already accepts having his arms replaced by machines, then why not replace the whole rest of the body with machinery? Why not just take out the brain and insert it into a gigantic dinosaur robot, for example? And who said humans need a body to begin with? Why not just take out the brain and connect it to some computer to make people "live" in a "virtual reality" forever?
So my point is that the moment you accept mixing the organic human body with machines, it becomes hard to draw the line between what is acceptable and what is not. The only way to preserve sanity and normality is to never accept any kind of mixture between a human body and a machine. It should therefore never be presented in a good light, the way Oda did in Franky's case.
And yes, Enel's side story is another problematic thing. In this case it wasn't a human mixing his body with machinery, but machines being presented as beings capable of having emotions, which is also bad.
Not sure if you would agree with me on this part, but I believe transhumanism is closely related to materialism, mainly because of the belief that our consciousness (first-person perspective) is just a product of matter (the brain) instead of something transcendental. Since people already believe consciousness is just a product of matter, they naturally begin to think it may be possible to create consciousness artificially with technology.
The fact that people are already accepting the term "artificial intelligence" is itself a problem. There is no such thing as artificial intelligence. Only living beings, who have a first-person perspective, can have intelligence. A robot designed to look like a human being doesn't have intelligence, just as a door, a watch, a television, or any other computer doesn't have intelligence.

Just as a watch is designed so that its hands move under certain established conditions to display the time, a computer is designed to do far more complex things according to other conditions established by the person who designed it. So it's silly to call one of them "intelligence" just because its mechanism is more complex than the other's, because in the end they are all just mechanisms designed by someone to do certain things (such as displaying images or the results of sums of numbers, for example) according to conditions set in the mechanism. It's not like the computer has a consciousness of its own; it can't think about things and reach a conclusion about some matter through its own first-person perspective, because it doesn't have one, just as a door, a gun, and a watch don't have one either. It's only the creator of the machine who has intelligence, not the machine itself.
And it's obviously also ridiculous to think that a machine is closer to a human being just because it was designed in the shape of a human body, with its exterior painted to resemble a human being's appearance. It's a soulless object all the same, just like any drawing on paper or any sculpture.
And yet, as more technology is developed, it seems to be getting closer and closer to possible to create a robot with the appearance of a human being that can walk down the street as if it were a human being...
Imagine it becoming hard, in the future, to tell whether you're really talking to a human being or to a soulless object that has no first-person perspective but has simply been programmed to walk and talk in a very realistic way to look like one...
This is very very scary...
So what Oda did with Enel's story on the moon was very bad, as it helps spread confusion about what is a human and what is just an object. Not only did he draw those small robots showing human expressions, he drew one of them clearly showing signs of having a consciousness/first-person perspective when it was shown remembering the time it was created by that old man.