Inventing an entirely new set of standard robot-specialized "interfaces" (in the broad sense), then installing them literally everywhere in place of the existing human-specialized ones, has both development and deployment problems.
It's a huge upfront cost, for one. "What do you mean I need to rebuild my kitchen, laundry room, lawn shed, bedroom, bathroom, and utility rooms before I can use your home assistant robot? I'll just go buy the Honda one that has hands. Also, I kind of like cooking and would like the option of doing it myself sometimes."
And, well, at the end of the day, why is the non-humanoid one better? Even granting your premise that it is more "efficient" in the context of a house designed around it (as opposed to designed around the person living in it and paying for it), as soon as you want to do something the designers failed to anticipate, you are boned. If the robot is mechanically humanoid, new "tasks" can be added with just a software update (or even by having the person demonstrate what they want). If it's an eldritch monstrosity that can only operate in spaces designed around it, you're stuck unless you call a mechanic. And what happens if the robot breaks? If the house is still human-adapted, you can cope. But if I have to get the robot repaired before I can operate the microwave....
Moreover, there is the issue of getting people to "trust" the robots, and making the robots more or less human-looking is a good way to build that trust.
These are not hypotheticals I'm spitballing, by the way. These are things robotics researchers are actively studying.
Long thread and I've just started reading it, but I'd like to comment on the need or convenience of humanoid-shaped robots. It's a bit off-topic re: Aurora, sorry. If someone else has remarked on these same points already, please forgive me.
1. Trust issues: there's the "uncanny valley" problem: as the similarity to humans grows, past some point there's a human tendency for severe distrust to grow as well. There's also the question of whether you *should* trust robots, and how far. Part of the problem with AI (and computers in general) is that humans tend to trust whatever result they show until they have overwhelming evidence that the results are *wrong*, and then they may simply stop trusting them at all. The proper attitude would be to trust only so far, and verify whenever necessary... there's *always* a margin of error in any AI (and, in fact, any human) judgment.
2. Interfaces are always evolving. Just as an example, TVs no longer have button panels on the front for turning them on/off, switching channels etc. Now you do that through a remote control. When *these* break, which happens to me every so often, you have to use clunky menus and hidden buttons, when there's a button at all. Also, Bluetooth is an already established technology that any robot could use to access most of the devices it would need to operate: much better, and faster, than buttons. Alexa is an immobile tower which pretends to be a personal assistant (I find them hugely annoying). It doesn't need buttons to buy things for you from Amazon.
3. The humanoid shape may have been selected by evolution, but it is by no means the optimal shape, or even a good shape, for modern physical tasks. It's way more complicated to control and balance than, say, a wheeled barrel with a telescopic eye and a robotic arm. And, if really necessary, you could swap those wheels for any of a number of stair-climbing solutions already available. Robots don't share our biological constraints. There are reasons for the current pursuit of humanoid shapes in robots, but I'd say they have much more to do with human psychology than with practical concerns.
4. A generic AI that could do tasks not previously foreseen and modeled, with a wider scope than a specialized AI, is way beyond reach right now, in spite of all the hype around the singularity and such. And, arguably, such generic AIs would have to be sentient, at least to a point, if they needed to correctly interpret context in human-level conversation. Which opens the question of whether they would be advantageous enough over human intelligence to justify their existence. They would also have to be hugely complicated, their non-human reasoning would be very difficult to explain to humans, and that means they'd most likely be felt as a threat, through the "uncanny valley" effect, even if they weren't humanoid-shaped.
5. Now back to Aurora: I find the idea of a robotic population extremely interesting. They wouldn't need to be humanoid-shaped, at all, for that. Nor "generic AIs" in the above sense. I could imagine them downloading behaviour models for the usual tasks they'd be asked to perform, as necessary. Operating mines, factories etc. Well, if you *do* want your robots to be humanoid, capable of arguing with you on the best way of doing something, then you can RP that. That's the beauty of Aurora: there's lots of flexibility in what you can represent within the game mechanics.
However, I do agree that one of the current game's interesting aspects is that I do have to deal with population shortages. My current game starts with just 150m pop, it's very low/slow tech, and I have an NPR sharing my system.
The population limit is fun. Being able to build robots to replace pop would remove some of that, and I'm not sure robots would add enough to the game to be worth it.