Ok...
You are still mixing up the concepts here, technology as an enhancement of efficiency being one of them.
There is always a break-even number of people/resources needed to actually perform any project. I'm talking about more or less anything we humans do in life. If you and your husband/wife make dinner, will it go twice as fast if you do it together (even if you had twice the space and tools to do it)?
Any resources you throw at a project after that will have a diminishing effect.
In real life nothing can be increased in a linear fashion once you have reached critical mass for performing something. Yes, you can finish something faster, but there is always a limit on how fast it can be done, so throwing more resources at it is wasteful.
In Aurora I would deem a single lab as being the critical mass; it is, after all, about 1,000,000 people involved in one way or another. Obviously only a fraction are actually top scientists.
You can never compare output from a centralized industrial complex in regard to producing "stuff" (as you previously did); that has nothing to do with human organization and intellectual innovation or human efficiency.
I don't think I'm mixing up anything. I have shown several very specific examples where large organizations have succeeded in producing valuable research above and beyond what any smaller organization did. Isn't that the point here? What exactly am I mixing up? I am trying to compare large and small organizations of similar technology levels, is there a case where I haven't?
Be careful of using small examples; they are very easy to bend and twist to suit your needs. Here's one to illustrate: you and your friend need to move two different 200 lb objects. Each of you can only lift and move 100 lbs (discount dragging or machines, stick to the example). How much faster can the two of you move the objects if you work together, one at a time, versus splitting up and tackling each object individually?
There are many times when having more resources helps move things faster once you surpass the "minimum". If you can cover overhead more easily with more people, or if a project is modular, there are benefits to having experts work on each module or to getting your work rate significantly above the overhead/upkeep rate. If it costs $1 million a day just to maintain a project, throwing $2 million a day at it will bring much faster results than just $1 million.
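A toy calculation (with the hypothetical numbers above, not any actual Aurora mechanic) shows why spending past the upkeep threshold matters so much:

```python
# Toy model: net progress is whatever budget remains after
# fixed daily upkeep is paid.
UPKEEP = 1_000_000  # $/day just to keep the project alive (hypothetical)

def net_progress(daily_budget: float) -> float:
    """Dollars per day that actually advance the project."""
    return max(0.0, daily_budget - UPKEEP)

# At $1M/day every dollar is eaten by upkeep; at $2M/day half the
# money does real work, so doubling the budget goes from zero
# progress to $1M/day of progress.
print(net_progress(1_000_000))  # 0.0
print(net_progress(2_000_000))  # 1000000.0
```

The same shape applies to people as well as money: resources below the maintenance threshold accomplish nothing at all.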
We can debate back and forth on what exactly the critical mass is. We really don't know, and for all intents and purposes anything we say is completely arbitrary. We have no experience with TN materials, alien life, large-scale spaceborne travel or system-traversing jumps. These projects could take millions of people, or they could not. How about we focus on gameplay rather than realism, since we left realism behind some time ago? Sound reasonable? At worst Steve can decide the scale he wants.
We can most certainly combine large-scale production with research. We are talking about organizing large groups of people (or aliens) here, that's the core problem. Communication, workload sharing, time management, knowledge sharing, all of these reduce or increase efficiency. That's what we're trying to tease out here. Although they may have different end goals they share similar structural problems in reaching those goals, namely organizing their workforces effectively. They also share similar benefits to organizing into larger conglomerates.
Here seems to be the crux of the issue for you (correct me if I'm wrong): massive numbers of labs produce results far quicker than you feel they should. In many cases I agree with that; throwing 50 labs at a 1000 RP project should not make it go 50 times faster; there's just no way to get everything together and working that quickly before the project is finished. My issue with your proposal is that it makes a very broad statement about research efficiency that I believe has not been shown historically; in many cases gathering more resources for a project has made it faster or even possible at all. Are there alternatives that resolve your issues while still encouraging more labs on projects that deserve them? Here are some:
Lab spool-up/spool-down time. A project could start by getting 1 lab a month working and, after the project is done, free up 1 lab a week (or whatever). If a project can only get so many labs "in action" in a certain amount of time, it discourages throwing large numbers of labs at a small project, as many will never end up working. The spool-down time ensures that your labs are committed to the project. This still strongly encourages large numbers of labs for big projects, as they will have time to spool up and operate at maximum efficiency.
Treat labs more like shipyards. There can be a cost associated with moving labs around or changing projects that encourages an appropriate number of labs scaled to the size of the project. Throwing large numbers of labs at small projects becomes cost-prohibitive.
Scale lab costs with the number of labs on a project. While the labs may not lose efficiency, it may cost more to integrate them and keep them working together. This encourages smaller groups of labs, but utilizes wealth (which is very liquid) rather than research output (which is very difficult to modify quickly).
Reduce the number of labs available to researchers. Like the first option listed, it would create a cap on the number of labs per project. Alternatively, make it a soft cap where additional labs reduce efficiency once you go over it.
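To make the soft-cap idea concrete, here's a rough sketch in Python with entirely made-up numbers (cap of 10 labs, 75% falloff); the actual values would be whatever Steve tunes for gameplay. Labs up to the cap work at full efficiency, and each lab beyond it contributes a diminishing share:

```python
# Hypothetical soft-cap model: labs up to the cap work at full
# efficiency; each lab beyond the cap is worth a fraction of the
# previous one, so extra labs always help but with diminishing returns.
SOFT_CAP = 10    # labs per project before penalties kick in (made up)
PENALTY = 0.75   # each excess lab is worth 75% of the previous one

def effective_labs(labs: int) -> float:
    """Effective research output, measured in full-efficiency labs."""
    if labs <= SOFT_CAP:
        return float(labs)
    # Geometric falloff for labs over the cap: the excess labs sum to
    # PENALTY + PENALTY**2 + ... instead of 1 + 1 + ...
    excess = labs - SOFT_CAP
    return SOFT_CAP + PENALTY * (1 - PENALTY**excess) / (1 - PENALTY)

print(effective_labs(10))  # 10.0 -- at the cap, no penalty
print(effective_labs(11))  # 10.75 -- the 11th lab adds only 0.75
print(round(effective_labs(50), 2))  # ~13.0 -- 40 extra labs add ~3
```

With numbers like these, 50 labs on one project are worth only about 13 effective labs, so scaling the allocation to the project size happens naturally without a hard limit.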
Those are some quick ideas. How do you feel about those? I think most will more strongly encourage scaling lab allocation to project size.