We use a setup very similar (identical?) to Neil's. I'll lay it out in some detail in case it helps your thinking.
Our enterprise geodatabase (EGDB) is composed of Feature Data Sets (FDS) that delineate various groups of data.
Each FDS is typically added as a group layer to an mxd, which is then published as a map service.
Like Neil's, the mxd points at a file.gdb up on the ArcGIS Server.
This gives us speed, protects the EGDB from any data corruption, and keeps the published data read only.
(Feature Services are another game and require an EGDB, as pointed out by Neil. In our case, we want these for offline work and the data can remain read only. So we're replicating to a secondary read-only EGDB where we then create Feature Services.)
We are in essence duplicating our EGDB into a file.gdb with Python scripts.
We've been at this for over a decade, so pretty much all of our layers are already in the file.gdb and follow the same FDS layout. And almost all of the layers are in mxds and published via map services.
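In case a sketch helps: the nightly EGDB-to-file.gdb duplication boils down to walking the feature datasets and copying each feature class across, preserving the FDS layout. This is a minimal illustration, not our actual script; the list/copy callables are injected here so the loop can run without an ArcGIS install, but in a real run they would wrap arcpy.ListDatasets, arcpy.ListFeatureClasses, and arcpy.management.Copy.

```python
# Minimal sketch of an EGDB -> file.gdb sync that preserves the FDS layout.
# The list/copy functions are passed in (in production: thin wrappers around
# arcpy.ListDatasets / arcpy.ListFeatureClasses / arcpy.management.Copy)
# so the planning logic itself has no ArcGIS dependency.

def build_copy_plan(list_datasets, list_feature_classes):
    """Return (src_rel, dst_rel) path pairs, one per feature class,
    keeping each feature class under the same FDS on both sides."""
    plan = []
    for fds in list_datasets():
        for fc in list_feature_classes(fds):
            plan.append((f"{fds}/{fc}", f"{fds}/{fc}"))
    return plan

def replicate(src_gdb, dst_gdb, plan, copy_fn):
    """Copy each planned item. The target file.gdb is read only to the
    map services, so a full overwrite each run is acceptable."""
    for src_rel, dst_rel in plan:
        copy_fn(f"{src_gdb}/{src_rel}", f"{dst_gdb}/{dst_rel}")
```

A real script would also handle standalone tables outside any FDS and drop/recreate the target feature classes before copying; this only shows the shape of the loop.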
We do have two large file.gdbs that are published: one holds all of our assets (pipes, valves, etc.) and the other all of our 220,000+ service points (or meters). They are split this way mainly because of how we obtain and process the underlying source data. We are considering splitting the large asset file.gdb into smaller separate ones, e.g. one for water assets, one for wastewater, and so on. We think this might make sense with Portal.
I think a big key here is proper use of the FDS to keep data organized.
We also deal with a lot of external data from related tables, typically brought in and updated via links/views using scripts and ArcObjects code. There are numerous tables and even feature classes that are not inside an FDS.
But the more organized you can keep your data, the easier it is to keep your map services well defined and organized.
Still, new or unpublished layers occasionally come up.
Typically, though, they fit into our existing FDS layout.
So adding them to a published map service is nothing more than adding them to the mxd and republishing it.
We also have a lot of data from other local entities that is brought in by script and stored in our EGDB. Since we don't edit this data, I am considering pulling it out of the EGDB and keeping it in file.gdbs for both the servers and the local data editors. The sticking point is the large number of existing mxds that would need modification.
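One possible way to blunt that sticking point is to script the mxd repointing rather than open each map document by hand. This is a hypothetical sketch, not something we've done yet: the repath_fn seam stands in for arcpy.mapping's MapDocument.findAndReplaceWorkspacePaths plus mxd.save() (real 10.x APIs), injected here so the batching logic runs without ArcGIS.

```python
# Hypothetical batch repoint of mxds from the EGDB connection to a file.gdb.
# repath_fn would, in production, open the mxd with arcpy.mapping.MapDocument,
# call findAndReplaceWorkspacePaths(old, new), and save; here it's injected.

def repoint_mxds(mxd_paths, old_workspace, new_workspace, repath_fn):
    """Apply one workspace swap across a batch of map documents,
    collecting failures instead of stopping so a long run can be
    reviewed afterwards."""
    failed = []
    for path in mxd_paths:
        try:
            repath_fn(path, old_workspace, new_workspace)
        except Exception:
            failed.append(path)
    return failed
```

Running it over the output of a directory walk would at least turn "modify hundreds of mxds" into "review the handful that failed."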
I think that having a standard set of map services is what you are looking for to keep things organized. Don't create a new map service when someone wants a new layer. Just add it to an existing map service, unless it's a new layer that doesn't fit your existing layouts. We just had such a case, so we created a new map service that publishes previously unpublished data from an existing FDS.
At the end of the day, it's really all about the layout of your EGDB. If you have a well-organized one, then the organization of your published services should flow from it.
I have heard others describe the Data Store as a sort of wild-west approach to a database, and I think that's a fairly good analogy. Since it's a black box to us, we're leaving it for end users to create and publish their own smaller, unique data sets. Any data that is considered critical to the organization belongs in our EGDB as the system of record.
Our Portal is new, and I have been struggling with how to give our users access to our standard data sets.
I could tie into our existing 10.1 ArcGIS Servers (AGS), but I have decided that the best long-term way to go is to create another, federated AGS as part of the Portal site, whose purpose is to publish our standard map and geoprocessing services and make them available to Portal users.