I’ve found that if I move all my shapefiles and DBFs into the geodatabase (GDB) environment, they get indexed in a way that Pro likes. By taking the time to index every dataset (spatially and on attributes) within a GDB, I see a real performance gain. A rudimentary example: in a ZIP+9 dataset of 60 million records, a query selection of ZIP code 55401 in a GDB takes 0.14 seconds.
A shapefile obviously can’t hold that much data because of the 2 GB limit, but querying 55401 against a shapefile table of 1.5 million records takes 8.15 seconds. It’s a basic sample, but it is what it is.
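The speedup above is what an attribute index buys you: the unindexed query scans every row, while the indexed one jumps straight to the matches. Here's a minimal sketch of the same principle using Python's built-in sqlite3 rather than ArcGIS; the table name, ZIP values, and row count are made up for illustration, and the timings will be far smaller than a 60-million-record GDB, but the scan-versus-lookup contrast is the same idea.

```python
import random
import sqlite3
import time

# Hypothetical stand-in table of ZIP records (much smaller than the
# 60-million-row geodatabase described in the post).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parcels (id INTEGER PRIMARY KEY, zip TEXT)")
random.seed(0)
zips = ["55401", "55402", "55403", "60601", "10001"]
conn.executemany(
    "INSERT INTO parcels (zip) VALUES (?)",
    ((random.choice(zips),) for _ in range(200_000)),
)

def count_55401():
    return conn.execute(
        "SELECT COUNT(*) FROM parcels WHERE zip = '55401'"
    ).fetchone()[0]

t0 = time.perf_counter()
unindexed_count = count_55401()              # full table scan
scan_time = time.perf_counter() - t0

conn.execute("CREATE INDEX idx_zip ON parcels (zip)")  # attribute index

t0 = time.perf_counter()
indexed_count = count_55401()                # index lookup
index_time = time.perf_counter() - t0

# Same answer either way; only the access path changes.
print(unindexed_count == indexed_count)
print(f"scan: {scan_time:.4f}s  indexed: {index_time:.4f}s")
```

In ArcGIS terms, the equivalent step is adding an attribute index to the GDB feature class (and letting the spatial index build), which is what turns that 8-second shapefile scan into a sub-second GDB lookup.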
While we haven’t gone all in on Pro yet, it appears that if all your data is indexed within a GDB, most processes run much faster than they do against non-GDB datasets. Note that we’re working mostly with vector data, so I can’t speak to rasters.
However, exporting data from a GDB back to a non-GDB format remains problematic for now; with more experience, we’re hopeful we’ll overcome those issues.