Technically, this works. I'm not always one to do things the "right" way either, but there are a few reasons not to do it this way:
- you need to manually count the field index (`row[4]` below); passing a list of field names such as `["CGNDBKEY"]` to the cursor avoids this and reads only the column you need
- this method reads every feature in the feature class, while a where clause only reads the matching features, through the magic of SQL, which I don't understand, but the help confirms it: "When a query is specified for an update or search cursor, only the records satisfying that query are returned." In my tests it takes about 3x longer to read through everything, though that likely depends on the data and how many matches there are. To be fair, the results below are representative of several runs, except the first run, which adds about 3s for actually constructing the table view. So the main point is that this method is fine for small data sets, but likely worse for large ones.
import timeit
import arcpy

fc = 'bc_geoname_albers'  # ~45,000 features

# SQL method: let the where clause do the filtering
start_time = timeit.default_timer()
arcpy.MakeTableView_management(fc, "myTableView", "CGNDBKEY LIKE '%C%'")
Count = int(arcpy.GetCount_management("myTableView").getOutput(0))
print(Count, timeit.default_timer() - start_time)

# Count method: read every row and filter in Python
start_time = timeit.default_timer()
with arcpy.da.SearchCursor(fc, "*") as cursor:
    Count = 0
    for row in cursor:
        if "C" in str(row[4]):
            Count += 1
print(Count, timeit.default_timer() - start_time)
(13531, 0.64809157931) # SQL method: 0.65s
(13531, 1.87391371297) # Count method: 1.87s
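The same trade-off applies outside arcpy: any time a data store can evaluate the filter itself, you avoid shipping every row across the Python boundary. A minimal standalone sketch using sqlite3 (not arcpy — table and column names here are made up for illustration):

```python
import sqlite3

# In-memory table standing in for the feature class attribute table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE geonames (cgndbkey TEXT)")
conn.executemany(
    "INSERT INTO geonames VALUES (?)",
    [("ABCDE",), ("XYZZY",), ("CCCCC",), ("QWRTY",)],
)

# SQL method: the LIKE filter runs inside the database engine,
# so only the count comes back to Python
sql_count = conn.execute(
    "SELECT COUNT(*) FROM geonames WHERE cgndbkey LIKE '%C%'"
).fetchone()[0]

# Count method: every row is fetched, then tested in Python
py_count = sum(
    1 for (key,) in conn.execute("SELECT cgndbkey FROM geonames")
    if "C" in key
)

print(sql_count, py_count)  # both methods agree: 2 2
```

Both give the same answer; the difference is only where the filtering work happens, which is why the gap grows with the size of the data set.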