American Grand Jury Foundation

Local Government Facts

The Conviction Rates of California
County District Attorneys

The conviction rates of county district attorneys are a common, though occasionally controversial, measure of effectiveness in the administration of criminal justice. We post this study to help California grand jurors and other citizens become familiar with this fairly simple statistic and to help them obtain and use it in their communities. We display these rates for a recent ten-year period. Anyone who wishes to verify the data in this study may obtain them in their original form at this site: (select County Reports 6A).

Notwithstanding debates about their validity, conviction rates sometimes receive publicity in California, as typified by these recent headlines:

  • County D.A. again is tops in felony conviction rates
  • County district attorney defends low trial conviction rates
  • “My office is proud of its conviction rate,” D.A. boasts

When district attorneys run for office, they occasionally include conviction rates in their campaign materials. One brochure, for example, highlighted a candidate’s favorable conviction rates under this heading: “D.A. Smith—A Man of Convictions.”

Conviction rates may also be found on district attorneys’ official Web sites. Recently, for example, a district attorney claimed that, “Since taking office in 2003 [the district attorney’s staff] has increased the over-all felony conviction rate from 52% in 2003 to 67% in 2006.” Notwithstanding that local improvement, however, the county’s conviction rate remained the lowest in the State for the ten-year period (see the entry for Vallejo County in Table 1).

Although one can find some information on the Internet about conviction rates and their role in the criminal justice system, impartial, thorough, and well-documented research into their validity is scarce. Nevertheless, with all their actual or reputed shortcomings and defects, conviction rates are useful for exploratory inquiries into local, state, and federal systems of criminal justice.

At the time we obtained the conviction rates for this display, 2005 was the most recent year for which they were available from the database cited above. We show the numbers for a ten-year period because we believe that span of time is adequate for revealing trends or patterns.

We display the data Statewide and by region of the State. One can find regional formats other than what we use. State government, nonprofit organizations, private enterprises, and academic institutions, for example, use different regional formats for their particular purposes. The one used for our display is, we believe, satisfactory for the objective of this series of data-dredger reports. Our objective is not to draw conclusions about the data in this study with respect to a particular local government function. Rather, the goal is to help grand jurors and other citizens understand how they might use facts for evaluating various local government functions and services in California.

Many California citizens who are acquainted with the varying social, economic, and political features of their State probably would agree that our regional format makes sense, though if one saw the actual county names, one might contend that some counties should be placed in different regions. One county in our display is, in effect, a region in itself: Los Angeles. Because of its obvious identity, Los Angeles is the only entry in the display that we identify by its actual name. Readers who wish to know why we usually do not provide the true names of jurisdictions, with the exception of Los Angeles, will find the explanation in our introduction to this series of displays (see Purpose of this Section).

To conserve space, we have not listed the whole numbers from which conviction-rate percentages were calculated. These numbers can be obtained from the same source that supplied the percentage calculations. The percentage calculations we show are those of the Criminal Justice Statistics Center of the Department of Justice, State of California.

The fictitious county names and the data for Los Angeles County are displayed by rank order in Table 1, “California Conviction Rates for 56 Counties: 1996-2005.” The same data are displayed alphabetically, but in regional format, in Table 2, “California Conviction Rates for 56 Counties by Region: 1996-2005.” This table includes a listing of the regional averages by rank order for the five regions of California. Table 3, “Conviction-Rate Averages and Rank Orders for the Five Regions: 1996-2005,” lists both statistics for each region rather than for the individual counties.

In Figure 1, “Ten-Year Averages for Conviction Rates of 56 California Counties: 1996-2005,” one sees rank orders for conviction rates throughout the decade. In this display, the conviction rates are shown from high to low. Thus, Alvarado County achieved the highest ten-year average conviction rate, in contrast to Vallejo County’s lowest ten-year average. Notice that column and row averages are shown in all tables.

The ten-year average for each county is shown in the “Averages” column of each table; the county’s rank order appears in the adjacent column. We use rank order in the same way that one hears it referred to in conversations about sports: “Our team is the number one team in our state” or “Your team is the lowest-scoring team in the league.” Thus, the counties and their conviction rates are listed in descending rank order, beginning with the highest and ending with the lowest.
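The averaging and ranking just described can be illustrated with a short Python sketch. The county names and yearly rates below are hypothetical, chosen only so the resulting averages echo the extremes reported in the tables (92.1% and 59.1%); they are not the CJSC data.

```python
# Hypothetical yearly conviction rates (%) for three invented counties.
rates = {
    "County A": [91.0, 92.5, 93.1, 91.8],
    "County B": [60.2, 58.7, 59.0, 58.5],
    "County C": [79.5, 80.1, 78.8, 80.4],
}

# Average each county's rates over the period.
averages = {county: sum(r) / len(r) for county, r in rates.items()}

# Sort in descending order so the highest average gets rank 1,
# as in Table 1's rank-order column.
ranked = sorted(averages.items(), key=lambda item: item[1], reverse=True)
for rank, (county, avg) in enumerate(ranked, start=1):
    print(f"{rank}. {county}: {avg:.1f}%")
```

The `reverse=True` argument is what produces the sports-style descending ranking: highest average first, lowest last.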

We again emphasize that this report, as will often be the case with others in the series, has an instructional, rather than investigatory purpose, namely to encourage civil grand jurors and other citizens to use facts, rather than opinions, in thinking about local government and discussing its effectiveness. If we were publishing this report for purposes of civic action, we would have taken precautions to test the validity of the data. For example, we might have sent a list of their conviction rates to all of the district attorneys in California, inviting them to comment about the percentages for their counties. We might also have requested that they provide any numbers missing for their counties. However, our purpose is tutorial. By presenting some basic skills and methods of data dredging and display, we hope to encourage grand jurors and other citizens to use simple statistics to study local government topics of their own choice. Therefore, we urge readers to focus on the methods rather than the topics in our series.

Our data displays ordinarily use descriptive, rather than inferential statistics. Even though percentages and rank ordering are simpler than more powerful statistical calculations, they are helpful for preliminary studies of local government facts. In the recommendation section below, we suggest several more rigorous ways to treat the data.

An additional objective in this series is to illustrate the distinctions between facts, findings, conclusions, and recommendations. These are terms civil grand jurors in California often use, sometimes inconsistently, in reporting the results of their investigations. We do not claim that the distinctions we make in explaining these terms are the only ones possible. The word “findings,” for example, is subject to considerable interpretation. Our use of this word is based on a standard legal reference work, Words and Phrases. As we state in Grand Juries in California, the words “facts,” “findings,” “conclusions,” and “recommendations” vary in the precision of their use in civil grand jury final reports. For this study’s purpose, the following section includes illustrative examples of the facts, findings, conclusions, and recommendations that could be generated from the data presented in the following pages.

Facts, Findings, Conclusions, and Recommendations


The conviction rates in this report are facts. The essential nature of a factual report, as the semanticist S. I. Hayakawa uses the phrase, is not that it is correct or incorrect but that it is verifiable. For example, if the following tables had included the actual names of counties, and one wished to determine if one or more conviction rates was accurate for a particular county, one could contact the district attorney of the county in question.

The main focus of Table 1 is the rank order of California counties. You will see, for example, that the Alvarado County District Attorney’s office reported the highest ten-year average conviction rate in the study period (92.1%). Conversely, the lowest ten-year average rank is that of Vallejo County (59.1%).

Notice that CJSC did not report conviction rate information for Weber and Wolfskill counties during the study period. This explains why only 56 of California’s counties are included in Figure 1. Both counties are in the Northern and Mountain region. You will also see missing data for certain years for six other counties: Flood, Harpending, Miller, Norris, Norton, and Older.

Table 1 also shows the average conviction rates for each county for the entire ten-year period. The Statewide county averages for each of the ten years are likewise displayed. The ranks in this table, as is the case for Table 2, are for the ten-year averages.

The second display, Table 2, presents the same information shown in Table 1 except that the percentages for the counties are grouped into five regions of the State. Rank orders for the counties of each region are also shown in this table, though the rankings and averages are now for regions rather than the entire State.

Table 3 depicts the extent to which the ten-year average of the conviction rates varied by region during the study period. The extent of the variation, while not as extreme as could be imagined, is nevertheless sufficient to suggest that further research into these data might be fruitful.

Figure 1 shows that the data vary in a gentle S-shaped pattern. The variation is sufficient, as we suggest below, to justify further analysis.


Before we discuss several findings for illustrative purposes, we offer suggestions for a preliminary inspection of displays of facts such as one sees in Tables 1 and 2. In our experience, one should not jump into a mass of numbers without first becoming familiar with them. One way to do so is to acquaint yourself with the layout and terminology of Tables 1 and 2, for example, row and column labels. See if there are any signs of missing data, as the double hyphens (--) designate. Also, notice the sizes and patterns of Statewide and regional averages. Look for examples of stability or oddities in rows (the horizontal displays) and columns (the vertical displays).

In Table 2, acquaint yourself with the numbers of counties in each region. Notice the trends of the numbers in the region-average rows and compare these numbers in each of the five regions. In this table you will also see a display of the averages for each region for the ten-year period. Notice the rank order for each region and compare regional averages with the State averages. Review also Figure 1 and ask yourself what the Statewide range in the ten-year conviction rates (Alvarado, 92.1, to Vallejo, 59.1) might imply for the administration of criminal justice in the Golden State.

Now that you have reconnoitered the data, go back to the individual county averages to see what you can make of them. As you scan the data, try to identify your county. When you have done this, check the Criminal Justice Statistics Center Web site to find the actual numbers for your county. This can often be an eye-opening experience for people who wish to discredit the use of facts by arguing that “I didn’t need this information.… I already knew what was going on in my county. After all, the District Attorney and I are good friends.”


Table 1

In some ways, Table 1 is less interesting than Table 2. Its principal focus is the rank ordering of counties. You will see that the ten-year averages for the highest- and lowest-ranking counties range from 92.1% to 59.1%. This spread of about 30 points reveals the Statewide range of variation in conviction rates. However, with the exception of the lowest-ranking county (Vallejo), the intervals between adjacent counties’ ten-year averages are fairly small. There were two sets of ties for rank order: Liedesdorff, London, and Lux at 79.7%; and Oliver and Pickett at 76.3%.

Sutro County is an example of a slow, steady decline through 2005. Before 1999 the County’s average percentages were generally in the low 70s; thereafter, its average percentages were in the high and mid-60s. This pattern is somewhat similar to that of Pixley County, which shows a slight decline in average conviction rates during the ten years.

Notice that the trend in the State averages row holds steady through the decade. Folsom County, to cite one example, follows this pattern during that period, though with one striking decline in 1999. Probably the most interesting pattern in the display is that of the lowest-ranking county, Vallejo. That jurisdiction is well below the Statewide average throughout the ten-year period; the sharp decline seems to have begun in 1999 and continues through the end of the row. Possibly this pattern is related to a highly publicized local controversy between the former district attorney of the County and a candidate for that office who replaced him in that period. Miller County is also an example of rancorous local controversy during the study period. Here are other suggestions for reconnoitering these data and deciding what they might mean:

  • Which counties display the most consistent rates downward and what might this imply?
  • What explanations might there be for counties in which conviction rates steadily rise throughout the study period?
  • In which counties do rates fall and rise every year or so in a stair-step pattern? What do you think might be explanations for this type of pattern?

As you will see in Table 1, average percentages vary somewhat. The averages in Table 1 range from a high in the low 90s to a low slightly below 60%. Such variation by no means proves the validity of conviction rates. It does suggest, however, that, with all their limitations, these figures have some value, if for nothing else than starting a productive conversation with the district attorney of one’s county of residence.

Table 2

Table 2 raises the data slightly to an explanatory, rather than descriptive, level. One can see that organizing the data by region introduces a variety of patterns into the display. One cannot tell, of course, from these data alone what the variations imply about the effectiveness of the district attorneys’ offices in the individual counties. Possibly budget problems are at work: county prosecutors in the comparatively “wealthier” Bay Area counties might have proportionately bigger budgets than their colleagues in the Northern and Mountain counties.

This advantage could also explain why the highest two regional averages are in Southern California. Similarly, in the Bay Area regional display, the two highest ranks are for relatively affluent counties. As previously mentioned, however, one Bay Area county is not only the lowest ranking jurisdiction in that region, but the lowest Statewide. This finding raises an interesting point about data dredging: conceding that, in a fairly large collection of facts, some are correct and others are not, would the plusses and minuses throughout cancel each other? If so, what might this imply for the validity of one extreme, but consistently so, instance?

If you were reviewing a display with the actual county names, you could compare the figures for your county to selected other counties of about the same population size, land mass, economic characteristics, and so on. Possibly you would discover that yours is one of two counties that have strikingly different conviction rates, though you and other citizens in both counties might think of them as similar in other respects.

For an example of this kind of pattern, notice Burbank and Shafter Counties with their markedly different ten-year average conviction rates (87.3% and 74.9%, respectively). If you visited these counties, you would find it difficult to discern when you had driven out of one county and entered the other. Not only do the counties share parts of their boundaries, but they are similar in their demographic characteristics and economies. A similar pattern can be seen in the rank order of Alvarado County (#1) compared to Stanford County (#53). These adjoining counties are similar in their land-use, population-size, ethnic composition, and budget-size characteristics, but they are far apart in their rank orders.

Californians with long memories might recall that, in past years, many citizens believed that the further north one travels in the State, the more likely one will find signs of a “law and order” culture, in terms of sentence-to-prison rates, conviction rates, and the like. A look at Table 2 suggests that this distinction is fading, judging by a comparison of the regional averages of the Farm Belt and the Northern and Mountain counties to the Southern California region and Los Angeles County.


The amount of variation in the data is sufficient to justify the use of statistics with greater explanatory power to analyze them. For example, by using the years in which district attorneys retired or won or lost elections, one could study the effects of turnover in office on conviction rates. One could also use various political facts such as citizens’ political-party registrations or district attorneys’ political party affiliations to test their effects on the data. In general, we would not expect these influences to be strong, except for one or two counties. However, our expectation might be proven to be invalid if we used a more powerful statistical formula than arithmetic averages.

The length of service of incumbents might also influence the rates. One would expect long tenure in office to be associated with higher conviction rates. Similarly, social-economic status variables such as ethnicity, age of the population, average incomes, education, and so on, might also explain some of the variation.

Staffing numbers, workload statistics, and authorized budget amounts are three other examples of measurable facts that can be used in a statistical search for reasons why variation occurs among conviction rates. Similarly, one would think that a district attorney’s office large enough to have specialized assignments (for example, rape, robbery, and other felony crimes against the person) would have higher conviction rates for such offenses than offices without crime-specific sub-units.

The conviction rates in our displays are based on prosecutions of all felony crimes. A study of conviction rates for specific crimes, say, robbery, aggravated assault, or arson, may or may not show the same variation that the data for all felony crimes displayed. One might also obtain a different picture of conviction rates if they were studied in relation to arrest rates. In theory, for example, if all local law enforcement officers conducted consistently high-quality investigations and wrote high-quality crime reports about their investigations, district attorneys would find it difficult to reject them for prosecution.

In contrast, if some felony crimes are poorly investigated and reported, district attorneys probably select only the best law enforcement reports to use in prosecutions, thereby increasing their conviction rates. In the administration of criminal justice, it is well known that the quality of law enforcement investigations and reports varies considerably among law enforcement agencies, thereby affecting conviction rates.

One factor that would be difficult to study is how and why district attorneys decide to prosecute or not prosecute felony cases or enter into plea-bargaining negotiations. Prosecutors have discretion in such matters in deciding to prosecute suspected felons. Faced with a large workload of felony cases, they might sift through their caseload to decide which cases to take to court. On occasion, they might decide not to prosecute “in the interest of justice,” a term that varies considerably in application. Obviously, if a law enforcement investigative report reveals what the prosecutor believes are extenuating circumstances, he or she might decide not to take the case to court.

For these and many other reasons, the rate of arrests prosecuted no doubt varies among counties. For example, of 1,000 felony cases that a law enforcement agency refers to district attorneys for prosecution, only a percentage of the cases will go to trial. That percentage is the rate of prosecution (the number of cases accepted for prosecution divided by the number of police arrests for felony crimes). A more sophisticated study of conviction rates would include this concept.
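Expressed as a small Python calculation, assuming hypothetical counts for a single county, the two rates look like this:

```python
# Hypothetical counts for one county (illustrative only).
felony_arrests = 1000     # cases referred by law enforcement
cases_prosecuted = 640    # cases the D.A. accepts and takes to court
convictions = 480         # prosecuted cases ending in conviction

# Rate of prosecution: share of referred cases actually prosecuted.
prosecution_rate = cases_prosecuted / felony_arrests

# Conviction rate: share of prosecuted cases ending in conviction.
conviction_rate = convictions / cases_prosecuted

print(f"Prosecution rate: {prosecution_rate:.0%}")
print(f"Conviction rate:  {conviction_rate:.0%}")
```

Note that the two rates have different denominators: the prosecution rate is measured against arrests referred, while the conviction rate is measured against cases actually prosecuted. A district attorney who declines weak cases lowers the first rate while raising the second.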

Most readers have heard about “plea bargaining,” sometimes referred to as “case settlement.” Simply defined, these terms refer to agreements between prosecutors and defendants in which the latter agree to plead guilty to a crime that carries a less severe penalty than the offense the prosecutors could charge them with. For example, instead of charging a defendant with burglary, a prosecutor might agree to drop that charge if the defendant agrees to plead guilty to a “lesser included offense.” An attempted commission of a particular felony usually carries a less severe penalty than actually committing it. More information about this practice is available in several of the publications listed under Background Reading. Here are examples of crimes and their “lesser included offenses” that might be involved in a plea bargain:

Crime                                  Lesser Included Offense
Burglary                               Trespassing
First-degree murder (premeditated)     Aggravated assault
Grand theft                            Petit theft
Robbery                                Display of weapon
Sexual battery of victim under 12      Aggravated assault


Research into conviction rates is confounded by another reality of the criminal justice system, namely that almost every one of its components affects conviction rates in some way. Probation officers, parole officers, social work agencies, defense attorneys, and the courts themselves might influence these rates to a greater or lesser extent. Comprehensive research into the causes of variation in conviction rates would include more factors than are discussed here. Persons interested in such matters will find them discussed in some of the citations in the Background Reading section below.


In a publicly distributed grand jury report, recommendations would be written for and addressed to the department heads of specific agencies. For obvious reasons, we will not attempt to write recommendations in the manner one might find them written in a civil grand jury final report. Instead, we offer illustrative comments below about data in Tables 1 and 2 that grand jurors and other citizens might consider for further study.

The high rank-order conviction rates for Alvarado and Bidwell Counties are unexpected. Not only are their rankings high, but they are consistently so throughout the study period. Both are rural counties north of the imaginary north-south dividing line created by the Tehachapi mountains north of Los Angeles County. One of the counties is of mid-sized population, and the other is relatively small. Quite likely, if one were to survey informed observers of criminal justice in California, few respondents would predict that either county would rank high in their conviction rates.

Citizens of these two counties, including grand jurors, would do well to find the explanation for the high conviction rates associated with them. Possibly the district attorney’s office of each county has unusually effective trial attorneys; perhaps law enforcement officers generally write exceptionally good investigative reports; or perhaps, in both cases, the district attorneys are blessed with exceptionally large budgets. It could be that district attorneys in these two counties have developed innovative trial strategies that could be exported to other counties. Another explanation might be that the district attorneys of those counties prosecute only “iron-clad” cases. At the far end of the negative scale, it might also be the case that the district attorneys of these counties engage in some fudging when they calculate their conviction rates. Whatever the reason, an inquiry into the meaning of these unusual rates might be fruitful from the perspective of self-government.

At the other end of the spectrum, grand jurors and other citizens in counties with low rank orders should seek the explanations for those numbers. The district attorneys of those counties might well be able to offer acceptable reasons for the figures. For example, grand jurors might discover that the district attorney’s office in their county has too few staff to handle the felony prosecutorial caseload. If so, grand jurors could consider including recommendations in their final reports to correct this situation and thereby possibly alleviate the incidence of felony crimes in their communities.

Citizens and grand jurors in eight counties should obtain explanations for why some of the data for these counties (and, in two cases, all of them) are missing from the CJSC database for conviction rates. Without such information, the news media, grand juries, and citizen activists are prevented from obtaining vital information concerning crime and the criminal justice systems in their counties. Although employees in the Criminal Justice Statistics Center report that their agency imposes no penalties for not providing such data, there are other provisions in government codes pertaining to public officials who do not perform duties that are required by law.

Implications of the Conviction Rates

Conviction rates can, from time to time, contribute to civic discussions in American communities, appearing in the form of political campaigns, newspaper stories, and statements by officials in the criminal justice system. Ample evidence exists to argue against their use, without further analysis, for drawing conclusions about district attorneys’ offices and other segments of the criminal justice system. Nevertheless, conviction rates do enter into debates about the extent of crime in daily life. In this sense, they might be regarded as a physician thinks about the use of thermometers: They are an important beginning step in deciding whether additional diagnosis is needed.

Background Reading

Here are some suggestions for additional reading about some of the topics discussed in this report.

J. Mark Ramseyer, Eric Bennet Rasmusen, and Manu Raghav, “Convictions versus Conviction Rates: The Prosecutor’s Choice” (April 10, 2008). Harvard Law and Economics Discussion Paper No. 611, available at

One of this study’s findings is that “Prosecutorial budgets are positively associated with conviction rates but not with prosecution rates (prosecutions/arrests) [and that] conviction rates are negatively associated both with the number of cases prosecuted and with the crime rate.” Also included is a discussion of the apparent higher conviction rates achieved by Japanese prosecutors in contrast to those of the United States.

See the Web site of the American Prosecutors Research Institute of the National District Attorneys Association for various publications related to conviction rates, county prosecutors, etc.: Note in particular “Do Lower Conviction Rates Mean Prosecutors’ Offices Are Performing Poorly?” March 2007. It includes suggested “Performance Measures” such as “ratio of convictions to cases charged,” “sentence length,” “average case processing time,” “pleas to lesser charges,” “Gun, Gang, and Robbery Crime rates,” etc.

For a brief but helpful discussion of the politics of elections and conviction rates, see Ric Simmons, “Election of Local Prosecutors” at (posted August 9, 2008). The author contends that “the skyrocketing crime rates of the 1960s, along with a handful of other societal changes … has had a profound effect on elections for local prosecutors.” The result has been “a subtle shift away from the prosecutor’s goal of ‘doing justice’ [to] an increased focus on conviction rates.”

George Fisher, Plea Bargaining’s Triumph: A History of Plea Bargaining in America (Stanford, CA: Stanford University Press, 2003), 397 pp. See review by Candace McCoy, School of Criminal Justice, Rutgers University (Newark): This review includes a list of related references. The reviewer’s generally favorable comments about Fisher’s book suggest that it would be worthwhile reading, particularly because its scope extends beyond prosecutors into the complex worlds of probation, the courts, and public defenders.

Richard T. Boylan, “Salaries, Turnover, and Performance in the Federal Criminal Justice System,” Journal of Law and Economics 47, no. 1 (April 2004): 75-92. Richard T. Boylan, “What Do Prosecutors Maximize? Evidence from the Careers of U.S. Attorneys,” American Law and Economics Review 7, no. 2 (Fall 2005): 379-402. Both of Professor Boylan’s studies are imaginative examples of using more powerful statistics than those used in this study.

Readers who would like an introduction to factuality will find no better brief treatment of the subject than Chapter 3, “Reports, Inferences, Judgments,” in S. I. Hayakawa, Language in Thought and Action (New York: Harcourt, Brace, Jovanovich, 1972).

September 26, 2008


©2008-2017 American Grand Jury Foundation, All Rights Reserved