
NYCLA-CLE Institute

Forensic Evidence in Criminal Trials: Demystifying the CSI Effect

Prepared in connection with a Continuing Legal Education course presented at New York County Lawyers’ Association, 14 Vesey Street, New York, NY, scheduled for December 2, 2010.

Program Co-Sponsor: NYCLA Criminal Justice Section

Moderator: Mark B. Rosen, John Jay College of Criminal Justice

Faculty: Steve Bojekian, former Chief, Bergen County Sheriff’s Department; Adina Schwartz, John Jay College of Criminal Justice; Peter Valentin, Detective, Connecticut State Police; Jack Walsh, retired captain, NYPD, and John Jay College of Criminal Justice

3 Transitional and Non-Transitional MCLE Credits: This course has been approved in accordance with the requirements of the New York State Continuing Legal Education Board for a maximum of 3 Transitional and Non-Transitional credit hours: 1 Ethics; 1 Professional Practice. In addition to being accredited in the State of New York, this program has been approved by the Board of Continuing Legal Education of the Supreme Court of New Jersey for 3 hours of total CLE credit. Of these, 1 qualifies as an hour of credit for ethics/professionalism, and 0 qualify as hours of credit toward certification in civil trial law, criminal trial law, workers’ compensation law, and/or matrimonial law.

Information Regarding CLE Credits and Certification

Forensic Evidence in Criminal Trials: Demystifying the CSI Effect
December 2, 2010, 6:00 PM to 9:00 PM

The New York State CLE Board Regulations require all accredited CLE providers to provide documentation that CLE course attendees are, in fact, present during the course. Please review the following NYCLA rules for MCLE credit allocation and certificate distribution.

i. You must sign in and note the time of arrival to receive your course materials and receive MCLE credit. The time will be verified by the Program Assistant.

ii. You will receive your MCLE certificate as you exit the room at the end of each day. The certificates will bear your name and will be arranged in alphabetical order on the tables directly outside the auditorium.

iv. If you arrive after the course has begun, you must sign in and note the time of your arrival. The time will be verified by the Program Assistant. If it has been determined that you will still receive educational value by attending a portion of the program, you will receive a pro-rated CLE certificate.

v. Please note: We can only certify MCLE credit for the actual time you are in attendance. If you leave before the end of the course, you must sign out and enter the time you are leaving. The time will be verified by the Program Assistant. If it has been determined that you received educational value from attending a portion of the program, your CLE credits will be pro-rated and the certificate will be mailed to you within one week.

vi. If you leave early and do not sign out, we will assume that you left at the midpoint of the course. If it has been determined that you received educational value from the portion of the program you attended, we will pro-rate the credits accordingly unless you can provide verification of course completion. Your certificate will be mailed to you within one week.

Thank you for choosing NYCLA as your CLE provider!

New York County Lawyers’ Association Continuing Legal Education Institute 14 Vesey Street, New York, N.Y. 10007 • (212) 267-6646

Forensic Evidence in Criminal Trials: Demystifying the CSI Effect
December 2, 2010, 6:00 PM to 9:00 PM

Program Chair: Mark B. Rosen, John Jay College of Criminal Justice

Faculty: Adina Schwartz, John Jay College of Criminal Justice; Peter Valentin, Detective, Connecticut State Police; Steve Bojekian, former Chief, Bergen County Sheriff’s Department; Jack Walsh, retired captain, NYPD, and John Jay College of Criminal Justice

AGENDA

5:30 PM – 6:00 PM   Registration
6:00 PM – 6:10 PM   Introductions and Announcements
6:10 PM – 6:15 PM   Forensic Science Defined
6:15 PM – 6:50 PM   National Academy of Sciences Report
6:50 PM – 7:00 PM   BREAK
7:00 PM – 7:30 PM   Crime Scene: Typical Forensic Issues
7:30 PM – 7:45 PM   Cross Examinations
7:45 PM – 8:00 PM   The Fantasy of the CSI Genre on TV
8:00 PM – 8:05 PM   BREAK
8:05 PM – 8:25 PM   Real-Life Departmental, Command and Crime Scene Forensic Issues
8:25 PM – 8:35 PM   The Lab
8:35 PM – 8:55 PM   Practicum: 911 Call
8:55 PM –           Questions

New York County Lawyers’ Association Continuing Legal Education Institute 14 Vesey Street, New York, N.Y. 10007 • (212) 267-6646

Forensic Evidence in Criminal Trials: Demystifying the CSI Effect
Thursday, December 2, 2010, 6:00 PM – 9:00 PM

Table of Contents

Cross Examination Outline
Schwartz Affidavit
Daubert Clarification
CSI Test
Opinion Evidence
US v. St. Gerard

Mark B. Rosen, Esq.
Mark B. Rosen, Esq., P.C.
564 First Avenue, Ste 23L
New York, NY 10016
[email protected]

 

I. Personal information
   a. Name
   b. Home address
   c. Business address(es)
   d. Current employer(s)
      i. Identity of employer
      ii. Nature of employer’s business
      iii. Employer’s affiliations with parties
      iv. How long employed there
      v. Job title(s) and duties
      vi. Organizational chart (how many personnel, doing what)
      vii. Expert’s reporting relationships (up & down)
      viii. Usual hourly rate billed by employer for expert’s services
         1. Same, paid by employer to expert
      ix. Any document retention policies?
      x. Any document retention policies re litigation matters?
   e. Member of any professional organizations?

II. Education
   a. For each college and graduate institution:
      i. Years attended
      ii. Major or concentration
      iii. Degree
      iv. Subject of thesis or dissertation
      v. Any courses in computer forensics?
      vi. Honors, prizes, fellowships, etc.
      vii. Teaching work while attending (as graduate instructor, etc.)
   b. Professional seminars, continuing education, etc.

III. Licenses and certifications
   a. Issuing authority
   b. Any tests or training?
   c. Dates issued
   d. Periodicity of renewal
   e. Requirements for renewal
   f. Any disciplinary actions, revocations, etc.?

IV. Employment history
   a. For each position:
      i. Identity of employer
      ii. Nature of employer’s business
      iii. Employer’s affiliations with parties
      iv. How hired
      v. Dates employed there
      vi. Job title(s) and duties
      vii. Organizational chart
      viii. Reporting relationships
      ix. Nature of compensation (salary, etc.)
      x. Why left

V. Publication history
   a. Does expert’s report list all publications in last ten years?
   b. Which publications from list are germane to expert’s work in this case?
   c. Any previous publications (before the last ten years) germane to expert’s work in this case?
   d. For each publication identified as pertinent:
      i. Did any other publisher or journal reject the publication?
      ii. Was the publication peer-reviewed?
         1. By whom?
         2. Any comments during the peer-review process?
         3. Any revisions resulting from the peer review?
      iii. Drafts retained?
      iv. Anything in the article that the expert would now want to change or revise?
         1. If yes, has author done that? If not, why not?
      v. Identification, qualifications, roles of co-authors and other persons who assisted
      vi. What generated expert’s interest in area?
      vii. How would expert summarize the publication’s thesis or content?
      viii. Further research:
         1. Did publication identify the need for any?
            a. Why or why not?
         2. Was there, in any case, a need for any?
            a. Why or why not?
         3. Has any needed further research been done?
            a. If yes, identify
            b. If no, why not?
      ix. Does expert know how frequently publication is cited?
      x. Has anyone requested right to reprint?
      xi. Is expert familiar with any literature expressing contrary views or reaching contrary findings?
         1. Citations
         2. How and why does expert differ with literature expressing contrary views?
      xii. Any compensation for authoring the publication?

VI. Career as expert witness
   a. Does expert’s report identify all cases where expert has testified at trial or by deposition in last four years?
   b. Any other cases in last four years where expert prepared report but did not end up testifying?
   c. Any previous cases (prior to the last four years) similar to this one?
   d. For each case:
      i. Subject matter of case
      ii. Subject matter of testimony
      iii. For whom did expert testify?
      iv. What law firm retained expert?
      v. Did expert prepare report?
      vi. Who assisted?
      vii. Was expert deposed?
      viii. Did expert testify at trial?
      ix. Any motions to exclude expert’s testimony?
      x. How was expert compensated?

VII. Expert’s retention
   a. Date retained in this case
   b. Persons making initial and subsequent contacts to retain expert
   c. Compensation arrangements
   d. Expert’s understanding of expert’s assignment

VIII. Expert’s opinions
   a. What opinions has expert formed?
   b. For each opinion:
      i. Who assisted expert?
      ii. What advice or assistance did counsel offer?
      iii. What data did expert review?
      iv. How did expert decide what data to review?
      v. What data did expert end up relying on?
      vi. If any data were not used, why?
      vii. Does report include (not just “list”) all data considered?
         1. If not, were data retained?
      viii. Where did expert obtain data, and who assisted?
      ix. What literature did expert review?
      x. On what literature does expert place reliance?
      xi. For each publication on which expert relies:
         1. Who published?
            a. If a trade or industrial organization, e.g., explore its interests and constituencies
         2. Can expert summarize pertinent information, theory, or methodology from publication? How did publication assist expert?
         3. Is expert familiar with authors’ reputation, experience, etc.?
         4. Was publication peer-reviewed?
      xii. Can expert state every methodology relied upon in forming opinion, whether or not covered in literature just described?
      xiii. For each such theory or method:
         1. Is the expert an expert in it?
            a. Any education, experience, etc.?
         2. Can expert identify any literature supporting it?
         3. Can expert summarize method or theory?
         4. Has method or theory been tested?
         5. Can method or theory be falsified (and how)?
         6. Does method or theory have a known error rate?
         7. Is method or theory generally accepted?
         8. Identify any standards controlling the method’s application
         9. Is expert aware of any dissenting views?
            a. Citations to literature
         10. How would expert respond to dissenting views?
         11. What is the context in which the theory or method is usually applied?
            a. Litigation contexts
            b. Others
         12. Are there certain kinds of question that this theory or method cannot answer (e.g., causation, fault)?
      xiv. Has expert ever done work in this area outside the litigation context?
         1. “Well, sir, how would you define [this area]?”
      xv. Would any theories or methods not used by expert be potentially pertinent?
         1. Why did expert not employ these?
      xvi. Does report recite every step taken by expert to reach his or her opinion? With enough specificity that another expert in same field could duplicate results?
      xvii. Does report contain complete statement of all bases and reasons for opinion?
      xviii. Were there any false trails or abandoned approaches?
      xix. Anything in the report concerning this opinion that the expert would want to change or revise?
      xx. Any further research desirable?
      xxi. What is the expert not opining on (liability, causation, etc.)?

IX. Expert represented by counsel at deposition?

X. Communications with attorneys
   a. Statement of expert’s assignment
   b. Any restrictions on assignment
   c. Any communications re Daubert?
   d. Documents supplied to expert
   e. Involvement of attorneys in drafting opinion
   f. Deposition preparation

ADDITIONAL SUBJECTS

1. What are the primary publications in your field?
2. Which of those are peer reviewed?
3. Have you submitted articles to any?
4. Have they been accepted for publication?
5. Do you have any state funded grants to support your work or research?
6. Do you have any federally funded grants to support your work or research?
7. Have you ever submitted an application for a state or federal grant? IF YES…
   a. Was the grant scored?
   b. Was the grant funded?
   c. What was the grant for?
   d. Did you perform the work?
   e. Did you close out the grant?
   f. Did you publish your findings?
8. Do you know if any of your work has been cited by anyone else?
   a. Where?
   b. How often?
9. Do you know what the EXPECTED CITATION RATE (ECR) is for (whatever journal the expert lists)?
   a. What is the ECR for your article?
   b. Do you know what an IMPACT FACTOR is?
   c. What is the impact factor for (his/her article)?

STATE OF NEW YORK
SUPREME COURT : COUNTY OF BRONX
______________________________________________
THE PEOPLE OF THE STATE OF NEW YORK
                                                            AFFIDAVIT
            -vs-
                                                            Indictment #: 4519-07
DAJON GIVENS,
                                          Defendant.
______________________________________________

ADINA SCHWARTZ, being duly sworn, deposes and says:

1.

I am a Professor in the Department of Law, Police Science and Criminal Justice

Administration at John Jay College of Criminal Justice and in the Criminal Justice Ph.D Program of the Graduate Center, City University of New York (CUNY). John Jay College is the only liberal arts college in the United States devoted to criminal justice, and the CUNY Criminal Justice Ph.D. Program is the only Criminal Justice Ph.D. program in the country that has a forensic science track. 2.

As a faculty member at John Jay College, I teach many current and future law

enforcement agents and significant numbers of current and future forensic scientists and forensic computing investigators. My duties include teaching evidence law to undergraduates and Criminal Justice Masters students at John Jay College. I teach a course, “Science, Experts and Evidence in the Criminal Justice System,” for students in the forensic science track of the CUNY Criminal Justice Ph.D. Program.

PUBLICATIONS

3.

I have published several articles on firearms and toolmark identification and,

more generally, on the forensic identification sciences and on standards for the admission of scientific evidence. My article, A Systemic Challenge to the Reliability and Admissibility of 1

Firearms and Toolmark Identification (6 Columbia Science & Technology Law Review 1 [March 28, 2005] [see http://www.stlr.org/cite.cgi?volume=6&article=2][hereinafter referred to as “A Systemic Challenge”]), was cited in United States v Mikos (539 F3d 706, 711 [7th Cir 2008]) and United States v Mouzone (2009 WL 3617748 at *5 [D.Md. 2009][Magistrate’s Report and Recommendations]). The recommendations in regard to firearms identification were adopted in United States v Willock (2010 WL 1233992 [D.Md. 2010]); United States v Monteiro (407 F Supp2d 351, 360-61 [D.Mass. 2006]); and United States v Green (405 F Supp 2d 104, 122 n.33 [D. Mass. 2005]). 4.

A further article, Commentary on Nichols R.G., Defending the Scientific

Foundations of the Firearms and Tool Mark Identification Discipline: Responding to Recent Challenges J. Forens. Sci. 2007 May; 52(3): 586-94 (52[6] Journal of Forensic Sciences1414 [November 2007]), was cited in United States v. Mouzone (2009 WL 3617748 at 16). Another of my articles, A “Dogma of Empiricism” Revisited: Daubert v. Merrell Dow Pharmaceuticals, Inc. and the Need to Resurrect the Philosophical Insight of Frye v. United States, 10 Harvard Journal of Law and Technology 149 (1997), was cited by the United States District Court for the Central District of California, the Alaska and Minnesota Supreme Courts, and the Florida Second District Court of Appeals. The article was also cited in the two recent National Research Council reports that consider the scientific basis for firearms and toolmark identification: Committee to Assess the Feasibility, Accuracy, and Technical Capability of a National Ballistics Database, Ballistic Imaging (2008) (“NRC Ballistic Imaging Report”) and Committee on Identifying the Needs of the Forensic Science Committee, Strengthening Forensic Science in the United States: A Path Forward (2009) (“NRC Forensic Science Report”).


5.

Links to A Systemic Challenge are posted on numerous websites, among them, the

website of the Scientific Working Group on Firearms and Toolmarks (“SWGGUN”), http://www.swggun.org/resources/viewpoints.htm, The Weekly Detail, the Internet Newsletter for Latent Print Examiners, http://www.clpex.com/Articles/TheDetail/200299/TheDetail206.htm, and the website of ballistics consulting company Athena Research & Consulting LLC, http://www.athenahq.com/News/News%20Home.htm. 6.

The SWGGUN website also includes links to my articles, A Challenge to the

Admissibility of Firearms and Toolmark Identifications: An Amicus Brief Prepared on Behalf of the Defendant in United States v. Kain, Crim. No. 03-573-1 (E.D. Pa. 2004), 4 Journal of Philosophy, Science &Law 1 (December 7, 2004), at http://www6.miami.edu/ethics/jpsl/archives/all/kain.html (“A Challenge”) and Challenging Firearms and Toolmark Identification-Part One,” XXXII (8) The Champion 10 (Oct. 2008). 7.

Firearms and toolmark examiners Bruce Moran and John Murdock have

distributed and discussed A Challenge in workshops that they teach to firearms and toolmark examiners throughout the country. 8.

In their chapter, “Scientific Issues,” in 4 David L. Faigman, et. al, Modern

Scientific Evidence 592, 627 (2008-09), examiners Alfred Biasotti, John Murdock and Bruce R. Moran refer readers to the 2005 version of my since-updated chapter, “Firearms and Toolmark Identification,” in Jane Campbell Moriarty, Psychological and Scientific Evidence in Criminal Trials, West (2004 edition & ann. supp. 2006), Volume 2: 12-50 through 12-91, “[f]or a much less sanguine view” of the scientific issues about firearms and toolmark identification. 9.

My articles, Challenging Firearms and Toolmark Identification-Part One,” supra,

and Challenging Firearms and Toolmark Identification-Part Two,” XXXII (9) 44 3

(November/December 2008), were cited in SWGGUN’s response to the NRC Forensic Science Report’s call for the development of standardized protocols. SWGGUN Systemic Requirements/Recommendations for the Forensic Firearm and Toolmark Laboratory, http://www.swggun.org/systemic.htm. 10.

I have served as a defense expert or consultant on firearms and toolmark

identification in forty-nine cases, and I have testified in both state and federal courts. 11.

A copy of my curriculum vitae is attached as Exhibit A.

FUNDAMENTAL SCIENTIFIC DOUBTS REGARDING THE ACCURACY AND RELIABILITY OF FIREARM AND TOOLMARK EXAMINATION 12.

This affidavit is respectfully submitted in order to apprise the Court of

fundamental doubts as to the reliability of both the underlying premises and the methodology of firearms and toolmark identification, especially as voiced by two committees of distinguished scientists in the recent National Research Council Reports on Ballistic Imaging and Forensic Science. 13.

This affidavit is submitted as well in order to inform the Court of significant

disagreements within the discipline of firearms and toolmark identification. 14.

Firearms identification, often improperly termed “ballistics,” is part of the

forensic science discipline of toolmark identification. 1 An underlying assumption of toolmark identification is that a tool, such as a firearm barrel, leaves a unique toolmark(s) on an object, such as a bullet. An equally crucial premise is that toolmarks are reproducible. As the National Research Council stated in its Report on Ballistic Imaging: “To be useful for identification, the 1.

Properly speaking, ballistics deals with the motions of projectiles. See Paul C. Giannelli, Ballistics Evidence: Firearms Identification, 27 Crim. L. Bull. 195,197 (1991). 4

characteristic marks left by firearms must not only be unique but reproducible – that is, the unique characteristics must be capable of being deposited over the multiple firings so that they can be found on recovered evidence and successfully compared with those on other items.” Committee to Assess the Feasibility, Accuracy, and Technical Capability of a National Ballistics Database, Ballistic Imaging (2008) (“NRC Ballistic Imaging Report”) at 72. 15.

The distinctions among various “classes” of toolmarks are key to understanding

why firearms and toolmark examiners’ method for reaching identification conclusions is unreliable. Toolmarks are either “striated” toolmarks which consist of patterns of scratches or striae produced by the parallel motion of tools against objects (e.g., the marks gun barrels produce on bullets), or “impression” toolmarks produced on objects by the perpendicular, pressurized impact of tools (e.g., firing pin impressions or breech face marks produced on cartridge cases by the firing pins or breech faces of guns). Both types of toolmarks have class, subclass, and individual characteristics. 16.

The distinctively designed features of tools are reflected in the class

characteristics of the toolmarks produced by all tools of a certain type. For example, the rifling impressions on bullets are class characteristics reflecting the number, width and direction of twist of the lands and grooves in the types of barrels that fired them. 17.

By contrast to class characteristics, microscopic individual characteristics (e.g.,

the striations or lines within rifling impressions) are what are purported to be unique to the toolmarks each individual tool produces and to correspond to random imperfections or irregularities on tool surfaces produced by the manufacturing process and/or subsequent use, wear, corrosion or damage.

5

18.

It is the isolation and identification of individual markings that theoretically

allows a firearms and toolmark examiner to reach identification conclusions. 19.

Although firearms and toolmark examiners frequently state that every tool

produces toolmarks with unique individual characteristics, in addition to being scientifically questionable (see paras. 11-13 above), this statement is inconsistent with established knowledge within the discipline that not all manufacturing processes result in firearms or other tools with such differentiated surfaces that each tool produces toolmarks with unique, individual characteristics. Indeed, firearms and toolmark examiners themselves concede that some guns and tools are not capable of producing unique toolmarks when they leave the assemblyline. 20.

Some manufacturing processes create batches of tools with similarities in

appearance, size, or surface finish that set them apart from other tools of the same type. The toolmarks produced by tools in the batch have matching microscopic characteristics, called subclass characteristics, that distinguish them from toolmarks produced by other tools of the type, but are common to all toolmarks produced by tools in the batch. The individual characteristics which firearms and toolmark examiners consider to be unique to the toolmarks produced by individual guns or other tools may or may not also be present on the toolmarks produced by newly manufactured tools in the batch. While wear and tear on some tools may cause the subclass characteristics on their toolmarks to be completely replaced by individual characteristics, in other tools, subclass characteristics may persist alongside individual characteristics. 2

2

See, e.g., AFTE, Theory of Identification as it Relates to Toolmarks, 30(1) AFTE J. 86, 88 (1998); Alfred A. Biasotti & John Murdock, “Criteria for Identification” or “State of the Art” of Firearms and Toolmark Identification, 16 AFTE J. 16, 17 (1984); M.S. Bonfanti & J. DeKinder, The Influence of Manufacturing Processes on the Identification of Bullets and Cartridge Cases 6

21.

Firearms examiners compare evidence toolmarks on ammunition components

recovered from crime scenes with test toolmarks that they produce on other ammunition components by firing or otherwise using a particular gun. If the same “class characteristics” are found on both the evidence and test toolmarks (for example, the same rifling impressions on a test fired bullet and an evidence bullet recovered from a crime scene), a firearms and toolmark examiner uses a comparison microscope to compare the toolmarks’ “individual characteristics” (for example, microscopic striations within rifling impressions on the known and questioned bullets). The object is to determine whether the individual characteristics are so similar that one and the same tool (for example, a particular gun barrel) must have produced both the test and the evidence toolmarks. 22.

Where a crime scene does not yield any gun whose class characteristics match

those of the ammunition components recovered from the scene, firearms and toolmark examiners sometimes compare the class and individual characteristics on various ammunition components recovered from the crime scene and/or other crime scenes or the suspect’s home or possessions. The object is to determine whether the individual characteristics are so similar that various ammunition components must all have been fired, or cycled through, the same gun.

FINDINGS OF THE NATIONAL RESEARCH COUNCIL

A Review of the Literature, 39 Science & Justice 3, passim (1999); The AFTE Committee for the Advancement of Firearm & Toolmark Identification, The Response of the Association of Firearm and Tool Mark Examiners to the National Academy of Sciences 2008 Report Assessing the Feasability, Accuracy, and Technical Capability of a National Ballistics Database August 20, 2008, 40 (3) AFTE J. 234, 244 (2008) (stating that “once subclass characteristics (if any) are taken into account, toolmarks are unique”). 7

23.

In 2008, the National Research Council Committee to Assess the Feasibility,

Accuracy, and Technical Capability of a National Ballistics Database found that the basic premises of firearms and toolmark identification were not scientifically established. “Finding: The validity of the fundamental assumptions of uniqueness and reproducibility of firearmrelated toolmarks has not yet been fully demonstrated” (see NRC Ballistic Imaging Report, at 3, 81). 24.

According to the Committee, extensive research would be needed to validate the

assumptions. Additional general research on the uniqueness and reproducibility of firearmrelated toolmarks would have to be done if the basic premises of firearms identification are to be put on a more solid scientific footing. *** Fully assessing the assumptions underlying firearms identification would require careful attention to statistical experimental design issues, as well as intensive work on the underlying physics, engineering and metallurgy of firearms, but is essential to the long-term viability of this type of forensic evidence. (see NRC Ballistic Imaging Report, at 82). 3 25.

Similarly, in 2009, the National Research Committee on Identifying the Needs of

the Forensic Science Community found that: Although some studies have been performed on the degree of similarity that can be found between marks made by different tools and the variability of marks 3

See also United States v. Mouzone (2009 WL 3617748 at 18 [D.Md. 2009][stating that the NRC Ballistic Imaging Report “made clear [that] despite the many studies conducted by toolmark examiners” the basic premises of uniqueness and reproducibility had not been scientifically established and major research was needed); see transcript of United States v. Damian Brown et al., (05 Cr. 538 at 13 [S.D.N.Y. June 9, 2008][statement by Judge Rakoff that “[t]wice in that report (NRC Ballistic Imaging Report) in bold face so that no one can miss it, the authors of the report who appear to include quite a few notable scientists as well as others, state, ‘Finding: The validity of the fundamental assumptions of uniqueness and reproducibility of firearms-related toolmarks has not yet been fully demonstrated.’ So, that goes to the most basic premise before we get into anything else, the most basic premise on which this, what you (the Assistant United States Attorney) call ballistic science is premised, yes?”]). 8

made by an individual tool, the scientific knowledge base for toolmark and firearms analysis is fairly limited. For example, a report by Hamby, Brundage and Thorpe includes capsule summaries of 68 toolmark and firearms studies. But the capsule summaries suggest a heavy reliance on the subjective findings of examiners rather than on the rigorous quantification and analysis of sources of variability. (see Strengthening Forensic Science in the United States: A Path Forward 2009 [hereinafter “NRC Forensic Science Report”] at 155 & 155 n.68]. 4

26.

These negative conclusions about the underlying premises of firearms and

toolmark identification are particularly worthy of note because the National Research Council is the operating agency of the National Academy of Sciences, an independent body of distinguished scientists that Congress established in 1863 for the purpose of advising federal government agencies on scientific and technical questions. 5

4

Citing J.E. Hamby, D.J. Brundage, and J.W. Thorpe, 2009. The identification of bullets fired from 10 consecutively rifled 9mm Ruger pistol barrels – A research project involving 468 participants from 19 countries. Available online at http://www.ftiibis.com/DOWNLOADS/Publications/10%20Barrel%20Article-%20a.pdf); see also id. at 154 (quoting the NRC Ballistic Imaging Report’s findings that the fundamental premises of the uniqueness and reproducibility of firearms toolmarks have not been scientifically established). 5

See NRC Ballistic Imaging Report, supra, at iii; NRC, Welcome to the National Research Council, http://sites.nationalacademies.org/nrc/index.htm; National Academy of Sciences, About the NAS, http://www.nasonline.org/site/PageServer?pagename=ABOUT_main_page. See also Thomas L. Bohan, Review of Strengthening Forensic Science in the United States: A Path Forward, 55(2) J. Forens. Sci. 560 (Mar. 2010) (explaining the significance of the National Academy of Sciences and the relations between the National Academy of Sciences and the NRC, and stating that the NRC Forensic Science Report “is not a pronouncement from the ivory tower, but rather the product of a multi-year study by a diverse group of legal scholars and scientists selected by the country’s most respected scientific organization: the National Academy of Sciences”). NRC committees are staffed by top scientists and professionals who work on a voluntary basis. See NRC, Welcome to the National Research Council, supra. The appointment process is designed to ensure that committee members have an “appropriate range of expertise for the task” and bring “a balance of perspectives” to a project. See Committee Appointment Process, 9

27.

It is crucial to recognize that even if the necessary research were done to show

that guns produce unique and reproducible toolmarks, this would not suffice to set firearms and toolmark identification on firm scientific foundations. 28.

Even assuming arguendo that the toolmarks produced by firearms are

reproducible and unique, firearms and toolmarks examiners have no reliable method for determining whether the similarities between toolmarks are so great that they must have been produced by the same gun. 29.

According to the National Research Committee on Identifying the Needs of the

Forensic Science Community, “A fundamental problem with toolmark and firearms analysis is the lack of a precisely defined process [for reaching identifications]” (see NRC Forensic Science Report, at 155). 30.

The NRC Forensic Science Report further concludes that “the decision of the

toolmark examiner remains a subjective decision based on unarticulated standards and no statistical foundation for estimation of error rates” (see NRC Forensic Science Report at 153-54). 31.

Similar criticisms of the method that firearms and toolmark examines use for

reaching identification conclusions are advanced in the NRC Ballistic Imaging Report (Id. at 82 [“Conclusions drawn in firearms identification should not be made to imply the presence of a firm statistical basis when none has been demonstrated” and criticizing firearms and toolmark examiners’ absolute identification conclusions for “cloak[ing] an inherently subjective

http://www8.nationalacademies.org/cp/information.aspx?key=Committee_Appointment; NRC Ballistic Imaging Report, supra, at iii. 10

assessment of a match with an extreme probability statement that has no firm grounding and unrealistically implies an error rate of zero”). 6

TECHNICAL AND STATISICAL BARRIERS TO IDENTIFYING THE “ONE-ANDONLY” WEAPON THAT FIRED A PROJECTILE 32.

Even assuming arguendo that toolmarks can be proven to be unique and

reproducible, three major difficulties stand in the way of firearms and toolmark examiners’ goal of identifying one and only one tool as the source of a particular toolmark(s). 33.

A first barrier tends to be obscured by firearms and toolmark examiners’

ambiguous use of the term “individual characteristics.” Examiners sometimes use the term to refer to the entire unique microscopic marks that are theoretically produced by individual tools. At other times, the term “individual characteristics” is used to refer to the component microscopic marks, which are not in themselves unique to any tool, that come together as a pattern to comprise the microscopic marks that are allegedly unique to particular tools.

6

See also Melendez-Diaz v. Massachusetts, 129 S.Ct. 2527, 2538 (2009) (citing the NRC Forensic Science Report’s discussion of “problems of subjectivity, bias, and unreliability of common forensic tests such as latent fingerprint analysis, pattern/impression analysis, and toolmark and firearms analysis”); United States v. Green, 405 F. Supp. 2d 104, 110 (D. Mass. 2005 ) (reasoning that “even assuming that some of these marks are unique to the gun in question, the issue is their significance, how the examiner can distinguish one from another, which to discount and which to focus on, how qualified he is to do so, and how reliable his examination is”); United States v. Monteiro, 407 F. Supp. 2d 351, 366 (D.Mass. 2006) (“The question of whether the methodology of identifying a match between a particular cartridge case and gun is reliable requires far more analysis [than the question of whether cartridge case toolmarks are unique]”). Cf. NRC Forensic Science Report, supra, at 144 (“Uniqueness and persistence [of each person’s fingerprints] are necessary conditions for friction ridge identification to be feasible, but these conditions do not imply that anyone can reliably discern whether or not two friction ridge impressions were made by the same person.”). 11

34.

The component nature of the individual characteristics of toolmarks was

recognized as early as 1935: “It is probably true that no two firearms with the same class characteristics will produce the same signature, but it is likewise true that each element of a firearm’s signature may be found in the signatures of other firearms ....” (JACK D. GUNTHER & C.O. GUNTHER, THE IDENTIFICATION OF FIREARMS 90-91 [1935], quoted in Biasotti & Murdock, supra, at 17). 7 35.

As a result of the overlap between the individual characteristics of the toolmarks

made by different tools, misidentifications may result because examiners assume that a certain amount of resemblance proves that the same tool produced both test and evidence toolmarks, when the same amount of resemblance is possible between toolmarks produced by different tools. 8 36.

Prominent firearms and toolmark examiners Alfred Biasotti, John Murdock, and

Bruce Moran state that in their experience, many of the disagreements between examiners about the conclusions warranted in a particular case “stem from one examiner ascribing too much

7

See also United States v. Monteiro, 407 F.Supp.2d at 360-361 (citing Schwartz, A Systemic Challenge, supra, at 6, and that article’s citation of Gunther and Gunther for the proposition that “[s]ome of the individual characteristics of toolmarks are comprised of non-unique marks”); J.I. Thornton & J.L. Peterson, The General Assumptions and Rationale of Forensic Identification, in D.L. Faigman, D.H. Kaye, M.J. Saks & J. Sanders (eds.), Science in the Law: Forensic Science Issues (2002) at 5-6, quoted in NRC Ballistic Imaging Report, supra, at 57 (“It should be recognized that an individual characteristic, taken in isolation, might not in itself be unique. The uniqueness of an object may be established by an ensemble of individual characteristics.”). 8

See, eg, United States v. Monteiro, 407 F Supp 2d at 361; DQ Burd & PL Kirk, Tool Marks— Factors Involved in Their Comparison and Use as Evidence, 32 J Crim. L, Criminology & Police Sci. 679 (1942); AA Biasotti, A Statistical Study of the Individual Characteristics of Fired Bullets, 4 J Forensic Sci 34 (1959) (“A Statistical Study”) (matches of 15 to 20% of the striae per land or groove impression of bullets fired from different .38 Special Smith & Wesson revolvers were frequently found). 12

significance to a small amount of matching striae and not appreciating that such agreement is achievable in known non-match comparisons” (Scientific Issues, in 4 David L. Faigman, et. al, Modern Scientific Evidence 592, 610 [2008-2009)][see also Biasotti & Murdock, supra, at 17 stating “We have come to expect to find small, isolated areas of corresponding striae agreement when comparing toolmarks known to have been produced by different working surfaces”). 37.

Starting in the 1990s, use of the Integrated Ballistics Identification System (IBIS),

a computerized comparison system for bullets and cartridge cases, led to increased awareness of the danger that examiners might erroneously conclude that toolmarks were made by the same tool when they were in fact made by different tools. Joseph J. Masson observed that as the IBIS data base grew for bullets fired from guns of a particular caliber, increasing similarities were discovered in the toolmarks on bullets known to have been fired by different guns of that caliber (Masson, Confidence Level Variations in Firearms Identification through Computerized Technology, 29 [1] AFTE J. 42 [1997]). 38.

The similarities between known non-matching toolmarks were sometimes so great

that even under a comparison microscope, it was difficult to tell the toolmarks apart and not erroneously attribute them to the same gun (Id. at 43). 39.

Studies have also found that as the IBIS database was expanded to include

increasing numbers of cartridge cases that had been test fired by guns of the same caliber and make, the top ten or even fifteen candidate matches that IBIS listed for a queried cartridge case increasingly did not include the cartridge case known to have been test fired by the same gun. 9

9

See Frederic Tulleners, Technical Evaluation: Feasibility of a Ballistics Imaging Database for All New Handgun Sales (“AB1717 Study”) (2001), at 1-4, 1-6, www.nssf.org/pdf/technicalevaluation.pdf; Jan De Kinder, Review AB1717 Report. Technical Evaluation Feasibility of a Ballistics Imaging Database for All New Handgun Sales (2002) at 3, 13

40.

The second major difficulty in the way of reliable firearms and toolmark

identifications is that the marks a tool makes change over time. This makes firearms and toolmark identification much more problematic than fingerprint identification since, except in rare cases of disease or injury, an individual’s fingerprints remain the same over time. 10 In fact, firearms and toolmark examiners do not expect the toolmarks on bullets fired from the same gun to ever be exactly alike (See, eg, Bruce Moran, Firearms Examiner Expert Witness Testimony, 32 [3] AFTE J. 231, 242 [Summer 2000]; Eliot Springer, Toolmark Examinations—A Review of Its [sic] Development in the Literature, 40 J. Forensic Sci. 964, 965 [1995]). The changes in toolmarks reflect the changes in a tool’s surfaces that occur as the tool is used and/or as damage or corrosion occur. 41.

An additional cause of differences among the toolmarks a particular gun leaves on

ammunition is that “pressures and velocities involved in the physical interaction between the

www. nssf.org/pdf/dekinder.pdf (independent review summarizing and supporting the AB1717 Study’s findings); Jan De Kinder, Frederic Tulleners & Hugues Thiebaut, Reference Ballistics Imagining Database Performance, 140 Forensic Science International 207, 211-15 (2004); NRC Ballistic Imaging Report, supra, at 239 (stating that “DeKinder et al. (2004) compellingly demonstrate that [IBIS’s] performance can degrade in databases flooded with same-classcharacteristic images”). 10 See, e.g., Emmett M. Flynn, Tool Mark Identification, 2 J. Forens. Sci. 95, 102 (1957); Giannelli, supra, at 202-03 (1991) (explaining why the analogy between firearms and fingerprint identification is misleading); Stephen Stigler, Galton and Identification by Fingerprints, 140 Genetics 857 (1995) (praising Galton for recognizing that proving that “[a]n individual’s prints [are] persistent over time” was a crucial step in establishing that a single individual can be reliably identified as the source of a particular fingerprint(s) (italics omitted)); United States v. Monteiro, 407 F.Supp.2d at 362 (“A perfect correspondence between the lines on a test fired cartridge and the evidence recovered from the scene is impossible; in the real world, there is no such thing as a ‘perfect match.’”); NRC Ballistic Imaging Report, supra, at 279 (stating that an important difference between fingerprint and facial recognition imaging systems, on the one hand, and ballistics imaging systems, on the other “emanates from the stochastic nature of ballistics; that is, noise and variation in fingerprints comes from acquisition; in ballistics there is the additional process of generating the physical characteristics that are then going to be acquired changes each time”). 14

weapon and the ammunition at firing are subject to intrinsic variation from shot to shot, thus resulting in variations of the shape, orientation, and localization of the signature markings, even for the same combination of firearm/ammunition type.” 11 In addition, the same gun is likely to leave different marks on bullets and cartridge cases of different makes. 12 42.

As a consequence of the impermanence of toolmarks, differences between

evidence and test toolmarks will sometimes be correctly attributed to changes in the surfaces of the suspect tool between the time the evidence and test toolmarks were made. At other times, such an attribution will be wrong; the evidence and test toolmarks differ because the source of the evidence mark was a tool similar, but not identical, to the suspect tool. 11

See Nicola Senin, Roberto Groppetti, Luciano Garofano, Paolo Fratini & Michele Pierni,

Three Dimensional Surface Topography Acquisition and Analysis for Firearm Identification, 51(2) J. Forens. Sci. 282 (March 2006). See also Collaborative Testing Services, Inc. (CTS), Firearms Examination Test No. 06-526 Summary Report at 40, http://www.collaborativetesting.com/reports/2626_web.pdf (2006) (comment by proficiency test taker that “[o]ur speciments 1 and 3 were excellent examples of how differences in pressure and/or primer hardness can vary samples fired from the same weapon.[sic] Item 3 displayed a hemispherical firing pin impression with F.P. drag and firing pin aperture with a shear mark. While Item 1 displayed only faint FPA and no drag. The granular breechface marks varied greatly. A probable ejector wipe on the rims [sic] edge overlapped less than 10%.”). 12

See, eg, Benjamin Bachrach, A Statistical Validation of the Individuality of Guns using 3D Images of Bullets, Grant Number 97-LB-VX-0008, Final Report at 22 (National Institute of Justice 2006), available at http://www.ncjrs.gov/pdffiles1/nij/grants/213674.pdf (“Whenever an evidence bullet is found in a crime scene and test fires are performed to attempt an identification with a suspect gun, firearms examiners traditionally try to use either the same ammunition brand or a similar ammunition as that of the evidence bullet. An important conclusion … of this study was to validate practice.”); Moran, Firearms Examiner Expert Witness Testimony, supra, at 237 (stating that “the type of ammunition can have an effect on the manner in which a bullet is marked during its passage down the gun barrel”); Tsumeu Uchiyama, Toolmark Reproducibility on Fired Bullets and Expended Cartridge Cases, 40 (1) AFTE J 3, 8-9 (Winter 2008) (finding in a study of 100 rounds fired from the same Hi-Point 9mm Luger pistol that “[i]t was generally difficult to find matching striations between different brands of bullets” and that with regard to firing pin impressions, “similarity between cartridges from different manufacturers was low, even between successively fired cartridge cases”); De Kinder, Tulleners & Thiebaut, supra, at 15

43.

In his classic 1955 study, described in 1997 as “the most exhaustive statistical

empirical study [of firearms and toolmark identification] ever published,” Alfred A. Biasotti found matches of 21-38% on bullets fired from the same .38 Special Smith & Wesson revolver and frequently found matches of 15-20% of the striae per land or groove impression on bullets fired from different .38 Special Smith & Wesson revolvers (Biasotti, A Statistical Study, supra, at 37-38; Ronald G. Nichols, Firearms and Toolmark Identification Criteria: A Review of the Literature, 42 J. Forensic Sci. 466, 467 [1997]). 44.

This near-complete overlap in the amount of similarity in toolmarks produced by

the same and different guns strongly suggests that examiners can make misidentifications by wrongly attributing differences between toolmarks made by different tools to changes in the same tool over time. 45.

As Judge Gertner recognized in United States v. Green, “Just because the marks

on the [cartridge] casings are different does not mean that they come from different guns. Repeated firings from the same weapon, particularly over a long period of time, could produce different marks as a result of wear or simply by accident” (405 F.Supp.2d at 108). 13 46.

A third major barrier in the way of reliable firearms and toolmark identifications

is that a tool may be wrongly identified as the source of a toolmark it did not produce if an

214 (finding that in regard to cartridge cases, “the IBIS performance is dramatically impaired when different ammunition is used”). 13

See also NRC Ballistic Imaging Report, supra, at 55 (“In the specific context of firearms and toolmark identification, derivation of an objective, statistical basis for rendering decisions is hampered by the fundamentally random nature of the firing process. The exact same conditions – of ammunition, of wear and cleanliness of firearms parts, of burning of propellant particles, and the resulting gas pressure, and so forth – do not necessarily apply for every shot from every gun.”). 16

examiner confuses subclass characteristics of toolmarks produced by more than one tool with individual characteristics of toolmarks produced by one and only one tool (see United States v. Green, 405 F. Supp. 2d at 111 [“Plainly, confusing individual characteristics with class or subclass ones could lead to false negatives, as well as false positives”]; Monteiro, 407 F. Supp. 2d at 363 [citing Schwartz, A Systemic Challenge, supra, at 8, for “highlight[ing] the complexity of comparing patterns because of the difficulty in distinguishing between class, subclass, and individual characteristics”]). 47.

On the basis of studies finding subclass, rather than individual, characteristics on

firing pin impressions and breech face marks, prominent firearms and toolmark examiners have warned that reliable firearms identifications cannot be based on either of these marks alone. 14

14

See Bonfanti & DeKinder, supra, at 5 (“A probable solution to th[e] problem [of misidentifications resulting from confusing subclass with individual characteristics] lies in a comparison of all the marks present on a cartridge case (breech face impressions, firing pin impression, ejector mark, extractor mark, and marks generally by dynamic processes).”); Ronald G. Nichols, Defending the Scientific Foundations of the Firearms and Tool Mark Identification Discipline: Responding to Recent Challenges, 52(3) J. Forens. Sci. 586, 588 (May 2007) (“Defending the Scientific Foundations”) (stating that “firearms and tool mark examiners are aware that [firing pin impressions] are not wholly reliable for identification to a specific firearm,” and that “Breech face marks can be cut, milled or stamped. In each case, subclass characteristics may be produced.”). Additionally, the microscopic striations on bullets may be subclass characteristics, rather than individual characteristics. See, e.g., Nichols, Defending the Scientific Foundations, supra, at 587 (describing studies finding subclass characteristics on bullet toolmarks); Jerry Miller, An Examination of the Application of the Conservative Criteria for Identification of Striated Toolmarks Using Bullets Fired from Ten Consecutively Rifled Barrels, 31(2) AFTE J. 125, 128 (2001) (finding both subclass and individual characteristics on the striated toolmarks on both land and groove impressions of bullets fired by used guns); Jerry Miller, An Examination of Two Consecutively Rifled Barrels and a Review of the Literature, 32(3) AFTE J. 259, 260 (Summer 2000) (the toolmarks on bullets fired from ten consecutively manufactured gang broach barrels were so similar that a false identification would have resulted if the characteristics had been incorrectly identified as individual, rather than subclass, characteristics). 17

48.

The danger that misidentifications will result from confusing subclass with

individual characteristics is particularly great because firearms and toolmark examiners have not arrived at either strict rules for determining whether a microscopic pattern on a toolmark is an individual or a subclass characteristic or strict rules as to which tools or manufacturing processes do or do not produce toolmarks with subclass characteristics. At most, publications by firearms and toolmark examiners provide rough rules of thumb about circumstances in which subclass characteristics are or are not likely to occur. 49.

If they are to avoid misidentifications based on confusing subclass characteristics

shared by more than one tool with individual characteristics unique to one and only one tool, examiners need to rely on personal familiarity with types of forming and finishing processes and their reflections in toolmarks. 15 50.

The danger that the rifling marks on bullets will have subclass, rather than

individual, characteristics was brought out in an incident in 2000, when the Georgia Bureau of Investigation Crime Laboratory was unable to identify the officer who was responsible for the accidental shooting of an innocent bystander. All of the officers involved were using Glock pistols, and the Crime Laboratory could not determine which of the Glocks had fired the bullet

15

See, e.g., Biasotti & Murdock, supra, at 18-19 (“Because what would constitute these sub-class features is a function of the relative hardness of the tool, the material, and the dynamics of the cutting process, it is not currently possible to describe them in quantitative terms. … Pending further research that would allow us to quantify these features, examiners will have to rely on a subjective evaluation ….”); Ronald Nichols, The Scientific Foundations of Firearms and Toolmark Examination – A Response to Recent Challenges, CACNews 8, 15 (2nd Quarter 2006) (“A Response to Recent Challenges”), available at www.cacnews.org/pdfs/2ndq06.pdf (“[T]here is not one conscientious firearms and toolmark examiner who would suggest that personal familiarity with tool finishing processes and their effects on tool surfaces is anything but vital to the proper understanding of subclass characteristics. Without such knowledge and appreciation of manufacturing techniques, examiners would have no way of ascertaining if subclass characteristics could exist.”); Biasotti, Murdock & Moran, supra, at 601 n.2. 18

recovered from the victim. Subsequently, problems with “matching” particular Glock pistols to particular bullets became evident throughout the State. 16 51.

The danger that subclass characteristics will be confused with individual

characteristics is heightened when, instead of comparing test and evidence toolmarks to see if a particular gun produced the marks on crime scene ammunition components, examiners compare the toolmarks on various ammunition components to see if they were produced by the same, unidentified gun. 52.

In 1984, Biasotti & Murdock contrasted “comparing bullet with bullet or toolmark

with toolmark where no gun or tool is available for study” with comparisons “[w]hen guns or tools are available [and] their working or bearing surfaces can be examined and determinations made relative to whether they are capable of producing class, sub-class or individual toolmarks” (see Biasotti & Murdock, supra, at 19).

16

See Letter from Richard Ernst, Firearms Section Manager, Georgia Bureau of Investigation

Crime Laboratory, to Fellow Firearms Examiners, dated 2/07/00, at http://www.afte.org/GlockS.htm; see also Lucien C. Haag, Identifiable Bullets from Glocks in 60 Seconds, Abstract of Presentation at AFTE Training Seminar, May 28, 2003, http://afte.org/TrainingSeminar/AFTE2003/Summaries/afte2003_wed.htm ( “It is well-known among forearm examiners that bullets … fired from any of the various calibers of Glock pistols can seldom be matched back to the firearm from which they were discharged. This appears to be due to the mirror-like finish of the hammer forged polygonal bores in these pistols.”); Brian J. Heard, Handbook of Firearms and Ballistics Examining and Interpreting Forensic Evidence at 131 (1997) (explaining that “it is extremely difficult to match bullets from polygonal rifled barrels” because “a mandril will often be used to make hundreds of barrels. In addition, as there will be little or no wear on the mandril each barrel should be virtually identical. [Moreover,] to improve manufacturing efficacy, the barrel blank is of sufficient length that three or even four barrels can be made with one pass of the mandril.”); Carolyn E. Martinez, GLOCK’s Signature Barrel – Durability of the EBIS Markings, 41(4) AFTE J. 325 (2009) (“Even using copperjacketed bullets, some Law Enforcement Agencies have reportedly banned the use of polygonally rifled barrels because of the potential difficulty in forensically identifying the rifling patterns on a bullet fired from a polygonally rifled barrel” ). 19

53. More recently, Biasotti, Murdock and Moran stated that "[t]he examiner's greatest chance for success [in distinguishing between subclass and individual characteristics] is when the responsible tool is available for examination," supra, at 601 n.2, and that when a suspect gun or other tool has not been recovered, "[i]t is not uncommon … for the examiner to write a report that states that sufficient microscopic agreement is present to suggest that the same tool made the series of toolmarks, but that a conclusive opinion can be rendered only after an examination of the responsible tool" (Id. at 605).

54. On the basis of similar reasoning, some firearms and toolmark examiners have criticized the Collaborative Testing Services ("CTS") proficiency tests for asking them to make identifications in the absence of a gun. On a 2003 test, one examiner commented that: "A cast of the firearm's breech face would have been taken to rule out any sub-class characteristics from the similar ammunition used for tests in this comparison" (CTS, Firearms Examination Test No. 03-526 Summary Report 35, at http://www.collaborativetesting.com/reports/2326_web.pdf [2003]). Another wrote that: "In an actual case, I would not except [sic] test fired cartridge cases from another agency or intraagency. I would want to examine the tool working surfaces of the firearm in order to eliminate the possibility of subclass carry over" (Id.).

55. Drawing identification conclusions without considering the possibility of subclass characteristics contravenes the warning, in the AFTE Theory of Identification, that "Caution should be exercised in distinguishing SUBCLASS CHARACTERISTICS from INDIVIDUAL CHARACTERISTICS" (30 [1] AFTE J. 86, 88 [1998]).

56. Firearms and toolmark examiner Bruce Moran explained that the term "subclass characteristics" was coined in 1989 and incorporated in the AFTE glossary definitions in 1992 after there were misidentifications of striated toolmarks in real cases in the 1980s (Bruce Moran, A Report on the AFTE Theory of Identification and Range of Conclusions for Tool Mark Identification and Resulting Approaches to Casework, 34 [2] AFTE J. 227, 227-28 [Spring 2002]). As tool making technology improves, subclass carryover among gun components and other tools is likely to increase and lead to an increasingly serious risk of misidentifications. 17

FIREARM EXAMINATION AND IDENTIFICATION IS INHERENTLY PROBABILISTIC AND IS WITHOUT ANY SCIENTIFICALLY ESTABLISHED STATISTICAL BASIS

57. Jointly, the three major difficulties in the way of reliable firearms and toolmark identification imply that identity determinations are inherently probabilistic. On the one hand, substantial resemblances between toolmarks produced by different tools may result from shared subclass characteristics or from similarities between the marks comprising the individual characteristics of toolmarks. On the other hand, because the individual characteristics of the toolmarks made by a particular tool change over time, even toolmarks made by the same tool may not perfectly match. The similarities between the toolmarks produced by different tools and the differences between the toolmarks produced by the same tool imply that a statistical question must be answered to determine whether a particular tool was the source of the toolmark on an object recovered from a crime scene: what is the likelihood that the toolmarks made by a randomly selected tool of the same type would do as good a job as the toolmarks made by the suspect tool at matching the characteristics of the evidence toolmark?

17 See Gene C. Rivera, Subclass Characteristics in Smith & Wesson SW40VE Sigma Pistols, 39 (3) AFTE J. 247, 250 (Summer 2007) ("As the technology in tool making improves, causing the tools to become less susceptible to wear and thus change, it will only become more problematic for the firearm and tool mark examiner."); Biasotti, Murdock and Moran, supra at 598 ("The manufacturer's goal is to produce many items of the same shape that are, within certain tolerances, the same size. They also want each of these items to have an acceptable surface finish or appearance. … The manufacturers are not, however, concerned that many or all of these items may bear toolmarks composed of subclass characteristics depending on the way in which they were manufactured.").
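The statistical question posed in paragraph 57 can be made concrete with a small, purely illustrative simulation. Nothing below is drawn from the affidavit or from any published examiner protocol; the similarity function, the encoding of marks, and the number of trials are all assumptions made only to show what a "random match probability" estimate for toolmarks would have to look like.

```python
# Illustrative only: a Monte Carlo sketch of the question posed in paragraph 57.
# The similarity() metric and all numbers are hypothetical assumptions.
import random

def similarity(mark_a, mark_b):
    # Toy similarity score: fraction of positions where two striation patterns
    # (encoded as lists of 0/1 "peaks") agree. Real comparisons are far more
    # complex; this stands in for whatever agreed-upon metric were adopted.
    return sum(a == b for a, b in zip(mark_a, mark_b)) / len(mark_a)

def random_mark(n=60, seed=None):
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(n)]

evidence_mark = random_mark(seed=1)       # mark recovered from the scene
suspect_test_mark = random_mark(seed=1)   # test fire from the suspect tool (here identical, for illustration)
suspect_score = similarity(evidence_mark, suspect_test_mark)

# Estimate: how often would a randomly selected tool of the same class produce
# a test mark that matches the evidence mark at least as well as the suspect tool's?
trials = 10_000
at_least_as_good = sum(
    similarity(evidence_mark, random_mark()) >= suspect_score
    for _ in range(trials)
)
print(f"Estimated random-match probability: {at_least_as_good / trials:.4f}")
```

The affidavit's point is that the inputs needed to run such a calculation honestly (a validated similarity metric and population data on same-class tools) have never been collected for firearms and toolmarks, which is why the question remains unanswered.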

58. As stated earlier, in 1935, Gunther and Gunther recognized the probabilistic nature of firearms and toolmark identification: "It is probably true that no two firearms with the same class characteristics will produce the same signature, but it is likewise true that each element of a firearm's signature may be found in the signatures of other firearms .... An individual peculiarity of a firearm can, therefore, be established by elements of identity which form a combination the coexistence of which is highly improbable in the signature of other firearms with the same class characteristics" (Supra, at 90-91).

59. Although Biasotti and Murdock, supra at 17, quoted Gunther and Gunther's statement with approval in 1984, in 2008 and 2009, the NRC Ballistic Imaging and Forensic Science Reports respectively found that statistical foundations had yet to be developed for firearms and toolmark identifications.

60. According to the Forensic Science Report, "the decision of the toolmark examiner remains a subjective decision based on unarticulated standards and no statistical foundation for estimation of error rates" (supra at 153-54 [footnote omitted]). "Forensic science reports, and any courtroom testimony stemming from them, must include clear characterizations of the limitations of the analyses, including associated probabilities where possible. … In order to enable this, research must be undertaken to evaluate the reliability of the steps of the various identification methods and the confidence intervals associated with the overall conclusions" (Id. at 186).


61. Moreover, the Ballistic Imaging Report concluded that:

Conclusions drawn in firearms identification should not be made to imply the presence of a firm statistical basis where none has been demonstrated. Specifically, … examiners tend to cast their assessments in bold absolutes, commonly asserting that a match can be made 'to the exclusion of all other firearms in the world.' Such comments cloak an inherently subjective assessment of a match with an extreme probability statement that has no firm grounding and unrealistically implies an error rate of zero. (supra, at 82). 18

62. Some firearms and toolmark examiners concede that identity statements are probabilistic, but nonetheless maintain that reliable conclusions can be reached through the traditional, subjective approach of relying on inarticulable, mind's eye criteria. For example, notwithstanding his avowed recognition that identity statements are not absolute, Ronald Nichols recently stated that "[w]hile oft criticized, the concept of 'I know a match when I see it' has its basis in [firearms and tool mark examiners' extensive] training" (Defending the Scientific Foundations, at 589).

63. To the contrary, the NRC Forensic Science Report explained that extensive empirical and statistical work is needed to support identity conclusions, and that the requisite work has not been done for any forensic science except nuclear DNA identification. "The determination of uniqueness requires measurements of object attributes, data collected on the population frequency of variation in these attributes, testing of attribute independence, and calculations of the probability that different objects share a common source of observable attributes" (supra, at 44). "[N]o forensic method other than nuclear DNA analysis has been rigorously shown to have the capacity to consistently and with a high degree of certainty support conclusions about 'individualization' (more commonly known as 'matching' of an unknown item of evidence to a specific known source)" (supra at 87).

18 See also id. at 55 ("Ultimately, as firearms identification is currently practiced, an examiner's assessment of the quality and quantity of resulting toolmarks and the decision of what does or does not constitute a match comes down to a subjective determination based on intuition and experience. By contrast, DNA analysis is practically unique among forensic science specialties as having a strong objective basis for determination and as being amenable to formal probability statements."); United States v. Glynn, 578 F.Supp.2d 567, 571 (S.D.N.Y. 2008) (concluding that firearms and toolmark identification "lacked the rigor of science and that, whatever else it might be, its methodology was too subjective to permit opinions to be stated to 'a reasonable degree of ballistic certainty'").

64. The Report made it clear that these criticisms extend to firearms and toolmark identification.

Toolmark and firearms analysis suffers from the same limitations [as other types of] impression evidence. Because not enough is known about the variabilities among individual tools and guns, we are not able to specify how many points of similarity are necessary for a given level of confidence in the result. Sufficient studies have not been done to understand the reliability and repeatability of the methods. … A fundamental problem with toolmark and firearms analysis is the lack of a precisely defined process [for reaching identification conclusions]. … Overall, the process for toolmark and firearms comparison lacks the specificity of the protocols for, say, 13 STR DNA analysis. This is not to say that toolmark analysis needs to be as objective as DNA analysis in order to provide value. … But the protocols for DNA analysis do represent a precisely specified, and scientifically justified, series of steps that lead to results with well-characterized confidence limits, and that is the goal for all the methods of forensic science. (Id. at 154-55). 19

19 See also A. Biedermann, S. Bozza & F. Taroni, Decision Theoretic Properties of Forensic Identification: Underlying Logic and Argumentative Implications, 177 Forens. Sci. Internat'l 120, 128 (2008) (criticizing Nichols for suggesting "that the study of the structure of arguments (e.g., based on models for statistical treatment) is not applicable to all forensic disciplines"); United States v. Mouzone, 2009 WL 3617748 at *20 (concluding that "there is no meaningful distinction between a firearms examiner saying that 'the likelihood of another firearm having fired these cartridges is so remote as to be considered a practical impossibility' and saying that his identification is 'an absolute certainty.' Neither is justified based on the testimony at the hearing or the literature and cases reviewed and discussed in the Report and Recommendation …."); United States v. Taylor, 663 F.Supp.2d 1170, 1180 (D.N.M. 2009) (precluding Nichols from testifying that "his methodology allows him to reach his conclusion as a matter of scientific certainty," or "that he can conclude that there is a match to the exclusion, either practical or absolute, of all other guns."); id. at 1179 (stating that "several significant criticisms have been levied against the field [of firearms and toolmark identification]. These criticisms are serious enough that Mr. Nichols himself has felt compelled to defend his craft in writing. They are also serious enough that courts have increasingly paid attention to them."); United States v. Lape, slip copy, 2010 WL 909756 (S.D. Ohio March 11, 2010) ("The Court is aware … that the 'science' of matching toolmarks to shell casings and excluding all other weapons but the one tested as possibly having made those marks is not without its critics. See, e.g., United States v. Green, 405 F.Supp.2d 104 (D. Mass. 2005); see also United States v. Mouzone, 2009 WL 3617748, *20 (D.Md. October 29, 2009) …. Without further evidence, the Court is unable to determine if the report to which Officer Orick referred would even be admissible were Mr. Lape to be tried for having fired his gun into the home in question.").

65. The NRC Forensic Science Report's criticisms of the traditional, "I know it when I see it" approach to forensic identification are similar to those that some forensic scientists have advanced. Biasotti stated in 1964 that when firearms and toolmark examiners follow the traditional, subjective approach, they implicitly admit that "we lack necessary statistical data which would permit us to formulate precise criteria for distinguishing between identity and nonidentity with a reasonable degree of certainty" (The Principles of Evidence Evaluation as Applied to Firearms and Tool Mark Identification, 9 J. Forens. Sci. 428, 430 [1964] [hereinafter "Principles of Evidence Evaluation"]).

66. Other examiners have criticized the subjective approach for conflicting with the scientific value of "as far as possible, support[ing one's] opinion by reference to logical reasoning and an established corpus of scientific knowledge" (Christophe Champod & Ian W. Evett, A Probabilistic Approach to Fingerprint Identification Evidence, 51 J. Forensic Identification 101, 106 [2001] [labeling this value "transparency"]; C. Champod, D. Baldwin, F. Taroni, & J.S. Buckleton, Firearm and Tool Marks Identification: The Bayesian Approach, 35[3] AFTE J. 307, 310-11 [2003] [implying that the subjective approach to firearms and toolmark identification conflicts with the scientific value of transparency]). They have urged forensic scientists to move away from "the stereotype [of] the distinguished, greying individual on the stand saying, 'my opinion is based on my many years of experience in the field.'" (Champod & Evett, supra, at 106).

67. The AFTE Theory of Identification does nothing to cure the absence of statistical empirical foundations for firearms and toolmark identifications. The Theory states that there is an exceedingly small likelihood that any tool besides the suspect tool produced the evidence toolmark(s) when the observed agreement between test and evidence toolmarks is superior to that of the best known non-match and consonant with that of the best known match (Theory of Identification as it Relates to Toolmarks, supra, at 86).

68. However, because "there is no universal agreement as to how much correspondence exceeds the best known nonmatching situation", the AFTE Theory provides examiners with no objective guidance about when to declare a match (Nichols, Defending the Scientific Foundations, supra, at 589). 20
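The criticism in paragraphs 67 and 68 can be made concrete with a short, hypothetical sketch of an AFTE-style "sufficient agreement" decision. This is not AFTE's protocol (AFTE publishes no quantitative protocol, which is the point); every threshold below is an assumption standing in for an individual examiner's "mind's eye" experience.

```python
# Hypothetical sketch of an AFTE-style "sufficient agreement" decision rule.
# The thresholds stand in for an examiner's personal experience, which is
# exactly what the affidavit says is never quantified or standardized.

def afte_style_conclusion(observed_agreement,
                          examiner_best_known_nonmatch,
                          examiner_typical_known_match):
    """Return 'identification', 'inconclusive', or 'elimination'.

    observed_agreement:           score for the test/evidence comparison
    examiner_best_known_nonmatch: best agreement this examiner has ever seen
                                  between marks from DIFFERENT tools
    examiner_typical_known_match: agreement this examiner expects between
                                  marks from the SAME tool
    """
    if (observed_agreement > examiner_best_known_nonmatch
            and observed_agreement >= examiner_typical_known_match):
        return "identification"
    if observed_agreement > examiner_best_known_nonmatch:
        return "inconclusive"
    return "elimination"

# Two examiners with different training histories (see Nichols's concession about
# regional differences) can reach different conclusions on the same comparison:
print(afte_style_conclusion(0.72, examiner_best_known_nonmatch=0.60,
                            examiner_typical_known_match=0.70))  # identification
print(afte_style_conclusion(0.72, examiner_best_known_nonmatch=0.68,
                            examiner_typical_known_match=0.80))  # inconclusive
```

Because both thresholds live only in the examiner's head, the rule is circular in just the way Monteiro and Taylor describe: "sufficient agreement" is whatever a given examiner deems sufficient.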

69. The NRC Forensic Science Report detailed the problems with the AFTE Theory.

A fundamental problem with toolmark and firearms analysis is the lack of a precisely defined process. … AFTE has adopted a theory of identification, but it does not provide a specific protocol. … The meaning[s] of 'exceeds the best agreement' and 'consistent with' are not specified, and the examiner is expected to draw on his or her own experience. This AFTE document, which is the best guidance available for the field of toolmark identification, does not even consider, let alone address, questions regarding variability, reliability, repeatability or the number of correlations needed to achieve a given degree of confidence. (Supra at 155). 21

20 See United States v. Monteiro, 407 F.Supp.2d at 369-70 (recognizing that the AFTE Theory "leaves much to be desired. … it is not a numeric or statistical standard, but is based on the individual examiner's expertise" and criticizing the Theory for being "tautological: it requires each examiner to decide when there is 'sufficient agreement' of toolmarks to constitute an 'identification'"); United States v. Taylor, 663 F.Supp.2d 1170, 1177 (D.N.M. 2009) (citing Monteiro for the proposition that "the AFTE theory is circular. An examiner may make an identification when there is sufficient agreement, and sufficient agreement is defined as enough agreement for an identification.").

70. The problems with the AFTE Theory are epitomized by Ronald Nichols's acknowledgement that examiners in different parts of the United States are likely to develop different conceptions of the "best known non-match." This implies that the AFTE Theory of Identification is likely to yield different conclusions, depending on where examiners work. 22

21 See United States v. Mouzone, 2009 WL 3617748 at *14 n.14 ("Despite this pointed expression of concern by the NRC regarding a fundamental shortcoming in assessing the reliability of toolmark evidence, proponents of the AFTE methodology appear to be at a loss as to how to address it, other than by dismissing it.").

22 See Nichols, Defending the Scientific Foundations, supra, at 590 ("For example, an examiner in California has access to certain training materials with comparing known nonmatches that establish a baseline correspondence. It is very likely that an examiner in the Northeast has different materials and will therefore develop a different experiential concept of the best-known nonmatch."). Cf. United States v. Mouzone, 2009 WL 3617748 at *14 n.14 (stating that "the AFTE's most vocal supporter, Nichols of the ATF Bureau, attempts to address the NRC's concern about the lack of specificity in determining when 'sufficient agreement' exists by acknowledging it ('there is no universal agreement as to how much correspondence exceeds the best-known nonmatching situation,' Nichols, supra [Defending the Scientific Foundations], at 589) but then attempting to minimize it ('in practice this limitation is not as significant as critics contend,' id.)").

DISAGREEMENT WITHIN THE FIELD OF FIREARM & TOOLMARK EXAMINATION

71. By contrast to examiners who follow the traditional, "I know it when I see it" approach, some firearms and toolmark examiners base their identity determinations on the CMS (consecutive matching striae) criterion propounded by Biasotti and Murdock in 1997. 23

72. The rivalry between CMS and the traditional, subjective approach was emphasized by Stephen G. Bunch, Unit Chief of the FBI firearms and toolmark laboratory, in Consecutive Matching Striation Criteria: A General Critique (45[5] J. Forens. Sci. 955 [2000]). According to Bunch:

Since Al Biasotti conducted his original identification-criteria research in the 1950's, the debate over the relative virtues of objective and subjective methods in forensic firearms identification – specifically over the virtues of counting consecutive matching striations on bullets – has blown hot and cold. Recently, the debate has heated up, in part owing to the Supreme Court's decision in Daubert v. Merrell Dow Pharmaceuticals, Inc. ... [I]n view of its increasing popularity, this paper sets out to critically examine consecutive matching striation (CMS) models …. (Id. at 955).

73. Many other examiners acknowledge the fundamental disagreement, within the field of firearms and toolmark identification, as to the relative merits of CMS and the traditional, subjective approach. 24

23 See Biasotti, Murdock & Moran, supra, at 621. The development of CMS was motivated by the recognition of "[a]n almost complete lack of factual and statistical data pertaining to the problem of establishing identity in the field of firearms identification …." Biasotti, A Statistical Study, supra, at 34. CMS was intended to be a scientifically superior alternative to the traditional, subjective approach that many examiners follow to this day. See Biasotti, Principles of Evidence Evaluation, supra, at 429 ("If we accept the present apparent state of development as adequate and believe that no objective statistical data for establishing identity can be developed, then the subject of firearms and tool mark identification will remain essentially an art limited by the intuitive ability of individual practitioners.").

74. Recently, Ronald Nichols has departed from the widespread perception of a disciplinary divide between proponents of CMS and those who adhere to the traditional, "I know it when I see it" approach. He has insisted that "CMS is not a more objective way of performing examinations but simply a means by which an examiner can describe what he or she is observing in a striated toolmark comparison" (Nichols, Defending the Scientific Foundations, supra, at 590). At the same time, Nichols has described CMS as an attempt "to standardize the concept of the best-known nonmatch discipline-wide" (Id.).

75. These two descriptions of CMS cannot both be true, given Nichols's own recognition that under the traditional approach, "difference[s] between examiners as to what constitutes the best-known non-match situation" make it "not surprising" and "not necessarily unexpected" for examiners to disagree about whether an inconclusive or an identification is the proper conclusion in a particular case (Transcript of hearing, United States v. Diaz [N.D. Cal. January 24, 2007] at 51; Ronald Nichols, A Response to Recent Challenges, supra, at 26).

76. If, as Nichols claims, CMS is not "a different method than has been practiced throughout the years" (Defending the Scientific Foundations, supra, at 590), the CMS identification criterion must be such a malleable standard that when examiners disagree, as they do under the traditional approach, they each can manipulate CMS to show that they are right. To contribute to standardization, however, the CMS criterion must be inflexible enough to settle disagreements that arise under the traditional approach.

24 See, e.g., Champod, Baldwin, Taroni, Buckleton, supra, at 311-15; Kevan Walsh & Gerhard Wevers, Toolmark Identification: Can We Determine a Criterion?, 29 Interfaces 4-5 (Jan.-Mar. 2002), available at http://www.forensic-science-society.org.uk/inter29pdf; Bruce Moran, Comments and Clarification of Responses from a Member of the AFTE 2001 Criteria for Identification of Toolmarks Discussion Panel, 35(1) AFTE J. 55, passim (Winter 2003). See also United States v. Glynn, 578 F.Supp.2d at 574 & n.12 (referring to CMS and stating that "[a]lthough attempts have been made to introduce … minimum standards and 'protocols' into ballistics analysis, such attempts have not yet met with general acceptance ….").

77. Nichols to the contrary, CMS is most favorably viewed as an attempt to use statistical empirical studies to formulate a cut-off number of consecutive matching striae at which the likelihood that another tool would produce toolmarks that do as good a job at matching the evidence toolmark as the toolmarks produced by the suspect tool is so exceedingly small that, for all practical purposes, the suspect tool can be identified as the unique source of the evidence toolmark.
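Because the CMS criterion is, at bottom, a counting rule, it can be illustrated compactly. The encoding of striae, the example patterns, and the six-striae cutoff below are illustrative assumptions only; no endorsement of any particular published cutoff is intended.

```python
# Illustrative sketch of a consecutive-matching-striae (CMS) count.
# Striation patterns are encoded as strings of 'P' (peak) and '.' (valley);
# the encoding and the cutoff are assumptions made for this example only.

def max_consecutive_matching_striae(mark_a: str, mark_b: str) -> int:
    best = run = 0
    for a, b in zip(mark_a, mark_b):
        if a == b == 'P':          # a line (stria) present in both marks
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best

evidence = "P.PPPPPP..P.PP.P"
test_fire = "P.PPPPPP.PP..P.P"

ASSUMED_CUTOFF = 6  # hypothetical threshold for a single group of striae
cms = max_consecutive_matching_striae(evidence, test_fire)
print(cms, "consecutive matching striae;",
      "meets cutoff" if cms >= ASSUMED_CUTOFF else "below cutoff")
```

Even this mechanical count leaves open the prior, subjective question of what counts as a "line" in the first place, which is the dispute noted in paragraph 81 below.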

78. In accord with this, the NRC Forensic Science Report stated that, "Recent research has attempted to develop a statistical foundation for assessing the likelihood that more than one tool could have made specific marks by assessing consecutive matching striae, but this approach is used in a minority of cases" (supra at 154 n.63. See also, e.g., Champod, Baldwin, Taroni, & Buckleton, supra, at 310-11; Walsh & Wevers, supra, at 5).

79. Although CMS is superior to the traditional, subjective approach in that it is at least an attempt to develop statistical empirical foundations for firearms and toolmark identification, it is crucial to recognize that CMS is a highly imperfect attempt. 25 Major problems, which even its supporters recognize, are that the CMS identification criterion applies only to striated toolmarks, not to impression toolmarks such as firing pin impressions or breech face marks (see, e.g., Kristen A. Tomasetti, Analysis of the Essential Aspects of Striated Toolmark Examination and the Methods for Identification, 34[3] AFTE J. 289, 298 [Summer 2002]; Ronald Nichols, Consecutive Matching Striae [CMS], 35[3] AFTE J. 298, 305 [2003]).

80. In addition, the CMS criterion is intended to be applied to individual, rather than subclass, characteristics of toolmarks. Misidentifications will result if, in applying the criterion, examiners mistake subclass characteristics for individual characteristics. CMS does nothing to decrease the difficulty of distinguishing between subclass and individual characteristics (see, e.g., Jerry Miller, Criteria for Identification of Toolmarks Part II, 32 [2] AFTE J. 116, 127 [2000]; Walsh & Wevers, supra, at 4; Biasotti, Murdock & Moran, supra, at 621).

81. There are ongoing scientific disputes about whether the objectivity of CMS is undermined by the fact that "line counting is inherently a subjective process" (Tomasetti, supra, at 298 [emphasis added]; see also Charles Meyers, Some Basic Bullet Striae Considerations, 34[2] AFTE J. 158, 158-59 [Spring 2002]; Bunch, supra, at 959; Werner Deinet, Comments on the Application of Theoretical Probability Models including Bayes Theorem in Forensic Science Relating Firearms and Tool Marks, 39[1] AFTE J. 4, 6 [Winter 2007] ["Very often, two independent experts will get different results concerning the total number of striae and the number of matching striae"]).

82. Another major scientific dispute pertains to whether the CMS criterion has been derived from relevant and representative databases (see Walsh & Wevers, supra, at 4 ["Criticism may now focus on the applicability of transferring the (CMS) criterion from one set of class features to another. Is the method determined by studies of bullet striae also applicable to screwdriver striae or even within sets of bullets of varying quality toolmarks?"]; cf. Champod, Baldwin, Taroni, & Buckleton, supra, at 313 ["What we really needed was the highest number of CMS noted in the whole bullet. We cannot see how to get this data from Mr. Biasotti's paper …"]).

25 Cf. David Howitt, Fred Tulleners, Karen Cebra & Shiahn Chen, A Calculation of the Theoretical Significance of Matched Bullets, 53 (4) J. Forens. Sci. 868 (2008) ("The idea that there is an absolute cut-off in terms of the number of consecutively corresponding striae that …").

83. These scientific reservations about CMS contrast with Bunch's position that because the attempt to develop statistical empirical foundations for firearms and toolmark identification would invite scrutiny by the broader scientific community, "the benefit of the doubt should go to the traditional methods" (Bunch, supra, at 962). Bunch went on to say that:

Indeed, the firearm-toolmark community could commit a long-term mistake by underestimating the scope of the research required to truly validate a CMS regime. As the history of DNA validation has revealed, hard scientific research and the attendant statistical analyses can elicit equally hard questions that scrutinize every level of analysis and interpretation. … The point is, if CMS criteria are widely adopted, we can rest assured that tough questions eventually will be raised in court, possibly by high-powered scientists and statisticians hired by opposing counsel. (Id. at 959-60).

84. Contrary to Bunch's negative opinion of the scientific challenges that were brought against forensic DNA in court, the NRC Forensic Science Report found that "no forensic method other than nuclear DNA analysis has been rigorously shown to have the capacity to consistently and with a high degree of certainty support conclusions about 'individualization'," and that one reason for the superiority of forensic DNA is that "[f]rom the beginning, eminent scientists contributed their expertise to ensuring that DNA evidence offered in a courtroom would be valid and reliable" (supra at 87, 99).

85. According to the Report, the vigorous courtroom challenges and the two reports that the NRC issued on forensic DNA in 1992 and 1996 contributed to scientific progress. "As a result, principles of statistics and population genetics that pertain to DNA evidence were clarified, the methods for conducting DNA analysis and declaring a match became less subjective, and quality assurance and quality control protocols were designed to improve laboratory performance" (supra at 99).

86. In his article, Bunch also cautioned against attempts to develop statistical empirical foundations on the ground that firearms and toolmark examiners "may fail to understand or appreciate the research and the logic of interpreting this type of evidence. Thus they may find it difficult to explain them to judge and jury. … [This] could be a blow to the profession and to the administration of justice" (Id. at 960).

87. Contrary to Bunch, former forensic scientist Andre A. Moenssens deplored the facts that "[m]any of the witnesses who testify as experts for the prosecution are not truly scientists, but better fit the label of 'technicians'" and that "[s]ometimes these experts, trained in one forensic discipline, have little or no knowledge of the study of probabilities, and never even had a college level course in statistics" (Novel Scientific Evidence in Criminal Cases: Some Words of Caution, 84 J. Crim. L. & Criminology 1, 5 & 19 [1993]; see also NRC Forensic Science Report, supra, at 16-17, 79 [recognizing the need for improvements in forensic science education]).

88. In addition to their vast disagreements about identity criteria, firearms and toolmark examiners may disagree about whether to reach an identification conclusion in a particular case. Not only different firearms and toolmark examiners, but also the same examiner over the course of time, are likely to have different mind's eye identification criteria and therefore to reach different conclusions in regard to the same evidence. 26

26 See Erich D. Smith, Cartridge Case and Bullet Comparison Study with Firearms Submitted in Casework, 36(4) AFTE J. 130 (2004) ("Despite the goal for the threshold which examiners develop during training to be consistent among qualified examiners;[sic] the subjective point for an identification for qualified examiners may not be equal for any given group of examiners, and over time, the threshold of an individual examiner may change.").

89. Ronald Nichols has stated that when examiners disagree about the conclusion warranted in a particular case, the court should recognize that the discipline lacks standards for resolving such disputes and therefore "appropriately assign… the task of weight to the jury" (A Response to Recent Challenges, supra, at 26).

THE FIELD OF FIREARM EXAMINATION DOES NOT HAVE A STATISTICALLY ESTABLISHED ERROR RATE

90. The absence of agreed-upon, objective criteria for resolving disputes about whether identification conclusions are warranted in a particular case means that a day-to-day error rate cannot be calculated for the discipline of firearms and toolmark identification. 27

91. An accurate estimate of day-to-day error rates cannot be derived from the only widely used proficiency tests, the Collaborative Testing Services ("CTS") tests. Contrary to Nichols's position that the CTS test results "can offer to the court a reliable practical indicator of how often the profession, using accepted procedure, practices, and controls, makes a false identification," CTS itself cautions that the results on its tests "are not intended to be an overview of the quality of work performed in the profession and cannot be interpreted as such." 28

27 See United States v. Diaz, Slip Copy, NO. CR 05-00167 WHA, at *9 (N.D. Cal. Feb 12, 2007) ("No true error rate will ever be calculated so long as the firearm-examiner community continues to rely on the subjective traditional pattern matching method of identification."); Richard Grzybowski, Jerry Miller, Bruce Moran, John Murdock, Ron Nichols & Robert Thompson, Firearm Toolmark Identification: Passing the Reliability Test under Federal and State Evidentiary Standards, 35 (2) AFTE J. 209, 219 (Spring 2003).

28 See Nichols, Defending the Scientific Foundations, supra, at 592; CTS, Firearms Examination Test No. 07-526 Summary Report at 1, http://www.collaborativetesting.com/reports/2726_web.pdf (2007). See also United States v. Mouzone, 2009 WL 3617748 at *14 n.14 (summarizing criticisms of the CTS tests and stating that "It would be easier to accept Nichol's [sic] assurances that periodic proficiency testing adequately will ensure against erroneous identifications if there were not so many concerns about the effectiveness of the proficiency test."); Transcript of hearing, United States v. Brown, No. 05 Cr. 538 at 1193 (S.D.N.Y. June 13, 2008) (Judge Rakoff concludes, on the basis of testimony about the problems with the CTS tests, that "there is no known error rate in any well developed sense" for firearms and toolmark identification).

92. The authors of the classic study of proficiency testing of forensic scientists, Joseph L. Peterson and Penelope Markham, explained in 1995 that there are two fundamental problems with using results on the CTS tests as a proxy for day-to-day error rates. First, the CTS tests are declared, rather than blind. Second, the tests present examiners with simpler problems than they encounter in actual casework. 29

93. Some firearms and toolmark examiners who took the CTS tests have criticized the tests for being too easy. 30
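For readers unfamiliar with how an error rate would even be computed from proficiency data, the sketch below shows the basic arithmetic on invented counts. The numbers are hypothetical assumptions; the point, echoed throughout these materials, is that the resulting rate depends heavily on whether "inconclusive" answers on known matches and known non-matches are treated as errors, and on whether the test resembles real casework at all.

```python
# Hypothetical proficiency-test tally; every count is invented for illustration.
results = {
    ("known_match", "identification"): 94,
    ("known_match", "inconclusive"):    4,
    ("known_match", "elimination"):     2,    # false negatives
    ("known_nonmatch", "elimination"): 90,
    ("known_nonmatch", "inconclusive"): 9,
    ("known_nonmatch", "identification"): 1,  # false positives
}

total = sum(results.values())
false_pos = results[("known_nonmatch", "identification")]
false_neg = results[("known_match", "elimination")]
inconclusive = (results[("known_match", "inconclusive")]
                + results[("known_nonmatch", "inconclusive")])

rate_ignoring_inconclusives = (false_pos + false_neg) / total
rate_counting_inconclusives = (false_pos + false_neg + inconclusive) / total

print(f"Error rate if inconclusives are free: {rate_ignoring_inconclusives:.1%}")
print(f"Error rate if inconclusives count:    {rate_counting_inconclusives:.1%}")
```

On these invented numbers the two conventions yield 1.5% versus 8.0%, which is why a single quoted "error rate" for the discipline, without a stated convention and a blind, casework-like test design, tells a court very little.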

94. Although a discipline-wide error rate cannot be calculated for firearms and toolmark identification, in spring 2008, the Detroit police chief temporarily shut down the Detroit Police Firearms Unit laboratory and ordered an audit of its operations after errors were found in a double homicide case. In September 2008, the laboratory was permanently shut down after the audit by the Michigan State Police Forensic Science Division found that the Detroit laboratory had made misidentifications in three out of thirty-three adjudicated cases from the Wayne County Prosecutor's Office (Nick Brinkley, "Detroit Police Lab Is Closed After Audit Finds Serious Errors in Many Cases," The New York Times, September 25, 2008, available at http://www.nytimes.com/2008/09/26/us/26detroit.html).

29 See Peterson and Markham, Crime Laboratory Proficiency Testing Results, 1978-1991, I, 40 J. Forensic Sci. 994, 997 (1995) ("Peterson & Markham I"). See also NRC Forensic Science Report, supra, at 207 (stating that ASCLD/Lab recommends blind proficiency testing "as a more precise test of a worker's accuracy"); Grzybowski et al., supra, at 219 ("Since large numbers of tests have to be produced [by CTS] to uniform requirements, most tend to be rather straightforward and of only moderate difficulty."); Biasotti, Murdock & Moran, supra, at 611-12.

30 See, e.g., CTS, Firearms Examination Test No. 06-526 Summary Report, at 42, http://www.collaborativetesting.com/reports/2626_web.pdf (2006) (comment that the 2006 CTS cartridge case test "was straight forward and very easy. It took only a few minutes to make correct associations using toolmarks devoid of sub-class influence. … I suggest that you consider making the test more of a challenge in order to determine an error rate really reflective of actual casework where borderline cases are not uncommon"); Firearms Examination Test No. 05-527 Summary Report, at 21, http://www.collaborativetesting.com/reports/2527_Web.pdf (2005) (test taker's comment that the CTS firearms identification test administered in 2005 "was very easy. The class features of the firing pin in the firearm used to discharge Items 1 and 3 [the cartridge cases not fired from the suspect gun] was [sic] so different it could be eliminated virtually with the naked eye.").

95. According to the Preliminary Audit Report, "In total, this equates to approximately 10% of the completed firearms cases having significant errors. On average, the DPD firearms unit analyzes 1,800 cases per year. If this 10% error rate holds, the negative impact on the judicial system would be substantial, with a strong likelihood of wrongful convictions and a valid concern about numerous appeals" 31
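The audit's "approximately 10%" figure appears to follow from the counts quoted above, and reading it together with the unit's stated caseload gives a rough sense of scale. Both steps below are this editor's back-of-the-envelope reading, not calculations that appear in the audit report itself:

\[
\frac{3}{33} \approx 0.091 \approx 10\%, \qquad 0.10 \times 1{,}800 \ \text{cases/year} \approx 180 \ \text{potentially affected cases per year}.
\]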

96. In December 2008, the Wayne County prosecutor responded to the audit of the DPD Firearms Unit by creating a Forensic Evidence Review Unit to re-evaluate every case involving firearms evidence from 2003 on. In addition, the prosecutor offered to have firearms evidence retested in any other cases in which defense attorneys, defendants, inmates or their families made a request. The prosecutor expected the review process to take at least three years. 32

31 Michigan State Police Forensic Science Division, Detroit Police Firearms Unit Preliminary Audit Findings as of September 23, 2008, at 3, available at http://www.policeissues.com/Detroit_PD_crime_lab.pdf. The same 10% error rate was reported in the final audit report issued in October 2008. Naomi R. Patton, "Worthy Blasts Detroit Police for Audit's Findings on Crime Lab," Detroit Free Press, October 29, 2008, available at http://www.sado.org/crimelab/Worthy%20blast%20Detroit%20police%2010-29-08%20DFP.pdf. Cf. NRC Forensic Science Report, supra, at 46 ("The insistence by some forensic practitioners that their disciplines employ methodologies that have perfect accuracy and produce no errors has hampered efforts to evaluate the usefulness of the forensic science disciplines."); United States v. Glynn, 578 F.Supp.2d at 574 (deploring "the tendency of ballistics experts … to make assertions that their matches are certain beyond all doubt, that the error rate of their methodology is 'zero,' and other such pretensions"); Williams v. Quarterman, 551 F.3d 352, 355-56 (5th Cir. 2008) (Houston Police Department firearms and toolmark examiner mistakenly concluded that the defendant's .25-caliber pistol, rather than the cooperating witness's .22-caliber pistol, had fired the bullet recovered from the decedent's head).

THE OBSTACLE OF "CONFIRMATION BIAS"

97. An additional severe problem with firearms and toolmark identification is confirmation bias. 33

98. Judge Gertner's description of the examination in Green is true of many, if not most, firearms and toolmark examinations conducted in the United States. "The only weapon [the examiner] was shown was the suspect one; the only inquiry was whether the shell casings found earlier matched it. It was, in effect, an evidentiary 'show-up,' not what scientists would regard as a 'blind' test. [The examiner] was not asked to try to match the casings to the other test-fired Hi Point weapons in police custody, or any other gun for that matter, an examination more equivalent to an evidentiary 'line-up.' His work was reviewed by another officer, who did the same thing -- checked his conclusions under the same conditions -- another evidentiary 'show-up'" (405 F.Supp.2d at 107-08; see also United States v. Taylor, 663 F.Supp.2d 1170, 1179 [D.N.M. 2009] ["The problem with this practice is the same kind of problem that has troubled courts with respect to show-up identifications of people: it creates a potentially significant 'observer effect' whereby the examiner knows that he is testing a suspect weapon and may be predisposed to find a match"]; cf. Evan Thompson & Jan De Kinder, Range of Exclusions, 38[1] AFTE J. 51 [Winter 2006] ["Rather than exclude a particular firearm or tool, most firearm/toolmark examiners probably find it far easier to include by reporting that 'this bullet was fired from a particular firearm' or 'this screwdriver was responsible for the striated toolmark left on the door'"]).

32 Carol Lundberg, "Plan to Retest Forensic Evidence for Mich. Prisoners Will Take At Least 3 Years," Michigan Lawyers Weekly, December 20, 2008, available at http://www.correctionsone.com/news/1768052-Plan-to-retest-forensic-evidence-for-Michprisoners-will-take-at-3-years; Ben Schmitt, "Worthy Creates Unit to Review Gun Cases," Detroit Free Press, December 13, 2008, available at http://www.freep.com/article/20081213/NEWS02/812130333.

33 See, e.g., Michael Risinger, Michael J. Saks, William C. Thompson & Robert Rosenthal, The Daubert/Kumho Implications of Observer Effects in Forensic Science: Hidden Problems of Expectation and Suggestion, 90 Calif. L. Rev. 1 (2002) (discussing confirmation bias and other cognitive biases that infect forensic scientists' day-to-day work); NRC Forensic Science Report, supra, at 191 ("Recommendation 5: The [proposed] National Institute of Forensic Science (NIFS) should encourage research programs on human observer bias and sources of human error in forensic examinations."). Cf. Itiel E. Dror & David Charlton, Why Experts Make Errors, 56(4) J. Forens. Sci. 600 (2006) (reporting experiments in which, when presented with biasing contextual information, two-thirds of experts changed the decisions that they had previously reached in regard to fingerprint identification in real criminal cases).

99. Confirmation bias also arises from firearms and toolmark examiners' practice of peer reviewing work only when identifications are reached. "[I]f the expert doing the check only ever checks positive matches, then his perception will be that whenever he sits at the microscope to conduct a peer review of casework, he will expect to see a positive match!" (Gerard Dutton, Commentary: Ethics in Forensic Firearms Investigation, 37[2] AFTE J. 79, 82 [Spring 2005]; see also Commonwealth of Massachusetts v. Meeks and Warner, 2006 WL 2819423 [Mass. Super. Sep 28, 2006] ["Ideally, (peer review) would be 'blind' and done with respect to every examination"]).
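The "show-up" versus "line-up" distinction drawn by Judge Gertner, and the blind review urged by Dutton, can be captured in a small protocol sketch. The code below is a hypothetical illustration of how a laboratory might assemble a blind verification set; it is not drawn from any existing laboratory's procedure, and the labels and filler count are assumptions.

```python
# Hypothetical sketch of a blind, "line-up"-style verification protocol.
# Sample labels and the filler count are assumptions made for illustration.
import random

def build_blind_lineup(suspect_test_fires, filler_test_fires, n_fillers=5):
    """Mix the suspect gun's test fires with fires from other same-model guns,
    relabel everything with neutral codes, and keep the answer key separate so
    the verifying examiner does not know which samples came from the suspect gun."""
    pool = [("suspect", m) for m in suspect_test_fires]
    pool += [("filler", m) for m in random.sample(filler_test_fires, n_fillers)]
    random.shuffle(pool)
    key = {}
    blinded = []
    for i, (source, mark) in enumerate(pool, start=1):
        code = f"ITEM-{i:02d}"
        key[code] = source          # held by someone other than the examiner
        blinded.append((code, mark))
    return blinded, key

# Usage sketch: the examiner receives only `blinded`; `key` stays sealed.
suspect_fires = ["S1", "S2", "S3"]
filler_fires = [f"F{i}" for i in range(1, 11)]
blinded, key = build_blind_lineup(suspect_fires, filler_fires)
print([code for code, _ in blinded])
```

Under the practice the affidavit describes, neither step occurs: both the original examiner and the verifier know in advance which gun is the suspect's and that an identification has already been declared.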

100. A danger of cognitive bias also arises from the close working relationships that firearms and toolmark examiners and other forensic scientists have with law enforcement and the prosecution. The NRC Forensic Science Report found that "[f]orensic scientists who sit administratively in law enforcement agencies or prosecutors' offices, or who are hired by those units, are subject to a general risk of bias" (supra, at 185).

101. Accordingly, one of the principal recommendations of the Report was that Congress provide financial incentives for states and localities to make their forensic laboratories independent:

Recommendation 4: To improve the scientific bases of forensic science examinations and to maximize independence from or autonomy within the law enforcement community, Congress should authorize and appropriate incentive funds to the National Institute of Forensic Science (NIFS) for allocation to state and local jurisdictions for the purpose of removing all public forensic laboratories and facilities from the administrative control of law enforcement agencies or prosecutors' offices. (NRC Forensic Science Report, supra, at 190).

102. On the basis of the NRC Report, Justice Scalia reasoned, in Melendez-Diaz v. Massachusetts, that "[a] forensic analyst responding to a request from a law enforcement official may feel pressure - or have an incentive - to alter the evidence in a manner favorable to the prosecution" (129 S.Ct. at 2536).

FIREARM & TOOLMARK EXAMINERS' TESTS AND STUDIES ARE NOT SUBJECT TO SCIENTIFIC PEER REVIEW

103. Scrutiny of firearms and toolmark identification by the broader scientific community has been stymied by a lack of ready access to firearms and toolmark examiners' publications. 34

34 Cf. NRC Forensic Science Report, supra, at 42 ("The fact is that many forensic tests – such as those used to infer the source of toolmarks or bite marks – have never been exposed to stringent scientific scrutiny."); Jay D. Aronson, GENETIC WITNESS: SCIENCE, LAW AND CONTROVERSY IN THE MAKING OF DNA PROFILING (2007) (explaining how input from the broader scientific community led to the establishment of firmer scientific foundations for forensic DNA). The vast majority of articles by firearms and toolmark examiners are published in the AFTE Journal, which is not available on-line to the public, even at a cost. On-line access to the AFTE Journal is restricted to AFTE members, who must be "practicing firearm and/or toolmark examiners [or] student trainees in the profession of firearm and/or toolmark identification." The sole exceptions are Honorary Memberships "conferred upon individuals in recognition of distinguished service to the Association or to the field of firearm and/or toolmark examination" and "Technical Advisor" memberships conferred upon "[d]esignated employees of manufacturers of products used or encountered in the investigation of firearm or toolmark evidence, or specialists in closely related fields, whose area of expertise would be beneficial to the Association." See AFTE Membership Information, at http://www.afte.org/Membership/membership.htm; AFTE News, at www.afte.org ("Monday October 13, 2008 AFTE Journals Online- The AFTE Journals previously available on CD ROM have been converted to PDF files and can now be downloaded by AFTE members via the Journal Search page").

104. Although non-members may subscribe to the paper version, the AFTE Journal is found in only the John Jay College of Criminal Justice library and one other library on the East Coast. In the Bay Area, the nearest library with the AFTE Journal is the U.C. Davis Law School Library (cf. NRC Forensic Science Report, supra, at 150 ["Most of the research in the field (of impression evidence, including the subfield of firearms and toolmark identification) is conducted in forensic laboratories, with the results published in trade journals, such as the Journal of Forensic Identification"]; United States v. Green, 405 F.Supp.2d 104, 122 n.32 [D.Mass. 2005] ["Nor is there evidence of any peer-reviewed publications in the ballistics/toolmark field as that idea is understood in Daubert and Kumho"]).


CONCLUSION

105. For the reasons stated above, firearms and toolmark identification evidence should be inadmissible because the field of firearms and toolmark identification is not generally accepted in the relevant scientific community.

Signed under the pains and penalties of perjury on September 10, 2010.

____________________
Adina Schwartz


Opinion Evidence: CSI at NYCLA
Mark Rosen, Esq., December 2, 2010 (copyright 2010)

Federal Rule 701 Lay Witness Opinion
• If not an expert, may only testify as to a conclusion drawn from experience using the five senses

Senses
• See it
• Hear
• Taste

Reason for Rule 701
• Trier of fact in control of the reproduction of the event in question (JURY)

Admissibility
• Based upon witness's own perception
• Must be helpful
• Not based upon scientific, technical or specialized knowledge

Rule 702 Expert Testimony
• If scientific, technical or specialized knowledge will help trier understand evidence, witness may testify IF:
– Testimony based upon facts or data
– Testimony result of reliable principles and methods
– Witness applied the principles reliably

Must Help the Trier of Facts
• Credibility of witnesses?
• Negligence of a party in a tort case?
• State of mind of party?
• Intent of party?

Qualifying as Expert
• 1. Is subject beyond average juror?
• 2. What are the qualifications for an expert?
• 3. Does witness have them?
• 4. Will testimony aid in determining truth?
• Determined by the Court, not jury

Qualifying as Expert: Some Factors
• Degrees
• Specialized training
• Licensed
• Practiced
• Taught
• Published
• Qualified as expert
• Professional Orgs.
• Years at it
• Cases etc. like this

Rule 703 Bases of Opinion
• Facts and data may be made known to expert before hearing
• Facts and data need not be admissible to support opinion if commonly used by experts

Rule 705 Disclosure of Facts or Data
• Expert may testify w/o giving data
• May be compelled to provide data
• Bare bones basis for opinion is permitted
• Usually, further Q&A qualifies the witness

Rule 705 Hypotheticals
• Form left to court
• Need not include all facts
• Should not mislead or confuse
• Only include facts not in dispute

Frye Test, 293 F. 1013 (D.C. Cir. 1923)
• Novel scientific evidence admissible if proponent can show it is generally accepted by scientific community
• Problem created when new technology comes to the market

Handwriting
• Expert or Lay

Lay Witnesses
• Emotion
• Sensations
• Physical condition
• Voice
• Intoxication
• Identity

Lay Witnesses cont'd
• Rational conduct
• Speed of vehicle
• Driving practices

Daubert v. Merrell Dow
• Lawsuits over Bendectin causing birth defects
• Motion for summary J. Defense witness says 30 studies and 130,000 patients, no evidence
• Plaintiffs: 8 experts say yes after a "reanalysis"

Daubert cont'd
• Court relies on Frye test
• Petitioners say Federal Rule 702 superseded Frye
• USSC: proffer the expert under Rule 104(a). Is the expert offering scientific knowledge? Will it assist?

Daubert v. Merrell Dow, 509 U.S. 579 (1993)
• Can expert's theory or technique be tested?
• Subject to peer review?
• Known error rate?
• Existence and maintenance of stds/controls
• Generally accepted in scientific community?

Daubert cont'd 2
• No risk of a free for all
• Rigorous cross examination is available
• Science is evolving
• Rules don't look for cosmic understanding

United States Army Trial Judiciary
Fifth Judicial Circuit, Germany

UNITED STATES

v.

St. Gerard, Marius
SGT, U.S. Army
Headquarters Support Company, 1st Battalion
10th Special Forces Group (Airborne)
APO AE 09107

Essential Findings of Fact, Conclusions of Law, and Ruling
Defense Motion, Daubert-Houser

7 June 2010

The defense has moved to exclude the opinion testimony of Mrs. Dana Sevigny that a specific cartridge case was fired by a specific AK-47 under MRE 702 and applicable case law. I have considered the briefs submitted by the parties, the charge sheet, AE XII – XXIII, the testimony of Mrs. Sevigny, the testimony of Dr. Adina Schwartz, and the arguments of counsel.

1. The Court finds the following facts by a preponderance of the evidence:

a. Among other charges, the accused, in conjunction with SGT Aubrey Bradley II, is charged with the attempted murder of SPC Jacob Bell by firing an AK-47 at SPC Bell and committing an assault upon SSG McKinley Taylor, SGT John Rene, CPL Antonio Feagins, SPC Jacob Bell, SPC Jeramee Smith, and SPC Dante Daniel by shooting into the air with the same AK-47.

b. On 30 December 2009, SPC Bell, SGT Rene, CPL Feagins, SPC Smith, and SPC Daniel reported to the Heidelberg Provost Marshal's Office that in the early morning hours of 30 December 2009, near the ADAC office located across the street from the Esso gas station adjacent to the Holiday Inn in Heidelberg, Germany, SGT St. Gerard fired one round from an AK-47 into the air and SGT Bradley fired 2 rounds from an AK-47 at SPC Bell.

c. Within 48 hours, German Police and US CID agents recovered an AK-47 from the apartment of the accused in Boeblingen, Germany.

d. Approximately two months later, one of the participants in the confrontation provided a single cartridge case to US law enforcement agents, alleging to have found it at the scene upon re-visiting the site of the alleged shooting.

e. Mrs. Dana Sevigny, a Firearms Examiner at the CID laboratory at Fort Gillem, Georgia, examined both the AK-47 and the cartridge case.


f. Mrs. Sevigny used the Association of Firearm and Toolmark Examiners' (AFTE) Theory of Identification to determine if the cartridge case was struck and ejected by the AK-47. To conduct this examination, Mrs. Sevigny test-fired 6 rounds from the AK-47. She then compared the marks made on the cartridge cases from the 6 known rounds with those on the cartridge case found at the scene of the alleged shooting using a comparison microscope. Using the AFTE Theory of Identification, Mrs. Sevigny was able to form an opinion that the known cases and the case provided to CID were fired by the same weapon if the surface contours on the casings were in "sufficient agreement." Sufficient agreement exists when the agreement exceeds what she has personally seen through her training and experience in two toolmarks known to be produced by different tools and is consistent with agreement she has seen in toolmarks known to have been produced by the same tool. In this case, Mrs. Sevigny found sufficient agreement in the cases test-fired from the AK-47 and the one retrieved from the scene of the alleged shooting.

g. Mrs. Sevigny passed the cartridge cases to another examiner to verify her results. The second examiner performed the same microscopic exam and also determined that there was sufficient agreement in the cases.

h. Mrs. Sevigny will testify that the AK-47 made the marks found on the cartridge case and that it would be practically impossible for another tool to have made those marks.

2. Law and Analysis.

a. Proffered expert testimony must meet the following criteria in order to be admissible: (1) The expert is qualified; (2) The subject of the testimony is within the realm of the expert's qualification; (3) The expert has an appropriate basis for the testimony; (4) The testimony is relevant; (5) The testimony is reliable; (6) The testimony meets the balancing test under MRE 403. United States v. Houser, 36 MJ 392 (1993).

b. In the instant case, the Court concludes that:

(1) Mrs. Sevigny has ample qualifications to testify in the fields of firearm and toolmark examination based upon her education, background, training, and experience as demonstrated in her Statement of Qualifications (AE XIII) and her in-court testimony. Additionally, Mrs. Sevigny has been qualified as an expert in this field and testified in court approximately 50 times.


(2) The subject of the testimony is clearly within the realm of Mrs. Sevigny's qualification. The examinations that Mrs. Sevigny conducted are precisely the types of examinations she has been trained to conduct and which she has been conducting for at least 8 years.

(3) Mrs. Sevigny has an appropriate basis for her testimony, as she was able to examine both the AK-47 seized from the accused's apartment and the cartridge case recovered from the scene of the alleged shooting.

(4) Mrs. Sevigny's testimony would be relevant to corroborate eyewitness reports that rounds were fired from the AK-47 at the scene.

(5) In order to determine whether the proffered toolmark examination was reliable, the Court applied the following factors identified by the Supreme Court in Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 US 579 (1993): (a) Whether the theory or technique used can be and has been tested; (b) Whether the theory or technique has been subjected to peer review and publication; (c) Whether the known or potential rate of error is acceptable; and (d) Whether the theory or technique enjoys widespread acceptance in the scientific community.

(7) In the instant case, the Court concludes:

(a) Toolmark examination is tested both to determine the proficiency of examiners and the validity of the process. The AFTE community has conducted numerous tests to determine whether each tool truly does produce unique marks. Additionally, the AFTE community tests its examiners using tests developed by independent organizations. Although AFTE is well-intentioned in its testing, the test results are not consistently reliable for a number of reasons. The tests are not blind, meaning that the examiners know they are being tested and in some cases are not required to submit their results, thus potentially skewing the results. Also, the tests do not consistently mirror the level of difficulty presented during routine operations. In most tests, examiners are given samples and asked to make determinations as to whether the casing is a match to the weapon, excluded as a match to the weapon, or inconclusive. The examiners are then "graded" to determine if they made any false positive or false negative determinations. Findings of inconclusive when the answer was really a known match or known non-match do not count against the examiners, which likely causes examiners to be more conservative in their finding of sufficient agreement than they would ordinarily be when performing examinations on a daily basis.

(b) Toolmark examination is subjected to both peer review and publication. The articles produced in this field appear in the AFTE Journal, which is a publication created by and intended for members of AFTE and which is not widely available throughout the scientific community. Although AFTE is a relatively small community, not all peer reviewers are tool examiners; a reviewer may be a statistician, metallurgist, etc., depending on the subject matter of the article.

(c) Although error rates have been calculated, these rates may be inadequate. They are often based on false positive or negative reports and do not incorporate the examinations in which an examiner incorrectly identifies a known match or known non-match as inconclusive. Further, the results of testing may be skewed by self-selection or testing methods as indicated previously.

(d) Toolmark examination has enjoyed widespread acceptance in the scientific community for well over 50 years. Recently, however, the National Academy of Sciences Report on Forensic Sciences raised doubts about the reliability of certain forensic sciences, to include toolmark examination. The subjective nature of the analysis and inability to reproduce results are just some of the complaints about toolmark examination.

(e) Considering the Daubert factors in light of Mrs. Sevigny's anticipated testimony, the Court finds that any testimony indicating that the shell casing must have come from the AK-47 would be unreliable. While it is clear that Mrs. Sevigny has training and expertise in identifying toolmarks that would undoubtedly assist the trier of fact in this case, the subjective nature of the process, lack of quantitative standards, and limited scope of foundational testing do not demonstrate the scientific principles necessary to establish the origin of the marks with any specific amount of certainty.

(8) Conducting a balancing test under MRE 403, the Court concludes that the probative value of Mrs. Sevigny's proffered testimony that it would be practically impossible for a tool other than the seized AK-47 to have made the marks on the cartridge case would be substantially outweighed by the unfair prejudice associated with its unreliability.

3. Ruling. Accordingly, the defense motion to exclude the testimony of Mrs. Sevigny that it would be a practical impossibility for the cartridge case to have been fired by any weapon other than the seized AK-47 is GRANTED. This ruling is limited solely to testimony concerning the level of certainty of the origin of the marks.

//original signed//
WENDY P. DAKNIS
LTC, JA
Military Judge

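The court's point in paragraph (7)(c) above, that published error rates count only outright false positives and false negatives and omit examinations in which a known match or known non-match is called inconclusive, can be illustrated with a short numerical sketch. All figures, category names, and variable names below are invented for illustration; they are not drawn from the ruling, from any AFTE proficiency test, or from any other source. The sketch simply shows how the choice of what counts as an error changes the reported rate.

```python
# Hypothetical proficiency-test tallies (illustrative only; not data from any
# actual proficiency test or from the ruling above).
results = {
    "correct_identification": 90,          # known matches correctly identified
    "correct_elimination": 80,             # known non-matches correctly excluded
    "false_positive": 1,                   # known non-match reported as a match
    "false_negative": 2,                   # known match reported as an exclusion
    "inconclusive_on_known_match": 15,     # should have been identifications
    "inconclusive_on_known_nonmatch": 12,  # should have been eliminations
}

total = sum(results.values())

# Conventional calculation: only outright false positives and false negatives
# count as errors.
conventional_errors = results["false_positive"] + results["false_negative"]
conventional_rate = conventional_errors / total

# Broader calculation: also treat inconclusive calls on known ground truth as
# missed identifications or missed eliminations.
broader_errors = conventional_errors + (
    results["inconclusive_on_known_match"]
    + results["inconclusive_on_known_nonmatch"]
)
broader_rate = broader_errors / total

print(f"Conventional error rate: {conventional_rate:.1%}")   # 1.5% on these numbers
print(f"Rate counting missed calls: {broader_rate:.1%}")      # 15.0% on these numbers
```

On these assumed numbers, the conventional calculation reports an error rate of 1.5 percent, while a calculation that also counts missed identifications and missed eliminations reports 15 percent, which illustrates why the court regarded the published rates as potentially inadequate.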

Faculty Biographies

Mark B. Rosen, Esq.

Mark Rosen's practice includes the representation of chief executives and senior-level executives as well as academics. He represents clients in their employment negotiations and contracts. Mr. Rosen is a former assistant district attorney in the Kings County District Attorney's Office, where he served in the Investigations, Trial, Grand Jury, and Rackets Bureaus. He has been on the faculty at John Jay College of Criminal Justice for more than fifteen years, where he teaches Evidence, Constitutional Law, and security-related law courses. He has lectured on law, security, and terrorism matters for the United Nations and is currently developing legal training courses for United Nations diplomatic mission personnel. He developed and designed a course on Terrorism and the Law, which has sold out each semester it has been offered at John Jay. Mr. Rosen conducts continuing education courses on Evidence for forensic psychiatrists at NYU/Bellevue and has developed and conducted numerous continuing legal education courses. Mr. Rosen is a graduate of Alfred University and Brooklyn Law School. He is an active member of NYCLA and has conducted numerous CLE courses for the Association on such topics as Electronic Discovery, Terrorism, and Federal Practice. In addition, he developed and moderated a CLE course titled "Criminal Law Update," now in its third year at John Jay.

Adina Schwartz
Professor
[email protected] or [email protected]
212.237.8402 or 212.228.2492
422.40T

JD, Yale Law School, 1985
PhD, Rockefeller University, 1976
BA, Oberlin College, 1971

Areas of Expertise: Evidence law, science and law, criminal procedure, cybersurveillance law, social and legal theory, and jurisprudence

Before coming to John Jay, Adina Schwartz was a federal public defender and, before that, an assistant professor in the Yale University Philosophy Department. Her article, "A Systemic Challenge to the Reliability and Admissibility of Firearms and Toolmark Identification," 6 Columbia Science & Technology Law Review 1 (March 28, 2005), is cited in the National Research Council's Report on Ballistics Imaging (2008) and in two federal court decisions, United States v. Monteiro, 407 F. Supp. 2d 351 (D. Mass. 2006), and United States v. Green, 405 F. Supp. 2d 104 (D. Mass. 2005), that severely restrict the admissibility of firearms and toolmark identification testimony. She has served as a defense expert, consulted, and made numerous presentations on the issue. Courts have also cited two of her other articles: "Homes as Folding Umbrellas: Two Recent Supreme Court Decisions on 'Knock and Announce'," 25 American Journal of Criminal Law 545 (1998), and "A 'Dogma of Empiricism' Revisited: Daubert v. Merrell Dow Pharmaceuticals, Inc. and the Need to Resurrect the Philosophical Insight of Frye v. United States," 10 Harvard Journal of Law and Technology 149 (1997).

Peter Valentin

Peter Valentin completed the coursework for his Master of Science degree in Forensic Science at the University of New Haven. In addition, he graduated cum laude from John Jay College of Criminal Justice with a Bachelor of Science degree in Forensic Science. His coursework primarily consisted of chemistry and methods used to identify unknown materials, in addition to physics and microscopy. Peter is also a detective in the Connecticut State Police Major Crime Squad and investigates homicide, suspicious death, and other major crime scenes full time. He is a member of Connecticut's Urban Search and Rescue Team, where he is trained to look for evidence of criminal activity at a major disaster, such as a building collapse, while functioning as a rescue specialist focused on saving injured and trapped victims. Peter works with the federal government as a member of a forensic team charged with human remains identification during a terrorist event or disaster in the United States and around the world. He worked for several months in the aftermath of Hurricanes Katrina and Rita along the Gulf Coast. He trains with forensic specialists several times a year in such places as the Body Farm in Tennessee, the world-renowned facility for forensic anthropology research. Peter has been involved in research on the uses of alternate light sources for evidence collection and photography at the Henry Lee Forensic Institute at the University of New Haven and plans to publish his results shortly. He has also written an article in Dental Clinics of North America on disaster management. His education and work experience are a unique blend of the crime scene and investigative aspects of police work and chemistry-based laboratory analysis. He has received special training in many aspects of forensics, including blood spatter interpretation, anthropology, bombing and explosives investigations, and crime scene photography, as well as specialized training in investigative methods, including interview and interrogation techniques.

Steve Bojekian

Mr. Bojekian retired in 2004 as Chief of Department for the Bergen County Sheriff's Office in New Jersey after a public service career spanning more than thirty years. During this time he coordinated and executed numerous inter-agency operations with local, state, and federal entities. With a concentration in the field of forensic science, he developed and implemented guidelines and protocols used in the investigation of crime scenes and the laboratory analysis of evidence. In 1990 he was certified as a Senior Crime Scene Analyst, and he coordinated the development of the first state-of-the-art forensic laboratory facility in Northern New Jersey. Mr. Bojekian's hands-on investigative experience includes fingerprints, forensic photography, and evidence collection and analysis, including DNA, trace evidence, human remains, and arson detection.
