Thermal Analysis Information for Users

Dear Customer,

We are delighted to announce the introduction of the Excellence Melting Point Systems, a new series of instruments for simple and efficient automatic melting point determination. The new melting point systems feature high-quality video observation of the melting point capillaries, video recording, and high-throughput analysis. The large color display makes the instruments very easy to operate. Further information about these exciting new products can be found in the News section.

And for the many thermal analysis customers interested in validation, we are pleased to say that the long-awaited handbook entitled “Validation in Thermal Analysis” is now available. This book is the latest addition to the “Collected Applications” series and contains valuable information on how to improve the quality and reliability of results in thermal analysis.

Analytical measurement terminology in the laboratory. Part 1: Trueness, precision and accuracy

Marco Zappa

The correct use of terms such as error of measurement, precision, trueness, accuracy, uncertainty of measurement and other expressions is essential for a professional approach to analytical measurement. This article is the first of a two-part series on analytical measurement terminology. It explains what these terms mean in a laboratory environment and how you can strengthen confidence in the quality of your own measurement results.

UserCom Contents 1/2009

TA Tip
- Analytical measurement terminology in the laboratory. Part 1: Trueness, precision and accuracy  1

New in our Sales Program
- New Excellence Melting Point Systems  8
- Better results in thermal analysis thanks to validation  9

Applications
- Determination of phase transitions with simultaneous video observation  10
- Measurement of thin films in shear by DMA  14
- Vitrification during the isothermal cure of a thermoset studied by TOPEM®  17
- Analysis of the foaming behavior of a fire retardant by TMA and TGA  21

Dates
- Exhibitions  23
- Courses and Seminars  23

TA Tip

Introduction

Measurement results should reflect reality and serve as a basis for making decisions. You should be able to trust the result of a single measurement without having to perform replicate (i.e. repeat) measurements. To achieve this goal, it is essential to quantify and, as far as possible, minimize systematic and random errors. However, this means you need a thorough understanding of the possible sources of measurement errors before you can set about optimizing an analytical method with regard to trueness and precision. This first article discusses different sources of measurement errors in thermal analysis and shows how such errors can be identified and avoided. Examples are taken from DSC (differential scanning calorimetry), TGA (thermogravimetric analysis), TMA (thermomechanical analysis) and DMA (dynamic mechanical analysis). Part 2 (UserCom 30) describes how a concept for determining the uncertainty of measurement is developed.

Systematic and random errors of measurement

Figure 1. The individual values of a measurement series (Ci) are scattered around a mean value (B) and show a certain standard deviation. The deviation of the mean value (B) from the true value (A) is the systematic error of measurement (or bias).

Table 1. Determination of the enthalpy of fusion of indium (in J/g) by DSC; the same test specimen was measured one hundred times at 10 K/min.


The results of measurements performed under identical conditions are never free of error but are scattered around a mean value (B). Depending on the quality of the measurement procedure, the mean value deviates more or less from a generally accepted reference value that is considered to be the true value (A) (Figure 1). The difference between an individual measurement value (Ci) and the true value (A) is made up of two parts: the systematic error and the random error.

In general, a systematic error remains constant in magnitude and sign within a measurement series and applies to all measurement values: it causes the values to be too high or too low to the same extent. A systematic error is often referred to as bias and is also known as a determinate error. Systematic errors are often difficult to detect and eliminate. In contrast, the scatter (or spread) of individual values (Ci) around the mean value is due to random errors of measurement: some values are too high while others are too low. Random errors are also called indeterminate errors. They can be described with the aid of statistical parameters such as the standard deviation.

A typical example of a systematic error of measurement in thermogravimetric analysis (TGA) is buoyancy: if a sample is heated in air at atmospheric pressure, the density of the air in the furnace decreases with increasing temperature. As a result, the buoyancy experienced by the sample, crucible, and crucible support decreases and the apparent mass of the sample increases. This systematic error is particularly relevant for small mass losses. If it is not corrected, the measured mass loss deviates from the true value by a certain amount. In practice, this systematic error is corrected by performing a blank measurement in which an empty crucible is heated under identical conditions. The resulting blank curve is then subtracted from the sample measurement curve.

An example of random error of measurement is the scatter of measured values of the enthalpy of fusion of indium. The values in Table 1 (in J/g) are the results of one hundred DSC measurements of the same test specimen. The mean value of all the measured values was 28.45 J/g. The deviation from the conventional true value or accepted reference value (28.51 J/g) is the systematic error of measurement. The spread of the measured values due to random errors of measurement is 0.12 J/g (standard deviation).
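The statistics quoted for Table 1 can be reproduced with a few lines of Python. As a sketch, the snippet below uses only the first ten values of the series, so its numbers differ slightly from the full-series figures (28.45 J/g and 0.12 J/g):

```python
import statistics

# First ten of the one hundred indium enthalpy-of-fusion values from Table 1 (J/g)
values = [28.417, 28.308, 28.477, 28.592, 28.583,
          28.642, 28.208, 28.424, 28.572, 28.329]

ACCEPTED = 28.51                   # J/g, conventional true value (NIST)

mean = statistics.mean(values)     # best estimate from this series
spread = statistics.stdev(values)  # random error -> precision
bias = mean - ACCEPTED             # systematic error -> trueness

print(f"mean = {mean:.3f} J/g, s = {spread:.3f} J/g, bias = {bias:+.3f} J/g")
```

The same decomposition applies to the full series: the standard deviation quantifies the random error, and the deviation of the mean from the accepted reference value quantifies the systematic error.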

Precision, trueness and accuracy

These terms can be explained by considering a set of measurement values using a target board presentation (Figure 2). The true value is assumed to be at the center of the target. The smaller the systematic error of measurement of the individual values, the better the trueness of the mean value of the measurement series. The smaller the mean random error of measurement of the individual values, the better the precision of the measurement series. Accuracy, when applied to a set of measurement results, involves a combination of a common systematic error (bias) component and random errors of measurement; it therefore involves both precision and trueness. When applied to an individual measured value, accuracy is a measure of the closeness of agreement between the measured value and the true value. A set of measurement results can therefore exhibit: a) good precision and trueness; this is the ideal case because the scatter of the results around the mean value

Table 1 data (enthalpy of fusion of indium, J/g):
28.417  28.308  28.477  28.592  28.583  28.642  28.208  28.424  28.572  28.329
28.373  28.262  28.245  28.341  28.364  28.215  28.387  28.405  28.465  28.409
28.414  28.409  28.599  28.441  28.429  28.393  28.669  28.546  28.714  28.377
28.634  28.271  28.510  28.550  28.663  28.441  28.392  28.525  28.408  28.534
28.290  28.356  28.281  28.410  28.446  28.453  28.414  28.694  28.257  28.368
28.164  28.611  28.308  28.377  28.534  28.502  28.547  28.516  28.298  28.326
28.527  28.486  28.346  28.423  28.465  28.512  28.465  28.349  28.659  28.504
28.458  28.542  28.546  28.379  28.348  28.573  28.317  28.277  28.529  28.521
28.695  28.610  28.595  28.463  28.450  28.500  28.447  28.333  28.253  28.542
28.499  28.519  28.474  28.336  28.587  28.415  28.357  28.359  28.402  28.400


An example of completely random error of measurement is the repeated measurement of the thickness of a piece of aluminum foil at 30 °C using a thermomechanical analyzer, TMA. The probe was lowered onto the foil for 5 min and then raised again for 5 min. The measurement cycle was repeated ten times. The statistical evaluation is shown in Figure 3 and yields a mean thickness of 747.182 µm and a precision expressed as the standard deviation of 0.33 µm (i.e. 0.04%). In contrast, the standard deviation during an individual measurement is only about 0.012 µm (i.e. 0.0016%). The much larger standard deviation of the mean thickness can be explained by the fact that, in each individual measurement, the probe is in contact with a different point on the foil. The 0.33 µm is therefore largely a measure of the variability of the thickness of the foil whereas the 0.012 µm characterizes the instrument-specific noise. In reality, the true value of a quantity (represented by the center of the concentric circles in Figure 2) is unknown. In the case of the aluminum foil, the difference between the mean value of the measurement series and the true value of the thickness is not known. In principle, therefore, the trueness of the measurements cannot be determined. This is where the concept of the conventional true value or accepted reference value instead of the true value enters into the discussion.
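The separation of foil variability (0.33 µm) from instrument noise (0.012 µm) can be illustrated with a small simulation. This is a sketch with assumed Gaussian statistics, not the actual TMA data: each probe placement lands on a spot with its own local thickness, and the readings within one placement differ only by noise.

```python
import random
import statistics

random.seed(1)
NOMINAL = 747.18   # µm, assumed nominal foil thickness
FOIL_SD = 0.33     # µm, point-to-point thickness variation (assumed Gaussian)
NOISE_SD = 0.012   # µm, instrument noise (assumed Gaussian)

means, within_sds = [], []
for _ in range(10):                                    # ten probe placements
    spot = random.gauss(NOMINAL, FOIL_SD)              # local thickness at this spot
    readings = [random.gauss(spot, NOISE_SD) for _ in range(50)]
    means.append(statistics.mean(readings))
    within_sds.append(statistics.stdev(readings))

between = statistics.stdev(means)      # dominated by foil variability
within = statistics.mean(within_sds)   # dominated by instrument noise

print(f"between-spot s = {between:.3f} µm, within-spot s = {within:.3f} µm")
```

The between-spot scatter comes out far larger than the within-spot scatter, mirroring the 0.33 µm versus 0.012 µm observed in the real experiment.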

True value and conventional true value

Every analytical procedure inevitably includes systematic and random errors of measurement, so that even certified reference values are, strictly speaking, not true values but conventional true values, i.e. generally accepted reference values to which a degree of uncertainty is attached. The true value of a quantity is unknown; it is of a theoretical nature and cannot be determined with complete certainty. This means that any calibration (i.e. the determination of the deviation between the measured value and the conventional true value) and any adjustment (i.e. matching the measured value to the conventional true value) depends on the accuracy of the reference material. The same applies to every measurement based on this adjustment. If, for example, you use indium reference material from the National Institute of Standards and Technology (NIST) for DSC heat flow adjustment, you must take into account that the certified value is 28.51 ± 0.19 J/g (i.e. ±0.65%). This means that the accuracy of enthalpies measured by DSC at the melting temperature of indium (156.6 °C) cannot be better than 1.3%. For the aluminum foil example (Figure 3), the trueness of the measured foil thickness can only be as good as the uncertainty of the length calibration kit that was used for the length adjustment of the TMA. This uncertainty is ±1.08 µm for a length of 1 mm and is thus greater than the standard deviation (±0.33 µm).
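Heat flow adjustment against the indium reference can be sketched as a simple scale factor; the certificate uncertainty then sets a floor that carries over to every subsequent measurement. The certified values are from the text; the raw instrument reading is a hypothetical example.

```python
CERTIFIED = 28.51          # J/g, NIST certified enthalpy of fusion of indium
CERT_UNC = 0.19            # J/g, certificate uncertainty

raw = 28.45                # J/g, hypothetical instrument reading for indium
factor = CERTIFIED / raw   # heat flow adjustment factor
adjusted = raw * factor    # after adjustment, the indium reading matches the certificate

# The certificate's relative uncertainty: no DSC enthalpy measured near
# 156.6 °C can be more accurate than this, however good the instrument.
rel_floor = CERT_UNC / CERTIFIED
```

Any enthalpy measured after this adjustment is multiplied by the same factor, so the certificate's relative uncertainty propagates unchanged into every result.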

Definitions – an overview

Random errors of measurement cause the values of individual measurement results to fall on either side of the mean value. The magnitude of these random errors of measurement determines the precision of a measurement series. Synonyms: random errors, indeterminate errors.

Systematic errors of measurement are the origin of the difference between the mean value of a measurement series and a value that is accepted as being the true value. This difference is known as bias and determines the trueness of a measurement series. Synonyms: systematic errors, bias, offset, determinate errors.


(spread) and the deviation from the true value are small;
b) good precision but poor trueness: the spread is small but the difference between the mean value and the true value is large;
c) poor precision but good trueness: the spread of the measured values is large, but the deviation of the mean value from the true value is small;
d) poor precision and poor trueness: both the spread and the difference from the true value are large.
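Cases a) to d) can be expressed as a tiny classifier over the bias and the spread of a series. This is an illustrative sketch; the tolerance thresholds are arbitrary assumptions, not values from the article.

```python
def classify(bias: float, spread: float,
             bias_tol: float, spread_tol: float) -> str:
    """Label a measurement series by its trueness and precision."""
    true_ok = abs(bias) <= bias_tol    # small systematic error -> good trueness
    prec_ok = spread <= spread_tol     # small random error -> good precision
    return {
        (True, True): "a) good precision and trueness",
        (False, True): "b) good precision, poor trueness",
        (True, False): "c) poor precision, good trueness",
        (False, False): "d) poor precision and trueness",
    }[(true_ok, prec_ok)]

# Indium series from Table 1: bias -0.06 J/g, spread 0.12 J/g,
# judged against assumed tolerances of 0.1 and 0.2 J/g
print(classify(-0.06, 0.12, bias_tol=0.1, spread_tol=0.2))
```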


Accuracy describes the closeness of agreement between an individual measured value and the value considered to be the true value. Accuracy, when applied to a set of measurement results, involves a combination of systematic and random errors and involves trueness and precision. The result of a measurement is accurate if it is free from systematic and random errors. Synonym: accuracy of measurement.

Figure 2. The better the precision of an analytical procedure, the smaller the random deviation of the individual values from the mean value. Trueness is independent of precision: it describes the difference between the mean value and the true value (here the center of the target board).

Precision is the closeness of agreement between individual measured values. The smaller the random error of an analytical procedure, the smaller is the spread of the results and the better the precision of the procedure. Different types of precision are used depending on the conditions used to determine the measured values. The type of precision must be specified, for example, whether it is repeatability precision or reproducibility precision. The quantitative measure used for all types of precision is the standard deviation or the confidence interval. Both are frequently expressed as a percentage of the mean value. Synonyms: spread, scatter, dispersion.
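Both quantitative measures mentioned above, the (relative) standard deviation and the confidence interval, can be computed directly. The sketch below uses a small hypothetical series; the Student t-factor 2.776 (4 degrees of freedom, 95%, two-sided) is hard-coded so the example stays within the standard library.

```python
import math
import statistics

values = [5.1, 5.3, 5.2, 4.9, 5.0]   # hypothetical measurement series
mean = statistics.mean(values)
s = statistics.stdev(values)

rsd = 100 * s / mean                 # relative standard deviation, % of the mean
T95 = 2.776                          # Student t for 4 df, 95% (two-sided)
half_width = T95 * s / math.sqrt(len(values))

# Report as: mean ± half_width (95% confidence interval)
print(f"mean = {mean:.2f}, s = {s:.3f} ({rsd:.1f}%), CI95 = ±{half_width:.3f}")
```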

Repeatability precision or reproducibility precision is defined as the closeness of agreement between individual results of successive measurements of the same measurement quantity (measurand) performed under repeatability or reproducibility conditions. They are defined quantitatively by the repeatability standard deviation or the reproducibility standard deviation. Under repeatability conditions, factors a) to h) are the same, whereas under reproducibility conditions, b) to h) can vary. The factors involved in this context are:
a) Sample material
b) Operator
c) Equipment
d) Measurement procedure and method of measurement
e) Sampling and sample preparation
f) Laboratory
g) Environmental conditions
h) Measurement series performed within a short interval of time.
Although in many cases repeatability conditions are not strictly adhered to (e.g. the measurements are not performed within a short interval of time), repeatability conditions are assumed if all the other repeatability conditions are satisfied. Synonyms: repeatability, reproducibility.

Trueness describes the closeness of agreement between the mean value of a series of measurements and the true value, that is, an accepted reference value or conventional true value. The smaller the systematic error (or errors) of an analytical procedure, the better the trueness. A procedure is true if it does not exhibit any systematic errors of measurement. Synonym: accuracy of the mean.

Conventional true value − the conventional true value of a particular measurement quantity is defined as a generally accepted reference value to which uncertainty is attached. The difference to the true value is unknown. An example is the enthalpy of fusion of the reference material indium, with a value certified by NIST of 28.51 ± 0.19 J/g. Synonyms: assigned value, correct value, nominal value.

True value − the true value of a measurement quantity is of a theoretical nature and can never be determined with certainty. Exceptions are the melting point of indium (156.6 °C) and the triple point of water (273.16 K), both of which are defined as true values.

Figure 3. Ten TMA measurements to determine the thickness of a piece of aluminum foil (at 30 °C). The continuous line is the mean value of the ten measurements; the dotted lines correspond to the standard deviation of the mean of the ten measurements (0.33 µm). The standard deviation of each individual measurement was about 0.012 µm.

Figure 4. Determination of the Solid Fat Index (SFI) by DSC and NMR.

The most important sources of measurement errors

The aim of the analyst is to obtain a measurement result with high accuracy. For this reason, the factors that cause systematic and random errors of measurement must be identified and minimized. Minimizing systematic errors of measurement brings the measurement results closer to reality because the difference to the conventional true value becomes smaller and in the best case is zero. Random errors of measurement have to be low in order to get as close as possible to the conventional true value with the least possible number of measurements. In analytical measurement technology, a number of factors can lead to systematic

and/or random errors of measurement. The most important are:
- Influence of the procedure (often called method bias)
- Instrumental influences
- Sampling and sample preparation
- Environmental influences
- Experimental parameters
- Evaluation methodology
- Time-dependent factors
- Shortcomings of the operator
- Gross errors.

Under certain conditions, systematic errors of measurement can become random errors of measurement. This is especially the case if the operator is not aware of the effect of experimental influence factors on the measurement result and if the measurements are performed under varying rather than constant conditions. A strict classification into systematic and random errors of measurement is therefore unsuitable for a practical consideration of the sources of measurement errors.

Influence of the procedure

Results measured using different analytical procedures generally differ; the reasons, however, are not always immediately obvious.

Example: The glass transition temperature can be determined by several different measurement techniques, for example by DSC based on the change of the specific heat capacity, by TMA based on the change of the expansion coefficient, and by DMA based on the change of the elastic modulus. Each measurement technique measures a different physical property, which means that the results (i.e. the glass transition temperatures) cannot be directly compared.

Example: The Solid Fat Index (SFI) defines the percentage of a fat that is present in crystalline form at a particular temperature. The SFI is frequently determined by DSC or NMR. Figure 4 shows that the SFI determined by DSC is systematically lower than the value determined by NMR. There is a linear relationship between the two measurement methods, so the DSC-SFI results can be


easily converted to NMR-SFI values, and vice versa. This systematic deviation between two methods is known as method bias. It is not necessarily linear.

Instrumental influences

A frequent cause of systematic errors of measurement is incorrect adjustment of the measuring instrument.

Example: If the heat flow adjustment of a DSC is wrong by 10%, then all the calculated enthalpies (peak areas) and heat capacity changes (e.g. glass transition steps) will also be wrong by 10%. The prerequisite for adjustment using reference materials is that such materials exist for the particular measurement quantity and that they cover the desired measurement range. In particular, instruments are often adjusted under conditions that do not correspond to the measurement conditions used later on with regard to temperature range, heating rate, furnace atmosphere, pressure, type of crucible, or force and length range.

Example: To perform reliable measurements over the full stiffness range of a DMA, the entire force and displacement range must be adjusted. This is possible with the DMA/SDTA 861e. With a one-point adjustment, systematic errors of measurement above and below this point would be expected in force, displacement and hence in the elastic modulus.

The quality and state of the measuring instrument can also influence the measurement results.

Example: In a DSC instrument, there must be symmetry between heating and cooling, that is, a temperature adjustment in the heating mode must also be valid for cooling experiments. If this is not the case, systematic errors of measurement occur in cooling experiments. Instrument resolution, sensitivity, detection limit, linearity, etc. are also possible sources of measurement errors, but will not be discussed here.
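A multi-point adjustment, as opposed to a one-point one, can be sketched as a linear map fixed at two reference points. The melting points of indium (156.6 °C) and zinc (about 419.5 °C) are common DSC temperature references; the raw instrument readings below are hypothetical.

```python
def make_adjustment(raw1, ref1, raw2, ref2):
    """Linear map from raw instrument readings to adjusted values,
    exact at the two reference points."""
    slope = (ref2 - ref1) / (raw2 - raw1)
    return lambda raw: ref1 + slope * (raw - raw1)

# Hypothetical raw melting onsets measured for the two references
adjust = make_adjustment(155.9, 156.6, 418.2, 419.5)

print(adjust(155.9))   # exact at the indium point
print(adjust(418.2))   # exact at the zinc point
```

With a one-point adjustment (a constant offset fixed at indium only), the zinc point would remain wrong, which is the systematic error above and below the adjustment point described in the text.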

Sample selection, sampling and sample preparation

Delegating sample preparation to the least experienced assistant is not a good idea. In thermal analysis, it is precisely this step that requires experience and a broad understanding of the measurement process. The following factors need to be taken into account; otherwise measurement errors can occur:
- Changes in the sample due to high mechanical or thermal stress (e.g. through an unsuitable separation technique)
- A change in the material properties over time (e.g. through loss of moisture)
- Instability of the sample (e.g. toward oxygen)
- Aging of the sample material during storage (e.g. through UV light)
- Changes in the sample during transport (e.g. uptake of moisture)
- Contamination of the sample due to the use of dirty tools and instruments (e.g. saws, tweezers, syringes, etc.)
- Undefined thermal contact between the sample and the crucible (DSC and TGA), inaccurate sample weighing (or a poorly adjusted balance), inaccurate determination of the sample geometry (TMA and DMA), etc.

Example: The sample dimensions (sample geometry) must be exactly known in order to determine accurate values of the elastic modulus by DMA. Assume that the true thickness of a 3-point bending sample is 1.865 mm, but the device available for measuring the sample thickness gives a value of 1.90 mm. This yields a systematic error of −5.7% in the modulus value due to the inaccurately measured sample thickness. The measurement error of just 35 µm in the sample thickness is particularly serious because the thickness enters the geometry factor to the power of three.

Besides material-specific factors to do with sampling, attention must also be paid to the actual sample selection process. It must be clearly specified how random samples are taken from the bulk sample, how they are labeled, and the paths they take up until the time when

they are measured. A well-developed sampling plan (Figure 5) allows a consistent sampling process to be defined that is independent of the analyst. It reveals weak spots in the sampling process and ensures good traceability of the analytical results. The most important questions in the sampling process can be visualized with the aid of Figure 5:
- Which production lots should be examined?
- At which point or points of a production lot, or which part or parts of a production lot, should samples be taken?
- Are the sampling points and the sample size representative with regard to possible inhomogeneity of the production lot? Can conclusions be drawn from the selected sample about the bulk sample?
- How large should the sample size be with regard to volume and number of items?
- How often should a material property be determined and how many samples should be measured?
- Are the samples taken at different points mixed together into a composite sample (or aggregate sample), or are they processed separately as test samples?
- Are the samples always representative of the bulk material from which they were taken?

The existence of a sampling plan does not eliminate the possibility of measurement errors that can be traced to the sample selection process. A poor sampling plan is just as likely to produce systematic and random errors of measurement as purely random and ill-considered sampling.

Environmental influences

An important task in the development of a measurement procedure is to isolate the measuring system from its environment, or to make it so insensitive toward environmental influences that reliable measurements are possible. The most important factors influencing the measurement signal in thermal analysis are pressure, temperature, vibrations and contamination.
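Returning briefly to the 3-point bending example above: the sensitivity of the modulus to a thickness error follows directly from the cubic dependence on thickness and can be checked with a few lines. This is a sketch assuming E ∝ 1/h³; the exact cube ratio gives about −5.4%, while the small-error approximation 3·Δh/h gives the slightly larger figure quoted in the text.

```python
true_h = 1.865   # mm, actual sample thickness
meas_h = 1.900   # mm, thickness reported by the measuring device

# In 3-point bending, the calculated modulus scales as 1/h^3,
# so an overestimated thickness gives an underestimated modulus.
rel_error = (true_h / meas_h) ** 3 - 1
print(f"relative modulus error: {rel_error:+.1%}")   # → -5.4%
```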


Example: Recurring short mechanical vibrations caused by machines in the immediate vicinity of a TMA instrument can give rise to irreproducible measurement errors in the coefficient of linear expansion of a sample.

Example: In TGA, pressure fluctuations in the gas supply can lead to increased noise or to periodic fluctuations of the balance signal.

Figure 5. Example of sampling operations (sampling plan).

Method parameters and evaluation

Clearly, method parameters influence the measurement results. This means that the conditions under which a measurement is performed must be recorded for every measurement. These conditions include, for example, sample mass, sample geometry, temperature range, heating and cooling rate, atmosphere, pressure, crucible, force, displacement and frequency, as well as sampling, sample preparation and storage.

Examples: The heating rate influences the glass transition temperature and the extent of cold crystallization of a plastic; chemical reactions are shifted to higher temperatures at higher heating rates; and polymorphic transitions do not occur at certain heating rates. In these cases, a comparison of measurement results between laboratories is only valid if the same heating rate is used. Otherwise, systematic errors of measurement between laboratories occur and the results are disputed.

Besides the actual documentation of experimental parameters, professional

method development (i.e. the proper selection of optimum parameters by qualified personnel) is extremely important.

Example: To obtain correct values for the elastic modulus of polymers by DMA, it is important to ensure that the material is stressed in a region in which the relationship between the applied force and the sample deformation is linear. If this is not the case, the values of the elastic modulus are systematically too low. It is therefore essential to check the linear range of the sample material beforehand.

The evaluation methodology can also lead to systematic or random deviations, especially when the measurement results of different analysts and laboratories are compared.

Example: The tangents drawn to evaluate a glass transition and the type of evaluation used (e.g. according to Richardson, DIN 53765 or ASTM E1356) lead to different transition temperatures. The baselines used for peak integration influence the resulting transition enthalpy, and weight losses in TGA show small differences with or without blank correction.

Time-dependent factors

Time often has an influence on the performance of an instrument; for example, the sensitivity of a sensor can gradually change with time. If this process occurs over a long period, it is observed as a measurement error that changes in a systematic way.

Example: Substances in contact with Pt/Rh thermocouples, such as sulfides, carbonates and silicon, can give rise to chemical reactions or alloy formation, which gradually changes the thermoelectric voltage generated by the thermocouple in the TGA/DSC. This leads to a corresponding change in the heat flow signals and temperatures.

Shortcomings of the operator

Systematic and random errors of measurement also occur due to shortcomings of the operators. Their individual abilities, theoretical knowledge of the measuring

6

METTLER TOLEDO

UserCom 1/2009

instrument, practical skill and experience, as well as their personal day-to-day performance and care, all play an important part. Most operators are probably not aware of all the factors that lead to measurement errors. Shortcomings of this nature are a significant obstacle to obtaining better and more accurate results, because the operators are responsible for the adjustment of the instrument, sampling and sample preparation, performing the measurement and the evaluation, and in general for the proper operation of the measuring instrument. This is why it is very important to train operators in the correct use of their instruments.

Example: In DMA, the elastic modulus of the sample is calculated from the stiffness of the test specimen and the sample geometry. It is assumed that the stiffness of the test specimen is appreciably smaller than the stiffness of the sample holder. If this is not the case, for example due to a change in sample geometry, large systematic errors are quite possible.

Example: In the DSC measurement of the OIT (oxidation induction time), the measuring instrument must also be adjusted for isothermal measurements. If the instrument has only been calibrated and adjusted for dynamic measurements (e.g. only for 10 K/min), systematic errors in the isothermal temperature and hence in the OIT result are possible.

Measurement errors due to gross errors

Besides the systematic and random errors of measurement already described, there is a third type of error: so-called gross errors. Typical gross errors that are not immediately obvious include wrongly transcribed results and measurement data; errors in calculations, signs and rounding; programming errors in computer programs; mistaken identity of sample material or wrong reagent concentrations; incorrect weighing or determination of sample geometry; etc. These types of gross errors can only be reduced or eliminated by working with the utmost care and through repeated

checks, where possible by other people. Gross errors often originate through misunderstandings as well as through lack of knowledge about the operation of measuring instruments and the influence of the environment on the measurement. Another cause of gross errors has to do with operators thinking they have to obtain the same value as before, which leads them to consciously or unconsciously manipulate measurement results.
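One cheap line of defense against transcription-type gross errors is an automatic plausibility check on entered values. This is a sketch; the function name and the plausible limits are assumptions, not part of any instrument software.

```python
def check_plausible(value: float, low: float, high: float, label: str) -> float:
    """Raise if an entered value lies outside its physically plausible window."""
    if not (low <= value <= high):
        raise ValueError(f"{label}: {value} outside plausible range [{low}, {high}]")
    return value

# Indium enthalpy of fusion should lie near 28.5 J/g:
check_plausible(28.45, 27.0, 30.0, "indium dHf (J/g)")   # passes silently
# A decimal-point slip such as 284.5 would raise a ValueError instead of
# silently propagating into the evaluation.
```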

Detection and elimination of measurement errors

Systematic errors of measurement occur in every measurement to the same extent and with the same sign. They usually only become apparent when comparative measurements are made; they cannot be detected or eliminated by performing replicate measurements, for example by increasing the number of random samples. Laboratories with incorrect results might therefore think that everything is satisfactory although their results in fact deviate significantly from the true value. The three most important types of comparative measurements are:

• Deliberate change of the experimental parameters: This is one of the best methods to detect systematic measurement errors. All the relevant factors that influence the method are deliberately and systematically varied, and their influence on the measurement quantity is quantified. This procedure also identifies experimental parameters whose variation has no influence on the measurement result.

• Use of a fundamentally different measuring method: If the principle of measurement is changed and the same results are obtained within the accuracy of measurement, then there are probably no important systematic errors of measurement. One should, however, be aware that comparative measurements can also be subject to systematic errors.

• Interlaboratory studies (round robin studies): In an interlaboratory study, several laboratories measure one or more quantities of identical samples under precisely defined conditions. The

organizer of the study collects and evaluates the measurement data and compiles it into a single document. The participants, who remain anonymous, are informed later about the results. Interlaboratory studies are a good way to detect systematic errors of measurement and poor precision in a laboratory.
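The evaluation step of such a study can be illustrated numerically. A common approach (described in proficiency-testing standards such as ISO 13528, not in this article) is to convert each laboratory's mean into a z-score relative to the assigned value; the laboratory names and data below are invented:

```python
# z-score evaluation of an interlaboratory study (illustrative data).
# By convention, |z| <= 2 is "satisfactory" and |z| >= 3 "unsatisfactory".

assigned_value = 156.0   # assigned (reference) melting point, °C
sigma_pt = 0.3           # standard deviation for proficiency assessment, °C

lab_means = {"Lab A": 156.1, "Lab B": 155.9, "Lab C": 157.1}

z_scores = {lab: (m - assigned_value) / sigma_pt for lab, m in lab_means.items()}
for lab, z in z_scores.items():
    verdict = "satisfactory" if abs(z) <= 2 else "questionable/unsatisfactory"
    print(f"{lab}: z = {z:+.1f} ({verdict})")
```

In this invented example, Lab C's bias of +1.1 °C gives a z-score above 3 and would be flagged, even though its own replicate scatter might look perfectly acceptable from inside the laboratory.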

Conclusions
Accuracy is a term that involves both trueness and precision. Trueness describes the closeness of agreement between the mean value of a series of measurements and an accepted reference value or conventional true value and is a measure of the systematic error of measurement. Precision describes the closeness of agreement between individual values obtained in measurement series, that is, the scatter or spread of the values. It is a measure of the random error of measurement. The most important causes of measurement errors are influences of the procedure, instrumental influences, sampling and sample preparation, environmental influences, experimental parameters, evaluation methodology, time-dependent factors and shortcomings of the operator. The accuracy of analytical measurement results can be improved through detailed understanding of the measurement process and competent method development.
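These two concepts are easy to make concrete. The short sketch below (with invented replicate data and an invented reference value) computes the bias as a measure of trueness and the standard deviation as a measure of precision:

```python
import statistics

# Hypothetical replicate measurements of a melting point (°C)
measurements = [156.4, 156.6, 156.5, 156.7, 156.5]
reference_value = 156.0  # accepted reference (conventional true) value

mean = statistics.mean(measurements)
bias = mean - reference_value            # measure of trueness (systematic error)
spread = statistics.stdev(measurements)  # measure of precision (random error)

print(f"mean = {mean:.2f} °C, bias = {bias:+.2f} °C, s = {spread:.3f} °C")
```

Here the scatter is small (good precision) while the mean deviates clearly from the reference value, i.e. poor trueness of the kind that replicate measurements alone would never reveal.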

Literature

[1] Validation in Thermal Analysis, METTLER TOLEDO, 2008.
[2] Analytical Measurement Terminology, Royal Society of Chemistry, UK, 2000, ISBN 0-85404-443-4.
[3] ISO 5725: Accuracy (trueness and precision) of measurement methods and results.

METTLER TOLEDO

UserCom 1/2009

7

New in our Sales Program

New Excellence Melting Point Systems
The new melting point systems feature innovative technology and superior ergonomics. Compared with previous instrumentation, both performance and operating convenience have been greatly improved. The melting process is now captured on video and can be replayed on the instrument as many times as desired. Sample throughput and efficiency are much greater because you can now measure up to six samples simultaneously. Operation using the color touch screen is very easy and quick to learn. You can start a stored method with just One Click™.


At a glance:
• One Click™ and superior ergonomics − quick to learn, easy to operate
• Convenient replay of high resolution color videos − for maximum security
• Simultaneous measurement of up to 6 samples − saves time and money
• Compliance with standard methods − ensures maximum reliability

The innovative METTLER TOLEDO melting point systems offer many important advantages. You can determine the melting point or melting range very accurately and, thanks to video observation, investigate color changes, clear points and decomposition temperatures.

The METTLER TOLEDO melting point systems are available in three different models. The table below illustrates their most important features:

                         MP50              MP70              MP90
Measurement principle    Observation in reflection, measurement in transmission
Temperature range        RT to 300 °C      RT to 350 °C      RT to 400 °C
Heating rate             0.1 to 20 °C per minute
Capillaries              Up to 4           Up to 4           Up to 6
Video                    Gray scale AVI    Color AVI         Color AVI
Video run time           30 min            300 min           300 min
Video export             No                On SD card        On SD card
Results export           Compact printer   Ethernet printer, PDF, Text export
Standards complied with  European Pharmacopeia (Ph.Eur.) 2.2.60, United States Pharmacopeia (USP), Japanese Industry Standard (JIS) K 0064, ASTM D1519

Figure 1. The MP50 and MP90 automatic melting point systems.

Validation Handbook
Better results in thermal analysis thanks to validation

We are pleased to announce the availability of the new “Validation in Thermal Analysis” handbook, the latest addition to the “Collected Applications” series. Most previous handbooks in this series, such as Thermoplastics, Thermosets and Elastomers, focused on particular types of material and were very application oriented. The TGA-EGA handbook was an exception and dealt with special analytical measurement techniques. This new handbook provides valuable information for all thermal analysis users who want to know how good their results are, how they can qualify them, and if necessary how they can improve their quality.

The validation of instruments, processes and methods is a basic requirement of quality assurance in many important industries and is often regulated by law. It includes instrument qualification and the validation of computer systems and analytical methods. Measurement results must reflect reality because they are used to make decisions. Furthermore, you should be able to trust the result of a single measurement without having to perform repeat measurements.

This handbook contains valuable advice on how to perform validation. It covers the topics:
• Instrument qualification
• Validation of computer systems
• Validation of methods

One of the main topics is the determination of measurement error and the uncertainty of measurement, as well as method validation in general. The chapter on method validation covers three possible approaches:
1. Validation of analytical methods
2. Interlaboratory studies
3. Working to standards (ASTM, ISO, DIN, ...)

Examples dealing with the measurement of the glass transition, purity determination and content analysis of materials show how validation concepts can be applied to obtain a proper standard operating procedure (SOP) and finally a validated method.

The theoretical concepts discussed in the handbook are illustrated using practical examples. The titles and order numbers of handbooks that have already appeared in the “Collected Applications” series are given in the table below. Detailed information can be found in UserCom 24 or at www.mt.com/ta-books:

Title of Handbook                                  Language / Order Number
Validation in Thermal Analysis (new) (230 pages)   English: 51 725 141
Thermosets (300 pages)                             English: 51 725 069 (Volumes 1 + 2); Volume 1: 51 725 067; Volume 2: 51 725 068
Thermoplastics (150 pages)                         English: 51 725 002
Elastomers (275 pages)                             English: 51 725 061 (Volumes 1 + 2); Volume 1: 51 725 057; Volume 2: 51 725 058
Pharmaceuticals (100 pages)                        English: 51 725 006
Food (50 pages)                                    English: 51 725 004
Evolved Gas Analysis (65 pages)                    English: 51 725 056
Tutorial Kit (25 pages)                            Handbook: German 51 709 919, English 51 709 920, French 51 709 921; Handbook with test substances: German 51 140 877, English 51 140 878, French 51 140 879


Applications

Determination of phase transitions with simultaneous video observation
Dr. Matthias Wagner

The melting point is without doubt the thermal value most frequently used to characterize materials. This fact and the ever-increasing requirements placed on melting point determination were the two main reasons why METTLER TOLEDO decided to develop a completely new series of instruments. The new Excellence Melting Point Systems allow substances to be analyzed that previously could not be measured by conventional melting point instruments. The following article presents a number of different examples.

Introduction

The melting point
The melting point is a characteristic property of a substance. It is the temperature at which the crystalline phase changes to the liquid state. A pure substance normally has a sharp melting point, whereas an impure substance melts over a temperature range that lies below the melting point of the pure substance. This well-known effect is called melting point depression. Some organic compounds melt and decompose simultaneously, which makes it difficult to determine an exact melting point. Melting can also occur over a relatively wide temperature range; one then refers to a melting range rather than a melting point. This effect is especially observed with polymers. In general, melting point determination is used in research and development as well as in quality control to identify substances and check their purity.

Figure 1. Intensity curves of transmitted light during a typical melting process: point A is the start of melting; B is a threshold value (40%); C is the end of melting for the six samples that can be simultaneously measured.

Melting point detection
Many materials are opaque in the crystalline state but transparent in the liquid state. This change in optical properties during melting can be used to determine the melting point or melting range. The measurement is performed by heating the sample in a furnace at a constant rate and continuously measuring the intensity of light transmitted through the sample (i.e. the transmittance). When the transmittance exceeds a predefined value, the sample is said to have melted. This well-proven principle is also employed in the new METTLER TOLEDO melting point instruments.

The new instruments use a camera as a detector to measure the light intensity of the sample and LEDs as the light source. LEDs offer advantages such as lower energy consumption and longer lifetime, and at the same time generate more homogeneous light. The large display incorporated in the instrument allows several observers to follow the melting process. Two models, the MP70 and MP90, permit videos to be transferred via SD card to the computer and archived. The video format is AVI, which means that you can replay videos on the computer with commercially available software. Likewise, the experiment data and intensity curves stored on the SD card as ASCII text files can be archived if desired. The files can be opened and processed in a spreadsheet such as Microsoft Excel.

The instruments simultaneously measure up to four (MP50 and MP70) or six samples (MP90). Figure 1 shows typical light intensity curves measured during a melting process. Three points, labeled A, B, and C, are marked on the curve. These are the characteristic temperatures determined in a melting point analysis. Point A marks the start of melting, point B a characteristic temperature at which the transmittance reaches a certain value, and point C the end of melting. For the melting point determination, either point B or point C is used. Most standards define the end of melting (point C) as the melting point. Point C can also be automatically evaluated by the new melting point systems. Points A and C are used to determine the melting range.
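The threshold principle can be sketched in a few lines. This is an illustrative reconstruction, not the instrument's actual algorithm; the 5% margins used to locate points A and C and the demonstration curve are assumptions:

```python
# Sketch of threshold-based melt detection on a transmittance curve.
# The 40% default mirrors point B in Figure 1; the 5% margins for
# points A and C are assumptions for illustration only.

def detect_points(temps, transmittance, threshold=40.0):
    """Return (A, B, C): start of melting, threshold crossing, end of melting."""
    baseline = transmittance[0]
    final = transmittance[-1]
    a = b = c = None
    for t, i in zip(temps, transmittance):
        if a is None and i > baseline + 0.05 * (final - baseline):
            a = t                      # first significant rise: start of melting
        if b is None and i >= threshold:
            b = t                      # threshold value reached (point B)
        if c is None and i >= final - 0.05 * (final - baseline):
            c = t                      # curve levels off: end of melting (point C)
    return a, b, c

# Made-up demonstration data: a 170-180 °C ramp with a linear rise
# in transmittance between two plateaus.
temps = [round(170.0 + 0.1 * k, 1) for k in range(101)]
trans = [0.0] * 30 + [k * 100 / 40 for k in range(41)] + [100.0] * 30
print(detect_points(temps, trans))
```

Points A and C from such a curve give the melting range, while the threshold crossing B (or the end of melting C, as most standards require) is reported as the melting point.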

Figure 2. Video images of a substance during melting: before the start of melting (left), at the meniscus point (middle), and after melting is complete (right).

Visual observation
Standards that govern the determination of the melting point, such as USP or Ph.Eur. 2.2.14, require visual observation of the sample.

Figure 3. Sample preparation using the sample preparation tool.

Figure 4. The start screen showing the One Click™ “UREA” shortcut key.

This is done using the built-in camera system. However, the possibilities offered by the new melting point systems go much further. Previously, you had to sit in front of the melting point apparatus during the measurement and observe the samples through a lens. If you missed the melting point, you had no choice but to prepare a new sample and repeat the measurement. In the new instruments, the melting process is stored as a video. You can replay the video as many times as you wish. Figure 2 displays different images from such a video. The left image shows the sample as a white powder in the capillary at the beginning of a measurement. During melting, the sample consists of a mixture of liquid and solid crystals. The liquid forms a column with a meniscus at the top. This point is called the meniscus point and is shown in the middle image. Finally, as the temperature increases, the remaining crystals melt and the contents of the capillaries are completely clear. This is the case at point C in Figure 1 and at higher temperatures (right image).

Experimental details
An experiment begins with sample preparation. First, the dry substance is finely ground and filled into capillaries. The furnace of the instrument accommodates capillaries with diameters of up to 1.8 mm, which satisfies all current standards. Depending on the type of substance you are measuring, you may have to vary the filling level and possibly stopper the capillaries. The filling level is typically 2.5 to 5 mm and can easily be checked using the sample preparation tool (see Figure 3).

Figure 5. Method programming.

Figure 6. The online screen showing the melting range that was detected.

After switching on the instrument, the start screen appears on the display (Figure 4). The pharmacopeia operation mode is the default setting of the instrument. In this mode, the temperatures displayed refer to the furnace temperature, which is measured by a Pt100 temperature sensor. If thermodynamically correct melting temperatures are required, the instrument setting must be changed; these temperatures refer to the actual sample temperature. Most standards, however, are based on the pharmacopeia temperatures.

The next step is to define the temperature program (Figure 5). Under “Manual method”, you set the Operation mode, Start temperature, End temperature and Heating rate, insert the capillaries and start the measurement. In the Operation mode you can select Melting point, Melting range or Manual determination of two temperatures (by clicking the appropriate keys). The start temperature should be 3 to 5 °C lower than the expected start of melting. The heating rate is typically 1 °C/min. If necessary, you can program an isothermal wait time for equilibration before heating the sample and hold the end temperature isothermally for a certain time after heating is finished. You can store the method as a One Click™ shortcut key on the start screen by clicking “AddToHome”. This key allows the user to start the entire measurement sequence with just one click, which is very useful for repetitive measurements.

When an experiment is running, the “Online” screen appears. It displays the live video, the status, the remaining measurement time and the temperature. If you wish, you can display the intensity curve instead of the image information. You can also add comments about the measurement or details of the samples. The “Stop” key terminates a running measurement. The instrument signals the end of melting with an acoustic tone and simultaneously displays the measurement results (see Figure 6).
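The method parameters also determine how long a measurement takes, which is useful to know when planning a measurement series. A minimal sketch (function and variable names are illustrative, not instrument settings):

```python
# Rough estimate of the run time of a melting point method from its
# parameters: heating time plus optional isothermal wait and hold.

def run_time_min(start_temp, end_temp, heating_rate, wait_min=0.0, hold_min=0.0):
    """Total run time in minutes; temperatures in °C, rate in °C/min."""
    return wait_min + (end_temp - start_temp) / heating_rate + hold_min

# A typical method: start 3-5 °C below the expected melt, 1 °C/min ramp,
# with a 2 min isothermal wait for equilibration before heating.
print(run_time_min(start_temp=152.0, end_temp=162.0, heating_rate=1.0, wait_min=2.0))
# → 12.0 (minutes)
```

The slow 1 °C/min ramp dominates the run time, which is one reason the simultaneous measurement of several samples pays off.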

Measurements and results
A number of “difficult” substances were measured to assess how reliably the new instruments were able to determine melting points and melting ranges. The substances chosen for these studies were expected to cause problems because of their particular physical or chemical properties. They included colored substances, substances that sublime, and substances that melt with decomposition. Another interesting study dealt with substances that tend to form bubbles during melting and whether such substances can be reliably characterized. Other applications concerned the determination of the characteristic temperatures of thermochromic substances (i.e. substances that change color on heating), as well as the melting of polymers.

Colored substances
The test substance chosen was red solid potassium dichromate (K2Cr2O7). According to literature data, it should melt at 398 °C. The measurement was started at 395 °C at a heating rate of 1 °C/min (Figure 7).

Figure 7. The melting of potassium dichromate: at the beginning of the measurement; after the start of melting at 397 °C; completely molten at 399.5 °C. The experiments showed that potassium dichromate and other colored substances can easily be determined.

Substances that decompose
Sugar was chosen as the test substance. In practice, the controlled decomposition of sugar is used for the production of caramel desserts (caramelization). The measurement was started at 180 °C at a heating rate of 1 °C/min. The melting point was detected automatically at 186 °C (Figure 8).

Figure 8. Melting and decomposition of sugar: white crystals after the start of the measurement; beginning of decomposition at 186 °C, still solid but clearly brown; melting point detected in all samples at 186.3 °C. The melting process of substances that decompose can easily be measured.

Substances that sublime
Caffeine is well known as a substance that sublimes and tends to cause problems in melting point determination. The measurements showed that the new instruments can easily determine the melting point of caffeine. The melting point determined for a batch with a certified melting point of 235.5 °C was 235.6 °C (Figure 9).

Figure 9. Caffeine during melting at 235 °C.

Bubble formation
If the vanillin sample is not carefully prepared, bubbles often occur during the measurement and make reliable determination of the melting point difficult. Vanillin presented no problems with the new MP90 (Figure 10).

Figure 10. Vanillin after melting with bubbles in five of the six capillaries. Despite this, the automatic detection system functioned properly: the melting point at the threshold value (20%) was 83 °C.

Thermochromism
Thermochromism is the term used to describe the reversible change in the color of a substance as its temperature is changed. A good example that illustrates this phenomenon is mercuric iodide (HgI2). On heating, it changes from red to yellow during a solid-solid transition. The characteristic temperatures for the thermochromic transition are the first occurrence of yellow crystals and the disappearance of the last red crystals. These temperatures can be determined manually using the set keys.

The method consisted of heating from 120 °C to 160 °C at 5 °C/min. The start of the transition was determined as 142.3 °C and the end as 150 °C (Figure 11).

Figure 11. Thermochromism of mercuric iodide: the red crystals at 120 °C; the thermochromic transition at 148 °C; the substance completely yellow at 155 °C. The application shows that thermochromic solid-solid transitions can also be measured.

Polymers
Many homogeneous crystalline polymers become transparent at their melting point. In such cases, the new melting point systems can also be used to distinguish between polymers. As an example, two different polyethylenes, an LDPE and an HDPE, were prepared in capillaries of 1.75 mm internal diameter. The capillaries were heated at 5 °C/min from 95 °C (LDPE) and 125 °C (HDPE), respectively. The melting points determined in this way were 104 °C and 132 °C (Figure 12).

Figure 12. Measurement of LDPE and HDPE: LDPE rods prepared in capillaries; LDPE rods after melting, dimensionally stable but transparent; HDPE rods in capillaries; molten HDPE rods. Polymers can be qualitatively differentiated using the new melting point systems.

Liquid crystal transitions
Even liquid crystal transitions can be studied using the melting point system. Azoxydianisole is a good example. In a DSC experiment measured at a heating rate of 20 °C/min, azoxydianisole first exhibits a phase transition at about 118 °C and then undergoes a transition to an isotropic liquid at about 135 °C. In the new melting point system, measured at 1 °C/min, the transitions are observed visually as follows: the yellow powder melts at about 119 °C but the liquid remains cloudy. The contents of the capillaries do not become clear until about 134 °C. The intensity curve also shows a marked increase at this temperature (Figure 13).

Figure 13. Measurement of liquid crystal transitions: the initial situation, capillaries filled with azoxydianisole; at 119 °C the substance has become a cloudy liquid and individual crystals are no longer visible; at 134 °C the liquid is clear. The intensity curve shows a small increase at A, a small decrease at B and then a marked increase at C when the substance becomes clear at 132 °C. The DSC curve of azoxydianisole was measured at a heating rate of 20 °C/min.

Summary
The new melting point systems offer many advantages such as:
• High resolution touchscreen for optimal display of videos, intensity curves and results
• Ease of use including One Click™ shortcut keys
• Video recording in reflection coupled with well-proven automatic measurement in transmission
• High throughput analysis for the simultaneous measurement of up to four or six samples
• Manual setting of up to two points
• Automatic endpoint detection, pharmacopeia mode, and suitable capillary sizes that ensure compliance with all current standards
• Video replay on the instrument
• AVI video and experiment data export via SD card
• Statistical evaluation of the results of a measurement

These functionalities and many other features enable the user to quickly obtain valuable information about the melting behavior of different materials. Detailed IQ/OQ documents are available for all instruments. If required, a METTLER TOLEDO service engineer will help you with IQ/OQ.

Measurement of thin films in shear by DMA
Dr. Markus Schubnell

Thin films with a thickness of 50 to 200 µm are usually measured in tension in the DMA. They can, however, also be measured in shear if proper attention is paid to sample preparation and other factors. In this article, we present two examples to show how this is done.

Introduction
In the DMA, a sample undergoes periodic deformation. However, the force necessary to deform the sample acts not only on the sample but also on the sample holder. This means that the measured displacement amplitude is the sum of the deformation of the sample and the deformation of the sample holder. Ideally, the deformation of the sample holder should be negligible compared with the deformation of the sample. When thin samples (thickness
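This requirement can be quantified by treating sample and sample holder as springs in series: the measured stiffness then understates the true sample stiffness. The sketch below, with invented stiffness values, shows the size of the resulting systematic error:

```python
# Springs in series: 1/k_measured = 1/k_sample + 1/k_holder.
# Solving for the sample stiffness corrects the systematic error that
# appears when the sample is not much softer than the holder.

def corrected_sample_stiffness(k_measured, k_holder):
    """Recover the sample stiffness (N/m) from measured and holder stiffness."""
    return 1.0 / (1.0 / k_measured - 1.0 / k_holder)

k_holder = 1.0e7     # stiff sample holder, N/m (illustrative value)
k_measured = 9.0e5   # apparent stiffness from force/displacement, N/m

k_sample = corrected_sample_stiffness(k_measured, k_holder)
print(f"uncorrected error: {100 * (1 - k_measured / k_sample):.1f} %")
# → uncorrected error: 9.0 %
```

In general the relative error of the uncorrected value is k_measured/k_holder, so a sample stiffness of about a tenth of the holder stiffness already gives an error of roughly ten percent in the modulus.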
