Monday, June 7, 2021

Entry 1.1

HISTORY OF THE MEASUREMENT OF HEAT

I. THERMOMETRY AND CALORIMETRY

BROOKLYN COLLEGE

The years 1942 and 1943 mark important anniversaries in the history of thermal quantitative methodology. They celebrate the 350th year since the invention of the thermometer (c. 1592) and the 200th year of the scale of Celsius (1742). They commemorate the death of the inventor (Galileo, d. 1642), and the birth and death of two who improved the instrument (Newton, b. 1642 O.S., 1643 N.S.; Halley, d. 1742). These years also mark precisely a century since Mayer (1842), Joule (1843), and Colding (1843) established the principle of the conservation of energy, the greatest generalization which thermometry has made possible. The present time may therefore be regarded as peculiarly fitting for a review of the basic steps in the development of quantitative thermotics.

The importance of thermal phenomena had been remarked as early as the Hellenic period. Heat and cold were recognized by Democritus and Heraclitus as playing a vital part in the dynamic world of nature, and these qualities were intimately bound up with the Empedoclean theory of the four elements and with the Hippocratean humoral pathology. The classical expression given to such doctrines about a century later by Aristotle and his school continued to dominate scientific thought for close to two thousand years. Peripatetic science, however, remained essentially qualitative and failed to see the possibility of, or at least the tremendous importance of, a means of determining the extent to which the properties hot and cold were present in any given situation. Hellenistic science stands in marked contrast, in its attention to quantitative studies in astronomy and mechanics, both to the earlier Hellenic period and to the later Greco-Roman age. However, no basis for the measurement of thermal phenomena was found at that time. Heat could not be seen or weighed, and the physiological sensation was far too unreliable to serve as a measure. It is scarcely surprising, then, that the determination of specific heats came about two thousand years after the earliest measurements of specific gravity, and that the inverse proportionality of these for gases was discovered another hundred years later.

Nevertheless, a bold attempt in the direction of a quantitative study of heat was made in medicine by Galen through a classification of drugs and simples on the basis of a scale of four orders or degrees of heat and cold. Such an arrangement was bound to be highly subjective and dogmatically a priori, but it served to inspire further efforts toward a quantitative basis. In particular, the Galenic views were continued and elaborated upon by Arabic commentators, so that in the work of Alkindi (c. 850) one finds adumbrations of an important distinction: that between intensity and quantity of heat and cold. The decline of Arabic learning happily coincided, at least roughly, with the rise of a more vigorous scientific interest in the Latin world. The natural science of Aristotle was eagerly discussed by the Scholastic philosophers and, especially at Paris and Oxford during the four

a significantly new quantitative orientation. This tendency was most pronounced in two fields upon which the mathematical mind of Archimedes had failed to touch: dynamics and thermotics. In the former branch the late medieval period introduced two important concepts, that of impetus or inertia and that of acceleration, both uniform and non-uniform. That the age was somewhat less successful with respect to the study of heat may perhaps have been due to the fact that it studied the dynamic aspects of heat without first having mastered thermostatics. Richard Suiseth and others discussed changes in thermal intensity and content in much the same terms as they had linear velocities and accelerations; and Nicole Oresme represented such variables graphically.

During the fifteenth and sixteenth centuries these discussions were continued. Giovanni Marliani and his contemporaries and successors adopted a scale of eight degrees of calidity and frigidity, and on this basis sought to distinguish between the temperature of an object and the quantity of heat which it contained. However, speculation and logical deduction here remained relatively fruitless because no sound body of raw data had yet been gathered through precise quantitative observation. It is interesting to note in this connection that although during the medieval period it was suggested that heat might be a form of motion, anticipating Francis Bacon and others by some three hundred years, such an adumbration of the modern view could not take on appropriate significance without the empirical mensurational work which during the seventeenth and eighteenth centuries followed upon the invention of the thermometer.

The earliest forms of the thermometer appear to have been suggested by the sixteenth-century revival of interest in the classical mechanical works of antiquity, rather than through Scholastic philosophical discussion. The technological tendencies of the Greek world had been overshadowed to a great extent by the speculative tradition which medieval thought had sedulously fostered. Consequently the works of Archytas of Tarentum, of Philo of Byzantium, of Ctesibius and Hero of Alexandria survive now only in the fo

Porta was interested in Hero's Pneumatica chiefly as furnishing examples of "natural magic." Consequently in 1589 he described the experiments of Philo and Hero as illustrating the changing density of air; the idea of this as a measure of the degree of heat did not occur to him. To Galileo, however, such an interpretation did occur soon after he was established at Padua in 1592, just about three hundred and fifty years ago. Using Philo's arrangement, Galileo fastened a straight thin glass tube to a hollow glass ball about the size of a hen's egg. This he then held vertically with the open end of the tube in a flask of water. As the glass ball was warmed by the hand, air was driven out of the tube and bubbled up through the water. When the hand was removed, water rose in the tube as the enclosed air cooled and contracted. The level to which the water rose in the tube Galileo recognized as a rough indication of the extent to which the air had been heated.

Galileo was thus the first one to give a means of determining temperatures independently of the highly equivocal sensation of touch. His device is to be regarded as the earliest crude thermometer, the first objective means of describing thermal phenomena quantitatively. However, inasmuch as in this early form it lacked a definite scale and was subject to changes in atmospheric pressure, Galileo's instrument frequently is referred to as a baro-thermoscope. Galileo seems not to have appreciated his invention of the thermometer, his reference to it being quite casual. Consequently credit has sometimes gone to rival claimants, although it appears to be clear from Galileo's correspondence that he is definitely entitled to priority in this invention, the description of which has been supplied by his associates. During the eighteenth century the invention customarily was ascribed to Drebbel in 1608; but Drebbel's position was precisely that of Porta: he repeated the observations of Philo and Hero, but seems not at first to have used the instrument as a heat-measurer. Serious pretensions have been advanced also on behalf of the Paduan physician, Sanctorius, who in about 1612 described thermometers in connection with commentaries on the works of Galen and Avicenna. Sanctorius, however, never claimed the invention for himself, and it is possible that he learned of the instrument through his colleague Galileo. Independent invention has sometimes been ascribed also to Salomon de Caus in 1615, to Fra Paolo Sarpi in 1617, or to Robert Fludd even later, but such ascriptions lack adequate confirmation.*

Whereas the use of the telescope spread rapidly after its invention in 1608, application of the thermometer was by contrast surprisingly slow. This situation is probably to be explained by the fact that the former instrument was applied in qualitative description, whereas the latter was intended for quantitative determinations. Quantitative analysis is more difficult than qualitative, although it is also generally more valuable. To make the thermometer an effective measure of heat intensity a precise and objectively reproducible scale was necessary; but throughout the seventeenth

* While this paper was in proof, there appeared a valuable review of the claims of Galileo, Sanctorius, Fludd, and Drebbel by Dr. F. Sherwood Taylor in "The origin of the thermometer," Annals of Science, V (1942), 129-156.
Sherwood closes his excellent analysis with the following paragraph: "To sum up the whole position, it seems not improbable that Santorio, Galileo, Fludd and Drebbel each invented the thermometer independently. Galileo seems, at some period between 1592 and 1603, to have been the first inventor, while Santorio in 1611 gives the first written record of the invention, published or unpublished. Drebbel may have invented the two-bulbed thermometer at any date between 1598 and 1622. Fludd may have modified Philo's apparatus into the weather-glass, but did not do so until some period between 1617 and 1626."

century no such standard was adopted. Many of the early scales, including those of Telioux in 1611, of Mersenne in 1644, of Morin in 1661, and of Fabri in 1669, were divided into eight spaces, following the late medieval philosophical tradition. Sometimes these intervals were further subdivided into eight or sixty parts each, the latter in accordance with the Babylonian astronomical tradition. Astronomy and geometry undoubtedly led Galileo's friend Sagredo in 1615 to divide the interval between the greatest heat of summer and the extreme cold of winter into 360 parts or "degrees." The famous thermometers of the Florentine Accademia del Cimento were variously divided into fifty or one or more hundred parts. Otto von Guericke adopted a scale of seven degrees and Fludd one of fourteen. Renaldini and Newton used scales of twelve parts. As there was no uniformity during the seventeenth century with respect to scale divisions, so also no general agreement was reached as to desirable fixed points for determining the limits of the scale. Winter and summer heat, the temperature of a deep cellar, the melting point of butter or of anise-seed oil, the freezing and boiling points of water were among those proposed; but not one of these secured general approval.

During the century the form of the thermometer had changed considerably. Jean Rey in 1632 described a thermometer for fever patients in which a rise of temperature was indicated by the expansion of water in a flask up into a long thin neck. This liquid thermometer was followed by others, including the alcohol and mercury instruments of the Florentine Academy. The change from air to a liquid as the thermometric substance reduced the discrepancies due to atmospheric pressure, but did not wholly eliminate them. At some time before 1654, however, Ferdinand II and the Academicians sealed their thermometers and removed this source of error. Greater accuracy was now attainable, provided some standard method of calibration could be adopted. Boyle, Hooke, and Huygens in 1665 suggested that a single fixed point, such as the freezing or boiling point of water, be chosen as a starting point, and that temperatures above and below this be measured by the proportionate expansions and contractions of the thermometric substance. Adoption of this principle would have made thermometers universally comparable, but agreement could not at that time be reached. Shortly afterwards it was suggested by Fabri, Dalence, Renaldini, Newton, Halley, Roemer, and others that two fixed points would be preferable, the interval between these to be subdivided in some manner to be agreed upon. On the basis of these principles, using either one fixed point or two, the thermometric scales which we now use - Fahrenheit, Centigrade (or Celsius), Reaumur, and Absolute - were established during the first half of the eighteenth century.

The origin of the Fahrenheit scale is to be found in the work of Roemer. The Danish astronomer in calibrating thermometers set his zero at the lowest temperature he could obtain with a mixture of ice and salt; his upper point was the boiling point of water. On dividing the interval between these extremes into sixty parts, Roemer found that the freezing point of water fell at about 7½ or 8 and the temperature of the body at 22½. Fahrenheit in 1708 visited Roemer in Copenhagen and subsequently undertook the calibration of thermometers along similar lines.
As a maker of meteorological instruments - the thermometer was indeed at that time often referred to as a "weather-glass" - Fahrenheit was concerned primarily with the lower portion of Roemer's scale. He therefore retained Roemer's zero, but as his upper fixed

point he adopted normal body temperature. Moreover, he found Roemer's 22½ divisions between these points inadequate for precision, so that he multiplied the number by four. Subsequently he found it convenient to change from 90 to 96 the number of degrees in this range. With these modifications, as the result of which the freezing and boiling points of water incidentally fell at 32 and 212 respectively, the present Fahrenheit scale was established.

The origin of the Centigrade thermometer is not so clearly indicated. A scale of a hundred parts had appeared among those adopted by the Florentine Academy, and other centesimal thermometers were used in the first half of the eighteenth century by La Hire and Du Crest, but these were not associated with both the freezing and boiling points of water. On the other hand, Renaldini in 1694 had proposed these latter fixed points, but he subdivided the interval duodecimally. A suggestion that Renaldini's fiducial points be associated with a centesimal scale is contained in a letter of the great naturalist Linnaeus, but this is undated and so leaves unanswered the question of priority. Apparently the first thermometer constructed along those lines was that described in 1742 by Celsius. In this the freezing point was chosen as 100 and the boiling point as 0, but a few years later the scale was inverted by his colleagues to establish the present Centigrade scale.

In the period between the work of Fahrenheit and that of Celsius there arose a third scale which also achieved wide popularity. In 1730-1731 Reaumur proposed a thermometer established on the principle of Boyle, Hooke, and Huygens. Starting from only one fixed point, the freezing point of water, he chose his divisions on the basis of the volumetric expansion of the thermometric substance: one degree for each increase by 1/1000 of the original volume of alcohol. On this scale water was found to boil at 80°. However, because of the varying quality of thermometric spirits, this boiling point subsequently was adopted as an arbitrary and invariable second fixed point, thus standardizing the scale for so-called Reaumur thermometers.

Interest on the part of both Fahrenheit and Reaumur had been influenced by the earlier works of Amontons, to whom is due the idea of an absolute scale of temperature. Amontons had been led to thermometry through meteorology and the problem of varying atmospheric pressure, but in emphasis he departed from the traditional view. The air thermometer had been the first to be developed, but it had soon given way to sealed liquid thermometers. Liquids, unfortunately, have not only very small rates of thermal expansion, but these rates are unequal for different substances and are not uniform for any one fluid over different temperature ranges. The same is true also of solids, the unequal expansions of which were used in 1747 by Musschenbroek to construct a new type of thermometer or pyrometer. About 1701, on the other hand, Amontons had discovered that the thermal expansion of air is surprisingly uniform. He found that if a fixed volume of air at any initial pressure is heated from a moderate temperature to the boiling point of water, the pressure will in every case be increased by about one third. From this fact he inferred that for equal increments or decrements in heat or temperature the pressure of a gas will be increased or decreased by the same fraction of the pressure at some arbitrary point.
He therefore suggested a scale based on one fixed point, the boiling point of water, with degrees of heat intensity to be measured in terms of the proportionate increase or decrease in the pressure of a given volume of air at this initial temperature. Thus Amontons found that a pressure of 73 units at the boiling point corresponded to one of 58 units at greatest summer heat and to one between 51 and 52 units at the freezing point. He then made the significant observation that by extrapolation below this point one could infer that at the zero temperature of this scale the air would exert no pressure; it would have no elasticity because its parts would then be contiguous and cease to move. He suggested that this might well be regarded as an absolute zero of heat content or intensity. However, scientists at the time were skeptical of his conclusions, and this suggestion of an absolute thermometric scale remained largely unnoticed. Late in the century and early in the next Amontons' observation on air was rediscovered and generalized for other gases by Lambert (1779), Charles (1787), Volta (1793), Gay-Lussac (1801), and Dalton (1802). Toward the middle of the nineteenth century this work on gases was associated by Kelvin and Clausius with independent developments in thermodynamics which also pointed to the same absolute zero, and hence temperatures measured from this point (by means of the adjusted Centigrade system) often are referred to as degrees Kelvin or Absolute.
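The scales just described are related by simple linear maps, and Amontons' readings already imply an absolute zero when extrapolated to vanishing pressure. The short Python sketch below (my own illustration, not part of the article; the function names are mine, and the 51.5-unit figure simply splits the quoted range of 51 to 52) makes both points concrete.

```python
# Linear relations among the scales discussed above, anchored at the freezing
# and boiling points of water (0/100 C, 32/212 F, 0/80 Reaumur, 273.15/373.15 K).
def c_to_fahrenheit(c): return c * 9 / 5 + 32
def c_to_reaumur(c):    return c * 4 / 5
def c_to_kelvin(c):     return c + 273.15

for t in (0, 37, 100):
    print(t, c_to_fahrenheit(t), c_to_reaumur(t), c_to_kelvin(t))

# Amontons' extrapolation: about 73 pressure units at the boiling point (100 C)
# and roughly 51.5 units at the freezing point (0 C); find where pressure vanishes.
p_boil, p_freeze = 73.0, 51.5
slope = (p_boil - p_freeze) / 100.0     # pressure units per Centigrade degree
t_zero = -p_freeze / slope              # temperature at which the line reaches p = 0
print(f"extrapolated absolute zero: {t_zero:.0f} C")   # about -240 C, near the modern -273
```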

The establishment in the early eighteenth century of adequate thermometric scales gave precision to the idea of heat intensity. The problem of heat quantity, on the other hand, had received no satisfactory consideration. The recognition of the constancy of fixed points, on which thermometry is based, preceded by about a century the determination of heat capacities upon which calorimetry depends. Arabic and Scholastic philosophers were aware that thermal effects are determined by both intensity and quantity of heat and cold. The latter factor, they knew, was to some extent dependent upon the quantity or mass of the hot and cold bodies. They accepted this functional relationship as a simple proportionality. Renaldini, Fahrenheit, Boerhaave, and others did indeed establish experimentally that, when unequal quantities of the same substance at different temperatures are mixed, the rise or fall in temperature is very nearly proportional to the masses involved and to the difference in temperature. In the case of two unlike substances, however, this rule failed to hold. Mercury, for example, had far less thermal effect at a given temperature than did an equal mass of water. In fact Fahrenheit, following a suggestion of Boerhaave, had found on mixing equal

of materiality had been impressed with such thoroughness on the eighteenth century that Boerhaave was led to attempt to weigh heat, for gravity is one of the chief properties of matter. Moreover, the fact that metals increase in weight during calcination tended to confirm the suspicion that heat was a gravitating substance. The results of Boerhaave's experiments, however, were distinctly negative, and he was forced to conclude that heat was a material sui generis having no weight. Musschenbroek and Buffon questioned this conclusion, and the latter insisted that he could indeed associate an increase in weight with a rise in temperature. However, Black, Rumford, and others were not convinced by Buffon's results, and heat remained throughout the century among the imponderables.

The absolute quantity of heat could not be determined by the balance, but successful attacks upon the problem of relative quantities of heat were nevertheless made independently by a number of men, with credit for priority apparently going to Black. He arrived at his results shortly before 1760, although they were not published during his lifetime. The result of Fahrenheit and Boerhaave on mixing water and mercury impressed Black as having a significance which, surprisingly, these men had overlooked. Rather than indicating roughly the uniform distribution of heat in space, Black saw that the experiment showed clearly that different substances have characteristically different capacities for heat. The capacity of mercury, for example, he found to be less by about 30% than that of an equal volume of water. Experimental determinations of the capacities of substances relative to that of water were made also at somewhat the same time by Deluc, Wilcke, Irvine, Crawford, Lambert, Watt, and others. Such values, when equal masses are compared, have since become known as specific heats.

This work inaugurated what may be regarded as a second great branch of quantitative thermotics. It is a surprising fact that thermometry, or the determination of heat intensities, had developed for more than a century and a half before the effective rise of calorimetry, or the measurement of relative thermal content or capacity. This is difficult to explain inasmuch as no new instrument was necessary in the latter case. A balance and a thermometer suffice to measure the relative heating effects. The method of mixtures long before had been used by Renaldini in connection with quantities of a single substance at different temperatures to determine the degrees on a thermometric scale. Calorimetry would follow as a simple corollary on mixing quantities of two different substances at unequal temperatures. Yet this was not systematically developed until it had been

science than are general theories as to the nature of things. Such at least was true of the science of heat in Black's day. Black in 1757 had found reason to question the traditional view with respect to change of state. He saw that if no great change in heat content were necessary to bring about a change of state, then it would be truly remarkable that ice melts so slowly in warm surroundings. Great quantities of snow and ice on thawing should rather be expected, through sudden liquefaction, to produce irresistible torrents and inundations. Black concluded that, contemporary opinion notwithstanding, a great increase in the quantity of heat must be brought about to give melting ice its fluidity, even though this is not accompanied by any rise in temperature. The added heat was merely "rendered latent." Moreover, for any one substance and change of state, this latent heat was a perfectly definite quantity directly proportional to the mass involved. In view of this, Black thought of the melting process as a sort of chemical reaction: a mass of ice at 32° when combined with 139 degrees [Fahrenheit] of heat yields an equal mass of water at this same temperature. His figure for the latent heat of fusion thus differed but slightly from the modern value. For the latent heat of vaporization, however, Black arrived at "not less than 774 degrees [Fahrenheit]," which is smaller by almost 20 per cent. than that accepted at the present time.

Black utilized his discovery and determination of latent heats as an alternative method of determining relative quantities of heat or of thermal capacities. If a hot body is placed in a cavity in a block of ice which is then covered with a slab of ice, the quantity of heat lost by the body in cooling to the freezing point of water will be directly proportional to the mass of ice which is melted. So convenient was this method of determining heat quantities, when used in connection with improved ice calorimeters, that the amount of heat required to melt a unit weight of ice in many cases was taken as the unit of heat, replaced now by the calorie and British thermal unit.
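Black's ice-calorimeter reasoning reduces to a short computation. A hypothetical worked example in Python follows (my own sketch, not from the article; the sample masses are invented for illustration, and the latent heat is taken at the modern value of about 144 Btu per pound rather than Black's 139-140).

```python
# Ice-calorimeter estimate of a specific heat (British units).
# Heat given up by the hot body = latent heat of fusion * mass of ice melted.
LATENT_HEAT_FUSION = 144.0    # Btu per pound of ice (modern value)

mass_sample = 1.0             # pound of the hot body (illustrative)
t_initial = 212.0             # deg F, sample starts at the boiling point of water
t_final = 32.0                # deg F, it ends at the temperature of the melting ice
mass_ice_melted = 0.041       # pounds of ice melted (illustrative measurement)

heat_released = LATENT_HEAT_FUSION * mass_ice_melted
specific_heat = heat_released / (mass_sample * (t_initial - t_final))
print(f"specific heat of the sample: {specific_heat:.3f} Btu/(lb*degF)")
# about 0.033, roughly the modern specific heat of mercury
```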

The calorimetric researches carried out by Black and others fortunately were not subject to qualifications which might follow from a particular theory as to the nature of heat. This work answered only the question "How much," not "How" or "Why" in some mechanistic sense which might "explain" the phenomena of heat by appealing to analogies with other more immediate and familiar experiences. Black himself was never a lover of theory and so seems to have felt a definite reluctance to adopt any specific doctrine in this respect. By that time the Aristotelian view of heat and cold as primary and unanalyzable qualities had been abandoned

materialistic and the dynamic or mechanical theories of heat. Adherents of the former had the ready answer, suggested by optical, gravitational, magnetic, and electrical phenomena, that heat was an imponderable substance. Moreover, Newtonian influence favored a view which could be expressed in terms of attractive and repulsive forces between particles. Boerhaave's fluid theory therefore dominated thought for over a century. When in 1738 the Academie des Sciences offered a prize for an essay on the nature of heat, the three winners (Euler, Voltaire, and the Marquise du Chatelet) all postulated the substantial theory. This view of heat adequately satisfied the craving for an interpretation which could be visualized in terms of sensory experience. Moreover, it was flexible enough to allow of modifications ad hoc to explain such phenomena as elasticity, change of state, modes of communication, thermal expansion, heat capacity, heat of compression, latent heat, and solar radiation.

It was generally assumed that the caloric particles were in constant motion, that they repelled each other, and that they were attracted to the atoms of a substance with a force which varied with the heat capacity of the material. During compression, or on rubbing substances, some of the caloric of the body was squeezed out, thus causing the body to become sensibly hot. Conversely, the intrusion of more heat into a body resulted in a greater internal repulsion among the caloric particles and hence resulted in an expansion of the substance. A change of state could be brought about by injecting heat in such amount that the attractive bonds of the atoms of the substance were overcome by the repulsive forces of the caloric particles for each other. The additional heat necessary to overcome these atomic forces was not free but was in some way bound up with the substance; i.e., it was latent and produced no sensible increase in the temperature of the substance.

The idea that heat was material was rendered plausible also by the confusion between ordinary sensible heat and radiant energy. Solar radiation was regarded simply as a steady stream of caloric particles, a view which in its simplicity contrasted markedly with the need on the part of dynamic theories of heat and light for a supposititious all-pervading medium or ether possessing quite extraordinary properties. In view of such a ready adaptability to all situations, it is small wonder that the substantial doctrine of heat persisted up to the middle of the nineteenth century. Fortunately, however, quantitative experimental work in thermotics meanwhile was hampered little, if at all, by notions as to the ultimate nature of heat. Indeed, a certain indifference toward such speculations was evinced not only by

was enthroned among the chemical elements as Lavoisier's "caloric," and many years later Laplace in his Mecanique celeste continued to support the material theory. Laplace and Lavoisier failed to forge the quantitative link between heat and motion, but they did make significant contributions in the quantitative correlation of the chemical and biological aspects of thermal phenomena. The phlogiston theory had made thermal phenomena so completely a part of chemistry that the latter subject was known as the science of heat and mixture. Lavoisier was directly responsible for the overthrow of phlogiston through the substitution of the oxygen theory of combustion and respiration, but he retained a "chemical" view of heat. This view may, incidentally, account for the myopia with respect to a quantified mechanical theory. The caloric doctrine appealed more strongly to men who were keenly aware of the need for quantitative statements. It was natural, then, that an attempt should be made to measure the amount of caloric which is evolved during the chemical process of combustion. Laplace and Lavoisier burned charcoal in their improved ice calorimeter and determined that in this oxidation the production of one ounce of fixed air (carbon dioxide) from food and pure air (oxygen) was accompanied by heat sufficient to melt 26.692 ounces of ice. Inasmuch as there was at the time no concept of energy or, a fortiori, of chemical energy, Lavoisier believed that during combustion some of the heat which had been combined with the oxygen principle in the pure air was liberated as sensible heat.

Ever since the days of classical Greek medicine the lungs had been regarded as playing a thermostatic role in tempering the vital heat of the blood. However, Lavoisier held that the function of respiration is to supply to the lungs the newly discovered element oxygen, which there combines chemically with the products of digestion to maintain the heat of the body. To show that this oxidation is entirely comparable to the ordinary visible process of combustion, Laplace and Lavoisier sought to determine calorimetrically the quantity of heat generated in animals during the formation of carbon dioxide. Because their method failed to take into account the oxidation of hydrogen, the result was too high - about 13.7 ounces of ice for 224 grains of fixed air - but it was sufficiently close to the expected result to indicate that respiration is a combustion during which the heat lost by the body is renewed through the conversion of oxygen to carbon dioxide. Animal heat was shown to be not perceptibly different from caloric. This was a vindication of that faith in the unity of nature which had been expressed boldly by Buffon and which later inspired the discovery of the conservation of energy. Moreover, it made possible for the first time an understanding of that color contrast between arterial and venous blood which sixty years later directed Mayer to this very law which Laplace and Lavoisier so narrowly missed.

The collaboration of Lavoisier and Laplace on the specific heats of gases had failed to yield satisfactory results, yet such efforts also led directly toward Mayer's work. The basic law of thermometry for air (and for other gases) had been established by Amontons at the beginning of the eighteenth century, but the calorimetry of gases was undeveloped a hundred years later.
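A quick modern check (my own calculation, not in the article) shows how good the charcoal figure was; the latent heat of ice and the molar enthalpy of combustion used below are present-day values.

```python
# Back-of-the-envelope check of Laplace and Lavoisier's charcoal figure:
# 26.692 ounces of ice melted per ounce of fixed air (CO2) produced.
# Because both quantities are masses, the ratio is unit-free.
LATENT_HEAT_ICE = 333.55          # J per gram of ice (modern value)
ice_per_co2 = 26.692              # grams of ice melted per gram of CO2

heat_per_gram_co2 = ice_per_co2 * LATENT_HEAT_ICE   # J per gram of CO2
modern_value = 393_500 / 44.01                      # J per gram of CO2, from 393.5 kJ/mol

print(f"Lavoisier-Laplace: {heat_per_gram_co2:.0f} J per gram of CO2")
print(f"modern combustion of carbon: {modern_value:.0f} J per gram of CO2")
# The two figures agree to well under one per cent.
```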
The elasticity and low specific gravities of gases delayed the determination of their specific heats long after they were isolated and identified. French scientists of the first half of the nineteenth century devoted much attention to this problem before arriving at a satisfactory solution. After Gay-Lussac had rediscovered and generalized the law of Amontons, he turned his attention to the thermal capacities of gases. Ten years later he still lacked accurate values for their specific heats, but he had made the important discovery that the heat capacities of equal volumes of air, hydrogen, oxygen, and nitrogen were nearly equal. The following year Delaroche and Berard verified this fact through the first reasonably accurate direct measurement of the specific heats of gases at constant pressure. The determination of specific heats at constant volume nevertheless still presented difficulties.

In 1816 interest and attention to this problem were heightened by a bold conjecture on the part of Laplace. For well over a century no one had been able to explain why the velocity of sound as calculated by Newton from the elasticity of the air should be smaller by about 1/6 than the observed speed. Laplace finally hit upon the correct explanation: the vibrations constituting sound waves are so rapid that the compressions and rarefactions are not isothermal, as Newton had supposed, but adiabatic. On the basis of such vibrations Laplace was able to show that Newton's calculated velocity of 997 feet per second is corrected on multiplying it by √γ, where γ is the ratio of the specific heat of air at constant pressure to the specific heat at constant volume. Laplace estimated γ as 3/2, but the known velocity of sound showed that γ should be about 1.4. This latter figure was confirmed somewhat later by the values of γ and of the specific heats of diatomic gases obtained by Clement and Desormes, Gay-Lussac, Dulong, Regnault, and others.
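In modern notation Laplace's correction, with γ written as the ratio of the specific heat at constant pressure to that at constant volume, reads as follows (the 997 ft/s figure is the one quoted above for Newton's isothermal value):

```latex
v_{\text{adiabatic}} = v_{\text{Newton}}\sqrt{\gamma}, \qquad
\gamma = \frac{c_p}{c_v} \approx 1.4, \qquad
997~\text{ft/s} \times \sqrt{1.4} \approx 1180~\text{ft/s},
```

which removes most of the one-sixth discrepancy mentioned above.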

Such data, through the kinetic theory of gases, confirmed the discovery of Dulong and Petit in 1819 that heat capacities are directly associated with atomic theory. Moreover, during the second quarter of the century this work was destined, through the establishment of the law of the conservation of energy, to play a central role in the rise of the theory of thermodynamics, which was the third stage in the development of quantitative thermotics.

Saturday, April 10, 2021

wiki




Ionic liquids and their toxicity on enzymatic activity

ACXEL ANDRES AVILA HERRERA

209115004

TEORÍAS FÍSICAS II

Instructor in charge

Carlos Hernán Valencia Guzmán

Institutional email

chvalenciag@pedagogica.edu.co

Universidad Pedagógica Nacional

Bogotá D.C.

Personal email

axcelavila@gmail.com

Institutional email

aaavilah@upn.edu.co

Until a few years ago, the chemical industry did not include the use of biological systems in its plans; however, the serious environmental problems that began to appear prompted the search for clean alternatives to the classical methodologies. Thus, processes involving biocatalysis began to play an important role, not only because of the biodegradability of the biocatalysts, but also because less energy is consumed, few side products are generated, and the residues are less toxic. It is not always simple to explain what biocatalysis is from that starting point. It may be easier to grasp its scope and implications by bearing in mind that without biocatalysis - the use of natural catalysts in chemical processes - things as basic as beer and cheese would not exist. But biocatalysis becomes vitally important, and its value grows, in the research, development, and production of active pharmaceutical ingredients, chemical products, environmental solutions, and even in the cosmetics industry. Biocatalysts are an integral part of many biotechnological applications, even when they do not appear to be. (Lewkowicz, 2017)

From the times when the ancient Chinese and Japanese made alcoholic beverages and soy-derived foods up to the nineteenth century there were no great advances. Around 400 B.C., Homer's Iliad describes the production of cheese by stirring milk with a stick taken from a fig tree (this action releases a protease, ficin, which coagulates the milk). Milk was stored in bags made from the stomachs of recently slaughtered calves and turned into a semi-solid substance which, when pressed, yielded a drier material that was more convenient to store: cheese. In 1814 it was shown that an infusion of oats was capable of producing a fermentable sugar from milk. Around that time Gay-Lussac put forward one of the most widely accepted theories, which included the spontaneous formation of organisms: the "Generatio Spontanea." But its reign was rather short, since in 1862 Pasteur demolished all such hypotheses by demonstrating that fermentations always depended on an inoculation of microorganisms. In parallel, Wagner described in 1857 two distinct types of ferments, one formed by organized (clearly living) bodies and another, unorganized, of protein composition, "as in a decomposing body." And it was Kühne in 1878 who gave this second group of ferments its name: enzymes. (Lewkowicz, 2017)

Although this process of studying the oxidation or reduction of ions and anions in a reaction may not seem entirely believable, understanding these reactions is fundamental, because they explain, for example, how a beverage changes its composition. In the case of wine, once it is exposed to oxygen it tends to spoil, its fermentation products turning into acetic acid. The ion that protects the wine and lets its fermentation remain stable is the sulfite ion, SO3 with two negative charges, which acts as a reducing agent: it is itself oxidized to the sulfate ion, SO4 with two negative charges, consuming the oxygen before the wine can be spoiled. This process helps assure the durability of the product.
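A minimal way to write the step just described is the standard oxidation half-reaction of sulfite to sulfate (a textbook equation added here for clarity; it is not part of the original post):

```latex
\mathrm{SO_3^{2-} + H_2O \;\longrightarrow\; SO_4^{2-} + 2\,H^{+} + 2\,e^{-}}
```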

Some 86% of the energy and 96% of the organic chemicals consumed are derived from fossil resources: coal, oil, and natural gas. The fundamental objective of this work is to review the use of ionic liquids in the processes carried out in biorefineries.

To begin, the available literature concerning biomass resources and biorefineries has been reviewed.

Biomass constitutes a cheap, renewable raw material that is available on a global scale. It includes any organic matter available on a recurring basis, such as energy crops and trees, food crops and their residues, aquatic plants, animal wastes, and other residual materials. The main components of biomass fall into five principal categories: starch, cellulose, hemicellulose, lignin, and oils. There are also other components present in smaller amounts, known as secondary metabolites. (Hernández, 2017)

 

Image taken from: 2.1. Concepto de biocatálisis (juntadeandalucia.es)

 

The aqueous solution as the traditional environment of proteins and enzymes. Water is often considered the best solvent for enzymatic reactions. The interactions between an enzyme molecule and the surrounding water (hydration) are of crucial importance for enzymatic catalysis. Water acts as a lubricant or plasticizer that allows enzymes to exhibit the conformational mobility required for optimal catalysis. For example, the hydrophobic interactions that result from the peculiar structure of water near hydrophobic amino acids provide the thermodynamic stability that underlies their extensive uses in the detergent and dairy industries (Sawant and Nagendran, 2014). The second largest group of industrial enzymes comprises various carbohydrases, mainly amylases and cellulases, applied in the starch, textile, detergent, and baking industries (Contesini et al., 2013; Sundarram and Murthy, 2014). Over the last decades, the use of biocatalysts for organic synthesis has become an increasingly attractive alternative to conventional chemical approaches. Biocatalysts work under mild conditions and minimize undesired side reactions such as decomposition, isomerization, racemization, and rearrangement. Although water is known as the conventional medium for enzymatic reactions, and enzymes require a certain level of water in their structures to maintain their native conformation, the applications of enzymes in organic synthesis are restricted by some disadvantages, such as the limited aqueous solubility of organic substrates (Carrea and Riva, 2002; Stepankova et al., 2013). Several methods have been established to increase the solubility of organic compounds in enzyme-catalyzed organic reactions, such as the use of surfactants, substitution, and derivatization. Some approaches based on organic solvents were widely adopted to increase the solubility of lipophilic substrates, including the use of mixtures of water and water-miscible organic solvents, and biphasic systems consisting of water and water-immiscible organic solvents. However, most of the solvents on the industrial market raise environmental and health concerns because of toxicity related to hydrophobicity, expressed as the logarithm of the partition coefficient of the solvent between octanol and water (Table 1) (Leo et al., 1975; Laane et al., 1987; Quijano et al., 2011). Hence the development of green technologies devoted to the design of ionic liquids as a promising alternative. (Mehdi Mogharabi-Manzari, 2017)
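For reference, the hydrophobicity measure mentioned here, log P, is simply the base-10 logarithm of the octanol-water partition coefficient of the solvent (standard definition, added for clarity):

```latex
\log P \;=\; \log_{10}\!\left(\frac{[\text{solvent}]_{\text{octanol}}}{[\text{solvent}]_{\text{water}}}\right)
```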

 

Thinking about protecting the environment and developing natural energy sources that replace fossil fuels and reduce pollution will help future generations live much longer and, moreover, develop in a safe environment free of toxic residues.

The best way to develop this kind of study and these skills without having to pollute or degrade our natural environment lies in virtual work. Although this idea is already well developed in our time, it is important to keep it very much in mind, since virtual laboratories are what will allow us to carry out practical work and acquire knowledge without wasting unnecessary material or generating unwanted pollution.

The articles consulted completed the information needed to make the reader understand how risky it can be for our environment and our species not to change the way we learn, which should proceed from theory, to virtual practice, to real practice.


Sunday, September 27, 2020

Entry 1: Incompatible Connections



INCOMPATIBLE CONNECTIONS


To begin, there is something I have noticed all my life, ever since I have been able to reason, even before studying anything related to science (my own background is in chemistry). In this paragraph I will try to tell you why I think there is a development problem in both science and society, since even you, as the receiver of my information, have witnessed certain shortcomings that society has regarding scientific knowledge, whether mathematical, chemical, physical, biological, or otherwise. These last few days I have thought a great deal about why so many people have only minimal knowledge of science, when practically everything that develops in our environment arises through it.


Taken from: https://trucoslondres.com/aprender-ingles/vocabulario/tecnologia-ingles/

Below is a video intended to give you a more concrete idea of what this blog is about.

Wednesday, September 23, 2020

Entry 2: Society


Society

We will begin the explanation of this topic by talking a little about the capitalist society in which we find ourselves, which after decades restructures itself: it changes its world view, its basic values, its political and social structure, its arts, and its key institutions. Years later there is a new world, and those born into it can hardly imagine the world in which their grandparents lived and into which their parents were born. We keep in mind that technology is a tool in constant development, and before touching on such topics I consider that you should know a little more about the social structure in which we live, since technology is something society cannot ignore today, given the advances we can observe.

The historical process of society is quite dense and our knowledge of it here is only preliminary. We know that the Industrial Revolution unleashed a process of technological development of great importance in history, together with the French Revolution, which changed the whole structure of the state. The way was opened to a whole series of technological advances thanks to what we know as science, which in turn became indispensable in anyone's life. So much so that monopolies developed in countries that managed to saturate themselves with production and, thanks to technological development, had the means to keep consuming ever more raw material. This led to the rise of the powers that today we know as "the imperialist countries," which control scientific and technological development. Understanding this, we see that scientific development is in the hands of certain people rather than of the population in general, causing a rupture between society and science as something society fully understands.

Taken from: https://www.definicion.xyz/2017/11/sociedad-y-sus-tipos.html

Taken from: https://www.caracteristicas.co/sociedad/

In the following video we can see an analysis of society and a certain relationship with the technology provided by science. It is important to take in all these definitions for you as a reader of my information, which tries to give you a perspective on a failure of science in relation to society. We know it is not an easy subject to take on, because it encompasses many failings, but we must keep very clearly in mind what society is in terms of development in order to understand what happens when people accept the material things science provides but reject the knowledge it offers.

Taken from: https://www.youtube.com/watch?v=jrgOM8dARe8

Bibliography

Drucker, P. (n.d.). La sociedad poscapitalista.

 

Sunday, September 20, 2020

Entry 3: Technology



Technology


Understanding technology as an application developed by science is a more workable way of grasping science as technology that is with every person in the world all the time; yet what people handle is only the physical object, not the knowledge behind this entire scientific construction. In fact, what has just been described is the central theme of this blog, and a video is attached that explains how technology has been developing as the application of science to the solution of concrete problems.

Frequently, when the media talk about the influence of technology on our lives, they refer to new technologies or high technology. We ourselves, on hearing the word technology, tend to think of the latest computers, spaceships, artificial satellites, high-voltage grids, power plants, great machines…

Technology is usually associated with modernity, but in reality technological activity, the curiosity to modify our surroundings in order to improve our living conditions, is as old as humanity itself. (xunta, n.d.)

However, the most domestic and everyday objects are also technological products: books, the clothes we wear, or ballpoint pens have not always been there; they arose from a discovery or an invention at a particular moment in history. They too were, in their day, cutting-edge technology.




Bibliography

xunta. (n.d.). edu.xunta. Retrieved from: https://www.edu.xunta.gal/espazoAbalar/sites/espazoAbalar/files/datos/1464945204/contido/1_la_tecnologa.html

 

Entry 4: Science



Science

Let us refer to what we call science as a concept of study for developing and discovering human rationality. Starting from this concept, in the following paragraph you will find a more concrete definition of what science is, along with a video that tells us a little about the history of its development, and an image on the subject.

Science is the set of organized, hierarchized, and verifiable knowledge obtained from the observation of the natural and social phenomena of reality (both natural and human), and also from the experimentation and empirical demonstration of the interpretations we give them. This knowledge, moreover, is recorded and serves as a basis for future generations. Thus science feeds on itself, questions itself, refines itself, and accumulates over time. The concept of science contains different kinds of knowledge, techniques, theories, and institutions. All of this, in principle, has as its objective to discover the fundamental laws that govern reality, how they do so, and, if possible, why.


Source: https://concepto.de/ciencia/#ixzz6ZM4ZBUMH

A brief history of science

Characteristics of science

In all its complexity, science is characterized by the following:

  • It aspires to discover the laws that govern the universe around us through rational, empirical, demonstrable, and universal methods. In that sense it values objectivity and methodical rigor, and distances itself from subjectivity.
  • It analyzes its objects of study both quantitatively and qualitatively, although it does not always resort to experimental models of verification (depending on the subject matter).
  • It generates a substantial amount of specialized knowledge that must be questioned and then validated by the scientific community itself before being accepted as true or valid.
  • It is made up of a considerable number of branches or specialized fields of knowledge that study natural, formal, or social phenomena and that together form a unified whole.

Source: https://concepto.de/ciencia/#ixzz6ZM9Yl4nK

Taken from: https://www.insights.la/2017/03/21/arte-la-ciencia-detras-los-gif/

Saturday, September 19, 2020

Entry 5: Relationship


 
Relationship (Science and society)


 

Taken from: https://www.emaze.com/@AQZRZTLT


The title (Relationship) of this new page of the blog tries to present a perspective on what has been observed about the development of science in relation to society. If you read the previous pages of this blog, you will know that we spoke separately about society, technology, and science, and that we are developing an idea connecting science and society. The question would be: why talk about technology and devote a whole page of the site to it? The answer is fairly plain: we know that technology is developed by science and, as an object, it is the part of science closest to society.

Taken from: https://www.izaro.com/una-nueva-cultura-de-relacion-hombre-tecnologia/c-1437134180/

Education shows students how complex science can be in certain respects. As an observer of what happens day by day in my surroundings, I have noticed a lack of interest in science (chemistry, physics, technology, biology, and others) in roughly 60% of students. Since we know the importance of these scientific matters, the question would be: why does this happen? If we try to answer it we find a number of factors. For example, a person who works every day and has the obligation to support a family - at what moment could that person worry about and question the scientific matters already mentioned? They could not, right? All their focus is on helping their family, and the idea of questioning certain things happening today never crosses their mind. Could it be a structural problem of the system that governs us? If so, we could say that these scientific shortcomings in people exist not because people want them to, but because there is a system that forces us to give importance to other things, such as putting family first.