The geothermal gradient varies with location and is typically measured by determining the bottom open-hole temperature after borehole drilling. To obtain an accurate value, the drilling fluid needs time to reach the ambient temperature of the surrounding rock; this is not always achievable for practical reasons.
In stable tectonic areas in the tropics, a temperature-depth plot will converge to the annual average surface temperature. However, in areas where deep permafrost developed during the Pleistocene, a low-temperature anomaly can be observed that persists down to several hundred metres.[14] The Suwałki cold anomaly in Poland has led to the recognition that similar thermal disturbances related to Pleistocene-Holocene climatic changes are recorded in boreholes throughout Poland, as well as in Alaska, northern Canada, and Siberia.
In areas of Holocene uplift and erosion (Fig. 1) the initial gradient will be higher than the average until it reaches an inflection point where it joins the stabilized heat-flow regime. If the gradient of the stabilized regime is projected above the inflection point to its intersection with the present-day annual average temperature, the height of this intersection above present-day surface level gives a measure of the extent of Holocene uplift and erosion. In areas of Holocene subsidence and deposition (Fig. 2) the initial gradient will be lower than the average until it reaches an inflection point where it joins the stabilized heat-flow regime.
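As a minimal sketch of how that projection works (all numbers below are hypothetical, not from the source), the stabilized gradient can be extrapolated upward and intersected with the present-day mean surface temperature:

# Illustrative sketch (hypothetical numbers): estimating Holocene uplift/erosion
# by projecting the stabilized gradient above the inflection point.
T_surface_mean = 10.0   # present-day annual average surface temperature, degC (assumed)
z_inflection   = 500.0  # depth of the inflection point, m (assumed)
T_inflection   = 27.5   # temperature at the inflection point, degC (assumed)
g_stab         = 0.025  # stabilized geothermal gradient, degC per metre (assumed)

# Project the stabilized line T(z) = T_inflection + g_stab * (z - z_inflection)
# upward and find the depth where it equals the present-day mean surface temperature.
z_intersect = z_inflection - (T_inflection - T_surface_mean) / g_stab
if z_intersect < 0:
    print(f"Intersection lies {-z_intersect:.0f} m above the present surface "
          "-> roughly that much section removed by uplift and erosion")
else:
    print(f"Intersection lies {z_intersect:.0f} m below the surface -> no erosion signal")

With these assumed values the intersection falls about 200 m above today's surface, i.e. roughly 200 m of uplift and erosion.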
In deep boreholes, the temperature of the rock below the inflection point generally increases with depth at rates of the order of 20 K/km or more.[citation needed] Fourier's law of heat flow applied to the Earth gives q = Mg, where q is the heat flux at a point on the Earth's surface, M the thermal conductivity of the rocks there, and g the measured geothermal gradient. A representative value for the thermal conductivity of granitic rocks is M = 3.0 W/(m·K). Hence, using the global average geothermal gradient of 0.02 K/m, we get q = 0.06 W/m². This estimate, corroborated by thousands of heat-flow observations in boreholes all over the world, gives a global average of 6×10⁻² W/m². Thus, if the geothermal heat flow rising through an acre of granite terrain could be efficiently captured, it would power four 60-watt light bulbs.
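The arithmetic behind those figures can be checked in a few lines, using only the values quoted above:

# Heat flux from Fourier's law, q = M * g, with the values quoted in the text.
M = 3.0          # thermal conductivity of granitic rock, W/(m*K)
g = 0.02         # global average geothermal gradient, K/m
q = M * g        # heat flux, W/m^2  -> 0.06 W/m^2

acre_m2 = 4046.86                # one acre in square metres
power_per_acre = q * acre_m2     # ~243 W
print(f"q = {q:.2f} W/m^2, heat flow per acre = {power_per_acre:.0f} W "
      f"(~{power_per_acre/60:.0f} sixty-watt bulbs)")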
Variations in surface temperature induced by climate changes and the Milankovitch cycles can penetrate below the Earth's surface and produce oscillations in the geothermal gradient, with periods ranging from daily to tens of thousands of years, an amplitude that decreases with depth, and a scale depth of several kilometres.[15][16] Meltwater from the polar ice caps flowing along ocean bottoms tends to maintain a constant geothermal gradient across the Earth's surface.[15]
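The depth of penetration can be illustrated with the standard conductive "skin depth" for a periodic surface signal, delta = sqrt(kappa·P/pi). This is a sketch only; the thermal diffusivity value is an assumed typical figure for rock, not taken from the text:

# Minimal sketch of the conductive skin depth for periodic surface temperature signals.
import math

kappa = 1.0e-6   # thermal diffusivity of rock, m^2/s (assumed typical value)
periods = {"daily": 86_400.0,
           "annual": 3.156e7,
           "100 kyr Milankovitch": 3.156e12}

for name, P in periods.items():
    delta = math.sqrt(kappa * P / math.pi)   # depth at which the amplitude falls to 1/e
    print(f"{name:>22s}: skin depth ~ {delta:,.1f} m")

With these assumptions, daily cycles die out within a fraction of a metre, annual cycles within a few metres, and the longest Milankovitch periods reach kilometre scale, consistent with the statement above.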
If that rate of temperature change were constant, temperatures deep in the Earth would soon reach the point where all known rocks would melt. We know, however, that the Earth's mantle is solid because it transmits S-waves. The temperature gradient dramatically decreases with depth for two reasons. First, radioactive heat production is concentrated within the crust of the Earth, and particularly within the upper part of the crust, as concentrations of uranium, thorium, and potassium are highest there: these three elements are the main producers of radioactive heat within the Earth. Second, the mechanism of thermal transport changes from conduction, as within the rigid tectonic plates, to convection, in the portion of Earth's mantle that convects. Despite its solidity, most of the Earth's mantle behaves over long time-scales as a fluid, and heat is transported by advection, or material transport. Thus, the geothermal gradient within the bulk of Earth's mantle is of the order of 0.3 kelvin per kilometer, and is determined by the adiabatic gradient associated with mantle material (peridotite in the upper mantle).
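That adiabatic value can be checked roughly from the textbook relation dT/dz = alpha·g·T/c_p; all the input values below are assumed typical upper-mantle figures, not taken from the text:

# Rough check of the mantle adiabatic gradient, dT/dz = alpha * g * T / c_p.
alpha = 3.0e-5    # thermal expansivity, 1/K (assumed)
g     = 9.8       # gravitational acceleration, m/s^2
T     = 1600.0    # absolute mantle temperature, K (assumed)
c_p   = 1250.0    # specific heat capacity, J/(kg*K) (assumed)

dT_dz = alpha * g * T / c_p                           # K per metre
print(f"adiabatic gradient ~ {dT_dz * 1000:.2f} K/km")  # ~0.4 K/km, the same order as the 0.3 K/km quoted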
This heating can be either beneficial or detrimental in terms of engineering. Geothermal energy can be used to generate electricity, by using the heat of the surrounding layers of rock underground to heat water and then routing the resulting steam through a turbine connected to a generator.
On the other hand, drill bits have to be cooled not only because of the friction created by the drilling process itself but also because of the heat of the surrounding rock at great depth. Very deep mines, like some gold mines in South Africa, need the air inside to be cooled and circulated so that miners can work at such great depths.
Wiki
Tuesday, November 9, 2010
Geothermal
Wednesday, October 6, 2010
Fuel from Sewage
Sewage sludge could be used to make biodiesel fuel in a process that’s within a few percentage points of being cost-competitive with conventional fuel, a new report indicates.
A four percent reduction in the cost of making this alternative fuel would make it “competitive” with traditional petroleum-based diesel fuel, according to the author, David M. Kargbo of the U.S. Environmental Protection Agency.
However, he cautions that there are still “huge challenges” involved in reducing the price and in satisfying likely regulatory concerns. The findings by Kargbo, who is with the agency’s Region III Office of Innovation in Philadelphia, appear in Energy & Fuels, a journal of the American Chemical Society.
Traditional petroleum-based fuels are increasingly beset by environmental, political and supply concerns, so research into alternative fuels is gaining in popularity.
Conventional diesel fuel, like gasoline, is extracted from petroleum, or crude oil, and is used to power many trucks, boats, buses, and farm equipment. An alternative to conventional diesel is biodiesel, which is derived from sources other than crude oil, such as vegetable oil or animal fat. However, these sources are relatively expensive, and the higher prices have limited the use of biodiesel.
Kargbo argues that a cheaper alternative would be to make biodiesel from municipal sewage sludge, the solid material left behind from the treatment of sewage at wastewater treatment plants. The United States alone produces about seven million tons of sewage sludge yearly.
To boost biodiesel production, sewage treatment plants would have to use microbes that produce higher amounts of oil than the microbes currently used for wastewater treatment, Kargbo said. That step alone, he added, could increase biodiesel production to the 10 billion gallon mark, more than triple the nation’s current biodiesel production capacity.
“Currently the estimated cost of production is $3.11 per gallon of biodiesel. To be competitive, this cost should be reduced to levels that are at or below [recent] petro diesel costs of $3.00 per gallon,” the report says.
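As a quick check of those quoted figures (a sketch, not from the report itself):

# Gap between the quoted sludge-biodiesel cost and the petro-diesel benchmark.
sludge_cost = 3.11    # estimated production cost, $/gallon
petro_cost  = 3.00    # recent petro-diesel cost, $/gallon
reduction = (sludge_cost - petro_cost) / sludge_cost
print(f"required cost reduction ~ {reduction * 100:.1f}%")   # ~3.5%, i.e. roughly the four percent cited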
However, the challenges involved in lowering this cost and in satisfying regulatory and environmental concerns remain “huge,” Kargbo wrote. Questions surround the collection of the sludge, the separation of the biodiesel from other materials, the maintenance of biodiesel quality, unwanted soap formation during production, and the removal of pharmaceutical contaminants from the sludge.
Nonetheless, “biodiesel production from sludge could be very profitable in the long run,” he added.
World Science
Comets
Comets may have come from other solar systems.
Many of the best known comets, including Halley, Hale-Bopp and McNaught, may have been born orbiting other stars, according to a new theory.
The proposal comes from a team of astronomers led by Hal Levison of the Southwest Research Institute in Boulder, Colo., who used computer simulations to show that the Sun may have captured small icy bodies from “sibling” stars when it was young.
Scientists believe the Sun formed in a cluster of hundreds of stars closely packed within a dense gas cloud. Each of these stars would have formed many small icy bodies, or comets, Levison and colleagues say. These would have arisen from the same disk-shaped zone of gas and dust, surrounding each star, from which planets formed.
Most of these comets were slung out of these fledgling planetary systems due to gravitational interactions with newly forming giant planets, the theory goes. The comets would then have become tiny, free-floating members of the cluster.
The Sun’s cluster came to a violent end, however, when its gas was blown out by the hottest young stars, according to Levison and colleagues. The new models show that the Sun then gravitationally captured a large cloud of comets as the cluster dispersed.
“When it was young, the Sun shared a lot of spit with its siblings, and we can see that stuff today,” said Levison, whose research is published in the June 10 advance online issue of the research journal Proceedings of the National Academy of Sciences.
“The process of capture is surprisingly efficient and leads to the exciting possibility that the cloud contains a potpourri that samples material from a large number of stellar siblings of the Sun,” added Martin Duncan of Queen’s University, Canada, a co-author of the study.
The team cites as evidence a bubble-shaped region of comets, known as the Oort cloud, that surrounds the Sun, extending halfway to the nearest star. It has been commonly assumed this cloud formed from the Sun’s proto-planetary disk, the structure from which planets formed. But because detailed models show that comets from the solar system produce a much more anemic cloud than observed, another source is needed, Levison’s group contends.
“More than 90 percent of the observed Oort cloud comets [must] have an extra-solar origin,” assuming the Sun’s proto-planetary disk can be used to estimate the Oort Cloud’s indigenous population, Levison said.
World Science
Solar System
Solar system’s distant ice-rocks come into focus
Beyond where Neptune—officially our solar system’s farthest planet—circles the Sun, there float countless faint, icy rocks.
They’re called trans-Neptunian objects, and one of the biggest is Pluto—once classified as a planet, but now designated as a “dwarf planet.” This region also supplies us with comets such as the famous Comet Halley.
Now, astronomers using new techniques to cull the data archives of NASA’s Hubble Space Telescope have added 14 new trans-Neptunian objects to the known catalog. Their method, they say, promises to turn up hundreds more.
“Trans-Neptunian objects interest us because they are building blocks left over from the formation of the solar system,” said Cesar Fuentes, formerly with the Harvard-Smithsonian Center for Astrophysics and now at Northern Arizona University. He is the lead author of a paper on the findings, to appear in The Astrophysical Journal.
As trans-Neptunian objects, or TNOs, slowly orbit the sun, they move against the starry background, appearing as streaks of light in time exposure photographs. The team developed software to scan hundreds of Hubble images for such streaks. After promising candidates were flagged, the images were visually examined to confirm or refute each discovery.
Most TNOs are located near the ecliptic—a line in the sky marking the plane of the solar system, an outgrowth of the fact that the solar system formed from a disk of material, astronomers say. Therefore, the researchers searched for objects near the ecliptic.
They found 14 bodies, including one “binary,” that is, a pair whose members orbit each other. All were more than 100 million times fainter than objects visible to the unaided eye. By measuring their motion across the sky, astronomers calculated an orbit and distance for each object. Combining the distance, brightness and an estimated reflectivity allowed them to calculate the approximate size. The newfound TNOs range in size from an estimated 25 to 60 miles (40-100 km) across.
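The size estimate can be sketched with the standard relation between diameter, albedo and absolute magnitude, D = 1329 km / sqrt(albedo) × 10^(-H/5). All the inputs below (apparent magnitude, distances, albedo) are illustrative assumptions, not values from the paper:

# Illustrative sketch: size from brightness, distance and an assumed reflectivity.
import math

m_apparent = 26.0   # apparent magnitude (assumed; ~100 million times fainter than naked-eye limit)
r = 40.0            # heliocentric distance, AU (assumed)
delta = 39.0        # distance from Earth, AU (assumed, near opposition)
albedo = 0.04       # assumed geometric albedo for a dark icy body

H = m_apparent - 5.0 * math.log10(r * delta)        # absolute magnitude (phase term neglected)
D = 1329.0 / math.sqrt(albedo) * 10.0 ** (-H / 5)   # diameter, km
print(f"H ~ {H:.1f}, estimated diameter ~ {D:.0f} km")

With these assumed values the result is roughly 65 km, within the 40-100 km range quoted above.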
Unlike planets, which tend to orbit very near the ecliptic, some TNOs have orbits quite tilted from that line. The team examined the size distribution of objects with both types of orbits to gain clues about how the population has evolved over the past 4.5 billion years.
Most smaller TNOs are thought to be shattered remains of bigger ones. Over billions of years, these objects smack together, grinding each other down. The team found that the size distribution of TNOs with flat versus tilted orbits is about the same as objects get fainter and smaller. Therefore, both populations have similar collisional histories, the researchers said.
The study examined only one-third of a square degree of the sky, so there’s much more area to survey. Hundreds of additional TNOs may lurk in the Hubble archives at higher ecliptic latitudes, said Fuentes and his colleagues, who plan to continue their search. “We have proven our ability to detect and characterize TNOs even with data intended for completely different purposes,” Fuentes said.
World Science
Tuesday, September 14, 2010
Thermosetting Polymer
A thermosetting plastic, also known as a thermoset, is a polymer material that cures irreversibly. The cure may be induced by heat (generally above 200 °C (392 °F)), by a chemical reaction (two-part epoxy, for example), or by irradiation such as electron beam processing.
Thermoset materials are usually liquid or malleable prior to curing and designed to be molded into their final form, or used as adhesives. Others are solids, like the molding compound used in semiconductors and integrated circuits (ICs).
According to the IUPAC recommendation, a thermosetting polymer is a prepolymer in a soft solid or viscous state that changes irreversibly into an infusible, insoluble polymer network by curing. Curing can be induced by the action of heat or suitable radiation, or both. A cured thermosetting polymer is called a thermoset.
Process
The curing process transforms the resin into a plastic or rubber by cross-linking. Energy and/or catalysts are added that cause the molecular chains to react at chemically active sites (unsaturated or epoxy sites, for example), linking into a rigid, three-dimensional structure. Cross-linking produces molecules of larger molecular weight and hence a higher melting point. Once the molecular weight has increased to the point that the melting point is higher than the surrounding ambient temperature, the material becomes a solid.
Uncontrolled reheating of the material causes the decomposition temperature to be reached before the melting point. Therefore, a thermoset material cannot be melted and re-shaped after it is cured. This implies that thermosets cannot be recycled, except as filler material.
Wiki
Failure Analysis
Failure analysis is the process of collecting and analyzing data to determine the cause of a failure. It is an important discipline in many branches of manufacturing industry, such as the electronics industry, where it is a vital tool used in the development of new products and for the improvement of existing products. It relies on collecting failed components for subsequent examination of the cause or causes of failure using a wide array of methods, especially microscopy and spectroscopy. Nondestructive testing (NDT) methods are valuable because the failed products are unaffected by analysis, so inspection always starts with these methods.
Forensic investigation
Forensic inquiry into the failed process or product is the starting point of failure analysis. Such inquiry is conducted using scientific analytical methods such as electrical and mechanical measurements, or by analysing failure data such as product reject reports or examples of previous failures of the same kind. The methods of forensic engineering are especially valuable in tracing product defects and flaws, which may include fatigue cracks or brittle cracks produced by stress corrosion cracking or environmental stress cracking, for example. Witness statements can be valuable for reconstructing the likely sequence of events and hence the chain of cause and effect. Human factors can also be assessed when the cause of the failure is determined. There are several useful methods to prevent product failures occurring in the first place, including failure mode and effects analysis (FMEA) and fault tree analysis (FTA), which can be used during prototyping to analyse failures before a product is marketed.
Failure theories can only be constructed on such data, but when corrective action is needed quickly, the precautionary principle demands that measures be put in place. In aircraft accidents for example, all planes of the type involved can be grounded immediately pending the outcome of the inquiry.
Another aspect of failure analysis is associated with No Fault Found (NFF), a term used in the field of failure analysis to describe a situation where an originally reported mode of failure cannot be duplicated by the evaluating technician and therefore the potential defect cannot be fixed.
NFF can be attributed to oxidation, defective connections of electrical components, temporary shorts or opens in the circuits, software bugs, and temporary environmental factors, but also to operator error. A large number of devices that are reported as NFF during the first troubleshooting session often return to the failure analysis lab with the same NFF symptoms or a permanent mode of failure.
The term failure analysis also applies to other fields such as business management and military strategy.
Methods of Analysis
The failure analysis of many different products involves the use of the following tools and techniques:
Microscopes
Optical microscope
Liquid crystal
Scanning acoustic microscope (SAM)
Scanning Acoustic Tomography (SCAT)
Atomic Force Microscope (AFM)
Stereomicroscope
Photo emission microscope (PEM)
X-ray microscope
Infra-red microscope
Scanning SQUID microscope
Sample Preparation
Jet-etcher
Plasma etcher
Back Side Thinning Tools
Mechanical Back Side Thinning
Laser Chemical Back Side Etching
Spectroscopic Analysis
Transmission line pulse spectroscopy (TLPS)
Auger electron spectroscopy
Deep Level Transient Spectroscopy (DLTS)
Wiki
Polymorphism
Polymorphism in materials science is the ability of a solid material to exist in more than one form or crystal structure. Polymorphism can potentially be found in any crystalline material including polymers, minerals, and metals, and is related to allotropy, which refers to elemental solids. The complete morphology of a material is described by polymorphism and other variables such as crystal habit, amorphous fraction or crystallographic defects. Polymorphism is relevant to the fields of pharmaceuticals, agrochemicals, pigments, dyestuffs, foods, and explosives.
When polymorphism exists as a result of a difference in crystal packing, it is called packing polymorphism. Polymorphism can also result from the existence of different conformers of the same molecule, in conformational polymorphism. In pseudopolymorphism the different crystal types are the result of hydration or solvation. An example of an organic polymorph is glycine, which is able to form monoclinic and hexagonal crystals. Silica is known to form many polymorphs, the most important of which are α-quartz, β-quartz, tridymite, cristobalite, coesite, and stishovite.
An analogous phenomenon for amorphous materials is polyamorphism, when a substance can take on several different amorphous modifications.
Polymorphism is important in the development of pharmaceutical ingredients. Many drugs receive regulatory approval for only a single crystal form or polymorph. In a classic patent case the pharmaceutical company GlaxoSmithKline defended its patent for polymorph type II of the active ingredient in Zantac against competitors after the patent for polymorph type I had already expired. Polymorphism in drugs can also have direct medical implications. Medicine is often administered orally as a crystalline solid, and dissolution rates depend on the exact crystal form of a polymorph.
Cefdinir is a drug appearing in 11 patents from 5 pharmaceutical companies, in which a total of 5 different polymorphs are described. The original inventor Fujisawa, now Astellas (with US partner Abbott), extended the original patent covering a suspension with a new anhydrous formulation. Competitors in turn patented hydrates of the drug with varying water content, which were described with only basic techniques such as infrared spectroscopy and XRPD, a practice criticised in one review because these techniques at most suggest a different crystal structure but are unable to specify one. These techniques also tend to overlook chemical impurities or even co-components. Abbott researchers realised this the hard way when, in one patent application, it was overlooked that their new cefdinir crystal form was, in fact, that of a pyridinium salt. The review also questioned whether the polymorphs offered any advantages over the existing drug: something clearly demanded in a new patent.
Acetylsalicylic acid's elusive second polymorph was first discovered by Vishweshwar et al.; fine structural details were given by Bond et al. A new crystal type was found after attempted co-crystallization of aspirin and levetiracetam from hot acetonitrile. Form II is stable only at 100 K and reverts to form I at ambient temperature. In the (unambiguous) form I, two acetylsalicylic acid molecules form centrosymmetric dimers through the acetyl groups, with the (acidic) methyl proton to carbonyl hydrogen bonds; in the newly claimed form II, each molecule forms the same hydrogen bonds, but with two neighbouring molecules instead of one. With respect to the hydrogen bonds formed by the carboxylic acid groups, both polymorphs form identical dimer structures.
Paracetamol powder has poor compression properties, which poses difficulty in making tablets, so a new polymorph of paracetamol that is more compressible has been discovered. Because polymorphs differ in solubility, one polymorph may be more therapeutically active than another polymorph of the same drug. Cortisone acetate exists in at least five different polymorphs, four of which are unstable in water and change to a stable form. The beta polymorph of carbamazepine (used in epilepsy and trigeminal neuralgia) is developed from solvents of high dielectric constant, such as aliphatic alcohols, whereas the alpha polymorph crystallizes from solvents of low dielectric constant such as carbon tetrachloride. Estrogen and chloramphenicol also show polymorphism.
Wiki