Tuesday, November 9, 2010

Geothermal

The geothermal gradient varies with location and is typically measured by determining the bottom open-hole temperature after borehole drilling. For an accurate reading, the drilling fluid needs time to reach the ambient temperature, which is not always achievable for practical reasons.

In stable tectonic areas in the tropics a temperature-depth plot will converge to the annual average surface temperature. However, in areas where deep permafrost developed during the Pleistocene a low temperature anomaly can be observed that persists down to several hundred metres.[14] The Suwałki cold anomaly in Poland has led to the recognition that similar thermal disturbances related to Pleistocene-Holocene climatic changes are recorded in boreholes throughout Poland, as well as in Alaska, northern Canada, and Siberia.





In areas of Holocene uplift and erosion (Fig. 1) the initial gradient will be higher than the average until it reaches an inflection point where it joins the stabilized heat-flow regime. If the gradient of the stabilized regime is projected above the inflection point to its intersection with the present-day annual average surface temperature, the height of this intersection above present-day surface level gives a measure of the extent of Holocene uplift and erosion. In areas of Holocene subsidence and deposition (Fig. 2) the initial gradient will be lower than the average until it reaches an inflection point where it joins the stabilized heat-flow regime.
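
As a rough illustration of that projection, the sketch below extrapolates the stabilized gradient upward from the inflection point and reads off the height at which it reaches the present-day annual average surface temperature. The numbers used are hypothetical, not taken from any particular borehole.

```python
# Illustrative only: estimate the eroded section from a projected stabilized gradient.
# All numbers below are hypothetical assumptions, not measured borehole data.

def eroded_section(t_surface, t_inflection, z_inflection, stabilized_gradient):
    """Height (m) above the present surface at which the stabilized geothermal
    gradient, projected upward from the inflection point, reaches the
    present-day annual average surface temperature.

    t_surface           -- present-day annual average surface temperature (deg C)
    t_inflection        -- temperature at the inflection point (deg C)
    z_inflection        -- depth of the inflection point below surface (m)
    stabilized_gradient -- gradient below the inflection point (K/m)
    """
    # Projected line: T(z) = t_inflection + stabilized_gradient * (z - z_inflection),
    # with z measured positive downward. Solve T(z) = t_surface for z.
    z_intersect = z_inflection - (t_inflection - t_surface) / stabilized_gradient
    return -z_intersect  # a negative depth is a height above the present surface

if __name__ == "__main__":
    # Hypothetical profile: 10 deg C surface, 28 deg C at a 500 m inflection point,
    # stabilized gradient of 30 K/km (0.030 K/m).
    print(f"Estimated eroded section: {eroded_section(10.0, 28.0, 500.0, 0.030):.0f} m")
    # -> roughly 100 m of section removed by Holocene uplift and erosion
```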

In deep boreholes, the temperature of the rock below the inflection point generally increases with depth at rates of the order of 20 K/km or more. Fourier's law of heat flow applied to the Earth gives q = Mg, where q is the heat flux at a point on the Earth's surface, M the thermal conductivity of the rocks there, and g the measured geothermal gradient. A representative value for the thermal conductivity of granitic rocks is M = 3.0 W/(m·K). Hence, using the global average geothermal gradient of 0.02 K/m we get q = 0.06 W/m². This estimate, corroborated by thousands of observations of heat flow in boreholes all over the world, gives a global average of 6×10⁻² W/m². Thus, if the geothermal heat flow rising through an acre of granite terrain could be efficiently captured, it would light four 60 watt light bulbs.
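
The arithmetic behind that estimate is simple enough to check directly. The snippet below restates it using the representative conductivity and gradient quoted above; the acre-to-square-metre conversion is the only added assumption.

```python
# Check of the back-of-the-envelope heat-flow estimate above (Fourier's law, q = M * g).

M = 3.0          # thermal conductivity of granitic rock, W/(m*K) (representative value)
g = 0.02         # global average geothermal gradient, K/m (20 K/km)
q = M * g        # conductive heat flux, W/m^2

ACRE_M2 = 4046.86            # one acre in square metres
power_per_acre = q * ACRE_M2

print(f"Heat flux q           = {q:.2f} W/m^2")              # 0.06 W/m^2
print(f"Heat flow over 1 acre = {power_per_acre:.0f} W")      # ~243 W
print(f"Equivalent 60 W bulbs = {power_per_acre / 60:.1f}")   # ~4 bulbs
```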

A variation in surface temperature induced by climate changes and the Milankovitch cycles can penetrate below the Earth's surface and produce an oscillation in the geothermal gradient, with periods varying from daily to tens of thousands of years and an amplitude that decreases with depth, with a scale depth of up to several kilometers.[15][16] Melt water from the polar ice caps flowing along ocean bottoms tends to maintain a constant geothermal gradient throughout the Earth's surface.[15]
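
For a purely conductive half-space, the depth over which such a surface oscillation decays by a factor of e (its scale or skin depth) grows with the square root of the period. The sketch below evaluates the standard expression sqrt(kappa·P/pi) for a representative thermal diffusivity; that diffusivity is an assumed value, not one given in the text.

```python
# Thermal skin depth of a periodic surface-temperature signal in a conductive half-space:
# d = sqrt(kappa * P / pi), where kappa is thermal diffusivity and P the period.
import math

KAPPA = 1.0e-6  # thermal diffusivity of rock, m^2/s (assumed representative value)

def skin_depth(period_s):
    """e-folding depth (m) of a surface temperature oscillation of the given period."""
    return math.sqrt(KAPPA * period_s / math.pi)

YEAR = 3.156e7  # seconds per year
for label, period in [("daily", 86400.0),
                      ("annual", YEAR),
                      ("10 kyr (glacial-scale)", 1.0e4 * YEAR),
                      ("100 kyr (Milankovitch eccentricity)", 1.0e5 * YEAR)]:
    print(f"{label:36s} skin depth ~ {skin_depth(period):8.1f} m")

# Short-period signals die out within metres; only the longest climatic cycles
# perturb the gradient to depths of hundreds of metres to a kilometre or more.
```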

If that rate of temperature change were constant, temperatures deep in the Earth would soon reach the point where all known rocks would melt. We know, however, that the Earth's mantle is solid because it transmits S-waves. The temperature gradient dramatically decreases with depth for two reasons. First, radioactive heat production is concentrated within the crust of the Earth, and particularly within the upper part of the crust, as concentrations of uranium, thorium, and potassium are highest there: these three elements are the main producers of radioactive heat within the Earth. Second, the mechanism of thermal transport changes from conduction, as within the rigid tectonic plates, to convection, in the portion of Earth's mantle that convects. Despite its solidity, most of the Earth's mantle behaves over long time-scales as a fluid, and heat is transported by advection, or material transport. Thus, the geothermal gradient within the bulk of Earth's mantle is of the order of 0.3 kelvin per kilometer, and is determined by the adiabatic gradient associated with mantle material (peridotite in the upper mantle).
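
To see why the near-surface gradient cannot persist to depth, the toy calculation below extrapolates a constant 20 K/km all the way to the core-mantle boundary and compares it with the much smaller temperature rise implied by the roughly 0.3 K/km adiabatic gradient across the convecting mantle. The surface temperature and depths used are illustrative, representative values.

```python
# Toy comparison: constant crustal gradient vs. mantle adiabat (illustrative numbers only).

SURFACE_T = 15.0         # deg C, representative surface temperature (assumed)
CRUSTAL_GRADIENT = 20.0  # K/km, near-surface conductive gradient (from the text)
ADIABAT = 0.3            # K/km, adiabatic gradient in the convecting mantle (from the text)
CMB_DEPTH = 2890.0       # km, approximate depth of the core-mantle boundary

# If the near-surface gradient continued unchanged to the core-mantle boundary:
t_extrapolated = SURFACE_T + CRUSTAL_GRADIENT * CMB_DEPTH
print(f"Constant 20 K/km to {CMB_DEPTH:.0f} km -> {t_extrapolated:,.0f} deg C")
# ~57,800 deg C: far above the melting point of any rock, so the gradient must flatten.

# Temperature added across the convecting mantle by the adiabat alone:
t_adiabatic_rise = ADIABAT * CMB_DEPTH
print(f"Adiabatic rise across the mantle -> ~{t_adiabatic_rise:,.0f} K")
# Only ~870 K, which is why the mantle can remain solid despite its great depth.
```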

This temperature increase with depth can be either beneficial or detrimental in engineering terms. Geothermal energy can be used to generate electricity by using the heat of the surrounding layers of rock underground to heat water and then routing the resulting steam through a turbine connected to a generator.

On the other hand, drill bits have to be cooled not only because of the friction created by the process of drilling itself but also because of the heat of the surrounding rock at great depth. Very deep mines, like some gold mines in South Africa, need the air inside to be cooled and circulated to allow miners to work at such great depth.

Wiki




Wednesday, October 6, 2010

Fuel from Sewage

Sewage sludge could be used to make biodiesel fuel in a process that’s within a few percentage points of being cost-competitive with conventional fuel, a new report indicates.

A four percent reduction in the cost of making this alternative fuel would make it “competitive” with traditional petroleum-based diesel fuel, according to the author, David M. Kargbo of the U.S. Environmental Protection Agency.

However, he cautions that there are still “huge challenges” involved in reducing the price and in satisfying likely regulatory concerns. The findings by Kargbo, who is with the agency’s Region III Office of Innovation in Philadelphia, appear in Energy & Fuels, a journal of the American Chemical Society.

Traditional petroleum-based fuels are increasingly beset by environmental, political and supply concerns, so research into alternative fuels is gaining in popularity.

Conventional diesel fuel, like gasoline, is extracted from petroleum, or crude oil, and is used to power many trucks, boats, buses, and farm equipment. An alternative to conventional diesel is biodiesel, which is derived from sources other than crude oil, such as vegetable oil or animal fat. However, these sources are relatively expensive, and the higher prices have limited the use of biodiesel.

Kargbo argues that a cheaper alternative would be to make biodiesel from municipal sewage sludge, the solid material left behind from the treatment of sewage at wastewater treatment plants. The United States alone produces about seven million tons of sewage sludge yearly.

To boost biodiesel production, sewage treatment plants would have to use microbes that produce higher amounts of oil than the microbes currently used for wastewater treatment, Kargbo said. That step alone, he added, could increase biodiesel production to the 10 billion gallon mark, which is more than triple the nation’s current biodiesel production capacity.

“Currently the estimated cost of production is $3.11 per gallon of biodiesel. To be competitive, this cost should be reduced to levels that are at or below [recent] petro diesel costs of $3.00 per gallon,” the report says.
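
Those two figures are the basis of the “four percent” claim earlier in the story; the quick check below just computes the gap.

```python
# Quick check of the cost gap quoted in the report.
biodiesel_cost = 3.11    # USD per gallon, estimated production cost of sludge biodiesel
petrodiesel_cost = 3.00  # USD per gallon, recent petroleum diesel cost

reduction_needed = (biodiesel_cost - petrodiesel_cost) / biodiesel_cost
print(f"Cost reduction needed: {reduction_needed:.1%}")  # ~3.5%, i.e. roughly four percent
```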

However, the challenges involved in lowering this cost and in satisfying regulatory and environmental concerns remain “huge,” Kargbo wrote. Questions surround methods of collecting the sludge, separation of the biodiesel from other materials, maintaining biodiesel quality, unwanted soap formation during production, and the removal of pharmaceutical contaminants from the sludge.

Nonetheless, “biodiesel production from sludge could be very profitable in the long run,” he added.

World Science


Comets


Comets may have come from other solar systems.

Many of the best known comets, including Halley, Hale-Bopp and McNaught, may have been born orbiting other stars, according to a new theory.

The proposal comes from a team of astronomers led by Hal Levison of the Southwest Research Institute in Boulder, Colo., who used computer simulations to show that the Sun may have captured small icy bodies from “sibling” stars when it was young.

Scientists believe the Sun formed in a cluster of hundreds of stars closely packed within a dense gas cloud. Each star would have formed many small icy bodies, Levison and colleagues say: comets. These would have arisen from the same disk-shaped zone of gas and dust, surrounding each star, from which planets formed.

Most of these comets were slung out of these fledgling planetary systems due to gravitational interactions with newly forming giant planets, the theory goes. The comets would then have become tiny, free-floating members of the cluster.

The Sun’s cluster came to a violent end, however, when its gas was blown out by the hottest young stars, according to Levison and colleagues. The new models show that the Sun then gravitationally captured a large cloud of comets as the cluster dispersed.

“When it was young, the Sun shared a lot of spit with its siblings, and we can see that stuff today,” said Levison, whose research is published in the June 10 advance online issue of the research journal Proceedings of the National Academy of Sciences.

“The process of capture is surprisingly efficient and leads to the exciting possibility that the cloud contains a potpourri that samples material from a large number of stellar siblings of the Sun,” added Martin Duncan of Queen’s University, Canada, a co-author of the study.

The team cites as evidence a bubble-shaped region of comets, known as the Oort cloud, that surrounds the Sun, extending halfway to the nearest star. It has been commonly assumed this cloud formed from the Sun’s proto-planetary disk, the structure from which planets formed. But because detailed models show that comets from the solar system produce a much more anemic cloud than observed, another source is needed, Levison’s group contends.

“More than 90 percent of the observed Oort cloud comets [must] have an extra-solar origin,” assuming the Sun’s proto-planetary disk can be used to estimate the Oort Cloud’s indigenous population, Levison said.

World Science



Solar System

Solar system’s distant ice-rocks come into focus
Beyond where Neptune—officially our solar system’s furthest planet—circles the Sun, there float countless faint, icy rocks.

They’re called trans-Neptunian objects, and one of the biggest is Pluto—once classified as a planet, but now designated as a “dwarf planet.” This region also supplies us with comets such as the famous Comet Halley.

Now, astronomers using new techniques to cull the data archives of NASA’s Hubble Space Telescope have added 14 new trans-Neptunian objects to the known catalog. Their method, they say, promises to turn up hundreds more.

“Trans-Neptunian objects interest us because they are building blocks left over from the formation of the solar system,” said Cesar Fuentes, formerly with the Harvard-Smithsonian Center for Astrophysics and now at Northern Arizona University. He is the lead author of a paper on the findings, to appear in The Astrophysical Journal.

As trans-Neptunian objects, or TNOs, slowly orbit the sun, they move against the starry background, appearing as streaks of light in time exposure photographs. The team developed software to scan hundreds of Hubble images for such streaks. After promising candidates were flagged, the images were visually examined to confirm or refute each discovery.

Most TNOs are located near the ecliptic—a line in the sky marking the plane of the solar system, an outgrowth of the fact that the solar system formed from a disk of material, astronomers say. Therefore, the researchers searched for objects near the ecliptic.

They found 14 bodies, including one “binary,” that is, a pair whose members orbit each other. All were more than 100 million times fainter than objects visible to the unaided eye. By measuring their motion across the sky, astronomers calculated an orbit and distance for each object. Combining the distance, brightness and an estimated reflectivity allowed them to calculate the approximate size. The newfound TNOs range in size from an estimated 25 to 60 miles (40-100 km) across.
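
The size estimate described here follows the standard absolute-magnitude relation for small solar-system bodies; the sketch below applies it with a hypothetical absolute magnitude and an assumed albedo, since the paper’s actual values aren’t quoted in this summary.

```python
# Standard diameter estimate for a small solar-system body from its absolute
# magnitude H and assumed geometric albedo p:  D(km) = 1329 / sqrt(p) * 10**(-H/5).
# H and ALBEDO below are hypothetical illustration values, not taken from the paper.
import math

def diameter_km(abs_magnitude_h, albedo):
    """Approximate diameter in km from absolute magnitude and geometric albedo."""
    return 1329.0 / math.sqrt(albedo) * 10.0 ** (-abs_magnitude_h / 5.0)

H = 9.0        # hypothetical absolute magnitude of a faint TNO
ALBEDO = 0.10  # assumed reflectivity (10% of incident light reflected)

print(f"Estimated diameter: {diameter_km(H, ALBEDO):.0f} km")  # ~67 km, within the 40-100 km range
```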

Unlike planets, which tend to orbit very near the ecliptic, some TNOs have orbits quite tilted from that line. The team examined the size distribution of objects with both types of orbits to gain clues about how the population has evolved over the past 4.5 billion years.

Most smaller TNOs are thought to be shattered remains of bigger ones. Over billions of years, these objects smack together, grinding each other down. The team found that the size distribution of TNOs with flat versus tilted orbits is about the same as the objects get fainter and smaller. Therefore, both populations have similar collisional histories, the researchers said.
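
The paper’s actual statistical treatment isn’t described in this summary, but a comparison of two size distributions can be illustrated with a simple two-sample test. The snippet below is only a generic sketch using synthetic, made-up samples.

```python
# Generic sketch: compare the size distributions of two TNO populations
# (e.g. low- vs. high-inclination orbits) with a two-sample Kolmogorov-Smirnov test.
# The samples below are synthetic placeholders, not real survey data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
# Synthetic diameters (km) drawn from similar heavy-tailed distributions.
flat_orbit_sizes = 40.0 * rng.pareto(3.0, size=200) + 40.0
tilted_orbit_sizes = 40.0 * rng.pareto(3.0, size=200) + 40.0

stat, p_value = ks_2samp(flat_orbit_sizes, tilted_orbit_sizes)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
# A large p-value means the two size distributions cannot be distinguished,
# consistent with the two populations sharing a similar collisional history.
```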

The study examined only one-third of a square degree of the sky, so there’s much more area to survey. Hundreds of additional TNOs may lurk in the Hubble archives at higher ecliptic latitudes, said Fuentes and his colleagues, who plan to continue their search. “We have proven our ability to detect and characterize TNOs even with data intended for completely different purposes,” Fuentes said.


World Science




Tuesday, September 14, 2010

Thermosetting Polymer

A thermosetting plastic, also known as a thermoset, is a polymer material that irreversibly cures. The cure may be induced by heat (generally above 200 °C (392 °F)), by a chemical reaction (two-part epoxy, for example), or by irradiation such as electron beam processing.

Thermoset materials are usually liquid or malleable prior to curing and are designed to be molded into their final form, or used as adhesives. Others are solids, such as the molding compounds used in semiconductors and integrated circuits (ICs).

According to the IUPAC recommendation, a thermosetting polymer is a prepolymer in a soft solid or viscous state that changes irreversibly into an infusible, insoluble polymer network by curing. Curing can be induced by the action of heat or suitable radiation, or both. A cured thermosetting polymer is called a thermoset.


Process
The curing process transforms the resin into a plastic or rubber by cross-linking. Energy and/or catalysts are added that cause the molecular chains to react at chemically active sites (unsaturated or epoxy sites, for example), linking into a rigid, 3-D structure. Cross-linking forms molecules with a larger molecular weight, resulting in a material with a higher melting point. Once the molecular weight has increased to the point where the melting point is higher than the surrounding ambient temperature, the material forms into a solid.

Uncontrolled reheating of the material causes it to reach the decomposition temperature before the melting point. Therefore, a thermoset material cannot be melted and re-shaped after it is cured. This implies that thermosets cannot be recycled, except as filler material.


Wiki



Failure Analysis

Failure analysis is the process of collecting and analyzing data to determine the cause of a failure. It is an important discipline in many branches of the manufacturing industry, such as the electronics industry, where it is a vital tool used in the development of new products and for the improvement of existing products. It relies on collecting failed components for subsequent examination of the cause or causes of failure using a wide array of methods, especially microscopy and spectroscopy. Nondestructive testing (NDT) methods are valuable because the failed products are unaffected by the analysis, so inspection always starts with these methods.


Forensic investigation
Forensic inquiry into the failed process or product is the starting point of failure analysis. Such inquiry is conducted using scientific analytical methods such as electrical and mechanical measurements, or by analysing failure data such as product reject reports or examples of previous failures of the same kind. The methods of forensic engineering are especially valuable in tracing product defects and flaws. These may include fatigue cracks or brittle cracks produced by stress corrosion cracking or environmental stress cracking, for example. Witness statements can be valuable for reconstructing the likely sequence of events and hence the chain of cause and effect. Human factors can also be assessed when the cause of the failure is determined. There are several useful methods to prevent product failures occurring in the first place, including failure mode and effects analysis (FMEA) and fault tree analysis (FTA), which can be used during prototyping to analyse failures before a product is marketed.
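
As an illustration of how FMEA-style prioritisation is commonly scored in practice, the minimal sketch below ranks failure modes by a Risk Priority Number. The 1-10 severity/occurrence/detection scales are a common convention rather than something specified in this article, and the failure modes listed are invented examples.

```python
# Minimal sketch of FMEA-style risk scoring: each failure mode gets a
# Risk Priority Number (RPN) = severity * occurrence * detection (each rated 1-10).
# The failure modes and ratings below are invented for illustration only.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare)       .. 10 (frequent)
    detection: int   # 1 (certain to detect) .. 10 (undetectable before failure)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("solder joint fatigue crack", severity=7, occurrence=4, detection=6),
    FailureMode("electrolytic capacitor dry-out", severity=5, occurrence=6, detection=3),
    FailureMode("connector corrosion (intermittent open)", severity=6, occurrence=3, detection=8),
]

# Rank failure modes so corrective effort goes to the highest RPN first.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.name:42s} RPN = {m.rpn}")
```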

Failure theories can only be constructed from such data, but when corrective action is needed quickly, the precautionary principle demands that interim measures be put in place. In aircraft accidents, for example, all planes of the type involved can be grounded immediately pending the outcome of the inquiry.

Another aspect of failure analysis is No Fault Found (NFF), a term used to describe a situation where an originally reported mode of failure cannot be duplicated by the evaluating technician and therefore the potential defect cannot be fixed.

NFF can be attributed to oxidation, defective connections of electrical components, temporary shorts or opens in the circuits, software bugs, and temporary environmental factors, but also to operator error. A large number of devices that are reported as NFF during the first troubleshooting session often return to the failure analysis lab with the same NFF symptoms or a permanent mode of failure.

The term failure analysis also applies to other fields such as business management and military strategy.

Methods of Analysis
The failure analysis of many different products involves the use of the following tools and techniques:

Microscopes
Optical microscope
Liquid crystal
Scanning acoustic microscope (SAM)
Scanning Acoustic Tomography (SCAT)
Atomic Force Microscope (AFM)
Stereomicroscope
Photo emission microscope (PEM)
X-ray microscope
Infra-red microscope
Scanning SQUID microscope


Sample Preparation
Jet-etcher
Plasma etcher
Back Side Thinning Tools
Mechanical Back Side Thinning
Laser Chemical Back Side Etching


Spectroscopic Analysis
Transmission line pulse spectroscopy (TLPS)
Auger electron spectroscopy
Deep Level Transient Spectroscopy (DLTS)


Wiki


Polymorphism

Polymorphism in materials science is the ability of a solid material to exist in more than one form or crystal structure. Polymorphism can potentially be found in any crystalline material including polymers, minerals, and metals, and is related to allotropy, which refers to elemental solids. The complete morphology of a material is described by polymorphism and other variables such as crystal habit, amorphous fraction or crystallographic defects. Polymorphism is relevant to the fields of pharmaceuticals, agrochemicals, pigments, dyestuffs, foods, and explosives.

When polymorphism exists as a result of a difference in crystal packing, it is called packing polymorphism. Polymorphism can also result from the existence of different conformers of the same molecule in conformational polymorphism. In pseudopolymorphism the different crystal types are the result of hydration or solvation. An example of an organic polymorph is glycine, which is able to form monoclinic and hexagonal crystals. Silica is known to form many polymorphs, the most important of which are α-quartz, β-quartz, tridymite, cristobalite, coesite, and stishovite.

An analogous phenomenon for amorphous materials is polyamorphism, when a substance can take on several different amorphous modifications.


Polymorphism is important in the development of pharmaceutical ingredients. Many drugs receive regulatory approval for only a single crystal form or polymorph. In a classic patent case, the pharmaceutical company GlaxoSmithKline defended its patent on polymorph type II of the active ingredient in Zantac against competitors after its patent on polymorph type I had already expired. Polymorphism in drugs can also have direct medical implications. Medicine is often administered orally as a crystalline solid, and dissolution rates depend on the exact crystal form of a polymorph.
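
One way to see why the crystal form matters clinically is the Noyes-Whitney dissolution relation, in which the dissolution rate scales with the solubility of the specific solid form. The sketch below compares two hypothetical polymorphs whose solubilities differ by a factor of two; all parameter values are invented for illustration.

```python
# Noyes-Whitney dissolution model: dM/dt = (D * A / h) * (Cs - C),
# where Cs is the solubility of the specific crystal form. Because polymorphs
# differ in Cs, they dissolve at different rates.
# All parameter values below are invented for illustration.

def dissolution_rate(diffusion_coeff, surface_area, boundary_layer, solubility, bulk_conc=0.0):
    """Initial dissolution rate dM/dt (mg/s) under sink conditions (bulk_conc ~ 0)."""
    return (diffusion_coeff * surface_area / boundary_layer) * (solubility - bulk_conc)

D = 7.0e-6   # cm^2/s, diffusion coefficient of the drug in the medium (assumed)
A = 100.0    # cm^2, effective surface area of the dissolving particles (assumed)
h = 30.0e-4  # cm, diffusion boundary-layer thickness (assumed)

form_I_solubility = 0.10   # mg/cm^3 (hypothetical, more stable and less soluble form)
form_II_solubility = 0.20  # mg/cm^3 (hypothetical, metastable and more soluble form)

rate_I = dissolution_rate(D, A, h, form_I_solubility)
rate_II = dissolution_rate(D, A, h, form_II_solubility)
print(f"Form I  initial rate: {rate_I:.4f} mg/s")
print(f"Form II initial rate: {rate_II:.4f} mg/s  ({rate_II / rate_I:.1f}x faster)")
```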

Cefdinir is a drug appearing in 11 patents from 5 pharmaceutical companies, in which a total of 5 different polymorphs are described. The original inventor, Fujisawa (now Astellas, with US partner Abbott), extended the original patent covering a suspension with a new anhydrous formulation. Competitors in turn patented hydrates of the drug with varying water content, which were described with only basic techniques such as infrared spectroscopy and XRPD, a practice criticised in one review because these techniques at most suggest a different crystal structure but are unable to specify one. These techniques also tend to overlook chemical impurities or even co-components. Abbott researchers learned this the hard way when, in one patent application, it went unnoticed that their new cefdinir crystal form was, in fact, that of a pyridinium salt. The review also questioned whether the polymorphs offered any advantages over the existing drug: something clearly demanded in a new patent.

Acetylsalicylic acid's elusive second polymorph was first discovered by Vishweshwar et al., and fine structural details were given by Bond et al. A new crystal type was found after attempted co-crystallization of aspirin and levetiracetam from hot acetonitrile. Form II is stable only at 100 K and reverts back to form I at ambient temperature. In the (unambiguous) form I, two salicylic molecules form centrosymmetric dimers through the acetyl groups, with the (acidic) methyl proton forming hydrogen bonds to the carbonyl; in the newly claimed form II, each salicylic molecule forms the same hydrogen bonds, but with two neighbouring molecules instead of one. With respect to the hydrogen bonds formed by the carboxylic acid groups, both polymorphs form identical dimer structures.

Other pharmaceutical examples of polymorphism include the following.

Paracetamol powder has poor compression properties, which poses difficulty in making tablets; a new polymorph of paracetamol that is more compressible has therefore been discovered.

Because of differences in solubility between polymorphs, one polymorph may be more therapeutically active than another polymorph of the same drug.

Cortisone acetate exists in at least five different polymorphs, four of which are unstable in water and change to a stable form.

The beta polymorph of carbamazepine (used in epilepsy and trigeminal neuralgia) is developed from solvents of high dielectric constant, e.g. aliphatic alcohols, whereas the alpha polymorph is crystallized from solvents of low dielectric constant such as carbon tetrachloride.

Estrogen and chloramphenicol also show polymorphism.

Wiki
