Tuesday, November 9, 2010

Geothermal

The geothermal gradient varies with location and is typically measured by determining the bottom open-hole temperature after borehole drilling. For an accurate reading, the drilling fluid needs time to reach the ambient temperature, which is not always achievable for practical reasons.

In stable tectonic areas in the tropics a temperature-depth plot will converge to the annual average surface temperature. However, in areas where deep permafrost developed during the Pleistocene a low temperature anomaly can be observed that persists down to several hundred metres.[14] The Suwałki cold anomaly in Poland has led to the recognition that similar thermal disturbances related to Pleistocene-Holocene climatic changes are recorded in boreholes throughout Poland, as well as in Alaska, northern Canada, and Siberia.





In areas of Holocene uplift and erosion (Fig. 1) the initial gradient will be higher than the average until it reaches an inflection point where it joins the stabilized heat-flow regime. If the gradient of the stabilized regime is projected above the inflection point to its intersect with the present-day annual average temperature, the height of this intersect above present-day surface level gives a measure of the extent of Holocene uplift and erosion. In areas of Holocene subsidence and deposition (Fig. 2) the initial gradient will be lower than the average until it reaches an inflection point where it joins the stabilized heat-flow regime.
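The projection described above is simple to sketch numerically; all values in the example are hypothetical, chosen only to illustrate the geometry.

```python
# Sketch: estimating Holocene uplift/erosion from a borehole temperature profile.
# All numbers below are illustrative assumptions, not measured data.

def uplift_estimate(t_surface, t_inflection, z_inflection, stable_gradient):
    """Project the stabilized gradient (below the inflection point) back up to
    the present-day annual average surface temperature.  The height of that
    intersect above the present surface estimates the uplift/erosion (m).

    t_surface       annual average surface temperature (deg C)
    t_inflection    temperature at the inflection point (deg C)
    z_inflection    depth of the inflection point (m, positive down)
    stable_gradient stabilized geothermal gradient (K/m)
    """
    # depth (positive down) where the projected line meets t_surface
    z_intersect = z_inflection - (t_inflection - t_surface) / stable_gradient
    # a negative depth means the intersect lies above today's surface
    return max(0.0, -z_intersect)

# Illustrative: 10 deg C surface, 16 deg C at a 200 m inflection point,
# stabilized gradient of 25 K/km
print(uplift_estimate(10.0, 16.0, 200.0, 0.025))  # -> 40.0 m of uplift/erosion
```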

In deep boreholes, the temperature of the rock below the inflection point generally increases with depth at rates of the order of 20 K/km or more. Fourier's law of heat flow applied to the Earth gives q = Mg, where q is the heat flux at a point on the Earth's surface, M the thermal conductivity of the rocks there, and g the measured geothermal gradient. A representative value for the thermal conductivity of granitic rocks is M = 3.0 W/(m·K). Hence, using the global average geothermal gradient of 0.02 K/m, we get q = 0.06 W/m². This estimate is corroborated by thousands of observations of heat flow in boreholes all over the world, which give a global average of 6×10−2 W/m². Thus, if the geothermal heat flow rising through an acre of granite terrain could be efficiently captured, it would light four 60-watt light bulbs.
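The arithmetic in this paragraph can be reproduced directly; only the acre-to-square-metre conversion is added here.

```python
# Reproducing the back-of-the-envelope heat-flow numbers in the text.
M = 3.0     # thermal conductivity of granitic rock, W/(m K)
g = 0.02    # global average geothermal gradient, K/m
q = M * g   # Fourier's law: heat flux, W/m^2
print(q)    # -> 0.06

ACRE_M2 = 4046.86          # one acre in square metres
power = q * ACRE_M2        # total geothermal heat flow through one acre
print(round(power))        # -> 243 W, roughly four 60 W light bulbs
```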

Variations in surface temperature induced by climate changes and Milankovitch cycles can penetrate below the Earth's surface and produce oscillations in the geothermal gradient, with periods ranging from daily to tens of thousands of years and an amplitude that decreases with depth, having a scale depth of several kilometers.[15][16] Melt water from the polar ice caps flowing along ocean bottoms tends to maintain a constant geothermal gradient across the Earth's surface.[15]
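The depth decay of such oscillations follows the standard thermal skin-depth relation; a quick sketch, with a typical rock diffusivity assumed (it is not given in the text):

```python
import math

# Thermal "skin depth" of a surface temperature oscillation: a sinusoidal
# surface signal of angular frequency w decays with depth as exp(-z/d),
# where d = sqrt(2*kappa/w).  KAPPA is an assumed typical rock diffusivity.
KAPPA = 1.0e-6  # thermal diffusivity of rock, m^2/s (illustrative value)

def skin_depth(period_seconds):
    w = 2.0 * math.pi / period_seconds
    return math.sqrt(2.0 * KAPPA / w)

YEAR = 3.156e7  # seconds
print(round(skin_depth(86400), 2))    # daily cycle: ~0.17 m
print(round(skin_depth(YEAR), 1))     # annual cycle: ~3.2 m
print(round(skin_depth(1e5 * YEAR)))  # 100 kyr Milankovitch cycle: ~1000 m
```

This is why daily cycles vanish within a metre, while Milankovitch-scale cycles reach kilometre depths, consistent with the "scale depth of several kilometers" above.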

If that rate of temperature change were constant, temperatures deep in the Earth would soon reach the point where all known rocks would melt. We know, however, that the Earth's mantle is solid because it transmits S-waves. The temperature gradient dramatically decreases with depth for two reasons. First, radioactive heat production is concentrated within the crust of the Earth, and particularly within the upper part of the crust, as concentrations of uranium, thorium, and potassium are highest there: these three elements are the main producers of radioactive heat within the Earth. Second, the mechanism of thermal transport changes from conduction, as within the rigid tectonic plates, to convection, in the portion of Earth's mantle that convects. Despite its solidity, most of the Earth's mantle behaves over long time-scales as a fluid, and heat is transported by advection, or material transport. Thus, the geothermal gradient within the bulk of Earth's mantle is of the order of 0.3 kelvin per kilometer, and is determined by the adiabatic gradient associated with mantle material (peridotite in the upper mantle).
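The quoted ~0.3 K/km can be sanity-checked from the adiabatic-gradient formula dT/dz = αgT/c_p; the specific upper-mantle values below are illustrative assumptions, not from the text.

```python
# Order-of-magnitude check on the mantle's adiabatic gradient,
# dT/dz = alpha * g * T / c_p, using illustrative upper-mantle values.
alpha = 3.0e-5   # thermal expansivity, 1/K (assumed)
g = 9.8          # gravity, m/s^2
T = 1600.0       # mantle temperature, K (assumed)
c_p = 1250.0     # specific heat of peridotite, J/(kg K) (assumed)

gradient = alpha * g * T / c_p      # K/m
print(round(gradient * 1000, 2))    # -> 0.38 K/km, same order as the quoted 0.3 K/km
```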

This heating can be either beneficial or detrimental in engineering terms. Geothermal energy can be used to generate electricity by using the heat of the surrounding rock layers underground to heat water, then routing the resulting steam through a turbine connected to a generator.

On the other hand, drill bits have to be cooled not only because of the friction created by the process of drilling itself but also because of the heat of the surrounding rock at great depth. Very deep mines, like some gold mines in South Africa, need the air inside to be cooled and circulated to allow miners to work at such great depth.

Wiki



Detail...

Wednesday, October 6, 2010

Fuel from Sewage

Sewage sludge could be used to make biodiesel fuel in a process that's within a few percentage points of being cost-competitive with conventional fuel, a new report indicates.

A four percent reduction in the cost of making this alternative fuel would make it "competitive" with traditional petroleum-based diesel fuel, according to the author, David M. Kargbo of the U.S. Environmental Protection Agency.

However, he cautions that there are still "huge challenges" involved in reducing the price and in satisfying likely regulatory concerns. The findings by Kargbo, who is with the agency's Region III Office of Innovation in Philadelphia, appear in Energy & Fuels, a journal of the American Chemical Society.

Traditional petroleum-based fuels are increasingly beset by environmental, political and supply concerns, so research into alternative fuels is gaining in popularity.

Conventional diesel fuel, like gasoline, is extracted from petroleum, or crude oil, and is used to power many trucks, boats, buses, and farm equipment. An alternative to conventional diesel is biodiesel, which is derived from sources other than crude oil, such as vegetable oil or animal fat. However, these sources are relatively expensive, and the higher prices have limited the use of biodiesel.

Kargbo argues that a cheaper alternative would be to make biodiesel from municipal sewage sludge, the solid material left behind from the treatment of sewage at wastewater treatment plants. The United States alone produces about seven million tons of sewage sludge yearly.

To boost biodiesel production, sewage treatment plants would have to use microbes that produce higher amounts of oil than the microbes currently used for wastewater treatment, Kargbo said. That step alone, he added, could increase biodiesel production to the 10 billion gallon mark, more than triple the nation's current biodiesel production capacity.

"Currently the estimated cost of production is $3.11 per gallon of biodiesel. To be competitive, this cost should be reduced to levels that are at or below [recent] petro diesel costs of $3.00 per gallon," the report says.
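The "four percent" figure can be checked against the quoted costs; nothing beyond the two prices in the report is assumed here.

```python
# Checking the "four percent" claim against the quoted production costs.
sludge_cost = 3.11   # $/gal, estimated sludge-biodiesel production cost
petro_cost = 3.00    # $/gal, recent petrodiesel cost

reduction = (sludge_cost - petro_cost) / sludge_cost
print(round(100 * reduction, 1))  # -> 3.5 percent, i.e. "within a few percentage points"
```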

However, the challenges in lowering this cost and in satisfying regulatory and environmental concerns remain "huge," Kargbo wrote. Questions surround methods of collecting the sludge, separation of the biodiesel from other materials, maintenance of biodiesel quality, unwanted soap formation during production, and the removal of pharmaceutical contaminants from the sludge.

Nonetheless, "biodiesel production from sludge could be very profitable in the long run," he added.

World Science

Detail...

Comets


Comets may have come from other solar systems.

Many of the best known comets, including Halley, Hale-Bopp and McNaught, may have been born orbiting other stars, according to a new theory.

The proposal comes from a team of astronomers led by Hal Levison of the Southwest Research Institute in Boulder, Colo., who used computer simulations to show that the Sun may have captured small icy bodies from "sibling" stars when it was young.

Scientists believe the Sun formed in a cluster of hundreds of stars closely packed within a dense gas cloud. Each star would have formed many small icy bodies (comets), Levison and colleagues say. These would have arisen from the same disk-shaped zone of gas and dust, surrounding each star, from which planets formed.

Most of these comets were slung out of these fledgling planetary systems due to gravitational interactions with newly forming giant planets, the theory goes. The comets would then have become tiny, free-floating members of the cluster.

The Sun's cluster came to a violent end, however, when its gas was blown out by the hottest young stars, according to Levison and colleagues. The new models show that the Sun then gravitationally captured a large cloud of comets as the cluster dispersed.

"When it was young, the Sun shared a lot of spit with its siblings, and we can see that stuff today," said Levison, whose research is published in the June 10 advance online issue of the research journal Proceedings of the National Academy of Sciences.

"The process of capture is surprisingly efficient and leads to the exciting possibility that the cloud contains a potpourri that samples material from a large number of stellar siblings of the Sun," added Martin Duncan of Queen's University, Canada, a co-author of the study.

The team cites as evidence a bubble-shaped region of comets, known as the Oort cloud, that surrounds the Sun, extending halfway to the nearest star. It has been commonly assumed this cloud formed from the Sun's proto-planetary disk, the structure from which planets formed. But because detailed models show that comets from the solar system produce a much more anemic cloud than observed, another source is needed, Levison's group contends.

"More than 90 percent of the observed Oort cloud comets [must] have an extra-solar origin," assuming the Sun's proto-planetary disk can be used to estimate the Oort Cloud's indigenous population, Levison said.

World Science


Detail...

Solar System

Solar system’s distant ice-rocks come into focus
Beyond where Neptune, officially our solar system's outermost planet, circles the Sun, there float countless faint, icy rocks.

They're called trans-Neptunian objects, and one of the biggest is Pluto, once classified as a planet but now designated a "dwarf planet." This region also supplies us with comets such as famous Comet Halley.

Now, astronomers using new techniques to cull the data archives of NASA's Hubble Space Telescope have added 14 new trans-Neptunian objects to the known catalog. Their method, they say, promises to turn up hundreds more.


"Trans-Neptunian objects interest us because they are building blocks left over from the formation of the solar system," said Cesar Fuentes, formerly with the Harvard-Smithsonian Center for Astrophysics and now at Northern Arizona University. He is the lead author of a paper on the findings, to appear in The Astrophysical Journal.

As trans-Neptunian objects, or TNOs, slowly orbit the sun, they move against the starry background, appearing as streaks of light in time exposure photographs. The team developed software to scan hundreds of Hubble images for such streaks. After promising candidates were flagged, the images were visually examined to confirm or refute each discovery.

Most TNOs are located near the ecliptic, a line in the sky marking the plane of the solar system; this is an outgrowth of the fact that the solar system formed from a disk of material, astronomers say. Therefore, the researchers searched for objects near the ecliptic.

They found 14 bodies, including one "binary," that is, a pair whose members orbit each other. All were more than 100 million times fainter than objects visible to the unaided eye. By measuring their motion across the sky, astronomers calculated an orbit and distance for each object. Combining the distance, brightness and an estimated reflectivity allowed them to calculate the approximate size. The newfound TNOs range in size from an estimated 25 to 60 miles (40-100 km) across.
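The article doesn't give the exact method, but a standard way to turn brightness, distance and an assumed reflectivity into a size is the asteroid diameter-magnitude relation; the sketch below uses illustrative values, and the 0.04 albedo and the simplified distance geometry are assumptions.

```python
import math

# Sketch of the size estimate described above: combine distance, apparent
# brightness and an assumed albedo to get a diameter.  The 1329 km constant
# comes from the standard asteroid diameter-magnitude relation.
def tno_diameter_km(apparent_mag, r_au, albedo=0.04):
    """r_au: heliocentric distance in AU (Earth-object distance taken ~ r - 1,
    a good approximation for distant objects near opposition).
    albedo: assumed reflectivity; 0.04 is a common dark-surface guess."""
    delta_au = r_au - 1.0                                 # Earth-object distance
    H = apparent_mag - 5.0 * math.log10(r_au * delta_au)  # absolute magnitude
    return 1329.0 / math.sqrt(albedo) * 10.0 ** (-H / 5.0)

# Illustrative: a 25th-magnitude object at 42 AU
print(round(tno_diameter_km(25.0, 42.0)))  # -> 114 km, the right order for these detections
```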

Unlike planets, which tend to orbit very near the ecliptic, some TNOs have orbits quite tilted from that line. The team examined the size distribution of objects with both types of orbits to gain clues about how the population has evolved over the past 4.5 billion years.

Most smaller TNOs are thought to be shattered remains of bigger ones. Over billions of years, these objects smack together, grinding each other down. The team found that the size distribution of TNOs with flat versus tilted orbits is about the same as objects get fainter and smaller. Therefore, both populations have similar collisional histories, the researchers said.

The study examined only one-third of a square degree of the sky, so there's much more area to survey. Hundreds of additional TNOs may lurk in the Hubble archives at higher ecliptic latitudes, said Fuentes and his colleagues, who plan to continue their search. "We have proven our ability to detect and characterize TNOs even with data intended for completely different purposes," Fuentes said.


World Science



Detail...

Tuesday, September 14, 2010

Thermosetting Polymer

A thermosetting plastic, also known as a thermoset, is a polymer material that irreversibly cures. The cure may be induced by heat (generally above 200 °C (392 °F)), by a chemical reaction (two-part epoxy, for example), or by irradiation such as electron beam processing.

Thermoset materials are usually liquid or malleable prior to curing and designed to be molded into their final form, or used as adhesives. Others are solids, such as the molding compounds used in semiconductors and integrated circuits (ICs).

According to the IUPAC recommendation, a thermosetting polymer is a prepolymer in a soft solid or viscous state that changes irreversibly into an infusible, insoluble polymer network by curing. Curing can be induced by the action of heat or suitable radiation, or both. A cured thermosetting polymer is called a thermoset.


Process
The curing process transforms the resin into a plastic or rubber by cross-linking. Energy and/or catalysts are added that cause the molecular chains to react at chemically active sites (unsaturated or epoxy sites, for example), linking into a rigid, 3-D structure. Cross-linking produces molecules of larger molecular weight and hence a material with a higher melting point. Once the reaction has raised the melting point above the surrounding ambient temperature, the material forms a solid.
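The rigid 3-D network described above has a classical quantitative counterpart not mentioned in the article: the Flory-Stockmayer gel point, sketched here.

```python
# Flory-Stockmayer sketch: for a monomer bearing f mutually reactive groups,
# an infinite cross-linked network (the gel point) appears at the critical
# conversion p_c = 1/(f - 1).  Illustrative of why cure is irreversible:
# past p_c the material is effectively one giant molecule.
def gel_point(f):
    """Critical conversion for an f-functional monomer (f >= 3)."""
    if f < 3:
        raise ValueError("need f >= 3 to form a network")
    return 1.0 / (f - 1)

print(gel_point(3))  # -> 0.5: a trifunctional resin gels at 50% conversion
print(gel_point(4))  # -> 0.333...
```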

Uncontrolled reheating of such a material reaches its decomposition temperature before its melting point. Therefore, a thermoset material cannot be melted and re-shaped after it is cured, which implies that thermosets cannot be recycled, except as filler material.


Wiki


Detail...

Failure Analysis

Failure analysis is the process of collecting and analyzing data to determine the cause of a failure. It is an important discipline in many branches of manufacturing industry, such as the electronics industry, where it is a vital tool used in the development of new products and for the improvement of existing products. It relies on collecting failed components for subsequent examination of the cause or causes of failure using a wide array of methods, especially microscopy and spectroscopy. Nondestructive testing (NDT) methods are valuable because they leave the failed product unaffected by the analysis, so inspection always starts with them.


Forensic investigation
Forensic inquiry into the failed process or product is the starting point of failure analysis. Such inquiry is conducted using scientific analytical methods such as electrical and mechanical measurements, or by analysing failure data such as product reject reports or examples of previous failures of the same kind. The methods of forensic engineering are especially valuable in tracing product defects and flaws. They may include fatigue cracks, brittle cracks produced by stress corrosion cracking or environmental stress cracking for example. Witness statements can be valuable for reconstructing the likely sequence of events and hence the chain of cause and effect. Human factors can also be assessed when the cause of the failure is determined. There are several useful methods to prevent product failures occurring in the first place, including FMEA and FTA, methods which can be used during prototyping to analyse failures before a product is marketed.

Failure theories can only be constructed from such data, but when corrective action is needed quickly, the precautionary principle demands that measures be put in place. In aircraft accidents, for example, all planes of the type involved can be grounded immediately pending the outcome of the inquiry.

Another aspect of failure analysis is No Fault Found (NFF), a term used to describe a situation where an originally reported mode of failure cannot be duplicated by the evaluating technician, and therefore the potential defect cannot be fixed.

NFF can be attributed to oxidation, defective connections of electrical components, temporary shorts or opens in the circuits, software bugs, temporary environmental factors, and also operator error. A large number of devices reported as NFF during the first troubleshooting session often return to the failure analysis lab with the same NFF symptoms or a permanent mode of failure.

The term failure analysis also applies to other fields, such as business management and military strategy.

Methods of Analysis
The failure analysis of many different products involves the use of the following tools and techniques:

Microscopes
Optical microscope
Liquid crystal
Scanning acoustic microscope (SAM)
Scanning Acoustic Tomography (SCAT)
Atomic Force Microscope (AFM)
Stereomicroscope
Photo emission microscope (PEM)
X-ray microscope
Infra-red microscope
Scanning SQUID microscope


Sample Preparation
Jet-etcher
Plasma etcher
Back Side Thinning Tools
Mechanical Back Side Thinning
Laser Chemical Back Side Etching


Spectroscopic Analysis
Transmission line pulse spectroscopy (TLPS)
Auger electron spectroscopy
Deep Level Transient Spectroscopy (DLTS)


Wiki

Detail...

Polymorphism

Polymorphism in materials science is the ability of a solid material to exist in more than one form or crystal structure. Polymorphism can potentially be found in any crystalline material including polymers, minerals, and metals, and is related to allotropy, which refers to elemental solids. The complete morphology of a material is described by polymorphism and other variables such as crystal habit, amorphous fraction or crystallographic defects. Polymorphism is relevant to the fields of pharmaceuticals, agrochemicals, pigments, dyestuffs, foods, and explosives.

When polymorphism exists as a result of differences in crystal packing, it is called packing polymorphism. Polymorphism can also result from the existence of different conformers of the same molecule, in conformational polymorphism. In pseudopolymorphism the different crystal types are the result of hydration or solvation. An example of an organic polymorph is glycine, which is able to form monoclinic and hexagonal crystals. Silica is known to form many polymorphs, the most important of which are α-quartz, β-quartz, tridymite, cristobalite, coesite, and stishovite.

An analogous phenomenon for amorphous materials is polyamorphism, when a substance can take on several different amorphous modifications.


Polymorphism is important in the development of pharmaceutical ingredients. Many drugs receive regulatory approval for only a single crystal form or polymorph. In a classic patent case the pharmaceutical company GlaxoSmithKline defended its patent for the polymorph type II of the active ingredient in Zantac against competitors while that of the polymorph type I had already expired. Polymorphism in drugs can also have direct medical implications. Medicine is often administered orally as a crystalline solid and dissolution rates depend on the exact crystal form of a polymorph.

Cefdinir is a drug appearing in 11 patents from 5 pharmaceutical companies, in which a total of 5 different polymorphs are described. The original inventor Fujisawa (now Astellas, with US partner Abbott) extended the original patent covering a suspension with a new anhydrous formulation. Competitors in turn patented hydrates of the drug with varying water content, which were described with only basic techniques such as infrared spectroscopy and XRPD, a practice criticised in one review because these techniques at most suggest a different crystal structure but are unable to specify one. These techniques also tend to overlook chemical impurities or even co-components. Abbott researchers realised this the hard way when, in one patent application, they overlooked the fact that their new cefdinir crystal form was, in fact, that of a pyridinium salt. The review also questioned whether the polymorphs offered any advantages over the existing drug, something clearly demanded in a new patent.

Acetylsalicylic acid's elusive second polymorph was first discovered by Vishweshwar et al.; fine structural details were given by Bond et al. A new crystal type was found after attempted co-crystallization of aspirin and levetiracetam from hot acetonitrile. Form II is stable only at 100 K and reverts to form I at ambient temperature. In the (unambiguous) form I, two salicylic acid molecules form centrosymmetric dimers through the acetyl groups, with the (acidic) methyl proton to carbonyl hydrogen bonds; in the newly claimed form II, each salicylic acid molecule forms the same hydrogen bonds, but with two neighbouring molecules instead of one. With respect to the hydrogen bonds formed by the carboxylic acid groups, both polymorphs form identical dimer structures.

Paracetamol powder has poor compression properties, which poses difficulty in making tablets, so a new, more compressible polymorph of paracetamol was developed.

Due to differences in solubility of polymorphs, one polymorph may be more active therapeutically than another polymorph of the same drug.

Cortisone acetate exists in at least five different polymorphs, four of which are unstable in water and change to a stable form.

The beta-polymorph of carbamazepine (used in epilepsy and trigeminal neuralgia) is developed from solvents of high dielectric constant, e.g. aliphatic alcohols, whereas the alpha polymorph crystallizes from solvents of low dielectric constant, such as carbon tetrachloride.

Estrogen and chloramphenicol also show polymorphism.

Wiki

Detail...

Tuesday, August 31, 2010

Engineering Geology


Engineering Geology is the application of the geologic sciences to engineering practice for the purpose of assuring that the geologic factors affecting the location, design, construction, operation and maintenance of engineering works are recognized and adequately provided for. Engineering geologists investigate and provide geologic and geotechnical recommendations, analysis, and design associated with human development. The realm of the engineering geologist is essentially in the area of earth-structure interactions, or investigation of how the earth or earth processes impact human made structures and human activities.

Engineering geologic studies may be performed during the planning, environmental impact analysis, civil or structural engineering design, value engineering and construction phases of public and private works projects, and during post-construction and forensic phases of projects. Works completed by engineering geologists include geologic hazard, geotechnical, material-property, landslide and slope-stability, erosion, flooding, dewatering, and seismic investigations. Engineering geologic studies are performed by a geologist or engineering geologist who is educated, trained and experienced in the recognition and interpretation of natural processes, the understanding of how these processes impact man-made structures (and vice versa), and knowledge of methods by which to mitigate hazards resulting from adverse natural or man-made conditions. The principal objective of the engineering geologist is the protection of life and property against damage caused by geologic conditions.

Engineering geologic practice is also closely related to the practice of geological engineering, geotechnical engineering, soils engineering, environmental geology and economic geology. If there is a difference in the content of the disciplines described, it mainly lies in the training or experience of the practitioner.

Wiki

Detail...

Monday, August 30, 2010

Sedimentary Rock

Sedimentary rock is a type of rock that is formed by sedimentation of material at the Earth's surface and within bodies of water. Sedimentation is the collective name for processes that cause mineral and/or organic particles (detritus) to settle and accumulate or minerals to precipitate from a solution. Particles that form a sedimentary rock by accumulating are called sediment. Before being deposited, sediment was formed by weathering and erosion in a source area, and then transported to the place of deposition by water, wind, mass movement or glaciers which are called agents of denudation.

The sedimentary rock cover of the continents of the Earth's crust is extensive, but the total contribution of sedimentary rocks is estimated to be only 5% of the total volume of the crust. Sedimentary rocks are only a thin veneer over a crust consisting mainly of igneous and metamorphic rocks.

Sedimentary rocks are deposited in layers as strata, forming a structure called bedding. The study of sedimentary rocks and rock strata provides information about the subsurface that is useful for civil engineering, for example in the construction of roads, houses, tunnels, canals or other constructions. Sedimentary rocks are also important sources of natural resources like coal, fossil fuels, drinking water or ores.

The study of the sequence of sedimentary rock strata is the main source for scientific knowledge about the Earth's history, including palaeogeography, paleoclimatology and the history of life.

The scientific discipline that studies the properties and origin of sedimentary rocks is called sedimentology. Sedimentology is both part of geology and physical geography and overlaps partly with other disciplines in the Earth sciences, such as pedology, geomorphology, geochemistry or structural geology.



Classification

Clastic
Clastic sedimentary rocks are composed of discrete fragments or clasts of materials derived from other minerals. They are composed largely of quartz with other common minerals including feldspar, amphiboles, clay minerals, and sometimes more exotic igneous and metamorphic minerals.

Clastic sedimentary rocks, such as sandstone, siltstone or shale, were formed from rocks that have been broken down into fragments by weathering, which then have been transported and deposited elsewhere.

Clastic sedimentary rocks may be regarded as falling along a scale of grain size, with shale being the finest, with particles less than 0.002 mm; siltstone a little bigger, with particles between 0.002 and 0.063 mm; sandstone coarser still, with grains of 0.063 to 2 mm; and conglomerates and breccias the coarsest, with grains of 2 to 256 mm. Breccia has angular particles, while conglomerate is characterized by rounded particles. Particles bigger than 256 mm are termed blocks (angular) or boulders (rounded). Lutite, arenite and rudite are general terms for sedimentary rocks with clay/silt-, sand- or conglomerate/breccia-sized particles.
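The grain-size scale maps directly onto a small classifier; in the sketch below the 256 mm block/boulder boundary is the conventional value, and the names follow the text rather than a full Wentworth table.

```python
# A minimal grain-size classifier for clastic sedimentary rocks
# (boundaries in mm; names follow the scale quoted in the text).
def classify(grain_mm, angular=False):
    if grain_mm < 0.002:
        return "shale (clay)"
    if grain_mm < 0.063:
        return "siltstone (silt)"
    if grain_mm < 2.0:
        return "sandstone (sand)"
    if grain_mm <= 256.0:
        # breccia has angular particles, conglomerate rounded ones
        return "breccia" if angular else "conglomerate"
    return "block" if angular else "boulder"

print(classify(0.05))             # -> siltstone (silt)
print(classify(10, angular=True)) # -> breccia
print(classify(500))              # -> boulder
```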

The classification of clastic sedimentary rocks is complex because there are many variables involved. Particle size (both the average size and range of sizes of the particles), composition of the particles (in sandstones, this includes quartz arenites, arkoses, and lithic sandstones), the cement, and the matrix (the name given to the smaller particles present in the spaces between larger grains) must all be taken into consideration.

Shales, which consist mostly of clay minerals, are generally further classified on the basis of composition and bedding. Coarser clastic sedimentary rocks are classified according to their particle size and composition. Orthoquartzite is a very pure quartz sandstone; arkose is a sandstone with quartz and abundant feldspar; greywacke is a sandstone with quartz, clay, feldspar, and metamorphic rock fragments present, which was formed from the sediments carried by turbidity currents.

All rocks disintegrate when exposed to mechanical and chemical weathering at the Earth's surface.


Lower Antelope Canyon was carved out of the surrounding sandstone by both mechanical and chemical weathering; wind, sand, and water from flash flooding are the primary weathering agents there.

Mechanical weathering is the breakdown of rock into particles without producing changes in the chemical composition of the minerals in the rock. Ice is the most important agent of mechanical weathering. Water percolates into cracks and fissures within the rock, freezes, and expands. The force exerted by the expansion is sufficient to widen cracks and break off pieces of rock. Heating and cooling of the rock, and the resulting expansion and contraction, also aid the process. Mechanical weathering contributes further to the breakdown of rock by increasing the surface area exposed to chemical agents.

Chemical weathering is the breakdown of rock by chemical reaction. In this process the minerals within the rock are changed into particles that can be easily carried away. Air and water are both involved in many complex chemical reactions. The minerals in igneous rocks may be unstable under normal atmospheric conditions, those formed at higher temperatures being more readily attacked than those formed at lower temperatures. Igneous rocks are commonly attacked by water, particularly acid or alkaline solutions, and all of the common igneous rock forming minerals (with the exception of quartz, which is very resistant) are changed in this way into clay minerals and chemicals in solution.

Rock particles in the form of clay, silt, sand, and gravel are transported by the agents of erosion (usually water, and less frequently, ice and wind) to new locations and redeposited in layers, generally at a lower elevation.

These agents reduce the size of the particles, sort them by size, and then deposit them in new locations. The sediments dropped by streams and rivers form alluvial fans, flood plains, deltas, and on the bottom of lakes and the sea floor. The wind may move large amounts of sand and other smaller particles. Glaciers transport and deposit great quantities of usually unsorted rock material as till.

These deposited particles eventually become compacted and cemented together, forming clastic sedimentary rocks. Such rocks contain inert minerals that resist mechanical and chemical breakdown, such as quartz. Quartz is one of the most mechanically and chemically resistant minerals. Highly weathered sediments can contain several heavy and stable minerals, best illustrated by the ZTR index.

Organic
Organic sedimentary rocks contain materials generated by living organisms, and include carbonate minerals created by organisms, such as corals, mollusks, and foraminifera, which cover the ocean floor with layers of calcium carbonate, which can later form limestone. Other examples include stromatolites, the flint nodules found in chalk (which is itself a biochemical sedimentary rock, a form of limestone), and coal and oil shale (derived from the remains of tropical plants and subjected to heat).

Chemical
Chemical sedimentary rocks form when minerals in solution become supersaturated and precipitate. In marine environments, this is a method for the formation of limestone. Another common environment in which chemical sedimentary rocks form is a body of water that is evaporating. Evaporation decreases the amount of water without decreasing the amount of dissolved material. Therefore, the dissolved material can become oversaturated and precipitate. Sedimentary rocks from this process can include the evaporite minerals halite (rock salt), sylvite, barite and gypsum.

Wiki

Detail...

Metamorphic Rock




Metamorphic rock is the result of the transformation of an existing rock type, the protolith, in a process called metamorphism, which means "change in form". The protolith is subjected to heat and pressure (temperatures greater than 150 to 200 °C and pressures of 1500 bars[1]), causing profound physical and/or chemical change. The protolith may be sedimentary rock, igneous rock or another, older metamorphic rock. Metamorphic rocks make up a large part of the Earth's crust and are classified by texture and by chemical and mineral assemblage (metamorphic facies). They may be formed simply by being deep beneath the Earth's surface, subjected to high temperatures and the great pressure of the rock layers above them. They can form from tectonic processes such as continental collisions, which cause horizontal pressure, friction and distortion. They are also formed when rock is heated by the intrusion of hot molten rock, called magma, from the Earth's interior. The study of metamorphic rocks (now exposed at the Earth's surface following erosion and uplift) provides us with information about the temperatures and pressures that occur at great depths within the Earth's crust.
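As a rough illustration of why burial alone can produce these conditions, the depth needed to reach metamorphic temperatures can be estimated from a geothermal gradient. The gradient and surface temperature below are assumed, typical values for illustration, not figures from this article:

```python
# Rough estimate of the burial depth at which metamorphic conditions begin.
# Both figures below are assumed, typical values for illustration only.

SURFACE_TEMP_C = 15.0      # assumed mean annual surface temperature (deg C)
GRADIENT_C_PER_KM = 25.0   # assumed continental geothermal gradient (K/km)

def depth_for_temp(target_c):
    """Depth in km at which the target temperature is reached."""
    return (target_c - SURFACE_TEMP_C) / GRADIENT_C_PER_KM

for t in (150, 200):
    print(f"{t} deg C reached near {depth_for_temp(t):.1f} km depth")
```

With these assumptions, the 150 °C threshold is reached at roughly 5–6 km of burial and 200 °C at roughly 7–8 km, which is why deeply buried rock metamorphoses without any tectonic heating.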



Metamorphic minerals

Metamorphic minerals are those that form only at the high temperatures and pressures associated with the process of metamorphism. These minerals, known as index minerals, include sillimanite, kyanite, staurolite, andalusite, and some garnet.

Other minerals, such as olivines, pyroxenes, amphiboles, micas, feldspars, and quartz, may be found in metamorphic rocks, but are not necessarily the result of the process of metamorphism. These minerals formed during the crystallization of igneous rocks. They are stable at high temperatures and pressures and may remain chemically unchanged during the metamorphic process. However, all minerals are stable only within certain limits, and the presence of some minerals in metamorphic rocks indicates the approximate temperatures and pressures at which they formed.

The change in the particle size of the rock during the process of metamorphism is called recrystallization. For instance, the small calcite crystals in the sedimentary rock limestone change into larger crystals in the metamorphic rock marble; in metamorphosed sandstone, recrystallization of the original quartz sand grains results in very compact quartzite, in which the often larger quartz crystals are interlocked. Both high temperatures and pressures contribute to recrystallization: high temperatures allow the atoms and ions in solid crystals to migrate, thus reorganizing the crystals, while high pressures cause solution of the crystals within the rock at their points of contact.

Foliation

Folded foliation in a metamorphic rock from near Geirangerfjord, Norway.

The layering within metamorphic rocks is called foliation (derived from the Latin word folia, meaning "leaves"), and it occurs when a rock is being shortened along one axis during recrystallization. This causes the platy or elongated crystals of minerals, such as mica and chlorite, to become rotated such that their long axes are perpendicular to the orientation of shortening. This results in a banded, or foliated, rock, with the bands showing the colors of the minerals that formed them.

Textures are separated into foliated and non-foliated categories. Foliated rock is a product of differential stress that deforms the rock in one plane, sometimes creating a plane of cleavage. For example, slate is a foliated metamorphic rock, originating from shale. Non-foliated rock does not have planar patterns of strain.

Rocks that were subjected to uniform pressure from all sides, or those that lack minerals with distinctive growth habits, will not be foliated. Slate is an example of a very fine-grained, foliated metamorphic rock, while phyllite is medium, schist coarse, and gneiss very coarse-grained. Marble is generally not foliated, which allows its use as a material for sculpture and architecture.

Another important mechanism of metamorphism is that of chemical reactions that occur between minerals without them melting. In the process atoms are exchanged between the minerals, and thus new minerals are formed. Many complex high-temperature reactions may take place, and each mineral assemblage produced provides us with a clue as to the temperatures and pressures at the time of metamorphism.

Metasomatism is the drastic change in the bulk chemical composition of a rock that often occurs during the processes of metamorphism. It is due to the introduction of chemicals from other surrounding rocks. Water may transport these chemicals rapidly over great distances. Because of the role played by water, metamorphic rocks generally contain many elements absent from the original rock, and lack some that originally were present. Still, the introduction of new chemicals is not necessary for recrystallization to occur.

Wiki

Detail...

Closed-Circuit Television

Closed-circuit television (CCTV) is the use of video cameras to transmit a signal to a specific place, on a limited set of monitors.

It differs from broadcast television in that the signal is not openly transmitted, though it may employ point-to-point (P2P), point-to-multipoint, or mesh wireless links. CCTV is often used for surveillance in areas that may need monitoring such as banks, casinos, airports, military installations, and convenience stores. It is also an important tool for distance education.
In industrial plants, CCTV equipment may be used to observe parts of a process from a central control room, for example when the environment is not suitable for humans. CCTV systems may operate continuously or only as required to monitor a particular event. A more advanced form of CCTV, utilizing Digital Video Recorders (DVRs), provides recording for possibly many years, with a variety of quality and performance options and extra features (such as motion-detection and email alerts). More recently, decentralized IP-based CCTV cameras, some equipped with megapixel sensors, support recording directly to network-attached storage devices, or internal flash for completely stand-alone operation.

Surveillance of the public using CCTV is particularly common in the UK, where there are reportedly more cameras per person than in any other country in the world.
There and elsewhere, its increasing use has triggered a debate about security versus privacy.


The first closed-circuit television cameras used in public spaces were crude, conspicuous, low definition black and white systems without the ability to zoom or pan. Modern CCTV cameras use small high definition colour cameras that can not only focus to resolve minute detail, but, by linking the control of the cameras to a computer, allow objects to be tracked semi-automatically. The technology that enables this is often referred to as Video Content Analysis (VCA), and is currently being developed by a large number of technology companies around the world. The current technology enables systems to recognize whether a moving object is a walking person, a crawling person or a vehicle. It can also determine the colour of the object. NEC claims to have a system that can estimate a person's age by evaluating a picture of him/her. Other technologies claim to be able to identify people by their biometrics.


CCTV monitoring station run by the West Yorkshire Police at the Elland Road football ground in Leeds.

The system identifies where an object is, how it is moving and whether it is a person or, for instance, a car. Based on this information the system developers implement features such as blurring faces or "virtual walls" that block the sight of a camera where it is not allowed to film. It is also possible to provide the system with rules, such as "sound the alarm whenever a person is walking close to that fence" or, in a museum, "set off an alarm if a painting is taken down from the wall".

VCA can also be used for forensics after the film has been made. It is then possible to search for certain actions within the recorded video. For example, if you know a criminal is driving a yellow car, you can set the system to search for yellow cars, and it will provide a list of all the times a yellow car is visible in the picture. These conditions can be made more precise by searching for "a person moving around in a certain area for a suspicious amount of time", for example someone standing around an ATM without using it.
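The forensic queries described above amount to filters over time-stamped detection records. A minimal sketch follows; the record format, field names and thresholds are hypothetical and not taken from any actual VCA product:

```python
# Sketch of VCA-style forensic queries over recorded detections.
# The Detection record and all example values are hypothetical.

from dataclasses import dataclass

@dataclass
class Detection:
    timestamp: float   # seconds into the recording
    kind: str          # "person" or "vehicle"
    color: str
    zone: str          # named region of the frame, e.g. "atm"

def find_vehicles(dets, color):
    """All times a vehicle of the given colour appears."""
    return [d.timestamp for d in dets
            if d.kind == "vehicle" and d.color == color]

def loitering(dets, zone, min_seconds):
    """True if a person lingers in a zone longer than min_seconds."""
    times = sorted(d.timestamp for d in dets
                   if d.kind == "person" and d.zone == zone)
    return bool(times) and (times[-1] - times[0]) >= min_seconds

log = [
    Detection(10.0, "vehicle", "yellow", "street"),
    Detection(40.0, "person", "n/a", "atm"),
    Detection(160.0, "person", "n/a", "atm"),
]
print(find_vehicles(log, "yellow"))   # every sighting of a yellow vehicle
print(loitering(log, "atm", 90))      # person near the ATM for over 90 s
```

A real system would run such filters over object tracks extracted from hours of video rather than a hand-built list, but the query logic is the same.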


Surveillance camera outside a McDonald's highway drive-in.

Maintenance of CCTV systems is important in case forensic examination is necessary after a crime has been committed.

In crowds the system is limited to finding anomalies, for instance a person moving in the opposite direction to the crowd, which might be a case in airports where passengers are only supposed to walk in one direction out of a plane, or in a subway where people are not supposed to exit through the entrances.[citation needed]

VCA also has the ability to track people on a map by calculating their position from the images. It is then possible to link many cameras and track a person through an entire building or area. This can allow a person to be followed without having to analyze many hours of film. Currently the cameras have difficulty identifying individuals from video alone, but if connected to a key-card system, identities can be established and displayed as a tag over their heads on the video.


Monitoring station of a small office building.

There is also a significant difference in where the VCA processing takes place: the data is processed either within the cameras themselves ("at the edge") or by a centralized server. Both approaches have their pros and cons.

The implementation of automatic number plate recognition produces a potential source of information on the location of persons or groups.

There is no technological limitation preventing a network of such cameras from tracking the movement of individuals. Reports have also been made of plate recognition misreading numbers leading to the billing of the entirely wrong person.[37] In the UK, car cloning is a crime where, by altering, defacing or replacing their number plates with stolen ones, perpetrators attempt to avoid speeding and congestion charge fines and even to steal petrol from garage forecourts.

CCTV critics see the most disturbing extension to this technology as the recognition of faces from high-definition CCTV images.[citation needed] This could determine a person's identity without alerting him that his identity is being checked and logged. The systems can check many thousands of faces in a database in under a second.

The combination of CCTV and facial recognition has been tried as a form of mass surveillance, but has been ineffective because of the low discriminating power of facial recognition technology and the very high number of false positives generated. This type of system has been proposed to compare faces at airports and seaports with those of suspected terrorists or other undesirable entrants.


Eye-in-the-sky surveillance dome camera watching from a high steel pole.

Computerized monitoring of CCTV images is under development, so that a human CCTV operator does not have to look at all the screens continuously, allowing an operator to observe many more CCTV cameras. These systems do not observe people directly. Instead they track their behaviour by looking for particular types of body movement, or particular types of clothing or baggage.

The theory behind this is that in public spaces people behave in predictable ways. People who are not part of the 'crowd', for example car thieves, do not behave in the same way. The computer can identify their movements, and alert the operator that they are acting out of the ordinary. In the latter part of 2006, news reports on UK television brought to light newly developed technology that uses microphones[clarification needed] in conjunction with CCTV.

If a person is observed to be shouting in an aggressive manner (e.g., provoking a fight), the camera can automatically zoom in on and pinpoint the individual and alert a camera operator. This has led to discussion of whether the technology could also be used to eavesdrop on and record private conversations from a reasonable distance (e.g., 100 metres or about 330 feet).

The same type of system can track identified individuals as they move through the area covered by CCTV. Such applications were introduced in the early 2000s, mainly in the USA, France, Israel and Australia.[citation needed] With software tools, the system is able to develop three-dimensional models of an area, and to track and monitor the movement of objects within it.

To many, the development of CCTV in public areas, linked to computer databases of people's pictures and identity, presents a serious breach of civil liberties. Critics fear the possibility that one would not be able to meet anonymously in a public place or drive and walk anonymously around a city.[citation needed] Demonstrations or assemblies in public places could be affected as the state would be able to collate lists of those leading them, taking part, or even just talking with protesters in the street.
Retention, storage and preservation
The long-term storage and archiving of CCTV recordings is an issue of concern in the implementation of a CCTV system. Re-usable media such as tape may be cycled through the recording process at regular intervals. There are statutory limits on retention of data.

Recordings are kept for several purposes. Firstly, the primary purpose for which they were created (e.g. to monitor a facility). Secondly, they need to be preserved for a reasonable amount of time to recover any evidence of other important activity they might document (e.g. a group of people passing a facility the night a crime was committed). Finally, the recordings may be evaluated for historical, research or other long-term information of value they may contain (e.g. samples kept to help understand trends for a business or community).

Recordings are more commonly stored on hard disk drives than on video cassette recorders. The quality of digital recordings is subject to compression ratios, images stored per second, image size and the duration of image retention before being overwritten. Different vendors of digital video recorders use different compression standards and varying compression ratios.
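The parameters listed above (compression, images stored per second, image size and retention period) together determine how much disk a digital recorder needs. A back-of-the-envelope sketch, with the camera count and per-image size assumed purely for illustration:

```python
# Back-of-the-envelope DVR storage estimate. The example figures
# (cameras, rate, compressed image size, retention) are assumptions
# for illustration, not vendor specifications.

def storage_gb(cameras, images_per_s, kb_per_image, days):
    """Total storage in GB before the oldest footage is overwritten."""
    images = cameras * images_per_s * 86_400 * days  # 86 400 s per day
    return images * kb_per_image / 1_048_576         # KB -> GB

# e.g. 8 cameras at 5 images/s, ~15 KB per compressed image, 30 days
print(f"{storage_gb(8, 5, 15, 30):.0f} GB")
```

Halving the image rate or doubling the compression ratio halves the requirement, which is why vendors' quality and retention options trade off directly against disk cost.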

Wiki


Detail...

Mount Sinabung



Mount Sinabung (Indonesian: Gunung Sinabung) is a Pleistocene-to-Holocene stratovolcano of andesite and dacite in the Karo plateau of Karo Regency, North Sumatra, Indonesia. Many lava flows are on its flanks, and the last known eruption occurred in the year 1600. Solfataric activity (cracks where steam and gas are emitted) was last seen at the summit in 1912, and no other documented events took place until the eruption in the early hours of 29 August 2010.


Geology
Most Indonesian volcanism stems from the Sunda Arc, created by the subduction of the Indo-Australian Plate under the Eurasian Plate. The arc is bounded on the north-northwest by the Andaman Islands, a chain of basaltic volcanoes, and on the east by the Banda Arc, also created by subduction.[3]

Sinabung is a long andesitic-dacitic stratovolcano with a total of four volcanic craters, only one of which is active.

On 29 August 2010, the volcano experienced a minor eruption after several days of rumbling.[5] Ash spewed into the atmosphere up to 1.5 kilometres (0.93 mi) and lava was seen overflowing the crater.[5] The volcano had been inactive for centuries with the most recent eruption occurring in 1600.[5]

Mount Sinabung is classified as category “B”, which means it does not need to be monitored intensively. Other volcanoes, in category “A”, must be monitored frequently, the head of the National Volcanology Agency, named only as Surono, told Xinhua by phone from the province.

Wiki

Detail...

Igneous Rock

Igneous rock (derived from the Latin word igneus meaning of fire, from ignis meaning fire) is one of the three main rock types, the others being sedimentary and metamorphic rock. Igneous rock is formed through the cooling and solidification of magma or lava. Igneous rock may form with or without crystallization, either below the surface as intrusive (plutonic) rocks or on the surface as extrusive (volcanic) rocks. This magma can be derived from partial melts of pre-existing rocks in either a planet's mantle or crust. Typically, the melting is caused by one or more of three processes: an increase in temperature, a decrease in pressure, or a change in composition. Over 700 types of igneous rocks have been described, most of them having formed beneath the surface of Earth's crust. These have diverse properties, depending on their composition and how they were formed.


Geological significance
The upper 16 kilometres (10 mi) of Earth's crust is composed of approximately 95% igneous rocks with only a thin, widespread covering of sedimentary and metamorphic rocks.[1]

Igneous rocks are geologically important because:

their minerals and global chemistry give information about the composition of the mantle, from which some igneous rocks are extracted, and the temperature and pressure conditions that allowed this extraction, and/or of other pre-existing rock that melted;
their absolute ages can be obtained from various forms of radiometric dating and thus can be compared to adjacent geological strata, allowing a time sequence of events;
their features are usually characteristic of a specific tectonic environment, allowing tectonic reconstitutions (see plate tectonics);
in some special circumstances they host important mineral deposits (ores): for example, tungsten, tin, and uranium are commonly associated with granites and diorites, whereas ores of chromium and platinum are commonly associated with gabbros.

Intrusive igneous rocks

Close-up of granite (an intrusive igneous rock) exposed in Chennai, India.

Intrusive igneous rocks are formed from magma that cools and solidifies within the crust of a planet. Surrounded by pre-existing rock (called country rock), the magma cools slowly, and as a result these rocks are coarse grained. The mineral grains in such rocks can generally be identified with the naked eye. Intrusive rocks can also be classified according to the shape and size of the intrusive body and its relation to the other formations into which it intrudes. Typical intrusive formations are batholiths, stocks, laccoliths, sills and dikes.

The central cores of major mountain ranges consist of intrusive igneous rocks, usually granite. When exposed by erosion, these cores (called batholiths) may occupy huge areas of the Earth's surface.

Coarse grained intrusive igneous rocks which form at depth within the crust are termed abyssal; intrusive igneous rocks which form near the surface are termed hypabyssal.

Extrusive igneous rocks

Basalt (an extrusive igneous rock in this case); light coloured tracks show the direction of lava flow.

Extrusive igneous rocks are formed at the crust's surface as a result of the partial melting of rocks within the mantle and crust. Extrusive igneous rocks cool and solidify more quickly than intrusive igneous rocks; because they cool so quickly, they are fine grained.

The melted rock, with or without suspended crystals and gas bubbles, is called magma. Magma rises because it is less dense than the rock from which it was created. Magma that reaches and is extruded onto the surface, whether beneath water or air, is called lava. Eruptions of volcanoes into air are termed subaerial, whereas those occurring underneath the ocean are termed submarine. Black smokers and mid-ocean ridge basalt are examples of submarine volcanic activity.

The volume of extrusive rock erupted annually by volcanoes varies with plate tectonic setting. Extrusive rock is produced in the following proportions:[2]

divergent boundary: 73%
convergent boundary (subduction zone): 15%
hotspot: 12%.
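Applied to an assumed global total, the proportions above split out as follows. The 4 km³ per year figure is an illustrative assumption, not a value from this article:

```python
# Splitting an assumed global annual extrusive-rock volume across
# tectonic settings in the proportions quoted above. The 4 km^3/yr
# total is an illustrative assumption, not a figure from the article.

proportions = {
    "divergent boundary": 0.73,
    "convergent boundary": 0.15,
    "hotspot": 0.12,
}
total_km3_per_year = 4.0  # assumed global annual extrusive volume

for setting, frac in proportions.items():
    print(f"{setting}: {frac * total_km3_per_year:.2f} km^3/yr")
```

Whatever the true total, the point of the breakdown is that divergent boundaries (mostly the mid-ocean ridges) dominate, producing several times more extrusive rock than subduction zones and hotspots combined.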
Magma which erupts from a volcano behaves according to its viscosity, determined by temperature, composition, and crystal content. High-temperature magma, most of which is basaltic in composition, behaves in a manner similar to thick oil and, as it cools, treacle. Long, thin basalt flows with pahoehoe surfaces are common. Intermediate composition magma such as andesite tends to form cinder cones of intermingled ash, tuff and lava, and may have viscosity similar to thick, cold molasses or even rubber when erupted. Felsic magma such as rhyolite is usually erupted at low temperature and is up to 10,000 times as viscous as basalt. Volcanoes with rhyolitic magma commonly erupt explosively, and rhyolitic lava flows typically are of limited extent and have steep margins, because the magma is so viscous.

Felsic and intermediate magmas that erupt often do so violently, with explosions driven by release of dissolved gases — typically water but also carbon dioxide. Explosively erupted pyroclastic material is called tephra and includes tuff, agglomerate and ignimbrite. Fine volcanic ash is also erupted and forms ash tuff deposits which can often cover vast areas.

Because lava cools and crystallizes rapidly, it is fine grained. If the cooling has been so rapid as to prevent the formation of even small crystals after extrusion, the resulting rock may be mostly glass (such as the rock obsidian). If the cooling of the lava happened slowly, the rocks would be coarse-grained.

Because the minerals are mostly fine-grained, it is much more difficult to distinguish between the different types of extrusive igneous rocks than between different types of intrusive igneous rocks. Generally, the mineral constituents of fine-grained extrusive igneous rocks can only be determined by examination of thin sections of the rock under a microscope, so only an approximate classification can usually be made in the field.

Wiki

Detail...

Friday, July 16, 2010

Injection Moulder Provides 1600kN Clamping Force

This machine replaces the current Engel Victory and E-Victory 150 machines.

The Engel Victory 160 features a redesigned clamping cylinder, mould-fixing platen and C frame.

The use of the Flex-Links with force dividers allows for unbeatable clamping-unit quality.


They reduce the deflection of the moving mould-fixing platen to a minimum and ensure smoothly distributed force transmission to the mould across the whole mould-mounting surface.

All injection units are safeguarded by safety fences and safety gates.

Hydraulic injection units of size 750 are alternatively available as encapsulated types, without additional safety guarding.

The selection of electrical injection units has now been extended to include the 940 injection unit.

The Engel Victory 160's new ecodrive hydraulic drive system is now also available.

The new system has a fixed displacement pump and servomotor instead of the standard hydraulics and asynchronous motor used previously.

This means the machine's speed is directly linked to the drive speed.

The new servohydraulic ecodrive matches pump speed to the actual demand of the process.

In other words, the drive is only active during movements, with energy consumption close to zero when the machine is idle.

This makes it possible to reduce energy consumption by 70 per cent.
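The saving follows from the drive being active only during movements. A simple duty-cycle sketch shows the arithmetic; all cycle timings and power draws below are assumed for illustration and are not Engel specifications:

```python
# Duty-cycle model of the servohydraulic drive's energy saving.
# All timings and power draws are illustrative assumptions only.

def energy_per_cycle(active_s, idle_s, active_kw, idle_kw):
    """Energy in kWh consumed over one moulding cycle."""
    return (active_s * active_kw + idle_s * idle_kw) / 3600

# Conventional drive: the pump keeps running even while the machine idles.
conventional = energy_per_cycle(10, 25, 30, 30)
# Ecodrive: near-zero draw during the holding and cooling phases.
ecodrive = energy_per_cycle(10, 25, 30, 1)

saving = 1 - ecodrive / conventional
print(f"saving ~ {saving:.0%}")
```

With these assumed numbers the saving comes out near the quoted 70 per cent; the longer the idle (holding and cooling) portion of the cycle, the larger the benefit of shutting the pump down between movements.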


Tooling University


Detail...

Telsonic Outlines Key to Ultrasonic Welding

Telsonic's Martin Frost discusses how thinking ahead can assist designers and integrators of ultrasonic welding to avoid pitfalls and make the most of the flexible and robust joining technology.

Reducing the time required to get a product to market is often a key element of today's manufacturing strategies.

Achieving these objectives involves taking a 'right first time' approach to minimise the lengthy development stages that can often be associated with a new product.


The increasing use of SLA and 3D printed models is a tremendous aid to visualising the shape, size, sections and features of the finished component, giving designers and production engineers a real insight into the production and performance criteria associated with the part.

Reducing the design and development time, however, means that the product designers have to make carefully considered and informed design inputs more quickly.

The most effective way of ensuring that the ultrasonic welding process will be consistent and predictable is wherever possible to design for assembly.

Focus should be placed on the polymers that will be used, along with preparation of part geometry and tolerances at the design stage.

This will simplify and enhance the reliability of the subsequent assembly and welding process.

Considering the potential requirement for specific weld features on the part is essential at the outset of the design process, as this in turn will have an impact on the component and the mould tool design.

These are important decisions, which will ultimately have an influence on the production process and part functionality and that should be based on sound advice sought through collaboration with the manufacturer of the ultrasonic technology.

Ultrasonic welding depends upon the response of the materials being joined.

Materials such as polystyrene, ABS and polycarbonate all respond well to ultrasonic energy.

Other materials, however, including polyethylene and tougher grades of nylon, are more difficult to weld with ultrasonics.

Reinforced materials, such as those with fillers, can have a positive or negative impact on the ultrasonic welding process, dependent on the fill type and quantity.

The best results will always be obtained when the components to be welded are produced from the same material.

Dissimilar materials can be joined using ultrasonics providing they are in the same chemically compatible family and have similar melting points.

It is also possible to weld completely dissimilar materials using a joint design that will allow one material to be reformed and encapsulate the other mechanically, thus securing it in place.

Next to the selection of materials, part design and, in particular, joint design hold the key to creating a robust and repeatable weld.

Ultrasonic weld specialists and the internet offer plenty of valuable ground-rule information on joint designs - even Telsonic has its own design recommendation manual.

This information should be viewed as a guide only, as complex or delicate components require a sensitive approach to welding.

As an example, in these instances it is important to establish that the parts to be welded are actually capable of holding up to the forces applied by the process, however small.

Other dilemmas faced by the designer include ensuring that a joint can be achieved reliably - at speed and with a sufficiently wide process window.

Too narrow a process window will result in repeated machine setting and possibly higher reject rates.

The proposed production method - manual or automated - can also have an influence on both the product design and the welding process.

A further consideration may be the presence of other components, especially delicate parts or electronics that may be part of the assembly.

At this level it is important to have more than just a basic appreciation of the ultrasonic welding process.

Seeking advice from the supplier will ensure that the solution is based upon extensive experience of how best to apply the technology to the task in hand.

Examples where this type of collaboration has been successful include the creation of joint designs and part preparation where the ultrasonic energy is focused into the joint efficiently and the weld completed swiftly, without dissipation to other areas of the component or even a change in weld frequency.

These principles not only result in a successful weld but eliminate the risk of damage to any internal components.

Where there is to be more than one weld on a given part, both the component and joint design should be reviewed to ensure that energy from multiple weld sequences does not have an adverse effect on any of the previous weld points.

Good joint design in delicate components should be mindful of the amount of energy required to achieve the weld.

A typical example of the approach would be the use of an 'energy director' joint, as opposed to a 'shear joint'.

This design, when combined with application experience, still achieves both strength and a hermetic seal, but with a reduction in the energy required to make the weld of up to 40 per cent.

The use of location features, often surrounding the weld joint, makes pre-welding assembly more robust by ensuring that the individual parts are positioned repeatably, with the added benefit of assisting the welding process.

The importance of dimensional tolerances and component stability must not be overlooked if the welding process is to remain consistent during production.

Parts can vary due to inconsistent moulding conditions or poor handling and storage post moulding, while parts are cooling.

Any resultant inconsistencies in the shape and size of the individual parts due to these factors, or inappropriate component tolerances, will be reflected in the results achieved from the welding process.

With the design complete and component parts moulded, the physical parts should then be reviewed carefully at QC level to scrutinise the ultrasonic weld preparation features for size and accuracy.

The joint itself should be viewed and respected as a precise collapse of polymer melt, sized and positioned to provide a predictable and process controllable way to achieve fuse strength and not just a token sacrificial bead of plastic.

Having defined materials, joint design, tolerances and moulded a part fit for sustainable and quality production, it is essential to ensure that the welding process is not compromised by the use of inappropriate or underpowered equipment.

Attempting to use equipment that is incapable of generating the required amount of ultrasonic power, or that lacks the control functionality required for the task in hand, will undoubtedly result in continuous adjustments to pressure and amplitude, and ultimately guesswork in trying to make the application 'fit' the processing capabilities of the machine.

This is especially important for high-volume precision components and those used in medical devices or other safety-critical applications, where it is essential to produce parts to a consistent specification and quality.

In these instances it is essential to invest in a supplier specialist with design capability, laboratory development facilities, a broad range of machines and modules, together with the expertise to develop a robust production solution in partnership with the design house, integrator and manufacturer.

Tooling University

Detail...

Purging Compound Reduces Extrusion Downtime

Package film extrusion plants practising frequent material changeovers have been successful in using a chemical purging compound to improve quality and reduce downtime.

Package film extrusion plants have been successful in using SuperNova chemical purging compound from Novachem to improve quality and reduce downtime.

One operator found that frequent material changeovers, particularly when using Eval or Surlyn, can yield gels and specks in production output - even after running scrap for two or three hours, gels and specks were apparent.

This led to frequent unscheduled die teardowns - at least once a month - and hours of lost production time.

The application specialists at Novachem were able to create custom purging processes to help remove leftover production material that can degrade in the machinery during transitions and startup.

After analyzing the plant's needs, a site-specific regimen of SuperNova chemical purging compound was recommended.

This usually involves use of the compound before every material transition, and following each teardown and cleaning.

After using a tailor-engineered application of SuperNova chemical purging compound as recommended by the application specialists at Novachem, changeovers yielded no more gels and little or no specking after shutdowns.

With the custom purging process created by Novachem's application experts in place, plant productivity and worker efficiency have improved dramatically.

Some plants have saved up to 12h/month on their changeovers, and have been able to avoid up to three shifts a month on teardowns.

As a bonus, there is little or no product waste due to gels or specks.

Tooling University

Detail...

Double-Strand Core Extrusion Reduces Costs

In extruding plastics profiles, double-strand extrusion produces two profiles simultaneously, reducing the capital investment and the required floor space for the extrusion line.


In profile manufacture, coextrusion can cut costs substantially, said KraussMaffei.

For example, producing a profile with a regrind core covered with virgin material in all visible areas sharply reduces material costs.

Schuco International has recently invested in a profile system using core extrusion technology.

Schuco International is a global player, developing and marketing complete systems using plastics, aluminium and steel.

One current project combines double-strand extrusion with core technology.

Double-strand extrusion produces two profiles simultaneously.

This reduces both the capital investment and the required floorspace for the extrusion line.

Schuco embarked on a cooperative project with Greiner Extrusion and KraussMaffei Berstorff to develop a double-strand extrusion system for producing its five-chamber main window profiles.

The big challenges were to design the die, to split up and manage the melt streams, and to develop a cost-effective extruder concept.

The combined know-how of the three partners made it possible to meet these challenges in a remarkably short time.

* Pressure-optimized channel system - the material ratios in Schuco's main window profiles are around 60% virgin PVC and 40% regrind.

The new system uses two extruders, both of which supply both strands.

The melt streams are split via a pressure-optimized channel system so that they reach the dies in the required pattern.

The two extruders need to be positioned very close together in order to feed the channel system effectively.

The concept uses two separate KMD 90-36/P profile extruders from KraussMaffei Berstorff, each on its own base frame.

The main control cabinets are positioned at a distance from the extrusion line.

Two extra compact control units, positioned close to the extruder output zone, house the die control circuits and the operator panels.

Both operator panels (one for each extruder) are on the operator side.

Each extruder can be operated separately, or the two extruders can be operated in synchrony.

This gives Schuco maximum flexibility to respond to future requirements.

* About KraussMaffei - KraussMaffei is the only supplier worldwide of the three key machine technologies for the plastics and rubber compounding and processing industries.

The KraussMaffei brand stands for comprehensive solutions for injection and reaction moulding, while the KraussMaffei Berstorff brand covers the whole spectrum of extrusion systems, including complete extrusion lines.

KraussMaffei has a unique wealth of know-how across the whole range of processing methods.

As a technology partner, it links this know-how with innovative engineering to deliver application-specific and integrated solutions.

KraussMaffei operates a network of 70 subsidiaries and sales agencies close to customers worldwide.

Tooling University

Detail...

Saturday, July 10, 2010

Brain Structure

Scientists have found that the size of different parts of people's brains corresponds to their personalities. For example, conscientious people tend to have a bigger lateral prefrontal cortex, a brain region involved in planning and controlling actions.


Psychologists commonly break down all personality traits into five factors: conscientiousness, extraversion, neuroticism, agreeableness, and openness/intellect. Researchers Colin DeYoung at the University of Minnesota and colleagues wanted to know if these factors correlated with the size of structures in the brain.

The scientists gave 116 volunteers a questionnaire to describe their personality, then gave them a brain imaging test that measured the relative size of different parts of the brain. Several links were found between the size of certain brain regions and personality. The research appears in the journal Psychological Science.


For example, "everybody, I think, has a common sense of what extroversion is – someone who is talkative, outgoing, brash," said DeYoung. "They get more pleasure out of things like social interaction, amusement parks, or really just about anything, and they're also more motivated to seek reward, which is part of why they're more assertive." That quest for reward is thought to be a leading factor in extroversion.

Earlier studies had found parts of the brain that are active in considering rewards. So DeYoung and his colleagues reasoned that those regions should be bigger in extroverts. Indeed, they found that one of those regions, the medial orbitofrontal cortex – just above and behind the eyes – was significantly larger in very extroverted study subjects.

The study found similar associations for conscientiousness, which is associated with planning; neuroticism, a tendency to experience negative emotions that is associated with sensitivity to threat and punishment; and agreeableness, which relates to parts of the brain that allow us to understand each other's emotions, intentions, and mental states. Only openness/intellect didn't associate clearly with any of the predicted brain structures, the researchers found.

"This starts to indicate that we can actually find the biological systems that are responsible for these patterns of complex behavior and experience that make people individuals," said DeYoung. He points out, though, that this doesn't mean your personality is fixed from birth; the brain grows and changes. Experiences change the brain as it develops, and those changes in the brain can change personality.

World Science

Detail...

Thursday, July 1, 2010

Transformations of Energy

One form of energy can often be readily transformed into another with the help of a device: a battery converts chemical energy to electric energy; a dam converts gravitational potential energy to kinetic energy of moving water (and the blades of a turbine) and ultimately to electric energy through an electric generator. Similarly, in a chemical explosion, chemical potential energy is transformed into kinetic energy and thermal energy in a very short time. Yet another example is a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at its maximum; at its lowest point the kinetic energy is at its maximum and equals the decrease in potential energy. If one (unrealistically) assumes that there is no friction, the conversion of energy between these states is perfect, and the pendulum will continue swinging forever.
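
The frictionless pendulum's energy bookkeeping can be sketched numerically. This is a minimal illustration; the length, mass, and release angle are arbitrary choices, not values from the text:

```python
import math

g = 9.81                   # gravitational acceleration, m/s^2
L = 2.0                    # pendulum length, m (illustrative)
m = 1.0                    # bob mass, kg (illustrative)
theta0 = math.radians(30)  # release angle

def height(theta):
    """Height of the bob above its lowest point at swing angle theta."""
    return L * (1 - math.cos(theta))

# With no friction, total mechanical energy is fixed by the release point.
E_total = m * g * height(theta0)

# At any intermediate angle, the kinetic energy is exactly the potential
# energy that has been given up, so PE + KE stays constant.
for theta_deg in (30, 20, 10, 0):
    theta = math.radians(theta_deg)
    pe = m * g * height(theta)
    ke = E_total - pe
    v = math.sqrt(2 * ke / m)
    print(f"theta={theta_deg:2d} deg  PE={pe:.3f} J  KE={ke:.3f} J  v={v:.3f} m/s")
```

At the lowest point (theta = 0) the potential energy is zero and the speed is largest, matching the description above.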

Energy gives rise to weight and is equivalent to matter, and vice versa. The formula E = mc², derived by Albert Einstein (1905), quantifies the relationship between mass and rest energy within the concept of special relativity. In different theoretical frameworks, similar formulas were derived by J. J. Thomson (1881), Henri Poincaré (1900), Friedrich Hasenöhrl (1904) and others (see Mass-energy equivalence#History for further information). Since c² is extremely large relative to ordinary human scales, the conversion of an ordinary amount of mass (say, 1 kg) to other forms of energy can liberate tremendous amounts of energy (~9×10^16 joules), as can be seen in nuclear reactors and nuclear weapons. Conversely, the mass equivalent of a unit of energy is minuscule, which is why a loss of energy from most systems is difficult to measure by weight, unless the energy loss is very large. Examples of energy transformation into matter (particles) are found in high-energy nuclear physics.
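
The scale of mass-energy equivalence quoted above is easy to check directly:

```python
c = 299_792_458.0   # speed of light in vacuum, m/s
m = 1.0             # mass, kg

# Rest energy E = m * c^2
E = m * c**2
print(f"E = {E:.3e} J")   # ~8.988e16 J, i.e. the ~9x10^16 J quoted in the text
```

For comparison, that is roughly the annual electricity output of a large power plant, which is why even tiny mass defects in nuclear reactions release so much energy.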

In nature, transformations of energy can be fundamentally classed into two kinds: those that are thermodynamically reversible, and those that are thermodynamically irreversible. In an irreversible process, energy is dissipated (spread) into empty energy states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states) without degradation of even more energy; a reversible process is one in which this sort of dissipation does not happen. For example, conversion of energy from one type of potential field to another is reversible, as in the pendulum system described above. In processes where heat is generated, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered in order to be converted with 100% efficiency into other forms of energy. In this case, the energy must partly stay as heat and cannot be completely recovered as usable energy, except at the price of an increase in some other kind of heat-like disorder in quantum states in the universe (such as an expansion of matter, or a randomization in a crystal).

As the universe evolves in time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or other kinds of increases in disorder). This has been referred to as the inevitable thermodynamic heat death of the universe. In this heat death the energy of the universe does not change, but the fraction of energy which is available to produce work through a heat engine, or to be transformed to other usable forms of energy (through the use of generators attached to heat engines), grows less and less.

Wiki


Detail...

Depleted Uranium

Depleted uranium (DU) is uranium primarily composed of the isotope uranium-238 (U-238). Natural uranium is about 99.27 percent U-238, 0.72 percent U-235, and 0.0055 percent U-234. U-235 is used for fission in nuclear reactors and nuclear weapons. Uranium is enriched in U-235 by separating the isotopes by mass. The byproduct of enrichment, called depleted uranium or DU, contains less than one third as much U-235 and U-234 as natural uranium. The external radiation dose from DU is about 60 percent of that from the same mass of natural uranium.
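
As a rough sketch of why DU emits less radiation than natural uranium, one can compare specific activities (decays per second per gram) built from the isotope half-lives. The natural-uranium composition below comes from the text; the DU composition is an assumed typical value, and total activity is only a crude proxy for the external dose figure quoted above:

```python
import math

N_A = 6.02214e23   # Avogadro's number, /mol
YEAR = 3.1557e7    # seconds per year

# (half-life in years, molar mass in g/mol)
ISOTOPES = {
    "U-238": (4.468e9, 238.05),
    "U-235": (7.04e8, 235.04),
    "U-234": (2.455e5, 234.04),
}

def specific_activity(isotope):
    """Activity per gram: A = (ln 2 / t_half) * N_A / M, in Bq/g."""
    t_half, M = ISOTOPES[isotope]
    return math.log(2) / (t_half * YEAR) * N_A / M

def mixture_activity(fractions):
    """Total activity (Bq/g) of a uranium mix given its mass fractions."""
    return sum(f * specific_activity(iso) for iso, f in fractions.items())

natural = {"U-238": 0.9927, "U-235": 0.0072, "U-234": 0.000055}
depleted = {"U-238": 0.998, "U-235": 0.002, "U-234": 0.000010}  # assumed typical DU

a_nat = mixture_activity(natural)
a_du = mixture_activity(depleted)
print(f"natural uranium:  {a_nat/1e3:.1f} kBq/g")
print(f"depleted uranium: {a_du/1e3:.1f} kBq/g  (ratio {a_du/a_nat:.0%})")
```

Note that trace U-234 dominates the activity of natural uranium despite its tiny mass fraction, which is why stripping it out (along with U-235) cuts the activity to roughly the 60 percent level mentioned in the text.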


DU is also found in reprocessed spent nuclear reactor fuel, but that kind can be distinguished from DU produced as a byproduct of uranium enrichment by the presence of U-236.[3] In the past, DU has been called Q-metal, depletalloy, and D-38.

DU is useful because of its very high density of 19.1 g/cm3. Civilian uses include counterweights in aircraft, radiation shielding in medical radiation therapy and industrial radiography equipment, and containers used to transport radioactive materials. Military uses include defensive armor plating and armor-piercing projectiles.

The use of DU in munitions is controversial because of questions about potential long-term health effects.[4][5] Normal functioning of the kidney, brain, liver, heart, and numerous other systems can be affected by uranium exposure, because in addition to being weakly radioactive, uranium is a toxic metal.[6] It is weakly radioactive and remains so because of its long physical half-life (4.468 billion years for uranium-238), but has a considerably shorter biological half-life. The aerosol produced during impact and combustion of depleted uranium munitions can potentially contaminate wide areas around the impact sites or can be inhaled by civilians and military personnel.[7] During a three week period of conflict in 2003 in Iraq, 1,000 to 2,000 tonnes of DU munitions were used, mostly in cities.[8]

The actual acute and chronic toxicity of DU is also a point of medical controversy. Multiple studies using cultured cells and laboratory rodents suggest the possibility of leukemogenic, genetic, reproductive, and neurological effects from chronic exposure.[4] A 2005 epidemiology review concluded: "In aggregate the human epidemiological evidence is consistent with increased risk of birth defects in offspring of persons exposed to DU."[9] The World Health Organization states that no consistent risk of reproductive, developmental, or carcinogenic effects has been reported in humans.[10][11] However, the objectivity of this report has been called into question.


Wiki

Detail...

Wednesday, June 23, 2010

Kinetic Energy

The kinetic energy of an object is the extra energy which it possesses due to its motion. It is defined as the work needed to accelerate a body of a given mass from rest to its current velocity. Having gained this energy during its acceleration, the body maintains this kinetic energy unless its speed changes. Negative work of the same magnitude would be required to return the body to a state of rest from that velocity.


The kinetic energy of a single object is completely frame-dependent (relative): it can take any non-negative value, by choosing a suitable inertial frame of reference. For example, a bullet racing by a non-moving observer has kinetic energy in the reference frame of this observer, but the same bullet has zero kinetic energy in the reference frame which moves with the bullet. By contrast, the total kinetic energy of a system of objects is not completely removable by a suitable choice of the inertial reference frame, unless all the objects have the same velocity. In any other case the total kinetic energy is at least equal to a non-zero minimum which is independent of the inertial reference system. This kinetic energy (if present) contributes to the system's invariant mass, which is seen as the same value in all reference frames, and by all observers.

The kinetic energy of an object of mass m traveling at a speed v is mv²/2, provided v is much less than the speed of light.
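
A minimal check of the formula, alongside the exact relativistic expression (γ − 1)mc² to show the two agree at everyday speeds (the car's mass and speed are illustrative):

```python
import math

c = 299_792_458.0  # speed of light, m/s

def ke_classical(m, v):
    """Newtonian kinetic energy, valid for v much less than c."""
    return 0.5 * m * v**2

def ke_relativistic(m, v):
    """Exact kinetic energy (gamma - 1) * m * c^2 from special relativity."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)
    return (gamma - 1.0) * m * c**2

# A 1000 kg car at 30 m/s: since v << c, the two values are indistinguishable
m, v = 1000.0, 30.0
print(ke_classical(m, v))      # 450000.0 J
print(ke_relativistic(m, v))   # ~450000 J
```

The relativistic value differs only at extreme speeds; near the speed of light the classical formula badly underestimates the true kinetic energy.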

History and etymology
The adjective "kinetic" has its roots in the Greek word κίνηση (kinesis) meaning "motion" – the same root as in the word cinema (referring to motion pictures).

The principle in classical mechanics that E ∝ mv² was first theorized by Gottfried Leibniz and Johann Bernoulli, who described kinetic energy as the "living force", vis viva. Willem 's Gravesande of the Netherlands provided experimental evidence of this relationship. By dropping weights from different heights into a block of clay, 's Gravesande determined that their penetration depth was proportional to the square of their impact speed. Émilie du Châtelet recognized the implications of the experiment and published an explanation.[1]

The terms "kinetic energy" and "work" with their present scientific meanings date back to the mid-19th century. Early understandings of these ideas can be attributed to Gaspard-Gustave Coriolis, who in 1829 published the paper titled Du Calcul de l'Effet des Machines outlining the mathematics of kinetic energy. William Thomson, later Lord Kelvin, is given the credit for coining the term "kinetic energy" c. 1849–1851.

There are various forms of energy: chemical energy, heat, electromagnetic radiation, potential energy (gravitational, electric, elastic, etc.), nuclear energy, and rest energy. These can be categorized in two main classes: potential energy and kinetic energy.

Kinetic energy can be best understood by examples that demonstrate how it is transformed to and from other forms of energy. For example, a cyclist will use chemical energy that was provided by food to accelerate a bicycle to a chosen speed. This speed can be maintained without further work, except to overcome air-resistance and friction. The energy has been converted into kinetic energy – the energy of motion – but the process is not completely efficient and heat is also produced within the cyclist.

The kinetic energy in the moving cyclist and the bicycle can be converted to other forms. For example, the cyclist could encounter a hill just high enough to coast up, so that the bicycle comes to a complete halt at the top. The kinetic energy has now largely been converted to gravitational potential energy that can be released by freewheeling down the other side of the hill. (Since the bicycle lost some of its energy to friction, it will never regain all of its speed without further pedaling. Note that the energy is not destroyed; it has only been converted to another form by friction.) Alternatively the cyclist could connect a dynamo to one of the wheels and also generate some electrical energy on the descent. The bicycle would be traveling more slowly at the bottom of the hill because some of the energy has been diverted into making electrical power. Another possibility would be for the cyclist to apply the brakes, in which case the kinetic energy would be dissipated through friction as thermal energy.

Like any physical quantity which is a function of velocity, the kinetic energy of an object depends on the relationship between the object and the observer's frame of reference. Thus, the kinetic energy of an object is not invariant.

Examples
Spacecraft use chemical energy to take off and gain considerable kinetic energy to reach orbital velocity. This kinetic energy gained during launch will remain constant while in orbit because there is almost no friction. However it becomes apparent at re-entry when the kinetic energy is converted to heat.

Kinetic energy can be passed from one object to another. In the game of billiards, the player gives kinetic energy to the cue ball by striking it with the cue stick. If the cue ball collides with another ball, it will slow down dramatically and the ball it collided with will accelerate as the kinetic energy is passed on to it. Collisions in billiards are effectively elastic collisions, in which (by definition) kinetic energy is preserved. In inelastic collisions, kinetic energy is dissipated as heat, sound, binding energy (breaking bound structures), or other kinds of energy.
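
The billiard-ball transfer follows from requiring that both momentum and kinetic energy be conserved. A one-dimensional sketch (masses and speeds are illustrative):

```python
def elastic_1d(m1, v1, m2, v2):
    """Final velocities after a head-on elastic collision, from the standard
    solution of the momentum + kinetic-energy conservation equations."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

# Equal-mass billiard balls, head on: the cue ball stops dead and the
# object ball takes over its full velocity (and kinetic energy).
u1, u2 = elastic_1d(0.17, 2.0, 0.17, 0.0)
print(u1, u2)   # 0.0 2.0
```

For unequal masses the cue ball keeps part of its speed, but in every elastic case the total kinetic energy before and after is identical.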

Flywheels are being developed as a method of energy storage (see Flywheel energy storage). This illustrates that kinetic energy can also be rotational.

Wiki

Detail...

Injection Molding

Injection molding (British English: moulding) is a manufacturing process for producing parts from both thermoplastic and thermosetting plastic materials. Material is fed into a heated barrel, mixed, and forced into a mold cavity where it cools and hardens to the configuration of the mold cavity.[1] After a product is designed, usually by an industrial designer or an engineer, molds are made by a moldmaker (or toolmaker) from metal, usually either steel or aluminum, and precision-machined to form the features of the desired part. Injection molding is widely used for manufacturing a variety of parts, from the smallest component to entire body panels of cars.

The first man-made plastic was invented in Britain in 1851 by Alexander Parkes. He publicly demonstrated it at the 1862 International Exhibition in London, calling the material he produced "Parkesine." Derived from cellulose, Parkesine could be heated, molded, and retain its shape when cooled. It was, however, expensive to produce, prone to cracking, and highly flammable.

In 1868, American inventor John Wesley Hyatt developed a plastic material he named Celluloid, improving on Parkes' invention so that it could be processed into finished form. Together with his brother Isaiah, Hyatt patented the first injection molding machine in 1872.[3] This machine was relatively simple compared to machines in use today. It worked like a large hypodermic needle, using a plunger to inject plastic through a heated cylinder into a mold. The industry progressed slowly over the years, producing products such as collar stays, buttons, and hair combs.

The industry expanded rapidly in the 1940s because World War II created a huge demand for inexpensive, mass-produced products. In 1946, American inventor James Watson Hendry built the first screw injection machine, which allowed much more precise control over the speed of injection and the quality of articles produced. This machine also allowed material to be mixed before injection, so that colored or recycled plastic could be added to virgin material and mixed thoroughly before being injected. Today screw injection machines account for the vast majority of all injection machines. In the 1970s, Hendry went on to develop the first gas-assisted injection molding process, which permitted the production of complex, hollow articles that cooled quickly. This greatly improved design flexibility as well as the strength and finish of manufactured parts while reducing production time, cost, weight and waste.

The plastic injection molding industry has evolved over the years from producing combs and buttons to producing a vast array of products for many industries including automotive, medical, aerospace, consumer products, toys, plumbing, packaging, and construction

Applications
Injection molding is used to create many things such as wire spools, packaging, bottle caps, automotive dashboards, pocket combs, and most other plastic products available today. Injection molding is the most common method of part manufacturing. It is ideal for producing high volumes of the same object.[5] Some advantages of injection molding are high production rates, repeatable high tolerances, the ability to use a wide range of materials, low labor cost, minimal scrap losses, and little need to finish parts after molding. Some disadvantages of this process are expensive equipment investment, potentially high running costs, and the need to design moldable parts.

Examples of Polymers Best Suited for the Process
Most polymers may be used, including all thermoplastics, some thermosets, and some elastomers.[7] In 1995 there were approximately 18,000 different materials available for injection molding, and that number was increasing at an average rate of 750 per year. The available materials include alloys or blends of previously developed materials, meaning that product designers can choose, from a vast selection, the material with exactly the right properties. Materials are chosen based on the strength and function required for the final part, but each material also has different parameters for molding that must be taken into account.[8] Common thermosetting plastics include epoxy and phenolic, while nylon, polyethylene, and polystyrene are thermoplastics.

Equipment
Injection molding machines consist of a material hopper, an injection ram or screw-type plunger, and a heating unit.[2] Also known as presses, they hold the molds in which the components are shaped. Presses are rated by tonnage, which expresses the amount of clamping force that the machine can exert. This force keeps the mold closed during the injection process. Tonnage can vary from less than 5 tons to 6,000 tons, with the higher figures used in comparatively few manufacturing operations. The total clamp force needed is determined by the projected area of the part being molded. This projected area is multiplied by a clamp force of 2 to 8 tons for each square inch of projected area. As a rule of thumb, 4 or 5 tons/in² can be used for most products. If the plastic material is very stiff, it will require more injection pressure to fill the mold, and thus more clamp tonnage to hold the mold closed.[10] The required force is also determined by the material used and the size of the part; larger parts require higher clamping force.
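
The clamp-force rule of thumb above reduces to simple arithmetic. The helper below is a hypothetical illustration, not an industry sizing tool:

```python
def clamp_tonnage(projected_area_in2, tons_per_in2=4.0):
    """Rule-of-thumb clamp force: projected area times 2-8 tons/in^2,
    with 4-5 tons/in^2 typical for most products (per the text)."""
    return projected_area_in2 * tons_per_in2

# A part with a 25 in^2 projected area at the typical 4 tons/in^2:
print(clamp_tonnage(25))        # 100.0 tons
# A very stiff material at the high end of the 2-8 range:
print(clamp_tonnage(25, 8.0))   # 200.0 tons
```

A press rated below the computed tonnage risks the mold being forced open by injection pressure, producing flash at the parting line.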

Mold
Molds are expensive to manufacture, so they are usually only used in mass production, where thousands of parts are being produced. Typical molds are constructed from hardened steel, pre-hardened steel, aluminum, and/or beryllium-copper alloy. The choice of material to build a mold from is primarily one of economics; in general, steel molds cost more to construct, but their longer lifespan offsets the higher initial cost over a higher number of parts made before wearing out. Pre-hardened steel molds are less wear-resistant and are used for lower-volume requirements or larger components; their typical hardness is 38-45 on the Rockwell-C scale. Hardened steel molds are heat-treated after machining; these are by far superior in terms of wear resistance and lifespan, with typical hardness between 50 and 60 Rockwell-C (HRC). Aluminum molds can cost substantially less, and, when designed and machined with modern computerized equipment, can be economical for molding tens or even hundreds of thousands of parts. Beryllium copper is used in areas of the mold that require fast heat removal or areas that see the most shear heat generated.[12] The molds can be manufactured either by CNC machining or by using electrical discharge machining processes.

Wiki

Detail...

Thermophoresis

Thermophoresis, also called thermomigration, thermodiffusion, the Soret effect, or the Ludwig-Soret effect, is a phenomenon observed when a mixture of two or more types of motile particles (particles able to move) is subjected to the force of a temperature gradient and the different types of particles respond to it differently. The term "Soret effect" (or Ludwig-Soret effect) normally refers to thermophoresis in liquids only. The word "thermophoresis" most often refers to the behavior in aerosols rather than liquids, but the broader meaning is also common. The mechanisms of thermophoresis in liquid mixtures differ from those in gas mixtures, and are generally not as well understood.

The phenomenon is observed at the scale of one millimeter or less. An example that may be observed by the naked eye with good lighting is when the hot rod of an electric heater is surrounded by tobacco smoke: the smoke goes away from the immediate vicinity of the hot rod. As the small particles of air nearest the hot rod are heated, they create a fast flow away from the rod, down the temperature gradient. They have acquired higher kinetic energy with their higher temperature. When they collide with the large, slower-moving particles of the tobacco smoke they push the latter away from the rod. The force that has pushed the smoke particles away from the rod is an example of a thermophoretic force.

Thermodiffusion is labeled "positive" when particles move from a hot to cold region and "negative" when the reverse is true. Typically the heavier/larger species in a mixture exhibits positive thermophoretic behavior while the lighter/smaller species exhibit negative behavior. In addition to the sizes of the various types of particles and the steepness of the temperature gradient, the heat conductivity and heat absorption of the particles play a role.
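
The sign convention above can be made concrete with the standard steady-state result for a dilute species: when diffusion balances thermodiffusion, the concentration ratio across a temperature difference is exp(S_T·ΔT), where S_T is the Soret coefficient. The helper and numbers below are illustrative assumptions, not values from the text:

```python
import math

def soret_separation(S_T, dT):
    """Steady-state concentration ratio c_cold / c_hot for a dilute species
    with Soret coefficient S_T (1/K) across a temperature difference dT (K).
    Positive S_T (positive thermodiffusion): the species accumulates at the
    cold side, so the ratio exceeds 1; negative S_T depletes the cold side."""
    return math.exp(S_T * dT)

# An illustrative colloid with S_T = 0.1 /K across a 10 K difference
# piles up ~2.7x at the cold side; a negative-S_T species does the reverse.
print(soret_separation(0.1, 10.0))    # ~2.72
print(soret_separation(-0.1, 10.0))   # ~0.37
```
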

Thermophoresis has a number of practical applications. The basis for these applications is that, because different particle types move differently under the force of a temperature gradient, the particle types can be separated by that force after they have been mixed together, or prevented from mixing if they are already separated.

Impurity ions may move from the cold side of a semiconductor wafer towards the hot side, since the higher temperature makes the transition structure required for atomic jumps more achievable. The diffusive flux may occur in either direction (either up or down the temperature gradient), depending on the materials involved. Thermophoretic force has been used in commercial precipitators for applications similar to electrostatic precipitators. It is exploited in the manufacture of optical fiber in vapor deposition processes, and it can be important as a transport mechanism in fouling.

Thermophoresis has also been shown to have potential in facilitating drug discovery by allowing the detection of aptamer binding by comparison of the bound versus unbound motion of the target molecule.[1] This approach has been termed microscale thermophoresis. Furthermore, thermophoresis has been demonstrated as a versatile technique for manipulating single biological macromolecules, such as genomic-length DNA, in micro- and nanochannels by means of light-induced local heating.[2] Thermophoresis is also used to separate polymers in field-flow fractionation.

Wiki

Detail...

AISI-SAE grades


Not updated yet...

Detail...

Martensite


Martensite, named after the German metallurgist Adolf Martens (1850–1914), most commonly refers to a very hard form of steel crystalline structure, but it can also refer to any crystal structure that is formed by displacive transformation. It includes a class of hard minerals occurring as lath- or plate-shaped crystal grains. When viewed in cross-section, the lenticular (lens-shaped) crystal grains appear acicular (needle-shaped), which is how they are sometimes incorrectly described.

In the 1890s, Martens studied samples of different steels under a microscope, and found that the hardest steels had a regular crystalline structure. He was the first to explain the cause of the widely differing mechanical properties of steels. Martensitic structures have since been found in many other practical materials, including shape memory alloys and transformation-toughened ceramics.

Martensite is formed by rapid cooling (quenching) of austenite, which traps carbon atoms that do not have time to diffuse out of the crystal structure. The martensitic reaction begins during cooling when the austenite reaches the martensite start temperature (Ms) and the parent austenite becomes mechanically unstable. At a constant temperature below Ms, a fraction of the parent austenite transforms rapidly, and then no further transformation occurs. When the temperature is decreased, more of the austenite transforms to martensite. Finally, when the martensite finish temperature (Mf) is reached, the transformation is complete. Martensite can also form by application of stress; this property is frequently exploited in toughened ceramics, such as yttria-stabilised zirconia, and in special steels such as TRIP (transformation-induced plasticity) steels. Thus martensite can be thermally induced or stress induced.
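
The qualitative picture above (more martensite the further the quench undershoots Ms) is often quantified with the empirical Koistinen-Marburger relation. This relation is not mentioned in the source, and the Ms value and fitted constant below are illustrative:

```python
import math

def martensite_fraction(T, Ms, alpha=0.011):
    """Koistinen-Marburger estimate of the martensite volume fraction after
    quenching to temperature T below the martensite start Ms (both in deg C).
    alpha ~ 0.011 /degC is a typical fitted constant for carbon steels."""
    if T >= Ms:
        return 0.0
    return 1.0 - math.exp(-alpha * (Ms - T))

Ms = 350.0  # hypothetical martensite start temperature, deg C
for T in (350, 300, 200, 100, 20):
    print(f"T={T:3d} C  martensite fraction = {martensite_fraction(T, Ms):.2f}")
```

The fraction approaches 1 as the quench nears room temperature, matching the statement that the transformation is complete near Mf.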

One of the differences between the two phases is that martensite has a body-centered tetragonal (BCT) crystal structure, whereas austenite has a face-centered cubic (FCC) structure. The transition between these two structures requires very little thermal activation energy because it is a martensitic transformation, which results in the subtle but rapid rearrangement of atomic positions, and has been known to occur even at cryogenic temperatures. Martensite has a lower density than austenite, so the martensitic transformation results in a relative change of volume.[1]

Martensite is not shown in the equilibrium phase diagram of the iron-carbon system because it is not an equilibrium phase. Equilibrium phases form by slow cooling rates allowing sufficient time for diffusion, whereas martensite is usually formed by fast cooling rates. Since chemical processes (the attainment of equilibrium) accelerate at higher temperature, martensite is easily destroyed by the application of heat. This process is called tempering. In some alloys, the effect is reduced by adding elements such as tungsten that interfere with cementite nucleation, but, more often than not, the phenomenon is exploited instead. Since quenching can be difficult to control, many steels are quenched to produce an overabundance of martensite, then tempered to gradually reduce its concentration until the right structure for the intended application is achieved. Too much martensite leaves steel brittle; too little leaves it soft.

Wiki

Detail...