

Messages - AGelbert

8686
Renewables / Re: A High-Renewables Tomorrow, Today:
« on: October 14, 2013, 03:40:00 pm »
Renewable Energy Patents Are Surging

by Pete Danko
 
The world of patents is a bit screwy these days, with trolls warping a system that was designed to encourage innovation by protecting and rewarding innovators. Still, it has to be seen as an encouraging sign for renewable energy that the number of patents issued in the broad field has skyrocketed of late.

Researchers from the Massachusetts Institute of Technology and the Santa Fe Institute said that annual renewable-energy patents in the United States increased fivefold from the quarter-century preceding 2000 to the decade that followed, from fewer than 200 per year to more than 1,000 annually by 2009. Fossil-fuel-related patents were also up, but just threefold, from 100 per year pre-2000 to 300 per year in 2009.

Perhaps the most hopeful news in the study was the suggestion that increases in research funding can have a cumulative, long-lasting impact that can help keep innovation rolling along even through investment ups and downs.



For instance, a large increase in energy research following the oil shocks of the 1970s and 1980s was followed by a steep dropoff, the researchers said. But report co-author Jessika Trancik, an assistant professor of engineering systems at MIT, said the effect of those investments has helped drive this current patent boom. From the study:


We find that both market-driven investment and publicly-funded R&D act as base multipliers for each other in driving technological development at the global level. We also find that the effects of these investments persist over long periods of time, supporting the notion that technology-relevant knowledge is preserved.

The two most prominent forms of renewable energy (excluding hydropower) have not surprisingly been getting a lot of the innovation focus. “(B)etween 2004 and 2009, the number of patents issued annually for solar energy increased by 13 percent per year, while those for wind energy increased 19 percent per year, on average; these growth rates approach or exceed the rates for technologies such as semiconductors and digital communications,” MIT said.
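As a rough sketch of what those average annual growth rates imply over the 2004–2009 window (the 13 and 19 percent figures come from the MIT summary above; the compounding is my own back-of-envelope arithmetic, not from the study):

```python
# Back-of-envelope compounding of the average annual growth rates
# quoted above (13%/yr for solar patents, 19%/yr for wind, 2004-2009).

def compound_growth(rate_per_year: float, years: int) -> float:
    """Total multiplier after compounding an annual growth rate."""
    return (1 + rate_per_year) ** years

solar = compound_growth(0.13, 5)   # 2004 -> 2009 is 5 compounding steps
wind = compound_growth(0.19, 5)

print(f"Solar patents multiplier over 5 years: {solar:.2f}x")  # ~1.84x
print(f"Wind patents multiplier over 5 years: {wind:.2f}x")    # ~2.39x
```

So sustained 13–19 percent annual growth roughly doubles the yearly patent count over the five-year span, which is why the researchers compare it to semiconductors.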



But while growing markets can help bring about the investment necessary for innovations in these technologies, the researchers said markets alone won’t do the job.


To the extent that markets for these technologies grow fast enough, economic opportunity drives an increase in patenting and knowledge creation. It is important to emphasize that the growth of markets for low-carbon energy technologies, which improve on an aspect of performance (carbon emissions) not commonly captured by market price (and therefore not visible to the consumer), has depended strongly on public policy. We also note that policies are likely needed to fund research and incentivize market growth further until these technologies become cost-competitive and can take off on their own.

The paper, “Determinants of the Pace of Global Innovation in Energy Technologies,” was published in the open-access, peer-reviewed journal PLOS ONE. A submitted version is available online as a PDF.

http://www.earthtechling.com/2013/10/renewable-energy-patents-are-surging/

8687
General Discussion / Re: Darwin
« on: October 13, 2013, 04:29:42 pm »


No transitional Fossils  ???


In his book, Origin of Species, Darwin himself admitted that at the time of writing, the fossils discovered made it look like a series of acts of creation between each main order of life. Although he urged people to look for links between them, he also admitted that if no such links were discovered, then his theory would be incorrect. It therefore makes no sense whatsoever that man should unearth thousands of fossils every year, representing highly ‘developed’ and sophisticated life forms, and yet mysteriously there are no intermediate, half-developed life forms discovered between layers. No matter how much digging around we do, there really is no back door away from this issue. In fact, so much of evolution’s credibility depends on this starting issue that nothing in this belief system even begins to carry any weight whilst this anomaly exists. A bit like needing to throw a six to start a game of ludo.

A recent article by Jonathan Sarfati, Ph.D., F.M., explains:

Teaching about Evolution and the Nature of Science discusses the fossil record in several places. Creationists and evolutionists, with their different assumptions, predict different things about the fossil record. If living things had really evolved from other kinds of creatures, then there would have been many intermediate or transitional forms, with halfway structures. However, if different kinds had been created separately, the fossil record should show creatures appearing abruptly and fully formed.

The transitional fossils problem

Charles Darwin was worried that the fossil record did not show what his theory predicted:


Why is not every geological formation and every stratum full of such intermediate links? Geology assuredly does not reveal any such finely graduated organic chain; and this is the most obvious and serious objection which can be urged against the theory.1

Is it any different today?

The late Dr Colin Patterson, senior paleontologist of the British Museum of Natural History, wrote a book, Evolution. In reply to a questioner who asked why he had not included any pictures of transitional forms, he wrote:

I fully agree with your comments about the lack of direct illustration of evolutionary transitions in my book. If I knew of any, fossil or living, I would certainly have included them … . I will lay it on the line—there is not one such fossil for which one could make a watertight argument.2


The renowned evolutionist (and Marxist) Stephen Jay Gould wrote:

The absence of fossil evidence for intermediary stages between major transitions in organic design, indeed our inability, even in our imagination, to construct functional intermediates in many cases, has been a persistent and nagging problem for gradualistic accounts of evolution.3

And:

I regard the failure to find a clear ‘vector of progress’ in life’s history as the most puzzling fact of the fossil record.4


As Sunderland points out:

It of course would be no puzzle at all if he [Gould] had not decided before he examined the evidence that common-ancestry evolution was a fact, ‘like apples falling from a tree,’ and that we can only permit ourselves to discuss possible mechanisms to explain that assumed fact.5


The gaps are huge

Teaching about Evolution avoids discussing the vast gulf between non-living matter and the first living cell, single-celled and multicelled creatures, and invertebrates and vertebrates. The gaps between these groups should be enough to show that molecules-to-man evolution is without foundation.

There are many other examples of different organisms appearing abruptly and fully formed in the fossil record. For example, the first bats, pterosaurs, and birds were fully fledged flyers.  

Turtles are a well designed and specialized group of reptiles, with a distinctive shell protecting the body’s vital organs. However, evolutionists admit ‘Intermediates between turtles and cotylosaurs, the primitive reptiles from which [evolutionists believe] turtles probably sprang, are entirely lacking.’ They can’t plead an incomplete fossil record because ‘turtles leave more and better fossil remains than do other vertebrates.’6 The ‘oldest known sea turtle’ was a fully formed turtle, not at all transitional.   It had a fully developed system for excreting salt, without which a marine reptile would quickly dehydrate. This is shown by skull cavities which would have held large salt-excreting glands around the eyes.7

All 32 mammal orders appear abruptly and fully formed in the fossil record. The evolutionist paleontologist George Gaylord Simpson wrote in 1944:


The earliest and most primitive members of every order already have the basic ordinal characters, and in no case is an approximately continuous series from one order to another known. In most cases the break is so sharp and the gap so large that the origin of the order is speculative and much disputed.8


There is little to overturn that today.9

Excuses

Like most evolutionary propaganda, Teaching about Evolution asserts that there are many transitional forms, and gives a few ‘examples.’ It contains, for instance, a gleeful article by the evolutionist (and atheist) E.O. Wilson, ‘Discovery of a Missing Link.’ He claimed to have studied ‘nearly exact intermediates between solitary wasps and the highly social modern ants.’ But another atheistic evolutionist, W.B. Provine, says that Wilson’s ‘assertions are explicitly denied by the text … . Wilson’s comments are misleading at best.’10

Teaching about Evolution emphasizes Archaeopteryx and an alleged land mammal-to-whale transition series, so they are covered in chapters 4 and 5 of Refuting Evolution. Teaching about Evolution also makes the following excuse on page 57:

Some changes in populations might occur too rapidly to leave many transitional fossils. 
 
Also, many organisms were very unlikely to leave fossils because of their habitats or because they had no body parts that could easily be fossilized. 


Darwin also excused the lack of transitional fossils by ‘the extreme imperfection of the fossil record.’ But as we have seen, even organisms that leave excellent fossils, like turtles, are lacking in intermediates. Michael Denton points out that 97.7 percent of living orders of land vertebrates are represented as fossils and 79.1 percent of living families of land vertebrates—87.8 percent if birds are excluded, as they are less likely to become fossilized.11

It’s true that fossilization requires specific conditions. Normally, when a fish dies, it floats to the top and rots and is eaten by scavengers. Even if some parts reach the bottom, the scavengers take care of them. Scuba divers don’t find the sea floor covered with dead animals being slowly fossilized. The same applies to land animals. Millions of buffaloes (bison) were killed in North America last century, but there are very few fossils.

In nature, a well-preserved fossil generally requires rapid burial (so scavengers don’t obliterate the carcass), and cementing agents to harden the fossil quickly. Teaching about Evolution has some good photos of a fossil fish with well-preserved features (p. 3) and a jellyfish (p. 36). Such fossils certainly could not have formed gradually—how long do dead jellyfish normally retain their features? If you wanted to form such fossils, the best way might be to dump a load of concrete on top of the creature! Only catastrophic conditions can explain most fossils—for example, a global flood and its aftermath of widespread regional catastrophism. (see topic: Evidence for a Global Flood )

Teaching about Evolution goes on to assert after the previous quote:

However, in many cases, such as between primitive fish and amphibians, amphibians and reptiles, reptiles and mammals, and reptiles and birds, there are excellent transitional fossils.


But Teaching about Evolution provides no evidence for this! We can briefly examine some of the usual evolutionary claims below (for reptile-to-bird, see the next chapter on birds):

Fish to amphibian: Some evolutionists believe that amphibians evolved from a Rhipidistian fish, something like the coelacanth. It was believed that they used their fleshy, lobed fins for walking on the sea-floor before emerging on the land. This speculation seemed impossible to disprove, since according to evolutionary/long-age interpretations of the fossil record, the last coelacanth lived about 70 million years ago. But a living coelacanth (Latimeria chalumnae) was discovered in 1938. And it was found that the fins were not used for walking but for deft maneuvering when swimming. Its soft parts were also totally fish-like, not transitional. It also has some unique features—it gives birth to live young after about a year’s gestation, it has a small second tail to help its swimming, and a gland that detects electrical signals.12 The earliest amphibian, Ichthyostega (mentioned on p. 39 of Teaching about Evolution), is hardly transitional, but has fully formed legs and shoulder and pelvic girdles, while there is no trace of these in the Rhipidistians.


Amphibian to reptile: Seymouria is a commonly touted intermediate between amphibians and reptiles. But this creature is dated (by evolutionary dating methods) at 280 million years ago, about 30 million years younger than the ‘earliest’ true reptiles Hylonomus and Paleothyris. That is, reptiles are allegedly millions of years older than their alleged ancestors! Also, there is no good reason for thinking it was not completely amphibian in its reproduction. The jump from amphibian to reptile eggs requires the development of a number of new structures and a change in biochemistry—see the section below on soft part changes.


Reptile to mammal: The ‘mammal-like reptiles’ are commonly asserted to be transitional. But according to a specialist on these creatures:


Each species of mammal-like reptile that has been found appears suddenly in the fossil record and is not preceded by the species that is directly ancestral to it. It disappears some time later, equally abruptly, without leaving a directly descended species.13

Evolutionists believe that the earbones of mammals evolved from some jawbones of reptiles. But Patterson recognized that there was no clear-cut connection between the jawbones of ‘mammal-like reptiles’ and the earbones of mammals. In fact, evolutionists have argued about which bones relate to which.14


The function of possible intermediates

The inability to imagine functional intermediates is a real problem. If a bat or bird evolved from a land animal, the transitional forms would have forelimbs that were neither good legs nor good wings. So how would such things be selected? The fragile long limbs of hypothetical halfway stages of bats and pterosaurs would seem more like a hindrance than a help.

Soft part changes

Of course, the soft parts of many creatures would also have needed to change drastically, and there is little chance of preserving them in the fossil record. For example, the development of the amniotic egg would have required many different innovations, including:

The shell.


The two new membranes—the amnion and allantois.


Excretion of water-insoluble uric acid rather than urea (urea would poison the embryo).


Albumen together with a special acid to yield its water.


Yolk for food.


A change in the genital system allowing the fertilization of the egg before the shell hardens.15


Another example is the mammals—they have many soft-part differences from reptiles, for example:

Mammals have a different circulatory system, including red blood cells without nuclei, a heart with four chambers instead of three and one aorta instead of two, and a fundamentally different system of blood supply to the eye.


Mammals produce milk, to feed their young.


Mammalian skin has two extra layers, hair and sweat glands.


Mammals have a diaphragm, a fibrous, muscular partition between the thorax and abdomen, which is vital for breathing. Reptiles breathe in a different way.


Mammals keep their body temperature constant (warm-bloodedness), requiring a complex temperature control mechanism.


The mammalian ear has the complex organ of Corti, absent from all reptile ears.16


Mammalian kidneys have a ‘very high ultrafiltration rate of the blood.’ This means the heart must be able to produce the required high blood pressure. Mammalian kidneys excrete urea instead of uric acid, which requires different chemistry. They are also finely regulated to maintain constant levels of substances in the blood, which requires a complex endocrine system.19


by Jonathan Sarfati, Ph.D., F.M.

First published in Refuting Evolution
Chapter 3
1.C.R. Darwin, Origin of Species, 6th edition, 1872 (London: John Murray, 1902), p. 413. 
2.C. Patterson, letter to Luther D. Sunderland, 10 April 1979, as published in Darwin’s Enigma (Green Forest, AR: Master Books, 4th ed. 1988), p. 89. Patterson later tried to backtrack somewhat from this clear statement, apparently alarmed that creationists would utilize this truth. 
3.S.J. Gould, in Evolution Now: A Century After Darwin, ed. John Maynard Smith (New York: Macmillan Publishing Co., 1982), p. 140. Teaching about Evolution pages 56–57 publishes a complaint by Gould about creationists quoting him about the rarity of transitional forms. He accuses creationists of representing him as denying evolution itself. This complaint is unjustified. Creationists make it very clear that he is a staunch evolutionist; the whole point is that he is a ‘hostile witness.’ 
4.S.J. Gould, The Ediacaran Experiment, Natural History 93(2):14–23, Feb. 1984. 
5.L. Sunderland, ref. 2, p. 47–48. 
6.Reptiles, Encyclopedia Britannica 26:704–705, 15th ed., 1992. 
7.Ren Hirayama, Oldest Known Sea Turtle, Nature 392(6678):705–708, 16 April 1998; comment by Henry Gee, p. 651, same issue. 
8.G.G. Simpson, Tempo and Mode in Evolution (NY: Columbia University Press, 1944), p. 105–106. 
9.A useful book on the fossil record is D.T. Gish, Evolution: The Fossils STILL Say NO! (El Cajon, CA: Institute for Creation Research, 1995). 
10.Teaching about Evolution and the Nature of Science, A Review by Dr Will B. Provine. Available from , 18 February 1999. 
11.M. Denton, Evolution, a Theory in Crisis (Chevy Chase, MD: Adler & Adler, 1985), p. 190. 
12.M. Denton, ref. 11, p. 157, 178–180; see also W. Roush, ‘Living Fossil’ Is Dethroned, Science 277(5331):1436, 5 September 1997, and No Stinking Fish in My Tail, Discover, March 1985, p. 40. 
13.T.S. Kemp, The Reptiles that Became Mammals, New Scientist 92:583, 4 March 1982. 
14.C. Patterson, Morphological Characters and Homology; in K.A. Joysey and A.E. Friday (eds.), Problems of Phylogenetic Reconstruction, Proceedings of an International Symposium held in Cambridge, The Systematics Association Special Volume 21 (Academic Press, 1982), p. 21–74. 
15.M. Denton, ref. 11, p. 218–219. 
16.D. Dewar, The Transformist Illusion, 2nd edition (Ghent, NY: Sophia Perennis et Universalis, 1995), p. 223–232. 
17.T.S. Kemp, Mammal-like Reptiles and the Origin of Mammals (New York: Academic Press, 1982), p. 309–310.

http://www.thematrix.co.uk/texttopic.asp?ID=22

8688
Advances in Health Care / Medical Attention
« on: October 13, 2013, 03:56:11 pm »
Surly, thanks for filling in a lot of holes in my knowledge of video. Fascinating!

That 4K does look like quite a memory hog, not to mention how big the data highway needed to refresh it is going to have to be.

If our brains are the ultimate "standard" for video capture and processing, then a lot more of our brains must be used than scientists now think is used to store video memory. We associate smells quickly with a memory, so we have a fairly significant area doing that. But when it comes right down to it, if I say I "remember" an event or a person, it is because I have "film clips" of the event or the person stored in my brain somewhere.

At any rate, thank you for the food for thought. I think too much sometimes but it's fun to do.  ;D

Off topic, but fall is in full force in Colchester. It's quite pretty and, thanks to global warming, I'm saving a lot on heating bills.


Last year the first frost (average September 15 for this area) was 23 days late. It is now October 13, 2013 and we still haven't had our first frost.

Around here, the first snow comes within a few days of November 1. I expect that will be late as well. I will post here as these events come to pass.

On a negative note, I just came back from a doctor visit. He only talked about high cholesterol.

However, a few days after the doctor visit, the doctor's office sent me my lab work (Comprehensive metabolic panel - CMP). It shows a low Total Protein reading (mine is 5.9 and normal is 6.5 - 8.3 g/dL). It's a bad sign. What really bends me out of shape is that the doctor said nothing about it and only mentioned my somewhat high "bad" cholesterol (186).

I am CERTAIN he had those CMP lab reports and the nurses that e-mail me my lab tests "chose" not to send the negative one until the doctor "approved" it. Why? Because I normally get all my test results before the doctor visit, and I DID get the other two (A1C and lipid profile, taken from blood samples the same day as the CMP blood sample) before, but this CMP work was sent to me AFTER the doctor visit.

Anything negative on a lab report that may require extra attention for a patient obviously has some in-house protocol forcing the nurses to put a hold on it until the doctor gives the go-ahead. I am not a local and have seen how these folks fall all over themselves to help a local and GO OUT OF THEIR WAY to avoid helping an outsider while they play at being dumb and treating everybody exactly the same. No, I can't go to another doctor. This guy is probably the most non-prejudiced of the three doctors in Vermont I have had the misfortune to deal with. I have to use my wits to "keep them honest" and get proper medical attention.

It's the doctor's office that requires the lab visit be only 3 days to a week before the doctor visit to "ensure" current results. We specifically had asked to have the lab work done about ten days prior to ensure the doctor had the results and they said it must be 3 days to a week. So it goes.

At any rate, they are going to have their hands full if they think I am not going to research the daylights out of this and make an e-mail track record of my requests so they can't dance around the extra tests I will need. For now, I am going to request (politely, of course) an SPEP (serum protein electrophoresis).

The Total Protein score is made up of Albumin and Globulins. My Albumin is okay, so the globulins are too low. It is important to find out which one is contributing to a low score in order to diagnose the underlying health issue. Globulins are divided into alpha-1, alpha-2, beta, and gamma groups. When they do an SPEP, they get a chart that looks like this:


The green covers what a normal SPEP test looks like. The reddish color is just one type of abnormality (that I probably DO NOT have because it would provide a "too high" Total Protein reading rather than a "too low" Total Protein reading).
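The arithmetic behind that inference can be sketched as follows. The total protein value and its reference range are the ones from the lab report quoted above; the albumin value and the globulin reference range below are stand-ins, since the post doesn't give the exact numbers:

```python
# Sketch of the total-protein arithmetic described above.
# Total protein = albumin + globulins, so a normal albumin with a
# low total implies the globulins are low. The albumin value and
# the globulin reference range are hypothetical illustrations;
# only the 5.9 total and 6.5-8.3 range come from the post.

def in_range(value: float, low: float, high: float) -> bool:
    """True if a lab value falls inside its reference range."""
    return low <= value <= high

total_protein = 5.9   # g/dL, from the lab report
albumin = 4.0         # g/dL, hypothetical but within a typical normal range
globulins = total_protein - albumin

print(f"Globulins: {globulins:.1f} g/dL")
print("Total protein normal:", in_range(total_protein, 6.5, 8.3))  # False
print("Globulins normal:", in_range(globulins, 2.0, 3.5))          # False
```

With any normal albumin value, the subtraction pins the deficit on the globulin fraction, which is exactly what the SPEP is meant to break down further.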

We'll see how this turns out. It may be a fluke or instrument error. Somebody told me once, "it's all downhill after 45". Yep.

8689
General Discussion / Darwin
« on: October 13, 2013, 01:39:11 am »
If Darwin was alive today, he would be arguing AGAINST the validity of the Theory of Evolution!  :o


The End of Irreducible Complexity?

 
by Dr. Georgia Purdom, AiG–U.S.

October 6, 2009

The titles of two recent science news articles caught my attention, “More ‘Evidence’ of Intelligent Design Shot Down by Science” and “Intelligent Design ‘Evidence’ Unproven by Real Science.”1 The evidence in question is a molecular machine. Members of the Intelligent Design Movement and creation scientists have often stated that molecular machines are irreducibly complex and could not be formed by evolution. However, evolutionists now claim the mechanism of “pre-adaptation” is a way that these molecular machines could have evolved.

What Is a Molecular Machine and Why Is It Irreducibly Complex?

Molecular machines are complex structures located inside of cells or on the surface of cells. One popular example is the bacterial flagellum. This whip-like structure is composed of many proteins, and its rotation propels bacteria through their environment. The molecular machine of interest in a recent PNAS article is a protein transport machine located in the mitochondria.2 This machine transports proteins across the membrane of mitochondria so they can perform the very important function of making energy.

Molecular machines are considered to be irreducibly complex. An irreducibly complex machine is made of a number of essential parts, and all these parts must be present for it to function properly. If even one of these parts is missing the machine is non-functional. Evolution, which supposedly works in a stepwise fashion over long periods of time, can’t form these complex machines. Evolution is not goal-oriented; it cannot work towards a specific outcome. If a part of the machine would happen to form by random chance mutation (which itself is not plausible, see Are mutations part of the “engine” of evolution?), but the other parts of the machine were not formed at the same time, then the organism containing that individual part (by itself non-functional) would not have a particular survival advantage and would not be selected for. Since the part offers no advantage to the organism, it would likely be lost from the population, and evolution would be back to square one in forming the parts for the machine. There is essentially no way to collect the parts over time because the individual parts do not have a function (without the other parts) and do not give the organism a survival advantage. Remember, all the necessary parts must be present for the machine to be functional and convey a survival advantage that could be selected for.

So How Can Evolution Account for Irreducibly Complex Molecular Machines?

The inability to find mechanisms that add information to the genome necessary to form parts for the molecular machines and the inability of Darwinian evolution to collect parts for the machines (no direction or goal) have led evolutionists to develop the idea of “pre-adaptation.” Simply stated, “pre-adaptation” is the formation of new parts for a new molecular machine (from currently existing parts that perform another function) before the machine is needed by the organism. Some quotes will help clarify.

Study authors Abigail Clements et al. state, “We proposed that simple “core” machines were established in the first eukaryotes by drawing on pre-existing bacterial proteins that had previously provided distinct functions.”3

Sebastian Poggio, co-author of the study, stated, “[The pieces] were involved in some other, different function. They were recruited and acquired a new function.”4

Wired Science writer, Brandon Keim, puts it this way: “[T]he necessary pieces for one particular cellular machine . . . were lying around long ago. It was simply a matter of time before they came together into a more complex entity.” He also states,

“The process by which parts accumulate until they’re ready to snap together is called preadaptation. It’s a form of “neutral evolution,” in which the buildup of parts provides no immediate advantage or disadvantage. Neutral evolution falls outside the descriptions of Charles Darwin. But once the pieces gather, mutation and natural selection can take care of the rest . . . .”5

These quotes conjure up images of Lego building blocks from my childhood days. The same blocks could be put together in many different ways to form different structures. The study authors suggest proteins that perform one function can be altered (via mutation6) and used for a different function. This eliminates the need to add new genetic information and requires only a modification of current information. Clements et al. state, “This model agrees with Jacob’s proposition of evolution as a “tinkerer,” building new machines from salvaged parts.”7

The problem with this concept is why would evolution “keep” parts that are intermediate between their old function and a new function? The parts or proteins are more or less stuck between a rock and a hard place. They likely don’t perform their old function because they have been altered by mutation, and they don’t perform their new function in a molecular machine because not all the parts are present yet.8 Studies have shown that bacteria tend to lose genetic information that is not needed in their current environment.

For example, the well known microbial ecologist Richard Lenski has shown that bacteria cultured in a lab setting for several years will lose information for making flagella from their genome.9 Bacteria are being supplied with nutrients and do not need flagella to move to find a food source. Bacteria are model organisms when it comes to economy and efficiency, and those bacteria that lose the information to make flagella are at an advantage over bacteria that are taking energy and nutrients to build structures that are not useful in the current environment. Thus, even if new parts for a new molecular machine could be made via mutation from parts or proteins used for another function, the process of natural selection would eliminate them. The parts or proteins no longer serve their old function, and they cannot serve their new function until all the parts for the machine are present.

In particular, notice the use of verbs in the quotes above, such as drawing on, recruited, came together, and snap together. These are all action verbs that invoke the image of someone or something putting the parts together. Going back to the Lego analogy, an intelligent designer (me!) is required to put the Lego blocks together to form different structures. Just leaving the blocks lying on the floor or shaking them up in their storage container doesn’t result in anything but a big mess of blocks! Although the powers to “tinker” and “snap together” are conferred on mutation and natural selection, they are incapable of designing and building molecular machines.

Conclusion

Pre-adaptation is another “just so” evolutionary story that attempts to avoid the problems of necessary information gain and the goal-less nature of evolution. It fails to answer how parts that are intermediate between their old and new functions would be selected for and accumulated to build a molecular machine.

Michael Gray, cell biologist at Dalhousie University, states, “You look at cellular machines and say, why on earth would biology do anything like this? It’s too bizarre. But when you think about it in a neutral evolutionary fashion, in which these machineries emerge before there’s a need for them, then it makes sense.”10 It only makes sense if you start with the presupposition that evolution is true and confer powers to mutation and natural selection that the evidence shows they do not have.

Clements et al. write, “There is no question that molecular machines are remarkable devices, with independent modules capable of protein substrate recognition, unfolding, threading, and translocation through membranes.”11

The evidence is clear, as Romans 1:20 states, that the Creator God can be known through His creation. Many people will stand in awe of the complexities of molecular machines and still deny they are the result of God’s handiwork. But that doesn’t change the truth of His Word that He is the Creator of all things.

http://www.answersingenesis.org/articles/aid/v4/n1/end-of-irreducible-complexity


  ;D

8690
General Discussion / Re: Bees are Smart
« on: October 12, 2013, 11:08:45 pm »
Thanks Surly. I'll keep adding bit by bit.

Agreed about the author being somewhat long winded. He claims that film (not digital, but the exposure-type film for movies) actually has blurred frames. Am I correct to assume that he is wrong? His assertion is that the blurring is actually necessary for us to see motion smoother than the child's flip-book, frame-by-frame type collection of still photos.

I saw some of those early stop-motion animations with puppet soldiers made in the 1940s (out there on YouTube someplace). They take a photo, then move all the puppets and animal figures a tiny bit, then take another picture, and so on and so forth. It looks jerky no matter how small the movement.

Doesn't this mean that blur is needed, or does it mean we just have to jack up the frame rate to 220/sec? I have been piloting an aircraft when another aircraft I wasn't focused on went by as a blur. Maybe the blur is more a function of focusing than of speed, but it's interesting to think about.

If they ever figure out how to decode the signals to the brain from the eye, we will get a spectacular camera technology.

I do have a tendency to believe we "stream" rather than shoot a series of still photographs we translate into motion in our brains because: 

1. I remember those strobe lights in the discos several decades ago where each flash shows you a picture of reality but NOBODY, even though they are dancing and jumping around, looks like they are moving!

2. When I look at a physical object versus what is on the screen of a computer or television, the resolution simply does not compare. Reality seems to be a lot more nuanced and finely detailed than a frame-by-frame series of pictures.

3. Looking through a window is still far better in detail than looking into a digital screen at a movie of looking out a window. Something is still missing (besides 3D).

The only bearing all this has on bees is, well, uh... Give me time, I'll think of something.  ;D

   

8691
Geopolitics / Power Structures in Human Society: Pros and Cons Part 1
« on: October 12, 2013, 10:38:03 pm »
Why the 1% is responsible for more than 80% of humanity's carbon footprint, and why Homo sapiens is doomed unless the 1% lead the way in a sustainable lifestyle.

Today humanity faces the fact that the parasitic relationship of Homo sapiens with the biosphere is depleting the resources hitherto relied on to maintain a standard of living somewhere above that of other earthly hominids like the chimps or gorillas, which, unlike us, are engaged in a symbiotic relationship with the biosphere. The chimps engage in rather brutal wars with other chimp tribes, where the victors set about killing and eating very young chimps of the vanquished tribe. This is clearly a strategy to gain some advantage by killing off the offspring of the competition. It cannot, in and of itself, be considered morally wrong or evil behavior.

Dominance behavior and territoriality between same sex and opposite sexes also can be filed under the category of "successful behavior characteristics" for species perpetuation. Behavior that appears on the surface to have no species perpetuation purpose (like male chimps humping less dominant males or sexually mature adolescent seals, locked out of mating by bulls with huge harems, violently thrashing, and often killing, small seal pups that stray into their area) are a function of hormone biochemistry, not good or evil.

Some scientists might say this is just Darwinian behavior to winnow out the less flexible, less intelligent or weaker members of a species. I don't agree. I believe it is a downside of hormones that distracts species from more productive behavior but unfortunately cannot be avoided if you are going to guarantee the survival of a species by programming in strong sex drives.

I repeat, excessive aggression or same sex sexual activity as a dominance display is a downside to the "strong sex drive" successful species perpetuation characteristic. This "downside", when combined with a large brain capable of advanced tool making, can cause the destruction of other species through rampant predation and poisoning of life form resources in the biosphere.

The Darwinian mindset accepts competition among species in the biosphere, where species routinely engage in fighting and killing each other for a piece of the resource pie, as a requirement for the survival of the fittest. Based on this assumption, all species alive today are the pinnacle of evolution.

Really? How does a meteor impact fit into this "survival of the fittest" meme? It doesn't. Why? Because any multicellular organism can easily be wiped out by random, brute force, natural catastrophes like a meteor impact or extensive volcanism. Darwinists are quite willing to accept the random nature of the initial creation of single-celled life on earth (even though the latest advances in science show that any cell is an incredibly and irreducibly complex piece of biomachinery that absolutely HAS to have several parts working in unison or none of them work at all) but refuse to accept that the present multispecies survival is just as random.

It's more like "survival of the luckiest" than "survival of the fittest". From a strictly Darwinian perspective, the extremophiles are the real pinnacle of evolution because of their ability to survive just about anything that is thrown at them. There is a type of archaebacteria, the halophiles, that can live in an almost 32% salt concentration. Halophiles can be found anywhere with a concentration of salt five times greater than the salt concentration of the ocean, such as the Great Salt Lake in Utah, Owens Lake in California, the Dead Sea, and in evaporation ponds.





Carbon assimilation by Halococcus salifodinae, an archaebacterium

If you want to talk about survival of the fittest, look at this humble organism: Halococcus is able to survive in its high-saline habitat by preventing the dehydration of its cytoplasm. To do this they use a solute which is either found in their cell structure or is drawn from the external environment. Special chlorine pumps allow the organisms to retain chloride to maintain osmotic balance with the salinity of their habitat. The cells are cocci, 0.6-1.5 micrometres long with sulfated polysaccharide walls.

The cells are organotrophic, using amino acids, organic acids, or carbohydrates for energy. In some cases they are also able to photosynthesize.


Halococcus archaea

This primitive life form is organotrophic AND, not or, in some cases, photosynthetic!
Now that's what I call a life form able to handle just about any catastrophe thrown at it.

The more complex a life form becomes, the less flexible and adaptable, and the more fragile, it becomes. That is why I think the Darwinian approach to species interaction in the biosphere severely understates the fragility of "higher" organisms. Just as a type of fungus can infect the brain of an ant species, driving it to climb before it dies and thereby aid in fungal sporulation, it is not beyond the realm of possibility that the symbiotic bacteria that contribute a high percentage of the genes associated with our bodies (we cannot metabolize our food without them, so they are an inseparable part of being human) actually drove our evolution simply to aid in the spread of the bacteria. No, I don't believe that for a second, but it shows that Darwinian "logic" can be used to claim the exact opposite of what the Darwinians claim is the "fittest" species.

Laugh if you want, but which is a higher organism, the fungus or the ant? A recent article in "The Scientist" explored the possibility that human evolution (evolution, of course, must include human intelligent development of advanced tool making for war, transportation and food resource exploitation) can be explained as bacteria driven. We may be a mobile expression of symbiotic bacteria trying to spread all over the biosphere by ensuring their human hosts do whatever it takes to blanket the planet for God and bacteria (not necessarily in that order  :icon_mrgreen:)!

Quote
It is estimated that there are 100 times as many microbial genes as human genes associated with our bodies. Taken together, these microbial communities are known as the human microbiome.

Quote
These findings have the potential to change the landscape of medicine. And they also have important philosophical and ethical implications.

A key premise of some microbiome researchers is that the human genome coevolved with the genomes of countless microbial species. If this is the case, it raises deep questions about our understanding of what it really means to be human.

Quote
If the microbiome, on a species level, coevolved with the human genome and, on an individual level, is a unique and enduring component of biological identity, then the microbiome may need to be thought of more as “a part of us” than as a part of the environment.

Quote
More important in the context of ethical considerations is the possibility that if the adult microbiome is indeed relatively stable, then such early childhood manipulations of the microbiome may be used to engineer permanent changes that will be with the child throughout life. There is thus the potential that an infant’s microbiome may be “programmable” for optimal health and other traits.2

The article assumes WE are the ones that could engage in the "programming". It doesn't mention WHO EXACTLY was doing all that "programming" during our alleged evolution.

There is a greater quantity of microbial genes than what are considered "human" genes but it's really just one package. Genes drive genetics and evolutionary traits, do they not? I made a big joke about it in the article comments:
Quote
Perhaps the scientific nomenclature for "us versus them" organism energy transfer relationships needs to be expanded; terms such as parasitic, commensal, symbiotic, etc. don't address the fact that the 'them' is really a part of "us". Pregnant women don't think of their future children as parasites (which is what they technically are - even the beefed-up immune system the future moms get is a function of that short-lived organ, the placenta).
Perhaps we are just some giant "pre-frontal cortex" type of ambulatory appendage which exists for the purpose of spreading bacterial colonies.

Oh, the irony of self-awareness and tool making intelligence being an evolutionary device in the service of getting that bacterial colony to vault over the edge of the giant petri dish called Earth.
Can you picture the scientific community awarding Escherichia coli a PhD? Dr. E Coli, you are the best part of us!
 

We must now bow and scrape to the pinnacle of evolution, the reigning king of Darwinian evolutionary competition, that fine fecal fellow, Dr. Escherichia coli.

Now some folks out there on Wall Street might take offense to being outcompeted by Dr. E. coli. They might even say it's a shitty deal!  ;D  Others will have no problem relegating Wall Streeters and the rest of the 1% to the category of "lower life forms" in comparison to gut bacteria even if the other 99% of Homo sap are included.

A commenter named Lee Davis was not amused by the implications of research in the direction the article was pointing:

Quote
Absolutely. "Manage" the Earth's biodiversity at your own peril. Destroy the rainforests at your own peril. Acidify the ocean with CO2 at your own peril. I read "Science and Survival" by Barry Commoner in 1964. Since then, human "management" of the planet has continued apace, with little regard for long term consequences. The only thing he called attention to that was actually changed was the halt in atmospheric nuclear testing, but we've managed to replace that pollution with the exhaust from nuclear power plant meltdowns. Half-assed demigods we certainly are, not playing with a full deck and with little understanding of how the game is played. Of course, we THINK we know it All now...and if we don't, our computing machines certainly do.



Click here for Part 2


1.   http://the-scientist.com/2012/03/01/who-are-we-really/#comment-464838811
2.   http://news.bbc.co.uk/2/hi/uk_news/politics/8393081.stm
3.   http://www.scb.se/Pages/TableAndChart____104319.aspx
4.  http://www.e3network.org/papers/Why_do_state_emissions_differ_so_widely.pdf
5.   http://www.executivetravelmagazine.com/articles/flying-on-private-jets
6.   http://www.guardian.co.uk/environment/2009/oct/29/private-jets-green
7.   http://www.ips-dc.org/reports/high_flyers
8.   http://www.greendrinkschina.org/news/chinas-per-capita-carbon-emissions-solidly-reach-developed-nation-levels/
9.   http://en.wikipedia.org/wiki/List_of_countries_by_carbon_dioxide_emissions_per_capita
10. http://www2.ucsc.edu/whorulesamerica/power/wealth.html
11. http://green.wikia.com/wiki/Carbon_Footprint_of_American_Cities

8692
General Discussion / Re: Bees are Smart
« on: October 11, 2013, 11:31:32 pm »

Human Eye Frames Per Second 2

  05/24/2001 5:00:05 AM MDT Albuquerque, Nm
  By Dustin D. Brand; Owner AMO

So, just how many frames per second can our human eye see past 100?
 
    In my previous article (Human Eye Frames Per Second), I mentioned I'd write another to settle once and for all just how many frames per second our human eye is capable of seeing, so here we are.

   Motion Blur is so important in movies and TV programming

   In my first article, I mentioned how important motion blur is pertaining to frames per second. On computers, this is essentially non-existent. Motion blur in movies, which run at 24 frames per second, is designed for the big-screen projector, which blasts movies to the screen, each frame in its entirety, in the widescreen format, one frame at a time. Because each frame is filmed in a certain way, motion blur is used, meaning the frames are not perfectly clear; they contain blur.

   The blur used in today's movies will eventually be replaced by completely digital movies (on very expensive screens; I should know, I worked with the technology at age 16), and with the advent of computer animation in movies, the process of replacing the blur on film is becoming more and more inevitable.

   Computers don't work this way (with blur, that is), and essentially neither does anything digital. With digital, you either have an exact, perfectly clear image, or an exact, perfectly blurred image like in movies. In the transition from movies to TV, or to DVD digital, an extra 4 frames are added each second in a method called frame mixing, just to correctly match the device it's being displayed on, your TV. NTSC (American) and PAL (European) use different kinds of TV formats, each with different refresh rates and resolutions: 640x480 for NTSC and 800x600 lines for PAL. With HDTV, everything is digital, and essentially 60 frames now, but most of these broadcasts use frame mixing, and until 2006 you won't need to trash your regular TV, though it may be a good idea now.

  As many of you know, pause a DVD of a filmed movie during movement, or a TV with your VCR if you can, and you'll see the blur (unless the image is static to begin with). Pause an animation DVD, or a cartoon on TV, and you won't see the blur. Why is this so? Filmed movies and filmed TV shows work by blurring their subjects: actors, actresses, whatever. Filmed movies and TV are not taking a PERFECT snapshot image of the subject; each image is a blur, blending into the next, giving the impression that everything is moving seamlessly (if nothing is moving in the scene, you see a static image). In an animation or a cartoon, each frame or image of the 24/30 frames per second is perfect; there is no blur in the image - EVER.

   I touched very briefly on autofocus cameras, and how even the best, most expensive cameras don't come close to matching the capabilities of the human eye in focusing. The professional cameras you see reporters with are capable of taking pictures of EXTREMELY fast-moving objects in perfectly still quality at and above 1/4000 of a second. What does a camera being able to take 4000 pictures in a second prove?

   Our infinitely seamless world.

   Professional cameras can take perfectly still pictures without any blur, and, as in the case of video cameras, pictures with blur. So where is the limit? How quickly can we take a picture, and how slowly? Slow, time-exposed pictures have been taken; you've probably seen them at night, where all the car tail lights are in a streak. You've probably also seen the "photo finish" cameras take the winning, tell-tale sign of a close horse race. What all of this really means is that unless we slow time, or speed it up, there isn't any blur in our world. That is, of course, unless you're drunk, the room is spinning, or you're on some LSD trip. Ok, besides that.

   Images in our world are infinitely streamed to us, as I've said before. Living in the 3rd dimension as we do, our eyes enable us to see depth and periphery, and we can focus in very close and as far as infinity. So is there really a limit to how many frames per second we can see with our eyes?

   Our limit, is there one?
   Until someone proves me, all the scientists, optometrists, and the like wrong, there is no limit to how many frames per second our human eye can see. Theoretical limit yes, proven limit, NO.

   Think for just a second how dumb it would be to push the limit on video displays, devices and the like if our eyes couldn't tell the difference between an HDTV and a plain old TV, or a computer monitor and a plasma display. Ok, in that second how many times do you think your eye "framed" this screen? The number of times the screen refreshed? Nope, the number of times your eye streamed this page to you; it's a number that is potentially infinite, or at least unknown until we understand the complexity of our own mind. Just know that this number is much, much higher than what your monitor is currently capable of displaying to you, let alone matching your own interpretation.

   Our brain is smart enough to "exact" 24 frames into motion; isn't it ignorant to say we can't distinguish 400, or even 4000, into motion? Heh, the sky's the limit, oh wait, then space...oh wait. Give us more: we notice the difference from 30-60, and the difference from 60-120. It is possible that the closer we get to our limit, be there one, the harder it is to get there, and there is an old analogy for this. Someone is across the room. Take one full step towards them. Now a half step, then half of a half step, on and on, halving each movement you take. Will you ever get there? That, my friend, is open to debate, but in the meantime, will you take one step towards me?

   The human eye perceiving 220 frames per second has been proven; game developers, video card manufacturers, and monitor manufacturers all admit they've only scratched the surface of frames per second. With a high-quality non-interlaced display (like a plasma or a large LCD FPD) and a nice video card capable of HDTV resolution, you can today see well above 120 FPS with a matching refresh rate. With refresh rates as high as 400Hz on some non-interlaced displays, such a display is capable of 400 FPS on its own. Without the refresh rate in the way, and with the right hardware capable of such fast rendering (frame buffer), it would be possible to display frames as fast as cameras can record them - up to 44,000 frames per second. Imagine just for a moment if your display device were strictly governed by the input it was receiving. This is, in a way, the case with computer video cards and displays, with their adjustable resolutions, color depths, and refresh rates.

   Test your limit, you tell me...
   Look at your TV, or ANY image device, then look at the device itself rather than the image it is displaying - for example the TV itself, or the monitor itself. Tell me the image on the screen is more clear, more precise than your direct view of the TV or the monitor. You can't; that's why the more frames per second, the better, and the closer to reality it appears to us. With 3D holograms right around the corner, the FPS subject (or maybe 3DFPS) will become even more important.

  The real limit is in the viewing device, not our eyes.

   The real limits here are evidenced by the viewing device, not our eyes; we can consistently pick up the flicker to prove that point. In movies the screen is larger than life, and each frame is drawn instantaneously by the projector, but that doesn't mean you can't see the dust or scratches on each frame. With NTSC and PAL/SECAM TVs, each line is drawn piece by piece (odd lines, then even lines) for each frame, refreshing at the stated Hertz. The full frames displayed per second are therefore exactly the hertz divided by 2 (one hertz for the odd lines, then one hertz for the even lines). Do a search for high-speed video cameras and you'll find some capable of 44,000+ frames per second; that should give you a clue.

   CRTs, be they PC monitors or TVs, have to refresh at given rates, known as the Hertz. Eye fatigue can happen because of the probe or line effect after prolonged viewing; yes, your eye sees this. Switch to your peripheral vision, as in the example in my first article, and you can see the refresh rate. 60Hz and 50Hz also happen to be the mains power frequencies of the countries that use those Hertz in their TV refresh rates. Because of the way the technology works, drawing each line individually, your frame rate is tied to your refresh rate.

If something is running at 60 FPS but your monitor is at 60 Hertz and is interlaced - which TVs are locked at - you're seeing 30 frames per second. However, if you have a nice computer monitor (NON-INTERLACED), and it's set to 120 Hertz (72+ is considered "flicker free"), and your video is running at 120 frames per second, you're seeing exactly 120 frames per second.

You may have heard that LCDs, or Liquid Crystal Displays, are "flicker free". LCD displays are capable of showing their FPS at their refresh rate, much as non-interlaced monitors are; for example, 75 Hertz is capable of 75 frames per second. Technically, because an LCD pixel/transistor is either on or off, this technology is not only better but faster than an electron gun on a phosphor as in a CRT, thus virtually eliminating flicker.

   Technically speaking: NTSC has 525 scan lines repeated 29.97 times per second = 33.37 msec/frame or roughly 30 Frames Per Second at 60Hz BECAUSE it's INTERLACED.
   Technically speaking: PAL has 625 scan lines repeated 25 times per second = 40 msec/frame or exactly 25 Frames Per Second at 50Hz BECAUSE it's INTERLACED.
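AGelbert note: the arithmetic in those two lines can be reproduced with a short Python sketch (the function name is mine; the numbers are the article's):

```python
# Reproduce the article's NTSC/PAL frame-period arithmetic.
def frame_period_ms(frames_per_second):
    """Milliseconds each full frame stays on screen."""
    return 1000.0 / frames_per_second

print(f"NTSC: {frame_period_ms(29.97):.2f} ms/frame")  # 33.37 ms, i.e. ~30 full frames/sec
print(f"PAL:  {frame_period_ms(25.0):.2f} ms/frame")   # 40.00 ms, i.e. 25 full frames/sec
```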

   So how does 60 Hertz relate to HDTVs? Well, with progressive scanning (the XBOX supports this with its NVIDIA GPU), each full frame is drawn on each pass, meaning 60Hz supports 60 frames per second. But as you've learned, although the hertz and FPS are related, the hertz of the display does not necessarily equal the frames per second. Frames per second are determined by the display device and how it draws each frame. Normal TVs don't support progressive scan and thus redraw half the screen on each pass, first the odd lines (interlaced), then the even = 30 frames per second maximum.

   As you've seen, it's not our human eyes, it's the display. Related to this is the difference between interlaced and non-interlaced monitors. All computer CRT monitors are now made non-interlaced (and have been for quite some time), meaning the entire frame is refreshed at the refresh rate, or Hertz. The frame is scanned all at once, so the refresh rate can equal the frames per second, but the frames per second cannot exceed the refresh rate, because that's not possible on the display. Even if a video card is pushing 200 frames per second, your display may be at 100Hz, meaning it's only refreshing 100 times per second.
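AGelbert note: the point about the display capping what you actually see fits in one small function (a sketch; the names are mine): the effective rate is the lower of what the video card renders and what the display can refresh, halved again if the display is interlaced.

```python
def displayed_fps(render_fps, refresh_hz, interlaced=False):
    """Effective full frames per second the viewer sees.

    A display can't show more frames than it refreshes, and an
    interlaced display needs two passes (odd + even lines) per frame.
    """
    full_frame_hz = refresh_hz / 2 if interlaced else refresh_hz
    return min(render_fps, full_frame_hz)

print(displayed_fps(200, 100))                 # card at 200 FPS, 100 Hz monitor -> 100
print(displayed_fps(60, 60, interlaced=True))  # standard interlaced TV -> 30.0
```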

   Thus, the big misconception that our eyes can only see 30 frames or 60 frames per second is purely due to the fact that the mainstream displays can only show this, not that our eyes can't see more. For the time being, the frames per second capable of any display device isn't even close to the phrase "more than meets the eye".

   Definitions of relevance:


   CRT Cathode Ray Tube - The tube or flat tube making up a TV which utilizes an electron gun to manipulate phosphors at the front of the tube for varying color.

   NTSC - originally developed in the United States by a committee called the National Television Standards Committee (525 lines).

   PAL - standing for Phase Alternating Line (625 lines).

   FPS - Frames Per Second - A Frame consists of an image completely drawn to a viewing device, example: Monitor

 
 
http://amo.net/NT/02-21-01FPS.html

                         

8693
General Discussion / Re: Bees are Smart
« on: October 11, 2013, 11:30:13 pm »
Surly and RE,
Here's what I dug up on human frame rate acuity. It's kind of long but I find it quite interesting. 



Human Eye Frames Per Second

  02/21/2001 10:30:00 AM MST Albuquerque, Nm
  By Dustin D. Brand; Owner AMO


How many frames per second can our wonderful eyes see?
   

 
    This article is dedicated to a friend of mine, Mike.

   There is a common misconception in human thinking that our eyes can only interpret 30 frames per second. This misconception dates back to the first films, where in fact a horse was filmed, proving that at certain points during running it was supported by a single leg. These early films evolved to run at 24 frames per second, which has been the standard for close to a century.

   A movie theatre film running at 24 FPS (frames per second) has an explanation. A movie theatre uses a projector, and the film is projected on a large screen, so each frame is shown on the screen all at once. Because human eyes are capable of perceiving motion blur, and since the frames of a movie are drawn all at once, the motion blur captured in so few frames results in a lifelike perceptual picture. I'll explain the human eye and how it works in detail later on in this multi-page article.

   Now, since the first CRT TV was released, televisions have been running at 30 frames per second. TVs in homes today use the standard 60Hz (Hertz) refresh rate. This equates to 60/2, which equals 30 frames per second. A TV works by drawing each horizontal line of resolution piece by piece, using an electron gun to excite the phosphors on the TV screen. Secondly, because the frame rate is 1/2 the refresh rate, transitions between frames go a lot more smoothly. Without going into detail and making this a 30-page article discussing advanced physics, I think you'll understand those points.

   Moving on now with the frame rate. Motion blur, again, is a very important part of making video look seamless. With motion blur, those two refreshes per frame give the impression of two frames to our eyes. This makes a really well-encoded DVD look absolutely incredible. Another factor to consider is that neither movies nor videos dip in frame rate when it comes to complex scenes. With no frame rate drops, the action is again seamless.
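AGelbert note: the role of blur described above can be sketched in a few lines of Python. Each filmed frame integrates light over its exposure time, which we can approximate by averaging several instantaneous sub-frames (the function name and the toy one-dimensional "image" are mine, for illustration only):

```python
# Approximate film-style motion blur: one exposed frame is the
# average of several instantaneous samples (sub-frames).
def motion_blurred_frame(subframes):
    """Average per-pixel brightness across all sub-frames."""
    n = len(subframes)
    width = len(subframes[0])
    return [sum(f[i] for f in subframes) / n for i in range(width)]

# A single bright pixel moving one position per sub-frame
# smears into a trail instead of three sharp positions:
sharp = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
print(motion_blurred_frame(sharp))  # roughly [0.33, 0.33, 0.33, 0.0]
```

The smeared trail is exactly the cue that lets 24 blurred frames read as continuous motion, while 24 perfectly sharp frames (as in games or cartoons) look jerky.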

   Computer Games and their industry driving use of Frames Per Second
   
  It's easy to understand TV and movies and the technology behind them. Computers are much more complex - the most complex part being the actual physiology/neuroethology of the visual system. Computer monitors of a smaller size are much more expensive than a TV CRT (Cathode Ray Tube). This is because the phosphors and the dot pitch of computer monitors are much smaller and much closer together, making much greater detail and much higher resolutions possible. Your computer monitor also refreshes much more rapidly, and if you look at your monitor through your peripheral vision you can actually watch the lines being drawn on the screen. You can also observe this technology difference when watching TV footage that has a computer monitor in the background.

   A frame or scene on a computer is first set up by your video card in a frame buffer. The frame/image is then sent to the RAMDAC (Random Access Memory Digital-to-Analog Converter) for final display on your display device. Liquid Crystal Displays and FPD plasma displays use a higher-quality, strictly digital representation, so the transfer of information - in this case a scene - is much quicker. After the scene has been sent to the monitor it is perfectly rendered and displayed. One thing to keep in mind, however: the faster you do this, and the more frames you plan on sending to the screen per second, the better your hardware needs to be. Computer programmers and computer game developers who have been working strictly with computers can't reproduce motion blur in these scenes. Even though 30 frames are displayed per second, the scenes don't look as smooth as on a TV. Well, that is, until we get to more than 30 FPS.

   NVIDIA, a computer video card maker who recently purchased 3dfx, another computer video card maker, just finished a GPU (Graphics Processing Unit) for the XBOX from Microsoft. Increased rendering capability and memory, as well as more transistors and instructions per second, equate to more frames per second in a computer video game or on computer displays in general. There is no motion blur, so the transition from frame to frame is not as smooth as in movies - that is, at 30 FPS. For example, NVIDIA/3dfx put out a demo that runs half the screen at 30 fps and the other half at 60 fps. The results? There is a definite difference between the two scenes; 60 fps looks much better and smoother than 30 fps.

   Even if you could put motion blur into games, it would be a waste. The human eye perceives information continuously; we do not perceive the world through frames. You could say we perceive the external visual world through streams, and only lose it when our eyes blink. In games, implemented motion blur would cause the game to behave erratically; the programming wouldn't be as precise. For example, in a game like Unreal Tournament, if motion blur were used, there would be problems calculating the exact position of an object (another player), so it would be really tough to hit anything with your weapon. With motion blur in a game, the object in question would not really exist in any of the places where the "blur" is positioned - that is, the object wouldn't exist at exactly coordinate XYZ. With exact frames, those without blur, each pixel and each object is exactly where it should be in the set space and time.

   The overwhelming solution to more realistic gameplay, or computer video, has been to push the human eye past the misconception of only being able to perceive 30 FPS. Pushing the human eye past 30 FPS to 60 FPS and even 120 FPS is possible - ask the video card manufacturers, an eye doctor, or a physiologist. We as humans CAN and DO see more than 60 frames a second.

    With computer video cards and computer programming, the actual frame rate can vary. Microsoft came up with a great way to handle this by allowing the frame rate to be locked when they were building one of their games (Flight Simulator).

   The Human Eye and its real capabilities - tahDA!
   
    This is where this article gets even longer, but read on, please. I will explain to you how the human eye can perceive much past the misconception of 30 FPS, well past 60 FPS, even surpassing 200 FPS.

   We humans see light when it's focused onto the retina of the eye by the lens. Light rays are perceived by our eyes as light enters - well, at the speed of light. I must stress again that we live in an infinite world where information is continuously streamed to us.

Our retinas interpret light with two types of cells: the rods and the cones. Our rods and cones are responsible for all aspects of receiving the light rays focused onto our retinas. In fact, rods and cones are the cells on the surface of the retina, and a lack thereof is a leading cause of blindness.

   Calculations such as intensity, color, and position (relative to the cell on the retina) are all forms of information transmitted by our retinas to our optic nerves. The optic nerve in turn sends this data through its pipeline (at the nerve impulse speed), on to the Visual Cortex portion of our Brains where it is interpreted.

   Rods are the simpler of the two cell types, as they really only interpret "dim light." Since rods are light-intensity-specific cells, they respond very fast, and to this day they rival the quickest response time of the fastest computer.

Rods control the amount of neurotransmitter released, which basically tracks the amount of light stimulating the rod at that precise moment. Microscopic examination of the retina shows a much greater concentration of rods along its outer edges. One simple experiment taught to students studying the eye is to go out at night and look at the stars (preferably the Orion constellation) out of your peripheral vision (side view). Pick out a faint star from your periphery and then look at it directly. The star should disappear, and when you again turn and look at it from the periphery, it will pop back into view.

AGelbert note: This is why pilots are trained to look at runway lights peripherally and not fixate when making night landings.

   Cones are the second specialized retinal cell type, and these are much more complex. Think of our cones as the equivalent of a computer's RGB inputs: each cone has three receptors that absorb red, green, or blue wavelengths of light. Depending on the intensity of each wavelength, each receptor will release varying levels of neurotransmitter on through the optic nerve - and in the case of some colors, no neurotransmitter at all. Due to the cone's inherent three-receptor nature versus the rod's one, a cone's response time is slower than a rod's.

   Our optic nerves are the visual information highway by which our lens and retina, with their specialized cells, transmit visual data on to our brain's visual cortex for interpretation. This all begins with a nerve impulse in the optic nerve triggered by rhodopsin in the retina, which takes all of a picosecond to occur. A picosecond is one trillionth of a second, so in theory we could calculate our eyes' "response time" and from that a theoretical frames per second (but I won't even go there now). Keep reading.

   The optic nerves average 2 to 3 centimeters in length, so it's a short trip to reach our visual cortex. OK, so like data on the internet, the data traveling in our optic nerves eventually reaches its destination - in this case, the visual cortex: the processor/interpreter.

   Unfortunately, neuroscience only goes so far in understanding exactly how our visual cortex, in such a small place, can produce such amazing images, unlike anything a computer can currently create. We only know so much, but scientists have theorized that the visual cortex acts as a sort of filter and blender, streaming the information into our consciousness. We're bound to learn, in many more years' time, just how much we've underestimated our own abilities as humans once again. Ontogeny recapitulates phylogeny (history repeats itself).

   There are many examples to illustrate how the human visual system operates differently than, say, an eagle's. One of these examples involves a snowflake, but let me create a new one.

   You're in an airplane, looking down at all the tiny cars and buildings. You are in a fast-moving object, but distance and speed place you above the objects below. Now, let's pretend that a plane going 100 times as fast quickly flies below you. It was a blur, wasn't it?

   Regardless of an object's speed, it maintains a fixed position in space-time at any instant. If the plane that just flew by were going only, say, twice as fast as you, you probably would have been able to see it. Since your incredible auto-focus eye had been concentrating on the ground before the plane flew below, your visual cortex decided that the plane was there but moving really fast, and not as important. A really fast camera with a really fast shutter speed would have been able to capture the plane in full detail. This doesn't limit our eyes' ability: we did see the plane, but we didn't isolate the frame; we streamed it relative to the last object we were looking at - the ground, moving slowly below.

 Our eyes, technically, are the most advanced auto-focus system around - they make even cameras look weak. Using the same scenario with an eagle in the passenger seat: the eagle, because its eyes use only rods and its optic nerve's path to its visual cortex is 1/16 the length of ours, wouldn't have seen as much blur in the plane. However, from what we understand of the visual cortex and of rods and cones, even eagles can see dizzying, blurry objects at times.

   What is often called motion blur is really how our unique vision handles motion: in a stream, not frame by frame. If our eyes only saw frames (i.e., 30 images a second), like a single-lens reflex camera, we'd see images pop in and out of existence, which would be really annoying and far less advantageous to us in our three-dimensional space and bodies.

   So how can you test how many Frames Per Second we as Humans can see?

   My favorite test to mention to people is simply to look around their environment, then back at their TV or monitor. How much more detail do you see versus your monitor? You see depth, shading, and a wider array of colors, and it's all streamed to you. Sure, we're smart enough to piece together a 24-frame movie, and sure, we can make sense of video footage filmed in NTSC or PAL, but can you imagine the devices of the future?

   You can also run the more technical and less imaginative tests above, including the star gazing and this TV/monitor test: point a TV camera running at only 30 FPS at a computer monitor refreshing at 60 Hz, and in the 30 FPS TV output you will see the refresh bands rolling across the monitor's screen. Eyestrain with computer monitors works the same way - it has everything to do with lower refresh rates, not higher ones.

   Don't underestimate your own eyes Buddy...
We as humans have a very advanced visual system. Please understand that a computer, with all its processor strength, still doesn't match our own brain, or even the complexity of a single deoxyribonucleic acid (DNA) strand.

While some animals out there have sharper vision than we humans, something is usually given up for it - for eagles it is color, and for owls it is the ability to move the eye in its socket. With our outstanding human visual system, we can see billions of colors (testing suggests women see as much as 30% more colors than men do). Our eyes can indeed perceive well over 200 frames per second from a simple little display device (the number is mainly so low because of current hardware, not our own limits).

Our eyes are also highly movable, able to focus in as close as an inch, or as far as infinity, and have the ability to change focus faster than the most complex and expensive high speed auto focus cameras. Our Human Visual system receives data constantly and is able to decode it nearly instantaneously. With our field of view being 170 degrees, and our fine focus being nearly 30 degrees, our eyes are still more advanced than even the most advanced visual technology in existence today.

   So what is the answer to how many frames per second we should be looking for? If current science is a clue, it's somewhere in sync with full saturation of our visual cortex, just like in real life. That number, my friend, is way up there, given what we know about our eyes and brains.

   It used to be, well, anything over 30 FPS is too much. (Is that why you're here, by chance?) :) Then, for a while, it was anything over 60 is sufficient. After even more new video cards, it became 72 FPS. Now, new monitors and new display types like organic LEDs and FPDs offer to raise the bar even higher. Current LCD monitors' response times are nearing the microsecond barrier, much better than a millisecond, equating to even more FPS.

   If this old United States Air Force study is any clue, we've only scratched the surface in knowing our FPS limits, let alone in coming up with hardware that can match, or even approach, them.

   The USAF, in testing its pilots for visual response time, used a simple test to see if the pilots could distinguish small changes in light. In the experiment, a picture of an aircraft was flashed on a screen in a dark room for 1/220th of a second. Pilots were consistently able to "see" the afterimage as well as identify the aircraft. This simple and specific result demonstrates not only the ability to perceive one image within 1/220th of a second, but also the ability to interpret even higher FPS.
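The arithmetic behind that figure is simple: if a single image shown for 1/220th of a second can still be identified, the eye is resolving at least 220 distinct images per second. A minimal sketch of that lower-bound calculation (the function name is my own, for illustration only):

```python
def min_fps_resolved(flash_duration_s: float) -> float:
    """If a flash this short is still perceived and identified,
    the eye resolves at least 1/duration distinct images per second."""
    return 1.0 / flash_duration_s

# USAF test: aircraft image flashed for 1/220 of a second, still identified
print(round(min_fps_resolved(1 / 220)))  # 220 frames per second, at minimum
```

Note this only establishes a lower bound; the test says nothing about where the eye's actual ceiling lies.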

   This article was updated: 7/27/2002 due to its popularity and to reflect in more detail the science involved with our eyes and their ability to interpret more than 60 FPS.

   To Mike (and everyone else), from Dustin D. Brand...
 
Second Part on next reply
 

8694
Renewables / Re: Photovoltaics (PV)
« on: October 11, 2013, 09:25:29 pm »
Solar VC Funding: "The Fear Is Gone" For Investors


Funding for solar energy projects and M&A soars, as investors and developers continue to confidently ride the demand wave.

 James Montgomery, Associate Editor, RenewableEnergyWorld.com 
 October 09, 2013

New Hampshire, USA -- Project funding and merger and acquisition (M&A) activity in the solar energy sector reached record levels from July-September of this year, reflecting an improved outlook for solar demand, according to a new report from Mercom Capital Group.

Global funding for solar energy spiked to $2.18 billion, more than twice the funding seen in 2Q13 ($915 million), Mercom says. Top VC recipients in the quarter included Solexel ($40 million, high-efficiency silicon solar cells), eSolar ($22 million, concentrated solar power developer), Clean Power Finance ($20 million, third-party solar PV financing), HelioVolt ($19 million, thin-film), and Dyesol ($16 million, dye-sensitized/organic solar cells). On the M&A side, the Applied Materials-Tokyo Electron deal was a major shakeup in the semiconductor sector but less so for silicon solar manufacturing.

The new normal is now. It's time to stop comparing today's levels of solar energy investment to the heady days of two or three years ago, when $400-$500 million quarters were routine and money flowed freely as solar technology developers jostled for positioning. "We're not seeing anything over $200 million in the last 3-6 quarters," noted Mercom CEO Raj Prabhu. "This is where we are: the new normal."

Strategic investors are stepping up. SK Group put more money into Heliovolt. Saudi conglomerate Tasnee invested in Dyesol. Over the past year Hanergy and Hanwha have been extending their reach into solar. Big strategic partners with a ton of money continue to hedge some bets on technology, possibly where they can leverage manufacturing experience.

Solar leasing's hot. Third-party solar finance companies raised approximately $584 million in the past quarter, with SolarCity, Sungevity, SunRun, etc. raising funds with help from banks. So far with three months to go, third-party solar leasing firms have raised roughly $2.5 billion this year, compared to just $2 billion in both 2012 and 2011. "This shows that it's still pretty healthy out there," Prabhu said. "Everything we're seeing is going up." Of course the ability to pull down lots of funding is especially important to third-party firms; it's not just enough to raise a few million of VC or private equity funding, but they need to raise tens and hundreds of millions of dollars and put that right into rooftop installations, Prabhu pointed out. "If they're not doing that they have a problem."

Also noteworthy during 3Q13 was SolarCity's acquisition of sales channel partner Paramount Solar, underscoring the importance of customer acquisition in the solar leasing model. "At the end of the day, anyone can go install" solar, Prabhu pointed out, but "the ability to go out and land the residential customers makes you unique."

Projects are popular


Full article here:

http://www.renewableenergyworld.com/rea/news/article/2013/10/solar-vc-funding-the-fear-is-gone-for-investors?cmpid=WNL-Friday-October11-2013


8695
Proof a one to two Decade Transition To 100% Renewable Energy is DOABLE



The other day, a knowledgeable mechanical engineer I know stated this concern about the colossal challenge and, in his opinion, impossibility of switching to renewable energy machines in time to avoid a collapse, given the energy required to manufacture them and the limits of global industrial capacity in our civilizational infrastructure.

He said:

Quote
I admire your enthusiasm, and I agree with many of the points you make. Yes, ICEs (internal combustion engines) consistently waste high-EROEI (energy return on energy invested) energy; yes, fossil fuels and conventional engineering have a warped, distorted perspective because of the ICE; and yes, we have an oil oligarchy protecting its turf.

However, say we hypothetically made all the oil companies disappear tomorrow, were able to suspend the laws of time and implement our favorite renewables of choice, and were then tasked with making certain all of society's critical needs were met: we'd have a tall order. The devil is in the details and quantities.

It's the magnitudes; it's 21 million barrels per day we are dependent on. It's created massive structural centralization that can only be sustained by incredible energetic inputs. There is not enough wind, and not enough rare earth material, for PVs to scale and replace it. We have to structurally rearrange society to solve the problem. Distributed solar-powered villages, not big cities and surely not suburbia. I fear we'll sink very useful resources and capital into these energy sources (as we arguably have with wind) when the real answer is structural change.

I have shown evidence that there are several multiples of the energy we now consume available just from wind power. This data came from a recent study by Lawrence Livermore Laboratory Scientists.

He thinks we CAN'T do it even if we had enough wind, because of the colossal challenge and, in his opinion, impossibility of switching to renewable energy machines in time to avoid a collapse, given the energy required to manufacture them and the limits of global industrial capacity in our civilizational infrastructure.

His solution is to survive the coming collapse with small distributed energy systems and a radically scaled down carbon footprint. Sadly, that option will not be available to a large percentage of humanity.

Hoping for a more positive future scenario, I analyzed his concerns to see if they are valid and we have no other option but to face a collapse and a die off with the surviving population living at much lower energy use levels. :P

I'm happy to report that, although the mechanical engineer has just cause to be concerned, we can, in reality, transition to 100% Renewable Energy without overtaxing our civilizational resources.

This is a slim hope but a real one, based on history and the world's present manufacturing might. Read on.


I give you the logistics aiding marvel of WWII, the Liberty Ship. It was THE JIT (just in time), SIT (sometimes in time) and sometimes NIT (never in time because it was torpedoed) cargo delivery system that helped us win the war.

This was a mass-produced ship. These ships are a testament to the enormous quantity of machines, on a global scale, that the U.S. was capable of building over half a century ago.

Quote
The Liberty ship model used two oil boilers and was propelled by a single-screw steam engine, which gave the liberty ship a cruise speed of 11 to 11.5 knots. The ships were 441.5 feet long, with a 57 foot beam and a 28 foot draft.



Quote
The ships were designed to minimize labor and material costs; this was done in part by replacing many rivets with welds. This was a new technique, so workers were inexperienced and engineers had little data to go on. Additionally, much of the shipyards' labor force had been replaced with women as men joined the armed forces. Because of this, early ships took quite a long time to build - the Patrick Henry taking 244 days -

but the average building time eventually came down to just 42 days.



Quote
A total of 2,710 Liberty ships were built, with an expected lifespan of just five years. A little more than 2,400 made it through the war, and 835 of these entered the US cargo fleet. Many others entered Greek and Italian fleets. Many of these ships were destroyed by leftover mines, which had been forgotten or inadequately cleared. Two ships survive today, both operating as museum ships. They are still seaworthy, and one (the Jeremiah O'Brien) sailed from San Francisco to England in 1994.

These ships had a design flaw. The grade of steel used to build them suffered from embrittlement. Cracks would propagate and in 3 cases caused the ships to split in half and sink. It was discovered and remediated.

Quote
Ships operating in the North Atlantic were often exposed to temperatures below a critical temperature, which changed the failure mechanism from ductile to brittle. Because the hulls were welded together, the cracks could propagate across very large distances; this would not have been possible in riveted ships.

A crack stress concentrator contributed to many of the failures. Many of the cracks were nucleated at an edge where a weld was positioned next to a hatch; the edge of the crack and the weld itself both acted as crack concentrators. Also contributing to failures was heavy overloading of the ships, which increased the stress on the hull. Engineers applied several reinforcements to the ship hulls to arrest crack propagation and initiation problems.



Heavily loaded ship

http://www.brighthubengineering.com/marine-history/88389-history-of-the-liberty-ships/

Today, several countries have, as do we, a much greater industrial capacity. It is inaccurate to claim that we cannot produce sufficient renewable energy devices in a decade or so to replace the internal combustion engine everywhere in our civilization. The industrial capacity is there and is easily provable by asking some simple questions about the fossil fuel powered ICE status quo:

How long do ICE powered machines last?

How much energy does it require to mine the raw materials and manufacture the millions of engines wearing out and being replaced day in and day out?

What happens if ALL THAT INDUSTRIAL CAPACITY is, instead, dedicated to manufacturing Renewable Energy machines?



IOW, if there is a ten to twenty year turnover NOW in our present civilization involving manufacture and replacement of the ICEs we use, why can't we retool and convert the entire ICE fossil fuel dependent civilization to a Renewable Energy Machine dependent civilization?

1) The industrial capacity is certainly there to do it EASILY in two decades and maybe just ten years with a concerted push.

2) Since Renewable Energy machines use LESS metal and do not require high-temperature alloys, a worldwide cash-for-clunkers program could obtain more than enough metal raw material without ANY ADDITIONAL MINING (except for rare earth minerals, a drop in the bucket compared to all the mining presently done for metals to build the ICE) by simply recycling ICE parts into Renewable Energy machines.

3) Just as in WWII, but on a worldwide scale, the recession/depression would end as millions of people were put to work on the colossal transition to Renewable Energy.


HOWEVER, despite our ABILITY to TRANSITION TO 100% RENEWABLE ENERGY, we "CAN'T DO IT" ??? because the fossil fuel industry has tremendous influence on the worldwide political power structure from the USA to Middle East to Russia to China.

In other words, it was NEVER

1. An energy problem,

2. A "laws of thermodynamics" problem,

3. A mining waste and pollution problem,

4. A lack of wind or sun problem,

5. An environmental problem,

6. An industrial capacity problem or

7. A technology problem.




EVERY SINGLE ONE OF THE ABOVE excuses for claiming Renewable Energy cannot replace Fossil Fuels is a STRAWMAN presented to the public for the express purpose of convincing us of the half truth that without fossil fuels, civilization will collapse.

It was ALWAYS a POLITICAL PROBLEM of the fossil fuel industry not wanting to relinquish their stranglehold on the world's geopolitical make up.

It drives them insane to think that Arizona and New Mexico can provide more power than all the oil in the Middle East. Their leverage over lawmakers and laws to avoid environmental liability is directly proportional to their market share of global energy supplies.

They are threatened by Renewable Energy and have mobilized to hamper its growth as much as possible through various propaganda techniques using all the above strawmen.

It is TRUE that civilization will collapse and a huge die off will occur without fossil fuels IF, and ONLY IF, Renewable Energy does not replace fossil fuels. It is blatantly obvious that we need energy to run our civilization.

It is ALSO TRUE that if we continue to burn fossil fuels in ICEs, Homo sapiens will become extinct.
This is not hyperbole. We have ALREADY baked in conditions, which take about three decades to fully develop, that have placed us in a climate that last existed over 3 million years ago.

We DID NOT thrive in those conditions or multiply. This is a fact. We didn't really start to populate the planet until about 10,000 years ago.

The climate 3 million years ago was mostly lethal to Homo sapiens. To say that we have technology and can handle it is a massive dodge of our responsibility for causing this climate crisis (and ANOTHER strawman, from Exxon's "We will adapt to that" CEO).

Fossil fuel corporations DO NOT want to be held liable for the damage they have caused; so, even as they allow Renewable Energy a niche in the global energy picture, they will use that VERY NICHE (see rare earth mining and the energy needed to build PV and wind turbines) to blame Renewables for environmental damage.


 

In summary, the example of the Liberty ships is proof we CAN TRANSITION TO RENEWABLE ENERGY in, at most, a couple of decades if we decide to do it but WON'T do it because of the fossil fuel industry's stranglehold on political power, financing and laws along with the powerful propaganda machine they control.

 

What can we expect from the somewhat dismal prospects for Homo sapiens?

1) Terrible weather and melted polar ice caps with an increase in average wind velocity in turn causing more beach erosion from gradually rising sea level and wave action. The oceans will become more difficult to traverse because of high wave action and more turbulent seas. The acidification will increase the dead zones and reduce aquatic life diversity. But you've heard all this before so I won't dwell on the biosphere problems that promise to do us in.

2) As Renewable Energy devices continue to make inroads in fossil fuel profits, expect an engineered :evil4: partial civilizational collapse in a large city to underline the "you are all going to die without fossil fuels" propaganda pushed to avoid liability for the increasingly "in your face" climate extremes. ;)

3) Less democracy and less freedom of expression from some governments, and more democracy and freedom of expression from others, in direct proportion to the percent penetration of Renewable Energy machines powering their countries (more RE, more freedom) and in inverse proportion to the power of their realpolitik Fossil Fuel lobbies (more FF power, less freedom).


The bottom line, as Guy McPherson says, is that NATURE BATS LAST. Nature has millions of "bats". Homo sapiens has a putrid fascist parasite bleeding it to death and poisoning it at the same time. The parasite cannot survive without us so it is allowing us to get a tiny IV to keep us alive a little longer (a small percentage of renewable energy machines). It won't work.

But the parasite has a plan. The IV will be labelled a "parasite" (the villain and guilty party) when Homo sapiens finally figures out he is going to DIE if he doesn't fix this "bleeding and poison" problem. Then the real parasite will try to morph into a partially symbiotic organism and Homo sapiens will muddle through somehow.

I think that the parasite doesn't truly appreciate the severity of Mother Nature's "bat".

THREE FUTURE SCENARIOS:

1. If the parasite (as a metaphor for a fossil fuel powered civilization) does not DIE TOTALLY, I don't think any of us will make it. :emthdown:

2. If the parasite takes MORE than 20 years to die, some of us will make it but most of us won't. :emthdown:

3. If, in 2017, when the north pole has the first ice free summer (as I estimate), all the governments of the Earth join in a crash program to deep six the use of fossil fuels within a ten year period, most of us will make it.

 

A word about political power and real politik living in a fossil fuel fascist dystopia.

IT simply DOES NOT MATTER what "real world," "realpolitik" geopolitical power structure mankind has now.

IT DOES NOT MATTER how powerful the fossil fuel industry is in human affairs.

Fossil fuels have to go or Mother Nature will kill us, PERIOD.

Pass it on. You never know when somebody on the wrong side of the Darwinian fence will read it and join the effort to save humanity.

 

8696
Renewables / Fueled vs Electric Cars: The Great Race Begins
« on: October 11, 2013, 07:27:16 pm »
Fuel vs Electric Cars: The Great Race Begins
       


 Thomas Blakeslee 
 August 26, 2013  |  72 Comments

The amazing success of the Tesla Model S proves that electric cars may have a chance of replacing liquid-fueled vehicles in the long run. Skeptics point out that most of our electric power today comes from coal, which is dirty and inefficient. We must change to clean, renewable energy sources - but is that really practical? The Tesla has proven that we can use photovoltaic solar power to recharge pure electric cars. Let's calculate how much land is needed to renewably fuel a car using several possible electrical and biofuel approaches.

I recently purchased a Chevy Volt plug-in hybrid car. It is the perfect laboratory for this experiment because it can run on pure electricity or as a gasoline hybrid. In the electric mode it can go 38 miles on a 10.8 kilowatt-hour recharge. That's 3.5 miles per kilowatt-hour. Allowing for power transmission and charging losses, let's use 3 mi/kWh. I will compare the land use efficiency of several real approaches to renewable power using both liquid fuel and electricity. We will calculate the number of miles per year that can be driven using an acre of land to produce the power. We can then compare the miles/year/acre numbers for some real-world renewable energy approaches.
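The Volt arithmetic above can be laid out explicitly. Only the 38-mile/10.8-kWh and 3 mi/kWh figures come from the article; the per-acre PV yield below is a placeholder value I have assumed purely to illustrate the miles/year/acre comparison, since the article's own land-use numbers are not reproduced in this excerpt:

```python
# Figures quoted in the article
VOLT_RANGE_MILES = 38      # electric-mode range per charge
VOLT_CHARGE_KWH = 10.8     # energy per recharge

miles_per_kwh = VOLT_RANGE_MILES / VOLT_CHARGE_KWH   # ~3.5 mi/kWh at the battery
MILES_PER_KWH_DELIVERED = 3.0                        # article's figure after transmission/charging losses

# Hypothetical annual output of one acre of PV (assumed placeholder, NOT from the article)
PV_KWH_PER_ACRE_YEAR = 400_000

miles_per_year_per_acre = PV_KWH_PER_ACRE_YEAR * MILES_PER_KWH_DELIVERED
print(round(miles_per_kwh, 2))   # 3.52
print(miles_per_year_per_acre)   # 1200000.0 miles/year/acre under these assumptions
```

Swapping in a real per-acre yield for solar, wind, or a biofuel crop turns this into exactly the comparison the article describes.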

Full article with knock down drag out comments war here:  :o  ;D

http://www.renewableenergyworld.com/rea/blog/post/2013/08/fueled-vs-electric-cars-the-great-race-begins-10

8697
Renewables / Photovoltaics (PV)
« on: October 11, 2013, 07:03:47 pm »


Solar Growth Outpaces Wind for First Time 

 Tildy Bayar, Associate Editor, Renewable Energy World 

LONDON -- A strong showing from global solar photovoltaic (PV) installations, coupled with a sharp fall in new wind capacity, has led to solar growth outpacing wind this year – for the first time ever.

Analysis from Bloomberg New Energy Finance predicts that 36.7 GW in new solar PV capacity will be added worldwide in 2013, compared with 35.5 GW in new wind installations (33.8 GW onshore and 1.7 GW offshore).

Both wind and solar PV broke records last year, with onshore and offshore wind adding 46.6 GW and solar PV adding 30.5 GW. But 2013’s slowdown in the two largest wind markets, China and the U.S., is opening the way for the rapidly growing PV market to overtake wind, BNEF said.

Justin Wu, BNEF’s head of wind analysis, said, "We forecast that wind installations will shrink by nearly 25 percent in 2013, to their lowest level since 2008, reflecting slowdowns in the U.S. and China caused by policy uncertainty.”

In the U.S., the repeated last-minute extension of the Production Tax Credit has created what analysts have called a perpetual boom-and-bust cycle. This year’s uncertainty led to a drop in investment, causing significant layoffs and facility closures across the wind supply chain.

In China, where the industry has suffered from curtailment due to insufficient grid infrastructure and where tightened standards for wind turbines have slowed development, the sector has been expecting further policy announcements after the government raised this year's new-capacity target to 18 GW in January.

In particular, developers say China’s feed-in tariff (FiT) for offshore wind is too low given the higher costs of offshore development, leading to predictions that the nation will fail to meet its offshore goal of 5 GW by 2015. The government has said it will re-think the FiT, but has offered no timetable.

Globally, demand for wind turbines is predicted to shrink by 5 percent this year, for the first time since 2004.

But wind is still far from dire straits, BNEF reassured. “Falling technology costs, new markets and the growth of the offshore industry will ensure wind remains a leading renewable energy technology,” Wu said. 

In the solar sector, "the dramatic cost reductions in PV, combined with new incentive regimes in Japan and China, are making possible further, strong growth in volumes," said Jenny Chase, BNEF’s head of solar analysis.

In Japan, the fourth country to reach the 10 GW mark in cumulative solar capacity, the attractive FiT has led to rapid growth over the past year, with demand surging in the commercial and utility segments. China, which will be the largest solar market this year according to BNEF, has raised its renewable energy surcharge and revamped its subsidy regime, expanding performance-based incentives for distributed solar power in a bid to grow the domestic market after solar trade spats with Europe and the U.S.  The nation aims to more than quadruple its solar power generating capacity to 35 GW by 2015.

Growth in Asia will offset PV’s decline in traditional leading regions. "Europe is a declining market,” Chase said, “because many countries there are rapidly moving away from incentives, but it will continue to see new PV capacity added."

While the immediate future looks brighter for solar than wind, BNEF predicted that, despite 2013’s rankings upset, the maturing onshore wind and solar PV sectors will contribute almost equally to the world’s new electricity capacity additions between now and 2030. On- and offshore wind will grow from 5 percent of total installed power generation capacity in 2012 to 17 percent in 2030, while solar PV will increase from a lower base of 2 percent in 2012 to 16 percent by 2030, BNEF said.

 The analysis also predicted that technology suppliers in both wind and solar may see a move back to profit as soon as this year, after a prolonged period of oversupply and consolidation.

Michael Liebreich, BNEF’s chief executive, commented: “Cost cuts and a refocusing on profitable markets and business segments have bolstered the financial performance of wind turbine makers and the surviving solar manufacturers. Stock market investors have been noticing this change, and clean energy shares have rebounded by 66 percent since their lows of July 2012."

http://www.renewableenergyworld.com/rea/news/article/2013/09/solar-growth-outpaces-wind-for-first-time


8698
General Discussion / Going All In with Renewable Energy
« on: October 11, 2013, 06:56:21 pm »

Going All In with Renewable Energy

Is the goal of using 100 percent renewable energy crazy, idealistic or achievable?

 Elisa Wood, Contributing Editor 
 September 27, 2013  |  21 Comments 

After a monster tornado wiped out Greensburg, Kansas in 2007, killing 11 people, the community decided to rebuild with meaning. It set out to become one of the world's greenest communities.

Today the town is among a growing number of jurisdictions that generate all of their electricity from renewable energy.

Greensburg achieved a goal that many see as pie-in-the-sky. Former U.S. Vice President Al Gore drew jeers from his political critics several years ago when he proposed that the U.S. go all green within a decade. The jury remains out on the plausibility of a U.S.-sized economy functioning entirely on renewables anytime soon. But Greensburg, with a population of less than 1,000 people, has demonstrated that it can work on a small scale. Others have done the same, among them Güssing, Austria; King Island, Australia; and Naturstrom, Germany.

It's not just cities with the ambition. Eight nations are 100 percent renewable or moving in that direction: Denmark, Iceland, Scotland, Costa Rica, Maldive Islands, Cook Islands, Tuvalu, and Tokelau. Add 42 cities, 49 regions, 8 utilities and 21 organizations, and going 'all green' looks like a bona fide trend.

Times have changed since the mid-2000s when a group that included the late Hermann Scheer, TIME magazine's 'Hero for the Green Century', first explored the idea. The group formed the Renewables 100 Policy Institute, but in the early years found that the concept was too "bleeding edge" for established non-profits, which declined to sign on.

"Now that is starting to change," said Diane Moss, the institute's founding director. The Renewables 100 Policy Institute held its first international conference in April, drawing a crowd of more than 200 people. The presenters were not from the fringe of the green world, but were representatives of established advocacy organizations, elected officials, corporate executives and the head of the California Independent System Operator Corp.

"If we want to fill our goal on a global scale it is important that regions like California, like Germany or other regions unify together in a movement to 100 renewable," said Harry Lehmann, Director of the German Federal Environment Agency at the conference. "We have to share our experience."

Today, the Renewables 100 Policy Institute is actively supporting the trend and reports on global progress via the Go 100 percent Renewable Energy project it created. An interactive map on the site tracks those pursuing and achieving the all-renewables goal. (The site is the source of the numbers above on how many jurisdictions the movement encompasses.)

full article here (plus some choice comments from yours truly ;D and Leon Lemoine)


http://www.renewableenergyworld.com/rea/news/article/2013/09/going-all-in-with-renewable-energy

8699
Thank you, Surly. We are going to win this, my friends.

8700
Renewables / Re: Microgrids
« on: October 11, 2013, 06:25:14 pm »
Surly,
Absolutely! Post anything you want that you find here with or without attribution.

The more people know about the bad stuff that has happened and the good stuff that can and is happening, the better.
