Archive for the ‘Future’ Category

Using Patterns in Space Dust to Detect Earth-Like Extrasolar Planets

October 11, 2008

NASA Dust Rings Exoplanets image

Interplanetary Space Dust Fingerprints
NASA’s Goddard Space Flight Center has been running supercomputer simulations of how extrasolar planets shape the dust that surrounds their host stars. The results suggest that patterns in that dust could reveal planets too small for even advanced telescopes to spot directly. They mention the possibility of detecting planets as small as Mars (which has about 15% of Earth’s volume and 11% of its mass)!
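
Those percentages are easy to sanity-check from standard reference figures for the two planets; a quick sketch (the radii and masses below are textbook values, not numbers from the NASA release):

    # Rough check of the "Mars has ~15% of Earth's volume and ~11% of its mass" figures.
    # Radii and masses are standard reference values, not taken from the article.
    r_mars, r_earth = 3_390.0, 6_371.0   # mean radii in km
    m_mars, m_earth = 6.42e23, 5.97e24   # masses in kg

    volume_ratio = (r_mars / r_earth) ** 3   # volume scales with the cube of the radius
    mass_ratio = m_mars / m_earth

    print(f"Volume: {volume_ratio:.0%} of Earth's")   # ~15%
    print(f"Mass:   {mass_ratio:.0%} of Earth's")     # ~11%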

Working with Marc Kuchner at NASA’s Goddard Space Flight Center in Greenbelt, Md., Stark modeled how 25,000 dust particles responded to the presence of a single planet — ranging from the mass of Mars to five times Earth’s — orbiting a sunlike star. Using NASA’s Thunderhead supercomputer at Goddard, the scientists ran 120 different simulations that varied the size of the dust particles and the planet’s mass and orbital distance.

“Our models use ten times as many particles as previous simulations. This allows us to study the contrast and shapes of ring structures,” Kuchner adds. From this data, the researchers mapped the density, brightness, and heat signature resulting from each set of parameters.

NASA Dust Rings Exoplanets image

You can actually check out the 120 simulation models for yourself in the Exozodi Simulation Catalog.

What This Means for the Future
It’s less than certain that if humans colonize space, they’ll do it in their current biological form, so we won’t necessarily need Earth-like planets that can be terraformed. But it’s still a good idea to look, if only to have a more accurate map of the universe.

Last April I wrote about the discovery of the First Potentially Habitable Planet Outside the Solar System, but it had a radius 50% bigger than Earth’s and gravity about twice as strong. Now the race is on to discover smaller and more Earth-like exoplanets.

Source: NASA



Seminar on Global Catastrophic Risks

October 8, 2008

November 14, 2008
Computer History Museum, Mountain View, CA

Organized by: Institute for Ethics and Emerging Technologies, the Center for Responsible Nanotechnology and the Lifeboat Foundation

A day-long seminar on threats to the future of humanity, natural and man-made, and the proactive steps we can take to reduce these risks and build a more resilient civilization. Seminar participants are strongly encouraged to pre-order and review the Global Catastrophic Risks volume, edited by Nick Bostrom and Milan Cirkovic, with contributions from some of the faculty for this seminar.

This seminar will precede the futurist mega-gathering Convergence 08, November 15-16 at the same venue, which is co-sponsored by the IEET, Humanity Plus (World Transhumanist Association), the Singularity Institute for Artificial Intelligence, the Immortality Institute, the Foresight Institute, the Long Now Foundation, the Methuselah Foundation, the Millennium Project, Reason Foundation and the Acceleration Studies Foundation.

SEMINAR FACULTY

  • Nick Bostrom Ph.D., Director, Future of Humanity Institute, Oxford University
  • Jamais Cascio, research affiliate, Institute for the Future
  • James J. Hughes Ph.D., Exec. Director, Institute for Ethics and Emerging Technologies
  • Mike Treder, Executive Director, Center for Responsible Nanotechnology
  • Eliezer Yudkowsky, Research Associate, Singularity Institute for Artificial Intelligence
  • William Potter Ph.D., Director, James Martin Center for Nonproliferation Studies

Register here

Elon Musk on SpaceX’s Goal

October 2, 2008

Elon Musk founder of SpaceX photo

Somehow we have to solve these problems and reduce the cost of human spaceflight by a factor of 100. That’s why I started SpaceX. By no means did I think victory was certain. On the contrary, I thought the chances of success were tiny, but that the goal was important enough to try anyway.

We’re making progress. If we succeed in recovery and reflight of our Falcon 9 rocket, which carries 11 tons of payload into orbit, it will be the first fully reusable orbital rocket and one of the most significant developments since the dawn of rocketry. At $35 million to manufacture, it’s already four times cheaper than comparable single-use vehicles from Boeing or Lockheed. However, since Falcon 9 costs only $200,000 to refuel (and reoxidize), an efficient refurbishment and launch operation would allow the production costs to be amortized over many flights. This has the potential to bring the per-launch price down to about $1 million, a hundredfold improvement over current costs. And if that happens, life will become sustainably multiplanetary in less than a century.
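
The hundredfold figure follows from simple amortization of the production cost over many flights. Here is a quick sketch using only the numbers quoted above (the flight counts are free variables chosen for illustration, and refurbishment costs are ignored):

    # Amortizing Falcon 9 production cost over repeated flights, using the figures
    # quoted above. Flight counts are illustrative, and refurbishment is ignored.
    production_cost = 35_000_000   # $ to manufacture one Falcon 9
    propellant_cost = 200_000      # $ to refuel and reoxidize per flight

    for flights in (1, 10, 50, 100):
        per_launch = production_cost / flights + propellant_cost
        print(f"{flights:>3} flights -> ~${per_launch:,.0f} per launch")

    # At 50-100 flights the per-launch price approaches the ~$1 million Musk
    # mentions, roughly a hundredfold improvement on the $35M single-use cost.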

Update: There’s another interesting piece about Elon Musk at the Mercury News.

Source: Esquire

See also: SpaceX Falcon 1 Rocket Reaches Orbit on 4th Try

SpaceX Falcon 1 Rocket Reaches Orbit on 4th Try

September 30, 2008

SpaceX Falcon 1 Rocket Launch photo

Stars My Destination
After the third failed attempt, Elon Musk, the founder of SpaceX, co-founder of PayPal, chairman of SolarCity and chairman of Tesla Motors (beat that résumé!), was interviewed by WIRED about the difficulties of making his rockets reach orbit:

Wired.com: How do you maintain your optimism?

Musk: Do I sound optimistic?

Wired.com: Yeah, you always do.

Musk: Optimism, pessimism, fuck that; we’re going to make it happen. As God is my bloody witness, I’m hell-bent on making it work.

Falcon 1: The First Privately Developed Rocket to Orbit the Earth
Well kids, perseverance pays off. On the 4th try, the 70-foot Falcon 1 rocket reached orbit with a 364-pound dummy payload: “The data shows we achieved a super precise orbit insertion — middle of the bullseye — and then went on to coast and restart the second stage, which was icing on the cake.” Check out the video of the highlights of the launch.

“This really means a lot,” Musk told a crowd of whooping employees. “There’s only a handful of countries on Earth that have done this. It’s usually a country thing, not a company thing. We did it.”

Musk pledged to continue getting rockets into orbit, saying the company has resolved design issues that plagued previous attempts.

Last month, SpaceX lost three government satellites, along with human ashes including the remains of astronaut Gordon Cooper and “Star Trek” actor James Doohan, when its third rocket failed en route to space. The company blamed the failure on a timing error that caused the rocket’s first stage to bump into the second stage after separation.

SpaceX’s maiden launch in 2006 failed because of a fuel line leak. Last year, another rocket reached about 180 miles above Earth, but its second stage prematurely shut off.

The Falcon 1, at $7.9 million each, is what you could call the budget model. In fact, $7.9 million is basically pocket change compared to what government agencies like NASA are used to paying contractors like Lockheed Martin & co.

SpaceX is also working on the Falcon 9 (12,500 kg to low Earth orbit, and over 4,640 kg to geosynchronous transfer orbit) and Falcon 9 Heavy (28,000 kg to low Earth orbit, and over 12,000 kg to geosynchronous transfer orbit) to help NASA reach the International Space Station, among other things. These should cost between $36.75 million and $104 million each depending on the model and mission, and the first launch is scheduled for the first quarter of 2009.
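
Those payload and price figures translate into a striking cost per kilogram to low Earth orbit. A rough sketch, assuming the low end of the quoted price range corresponds to the Falcon 9 and the high end to the Falcon 9 Heavy (the announcement itself doesn’t pair prices with models):

    # Rough cost per kilogram to low Earth orbit (LEO), from the figures above.
    # Assumes $36.75M buys a Falcon 9 launch and $104M a Falcon 9 Heavy launch;
    # actual pricing depends on the model and mission.
    vehicles = {
        "Falcon 9":       (36_750_000, 12_500),    # (price in $, LEO payload in kg)
        "Falcon 9 Heavy": (104_000_000, 28_000),
    }

    for name, (price, payload_kg) in vehicles.items():
        print(f"{name}: ~${price / payload_kg:,.0f} per kg to LEO")
    # Falcon 9:       ~$2,940 per kg
    # Falcon 9 Heavy: ~$3,714 per kg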

(more…)

Nanotube-Based Chemical Sensors to Defend Against Chemical Attacks

September 16, 2008

Chemical Sensor based on Nanotubes photo

I’ve written a fair bit about detection mechanisms (see links at the end of this post) because, as the old saying goes, an ounce of prevention is worth a pound of cure. Making our society more robust is the best way to reliably improve outcomes.

Nanotube-Based Chemical Sensors
Nanotubes strike again (what can’t we do with them?):

What is needed is a cheap way of detecting such gases and, having raised the alarm, of identifying which gas is involved so that anyone who has inhaled it can be treated. And that is what a team of chemical engineers at the Massachusetts Institute of Technology, led by Michael Strano, think they have created. Not only can their new sensor distinguish between chemical agents, it can detect them at previously unattainable concentrations—as low as 25 parts in a trillion.

The core of Dr Strano’s invention, which he recently described in the journal Angewandte Chemie, is an array of treated carbon nanotubes [and a micro gas chromatograph].

Gases are identified by the way they change the electrical signature of the nanotubes. Because of the way the tubes are treated, gas molecules don’t ‘stick’ to them for very long (less than a minute), so the sensor has a long useful life.

Part of a Technological Immune System
At first, these sensors will probably be used in relatively enclosed public places, where gas attacks are more probable, and to track the movements of pollutants. But as the cost of these sensors goes down, they could be integrated into a large distributed “technological immune system” (I previously wrote about putting radiation sensors in cellphones).

(more…)

One Step Closer to Space-Based Solar Power

September 15, 2008

Wireless Power Transmission Over a Long Distance
If space-based solar power is ever to be feasible, wireless transmission of power needs to be achieved. We are one step closer to that goal because of a successful experiment that recently took place.

The setup consisted of a solid-state phased-array transmitter located on the U.S. island of Maui (on Haleakala) and receivers located on the island of Hawai’i (on Mauna Loa) and in the air.

The demonstration, achieved by Managed Energy Technologies LLC of the U.S. and sponsored by Discovery Communications, Inc., involved the transmission of RF energy over a distance of up to 148 kilometers (about 90 miles): almost 100 times further than a major 1970s power transmission performed by NASA in the Mojave Desert in California.

The 2008 project (which lasted only 5 months and cost less than $1M) proved that real progress toward Space Solar Power can be made quickly, affordably and internationally, with key participants from the U.S. and Japan.

A number of key technologies were integrated and tested together for the first time in this project, including solar power modules, solid-state FET amplifiers, and a novel “retrodirective” phase control system. In addition, the project developed the first-ever “field-deployable” system, generating new information regarding the prospective economics of space solar power / wireless power transmission systems.

There is little doubt in my mind that with sufficiently advanced technology, space-based solar power is the most elegant way to produce lots of always-on clean power relatively cheaply. We might need more advanced nanotechnology before getting there (to build a space elevator and bring down the cost per kilogram of getting materials into orbit), but it’s nothing that can’t be done in theory. Who knows how long it will take to happen, though. Large-scale Enhanced Geothermal Systems could happen first, but the oil industry will probably get first dibs on drilling equipment and engineers for a good while longer, and it’s not as sexy to politicians as a new “Moonshot”, so funding could be a problem.


Graphics Processing Units (GPUs): The Future of Scientific Distributed Computing

August 17, 2008

AMD/ATI Radeon GPU Card photo

General-Purpose Computing on Graphics Processing Units
It is pretty clear to me that when it comes to scientific distributed computing, GPUs are the future. Folding@home has already proven that, both by pulling far ahead of other projects in raw processing power (it has been certified by Guinness World Records as “the most powerful distributed computing cluster in the world”) and by where that processing power comes from:

Out of a current total of about 2.7 petaFLOPS, 1.3 are coming from nVidia and ATI/AMD GPUs, and 1.1 from PlayStation 3 gaming consoles. That leaves only about 0.3 petaFLOPS for regular CPUs, even though they outnumber GPUs and consoles by about 5 to 1. To put things in perspective, 0.3 petaFLOPS is still about 4-5 times bigger than Rosetta@home, my favorite project, which is generally considered to be in the top 5 projects by processing power, but which doesn’t use GPUs.
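
A quick back-of-the-envelope calculation shows how large the gap is per client; the only inputs are the figures in the paragraph above, including the rough 5-to-1 count ratio:

    # Per-client throughput gap, using only the figures above: 2.4 of the 2.7
    # petaFLOPS come from GPUs and PS3s, 0.3 from CPUs, and CPU clients
    # outnumber GPU/PS3 clients roughly 5 to 1.
    gpu_ps3_pflops = 1.3 + 1.1
    cpu_pflops = 0.3
    cpu_per_gpu_client = 5        # ~5 CPU clients for every GPU/PS3 client

    # For every 1 GPU/PS3 client there are ~5 CPU clients, so per client:
    advantage = gpu_ps3_pflops / (cpu_pflops / cpu_per_gpu_client)
    print(f"An average GPU/PS3 client delivers roughly {advantage:.0f}x "
          "the throughput of an average CPU client")   # ~40x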

So clearly, if the data you need to crunch can be crunched on a GPU, that’s the way to go. So far it hasn’t always been easy to do, but that should get better as GPGPU frameworks and standards appear (like Apple’s OpenCL, for example), and as the hardware itself becomes more flexible and powerful (gaining, for instance, support for the “double precision” floating-point calculations that most scientific applications require).

What the Future of GPUs Might Look Like
The best-case scenario would be an open framework that makes it easy to write scientific apps for GPUs and to port and parallelize already-existing code. Ideally, it would operate at a level that makes it easy to support GPUs from all manufacturers (AMD/ATI, nVidia, Intel, etc) without writing hardware-specific code, which is what the Folding@home GPU apps rely on right now, as far as I know.
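
To give a flavour of what such vendor-neutral code could look like, here is a minimal sketch of a data-parallel kernel written in the OpenCL style and driven from Python. It is purely illustrative: the pyopencl bindings are just one possible front end (an assumption on my part, not something from the article), and the kernel is trivial vector addition rather than real scientific code.

    import numpy as np
    import pyopencl as cl   # one possible set of Python bindings, for illustration only

    # Two input vectors to add element-wise, one work-item per element.
    a = np.random.rand(1_000_000).astype(np.float32)
    b = np.random.rand(1_000_000).astype(np.float32)

    ctx = cl.create_some_context()   # picks whatever OpenCL device is available
    queue = cl.CommandQueue(ctx)

    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # The same kernel source can run on AMD/ATI, nVidia or Intel hardware;
    # the vendor's driver compiles it for the device at run time.
    program = cl.Program(ctx, """
        __kernel void vadd(__global const float *a,
                           __global const float *b,
                           __global float *out) {
            int i = get_global_id(0);
            out[i] = a[i] + b[i];
        }
    """).build()

    program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

    result = np.empty_like(a)
    cl.enqueue_copy(queue, result, out_buf)   # copy the result back to the host
    assert np.allclose(result, a + b)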

(more…)

Built-to-Order Artificial Bones

July 12, 2008

Artificial Jaw Bone photo

Over the past few decades, we’ve all heard about the progress made in organ transplant science and in the therapeutic cloning field, but advances in artificial bones have rarely made waves.

Bones are very light but nonetheless able to withstand extremely heavy loads. The inside of a bone is like a sponge. It is particularly firm and compact in certain places, and very porous in others. […]

Researchers at the Fraunhofer Institute for Manufacturing Engineering and Applied Materials Research developed a simulation program that calculates the internal structure and density distribution of the bone material. […]

Engineers can produce complex components with the aid of rapid prototyping technology. This involves coating a surface with wafer-thin layers of special metal powder [made of biomaterials such as titanium and steel alloys]. A laser beam heats – or sinters – the powdered metal in the exact places that need to be firm. […]

“The end product is an open-pored element,” explains [Andreas] Burblies. “Each point possesses exactly the right density and thus also a certain stability.” The method allows the engineers to produce particularly lightweight components – customized for each application – that are also extremely robust.

Of course, the ultimate goal is to replace bones as rarely as possible, and in the long run that can probably be accomplished with complete rejuvenation therapies (how many people in their 30s need to have bones replaced?). But until we get there, and for special cases, the best alternative is to swap out those worn-out knees for the best reproductions possible.

And while we’re at it, why not replace bones with superior artificial ones? Lighter, stronger, etc. There’s no reason to limit ourselves to what evolution gave us.

Sources: Fraunhofer-Gesellschaft, Science Daily.

Virtual Reality Could Explain the Fermi Paradox

May 9, 2008

Galaxies from deep space photo

A recent article in Technology Review by Nick Bostrom generated a lot of discussion about the Fermi paradox, which states:

The size and age of the universe suggest that many technologically advanced extraterrestrial civilizations ought to exist. However, this hypothesis seems inconsistent with the lack of observational evidence to support it.

I’ll add my 2 cents to this discussion by saying that there’s a possibility that any civilization that becomes advanced enough discovers that physical reality can’t hold a candle to virtual reality and makes the transition (alien transubstantiation, to coin a phrase). This could explain why they haven’t colonized the galaxy, or why we aren’t bathed in their radio communications.

Virtual worlds can be, in theory, both much more pleasant to inhabit, with unlimited freedom and none of the downsides of an existence based on crude physical processes, and also much more energy-efficient. Even without cold computing, it would take a lot less energy for an advanced civilization to do all that it wants to do within a simulation than by moving atoms around.

As I mentioned before, they could also think much faster, subjectively pushing back the heat death of the universe (while at the same time making communication with ‘slow’ beings almost impossible).

I haven’t read all the serious papers on SETI and the Fermi paradox yet, but I’m pretty sure this is not an original theory. It’s just something that I haven’t seen mentioned yet and that I think deserves thinking about.

Update: Just to make things clearer, the kind of virtual reality I’m envisioning here is not one where you connect a biological body to a machine that sends it sensory information (like in the Matrix, for example). What I’m thinking of could probably be called ‘mind uploading’. There is no physical body, because one is not required. Everything would be inside the virtual world, kind of like how an artificial intelligence would not require a physical presence other than its computing substrate.


On the Nature of Time: Implications for Advanced Intelligence and SETI

April 13, 2008

I was reading an article in The Economist about lasers that can pulse extremely rapidly. We’re talking really fast, in the femtosecond range (one femtosecond is a billionth of a millionth of a second).

This got me thinking about the nature of time: Is there a theoretical limit to how fast something can happen? I’m not aware of any, but physics probably gives an answer one way or the other.

Still, even if there is a limit somewhere, the range remains gigantic: from femtoseconds to however long it takes for universes to die.

What if what we consider to be “real time” – how fast we move, talk, think – happens to be a glacial pace compared to other lifeforms? I’m not sure if biological intelligent life could have a subjective impression of time on such a scale because of limits to the speed of chemical reactions and the minimum complexity required for intelligence, but if an advanced civilization had made the transition to a non-biological substrate (such as super-computers), it would be conceivable that for them seconds could subjectively be the equivalent of millennia (or more) to us.
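
For a sense of scale, the speed-up factor implied by experiencing a millennium’s worth of subjective time in one objective second is easy to compute:

    # Speed-up factor implied by "one subjective millennium per objective second".
    seconds_per_millennium = 1000 * 365.25 * 24 * 3600
    print(f"~{seconds_per_millennium:.1e}x faster subjective time")   # ~3.2e+10x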

That would make communication unlikely. It would be a bit like trying to have a conversation with a rock. Even if you knew it was intelligent, you’d probably be bored out of your mind and would either ignore it or wait for it to speed up. And even if that’s too anthropocentric a way to look at the situation, there’s still the problem of saying something coherent, mentioned below.

There’s always the possibility that such a fast intelligence would remember how slow it once was, in its original biochemical form, and plan for future contact with lesser intelligences. Keep listening on the ‘slow lane’, in other words. But even if it did that, could it really communicate with us coherently if, between each syllable, it had time to evolve and change a lot (more than Homo sapiens has had time to evolve so far)? Even if it creates the message in its ‘real time’ and then slows it down to send it, will the entity that created the message have much in common with the subjectively much older entity that exists by the time the message has been completely sent?


Future Gaming: Total Freedom Off the Beaten Path

February 21, 2008

Virtual World

I haven’t played computer games in a long time, not since the days of Quake 2, but I’ve kept an eye on developments in the field, and I think I can make an educated guess at where the state of the art will be relatively soon.

So imagine you are playing a first person action-adventure game that takes place around New York City. Your friend gets killed in building A, you investigate, find clues that lead you to building B to meet certain people, etc. There’s a fairly linear storyline that you can follow and it will lead you to a conclusion that wraps things up.

But let’s say you don’t feel like following the main plot. Nothing special about that; lots of games give you the freedom to wander around and explore. What I’m talking about is taking this to the next level, not only in scope but in detail and interactivity.

So you are in virtual NYC. What would you be able to do? What about taking a cab to New Jersey, going to Newark to buy a plane ticket, flying to Paris to have a drink under the Eiffel Tower, then buying another ticket, flying to Saudi Arabia and visiting Mecca? Or going to Tokyo to find a karaoke bar…

Why not? With a game engine sophisticated enough, all of these things could be generated almost automatically as long as you have lots of raw data. During game development, you would feed it tons of detailed maps, satellite photos, encyclopedia information, government statistics, architectural blueprints and demographic information. For the details, you could probably feed it a few terabytes of public domain geo-tagged photos and videos (from sites like Flickr and YouTube, or whatever we have a few years from now) that would help with the appearance of buildings in various places, how people dress in different parts of the world, local plants and animals, parks and waterways, hills, mountains, etc. The blind spots could be extrapolated or filled in by the programmers.
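
Purely as an illustration of the kind of plumbing this would involve, here is a tiny hypothetical sketch of how geo-tagged source material might be catalogued before a procedural engine consumes it. Every name in it is invented for the example; nothing here comes from an actual game engine.

    from dataclasses import dataclass, field

    @dataclass
    class GeoAsset:
        """One piece of raw source data tied to a real-world location."""
        lat: float
        lon: float
        kind: str                  # e.g. "photo", "satellite", "blueprint", "census"
        source: str                # e.g. "flickr", "public-records"
        tags: list = field(default_factory=list)

    class WorldCatalog:
        """Toy index a procedural generator could query by area and kind."""
        def __init__(self):
            self.assets = []

        def add(self, asset):
            self.assets.append(asset)

        def query(self, lat, lon, radius_deg, kind=None):
            # Naive bounding-box lookup; a real engine would use a spatial index.
            return [a for a in self.assets
                    if abs(a.lat - lat) <= radius_deg
                    and abs(a.lon - lon) <= radius_deg
                    and (kind is None or a.kind == kind)]

    # Usage: everything near the Eiffel Tower that could inform building facades.
    catalog = WorldCatalog()
    catalog.add(GeoAsset(48.858, 2.294, "photo", "flickr", ["facade", "street"]))
    nearby = catalog.query(48.858, 2.294, radius_deg=0.01, kind="photo")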

(more…)