Docphish

This is Docphish's re-post blog...
kickstarter:

Project of the Day—Video game music and sound has been such an enduring part of the industry that it eventually began influencing music that had nothing to do with the games themselves. Beep: A Documentary History of Video Game Sound is a film about that music and the people who made it.

fastcompany:

Lee returns to the old neighborhood, with cast members, new friends, and Beats.

Do The Right Thing is 25 years old now. This past week’s events in Ferguson, Missouri, have made clear that the film’s depiction of racial tensions in America is still relevant (and Public Enemy’s “Fight The Power” still sounds as good as ever)—but Spike Lee’s depiction of the Bedford-Stuyvesant neighborhood in Brooklyn doesn’t necessarily resemble the Brooklyn and Bed-Stuy of 2014.

Read More>

smarterplanet:

A self-organizing thousand-robot swarm | KurzweilAI

The first thousand-robot flash mob has assembled at Harvard University.

“Form a sea star shape,” directs a computer scientist, sending the command to 1,024 little bots simultaneously via an infrared light. The robots begin to blink at one another and then gradually arrange themselves into a five-pointed star. “Now form the letter K.”

The ‘K’ stands for Kilobots, the name given to these extremely simple robots, each just a few centimeters across, standing on three pin-like legs. Instead of one highly complex robot, a “kilo” of robots collaborate, providing a simple platform for the enactment of complex behaviors.
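
The gradual “arrange themselves” step can be pictured with a toy simulation. To be clear, this is not the actual Kilobot algorithm (the Harvard swarm uses edge-following and hop-count gradients, since the real bots have no global positioning); it is a deliberately simplified sketch in which each bot is assigned a slot in the target shape and inches toward it, one small step per tick:

```python
import math
import random

def form_shape(n_bots, targets, steps=200, speed=0.05):
    """Move n_bots point-robots toward assigned target positions one
    small step at a time, mimicking the gradual self-assembly described
    above. Simplified stand-in only: real Kilobots localize relative to
    neighbors instead of knowing their coordinates."""
    bots = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n_bots)]
    for _ in range(steps):
        moved = []
        for (x, y), (tx, ty) in zip(bots, targets):
            dx, dy = tx - x, ty - y
            d = math.hypot(dx, dy)
            if d > speed:  # take one small step toward the assigned slot
                x, y = x + speed * dx / d, y + speed * dy / d
            else:          # close enough: snap into place
                x, y = tx, ty
            moved.append((x, y))
        bots = moved
    return bots

# Five slots traced along a five-pointed star (pentagram vertex order).
star = [(math.cos(4 * math.pi * k / 5), math.sin(4 * math.pi * k / 5))
        for k in range(5)]
```

Running `form_shape(5, star)` returns the bots settled on the star’s vertices; scaling the same loop to 1,024 slots is what makes the flash-mob effect, with all the hard engineering hidden in how real bots sense their neighbors.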

(via emergentfutures)

vicemag:

We Need to Stop Killer Robots from Taking Over the World
Nick Bostrom’s job is to dream up increasingly lurid scenarios that could wipe out the human race: asteroid strikes; high-energy physics experiments that go wrong; global plagues of genetically modified superbugs; the emergence of all-powerful computers with scant regard for human life—that sort of thing.

In the hierarchy of risk categories, Bostrom’s specialty stands above mere catastrophic risks like climate change, financial market collapse and conventional warfare.

As the Director of the Future of Humanity Institute at the University of Oxford, Bostrom is part of a small but growing network of snappily named academic institutions tackling these “existential risks”: the Centre for the Study of Existential Risk at the University of Cambridge, the Future of Life Institute at MIT and the Machine Intelligence Research Institute at Berkeley. Their tools are philosophy, physics and lots and lots of hard math.

Five years ago he started writing a book aimed at the layman on a selection of existential risks, but quickly realized that the chapter dealing with the dangers of artificial intelligence development was getting fatter and fatter and deserved a book of its own. The result is Superintelligence: Paths, Dangers, Strategies. It makes compelling—if scary—reading.

The basic thesis is that developments in artificial intelligence will gather pace, so that within this century it is conceivable that we will be able to artificially replicate human-level machine intelligence (HLMI).

Once HLMI is reached, things move pretty quickly: intelligent machines will be able to design even more intelligent machines, leading to what mathematician I.J. Good called, back in 1965, an “intelligence explosion” that will leave human capabilities far behind. We get to relax, safe in the knowledge that the really hard work is being done by supercomputers we have brought into being.
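
Good’s runaway dynamic can be caricatured in a few lines. The toy model below is entirely illustrative (the numbers, the `gain` parameter and the update rule are invented for this sketch and appear nowhere in Bostrom’s book): each machine generation designs the next, and the size of the improvement grows with the designer’s own intelligence, so progress accelerates instead of staying linear.

```python
def explosion(i0=1.0, human=1.0, gens=10, gain=0.5):
    """Toy model of an 'intelligence explosion': each generation of
    machines designs its successor, and the improvement it can make is
    proportional to its own intelligence relative to a human baseline.
    Purely illustrative numbers, nothing here is calibrated."""
    levels = [i0]
    for _ in range(gens):
        i = levels[-1]
        levels.append(i * (1 + gain * i / human))  # smarter designers improve more
    return levels
```

The successive increments keep growing (0.5, then 1.125, then more), which is the whole point of the caricature: once designers improve their own successors, the curve bends upward rather than flattening out.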
Continue
