There ain't no rules around here! We're trying to accomplish something!

Archive for the ‘Technology’ Category

Something lovely: oscillography

In Delights, Math, Technology on March 3, 2009 at 4:06 am


Eric Archer, electronic media experimenter, has rigged up a vector art synthesizer with an oscilloscope, a digital pattern generator, and a set of identical cards called Quadrature Wavetable Oscillators, which convert digital information into analog voltages. The outputs are summed on a two-channel mix bus, with the two channels representing the X and Y coordinates in Cartesian space. The oscillations can make beautiful fine-line patterns reminiscent of the engravings on paper currency around the world.
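In oscilloscope terms, the trick is classic X/Y mode: one summed voltage drives the horizontal deflection, another the vertical. Here's a toy sketch of the idea (mine, not Archer's hardware; the frequency ratios and amplitudes are made up for illustration):

```python
import math

def quadrature_scope_trace(oscillators, n_samples=1000):
    """Sum a bank of quadrature oscillators onto a two-channel X/Y bus.

    Each oscillator is (frequency_ratio, amplitude, phase). Its cosine
    output feeds the X channel and its sine output feeds the Y channel,
    mimicking the mix bus that drives the oscilloscope deflection plates.
    """
    points = []
    for i in range(n_samples):
        t = 2 * math.pi * i / n_samples
        x = sum(a * math.cos(f * t + p) for f, a, p in oscillators)
        y = sum(a * math.sin(f * t + p) for f, a, p in oscillators)
        points.append((x, y))
    return points

# A 5:3 frequency ratio plus a faint high harmonic gives a
# rosette-like figure in the spirit of currency engraving.
trace = quadrature_scope_trace([(5, 1.0, 0.0), (3, 0.5, 0.0), (13, 0.1, 0.0)])
```

Feed the resulting point list to any plotting tool and you get the kind of fine-line figure the oscilloscope draws in real time.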

Specialized lathes have been in use for hundreds of years to make complex patterns that are unreproducible without directly copying them (i.e., by photographic or digital means). This is the historical art of guilloche (ghee-o-shay’), or Engine Turning. Remember the old 1970s toy called Spirograph? It operates on a similar principle, producing mathematical curves called epitrochoids by revolving circular gears around each other while a stylus traces their motion. Other combinations of motion can be used, such as mounting the stylus to a rotating disc as it traverses a straight line. Watchmakers and jewelers have long used these techniques for ornamentation on their work. The famous Fabergé eggs bear designs engraved by a similar technique.
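For the curious, the epitrochoid has a simple parametric form; here is a sketch using the standard textbook formula (not tied to any particular Spirograph gear set):

```python
import math

def epitrochoid(R, r, d, n_points=2000):
    """Trace an epitrochoid: the path of a pen at distance d from the
    center of a circle of radius r rolling around the OUTSIDE of a
    fixed circle of radius R (one full turn of the rolling angle;
    the curve only closes after several turns unless R/r is an integer).
    """
    points = []
    for i in range(n_points):
        theta = 2 * math.pi * i / n_points
        x = (R + r) * math.cos(theta) - d * math.cos((R + r) / r * theta)
        y = (R + r) * math.sin(theta) - d * math.sin((R + r) / r * theta)
        points.append((x, y))
    return points

pts = epitrochoid(5, 3, 2)
```

Varying R, r, and d is the software analogue of swapping Spirograph gears.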

Inside these sophisticated engraving machines, there are numerous settings to be made among the gears that revolve to cut the pattern. One doesn’t need many meshed revolving gears before it becomes possible to produce endless patterns that are practically impossible to replicate. Hence this technique was adopted very early by national governments to mint their paper currency, postage stamps, and other monetary certificates. The U.S. Treasury is rumored to maintain such a machine, known as a Geometric Lathe, containing ten interlocked pattern-generating discs. The settings of the discs would only be known to a select few, as this information must be guarded from the hands of counterfeiters… at least prior to the digital age we are in now.

More here.
Flickr stream here.
Also: Archer has a gadget that lets you listen to the modulations in visible light. The sun apparently sounds incredible — like “pink static.” Listen for yourself here.


More efficient web storage for the developing world

In Technology on February 26, 2009 at 3:59 pm

While hard drive space has become cheap, RAM has remained expensive — meaning that internet access is still out of reach for most of the world. Vivek Pai, a CS professor here at Princeton, has developed an invention that may change that. HashCache is a highly efficient method of caching, or saving frequently accessed Web content on a hard drive instead of using precious bandwidth to look it up every time. The RAM-hogging step in most caching methods is the index, a table that assigns each image or block of text on a website a number, which in turn is associated with a location on the hard disk. Pai’s breakthrough was eliminating the index: the number, computed directly from the content’s Web address, is itself the disk address. This significantly increases the efficiency of internet data transfer.
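To make the idea concrete, here is a toy sketch of index-free caching (my illustration, not Pai's actual HashCache code; the real system works with fixed-size on-disk blocks and several policy refinements):

```python
import hashlib

class HashCacheSketch:
    """Toy illustration of index-free caching.

    Instead of keeping an in-RAM index mapping URLs to disk locations,
    the disk block number is computed directly from a hash of the URL.
    Here a Python list stands in for the disk; a hash collision simply
    evicts the previous occupant, which is acceptable for a cache.
    """

    def __init__(self, n_blocks=1024):
        self.disk = [None] * n_blocks  # stand-in for on-disk blocks

    def _block(self, url):
        # The hash of the URL *is* the disk address: no index lookup.
        h = hashlib.sha1(url.encode()).digest()
        return int.from_bytes(h[:8], "big") % len(self.disk)

    def put(self, url, content):
        self.disk[self._block(url)] = (url, content)

    def get(self, url):
        entry = self.disk[self._block(url)]
        if entry is not None and entry[0] == url:  # guard against collisions
            return entry[1]
        return None  # cache miss: fetch over the network instead
```

The point is what is *absent*: there is no dictionary of URLs in memory, only the hash computation, which is why the RAM footprint collapses.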

The development team has been working with Intel, which is “very keen on making technology affordable in developing regions,” according to CS department chair Larry Peterson. And the caching system is already being tested at the Kokrobitey Institute in Ghana and Obafemi Awolowo University in Nigeria.

The interface between technology, altruism, and capitalism is an interesting one. Should inventors like Pai make an effort to make their products accessible at low cost to poor countries? Is that idealism, “creative capitalism,” or good business sense? Is it possible that inventors do more good for the world if instead they try to make as much money as they can?

Some very bright people discuss the question here. My take is roughly John Quiggin’s, which observes that technological creativity is not always — not at first, anyway — linked to profit.

Yet neither the Internet nor the Web was a product of the market economy, and even now the relationship between market incentives and the social contribution made by Internet-related activities is tenuous at best.

Both the Internet and the Web developed as non-commercial activities, outstripping or absorbing a variety of commercial competitors (Genie, Delphi, AOL and so on) before being opened up to commercial use in the mid-1990s. And even since large-scale commercial involvement began, most of the exciting innovation continues to come from noncommercial users (blogs and wikis, for example) or from non-commercial content producers (YouTube, Flickr and so on). By contrast, heavily funded commercial innovations such as push technology and portals have failed or declined into insignificance.

Of course, corporations still have a large role to play in the economy of the Internet. A company like Google, for example, provides services that cannot easily be replicated by users acting either individually or collectively. But Google depends crucially and directly on the content created by users and more generally on the goodwill of the Internet community.

If these assets were lost, Google would be vulnerable to displacement; Microsoft’s loss of its seemingly unassailable dominance of both personal computing and the Internet software market is an illustration. Google’s slogan ‘don’t be evil’ and its sensitivity to criticism, for example over its compliance with Chinese censorship laws, illustrates the point. Equally, so do the many products Google creates and gives away, with no obvious path to future profit.

Paradoxically, the most profitable ideas were often not invented to be profitable. The Twitter guys invented a new form of human communication, and they’re not worried that they still don’t make a dime off it. Somehow, they’re sure, they will — and we believe them. People come up with ideas, especially scientific and technological ideas, more out of curiosity than either altruism or greed. Humanitarian and financial benefits usually follow a project that some guy just thought was intellectually cool. Maybe making it possible for a few billion more people to go online will, in the long run, make greater profits than commercial licensing in developed countries ever will. The techie’s ethic of compulsive sharing — reminiscent of Joseph Priestley’s fevered scientific correspondence — doesn’t look very businesslike, but it makes useful ideas happen faster.

Don’t be evil?

In Delights, Technology on February 26, 2009 at 3:07 am

Google Street View, which tries to present car-eye-views of the landscape in addition to Google Maps’s satellite images, has managed to turn up some interesting artefacts, including a creepy guy with a sniper rifle and a marriage proposal.  But as it turns out, one of the Street View cars, which carry the cameras used to make these composites, hit a deer in upstate New York, and the accident was captured on camera.  Google has since published an official response to the incident (apparently the deer was able to leave the scene on its own), and removed the incriminating photos.

Bambi survived

New Plumbing for the Internet

In Policy, Technology on February 16, 2009 at 5:49 am

For those of you who didn’t catch it, the NYT had an article yesterday titled, quite simply, “Do We Need a New Internet?”. It’s worth a few minutes of your time on the Old Internet to take a look at it.

The article’s premise is that the Internet was never created for the requirements we’re now asking of it. It was meant to be a network for the military and professors in their ivory towers. From the hardware and TCP/IP upward, the idea was to let a close-knit community more easily share papers and mail.

But then a funny thing happened: the rest of the world moved in. All of a sudden, you’re not sure who your next door neighbor is. Is he the innocent server he says he is, or does he have more malicious designs? Ideally, when you can’t see the dude or dudette you’re communicating with, you’d like your network to help you out, by either limiting who can join or limiting the anonymity of the participants. Unfortunately, because academics made it, nodes on the Internet trust other nodes by default, and because it was commissioned by the military to be resilient in the face of Soviet attack, there’s no central control area on the Internet.

The only thing growing faster than our dependence on the Internet, it seems, is the number of ways to exploit Internet security flaws. The article does a good job of talking about the state of Internet security today, describing a sort of patchwork solution to the holes we find every day. Particularly striking was the warning about the impending calamity if we continue down this increasingly futile road: “If you’re looking for a digital Pearl Harbor, we now have the Japanese ships streaming toward us on the horizon.”

This struck a chord with me because it’s not the first time I’ve heard someone talk about a concerted Internet attack:

Lawrence Lessig, a respected law professor from Stanford University, told an audience at this year’s Fortune Brainstorm Tech conference in Half Moon Bay, California, that “There’s going to be an i-9/11 event” which will act as a catalyst for a radical reworking of the law pertaining to the internet.

The article goes on to mention that…

Lessig also revealed that he had learned, during a dinner with former government Counter Terrorism Czar Richard Clarke, that there is already in existence a cyber equivalent of the Patriot Act, an “i-Patriot Act” if you will, and that the Justice Department is waiting for a cyber terrorism event in order to implement its provisions.

So far, we’ve talked about technology; now come the policy issues.

What the NYT doesn’t really bring up is what a New Internet, if we were to have one, would look like. It surely wouldn’t be just about the technology: unlike in the 1970s, governments and businesses are a lot more aware of the power of the Internet, and you can bet they’re going to want a say in designing this New Internet. Privacy becomes one important concern. We’ll surely be giving up some of it (a necessary cost), but who determines what’s too much? Would we give the American government too much of our information? Are we granting the Chinese filters their wet dream?

What of companies? Do they have access to our data so they can better target their advertisements (and “enhance our Online experience”)? If ISPs get too much of a say (or if we yearn for central authority), can we say goodbye to Net Neutrality? Does doing so mean we say goodbye to YouTube?

The point is not so much to scare people (or spew anti-government, anti-corporate hate) as to remind everyone that the prospect of a New Internet designed by special interest groups is almost as scary as the prospect of living with the current Internet. The free and natural evolution of the Internet is, after all, the thing that’s allowed the Internet to be the base for so much great innovation. But then, if we don’t do anything, we risk a “cyber 9/11” and may have to live with the Government’s version of the Internet.

It’s not clear what path we’ll venture down, but either way, it looks treacherous. Welcome to the future.

The Internet

The Lunar Society: citizen geeks of the 18th century

In Delights, History, Technology on February 15, 2009 at 5:55 am

Boulton, Watt, and Murdoch, members of the Lunatic Society, in Birmingham, England.

Science before the twentieth century wasn’t done by “scientists.” There was no such word. There were educated amateurs and self-taught tinkerers, building their own labs in search of discovery or a patent. And so there wasn’t such a distinction between science and culture — the smart set went to “electrical parties” to see demonstrations of the newly discovered force. Ben Franklin wrote,

A turkey is to be killed for dinner by the electric shock, and roasted by the electric jack, before a fire kindled by the electrified bottle; when the healths of all the famous electricians of England, France, Holland, and Germany are to be drunk in electrified bumpers, under the discharge of guns from the electrified battery.

The best example of the public nature of science in Enlightenment England was the Lunar Society, a club of industrialists, natural philosophers, and intellectuals that met in Birmingham at the full moon between 1765 and 1813. The port and talk flowed. Joseph Priestley was a regular member: a self-taught chemist, political radical and Unitarian minister, he discovered oxygen and its necessity for animal life, invented seltzer water, and supported the American and French revolutions. Also a “Lunatick” was Josiah Wedgwood, the great ceramics industrialist and founding member of the Society for the Abolition of the Slave Trade. James Watt, the inventor of the modern steam engine, attended meetings regularly. Thomas Jefferson and Benjamin Franklin visited occasionally; Antoine Lavoisier corresponded with Society members; John Smeaton, the father of civil engineering, and Joseph Wright, the painter of the Industrial Revolution, were also regulars.

It must have been a thrill. Something like TEDTalks with Stilton. Writes Adam Hart-Davis:

The Lunar Society believed in argument and cooperation. They had long discussions about why thunder rumbles and decided the best way to test their various theories was by experiment. Boulton made a 5-foot-diameter balloon from varnished paper, and they filled it with a terrifying mixture of air and hydrogen (“inflammable air from iron”). They lit a fuse underneath, released the balloon into the night sky on a calm, clear evening and waited for the bang. Unfortunately, the fuse was rather long, and they all assumed it must have gone out; so they began to talk among themselves, when there was a colossal explosion, and they all said, “There it goes!” and forgot to listen for the rumble! Watt was at home 3 miles away and wrote that the bang was “instantaneous, and lasted about one second.” This seems self-contradictory, but in any case, the experiment failed to produce a simple answer to the original question.

There you have it: science, explosions, debate, optimism, politics, technology, curiosity. The future started more than two hundred years ago.

Reverse-engineering the brain

In Neuroscience, Technology on February 11, 2009 at 10:52 pm


Via Wired:
“The plan is to engineer the mind by reverse-engineering the brain,” says Dharmendra Modha, manager of the cognitive computing project at IBM Almaden Research Center.

In what could be one of the most ambitious computing projects ever, neuroscientists, computer engineers and psychologists are coming together in a bid to create an entirely new computing architecture that can simulate the brain’s abilities for perception, interaction and cognition. All that, while being small enough to fit into a lunch box and consuming extremely small amounts of power.

The 39-year old Modha, a Mumbai, India-born computer science engineer, has helped assemble a coalition of the country’s best researchers in a collaborative project that includes five universities, including Stanford, Cornell and Columbia, in addition to IBM.

The researchers’ goal is first to simulate a human brain on a supercomputer. Then they plan to use new nano-materials to create logic gates and transistor-based equivalents of neurons and synapses, in order to build a hardware-based, brain-like system. It’s the first attempt of its kind.

Related is the project at Harvard and UCLA to map the “connectome” — the actual neural circuitry of the central and peripheral nervous system. That’s 100 billion neurons and several trillion synaptic connections; it’s equivalent in scale and scope to the Human Genome Project. The key discovery is the Brainbow, a technique for genetically labeling neurons so that each takes on a distinct combination of fluorescent colors; for the first time, tangled neurons can be visually distinguished.

The Connectome project is a kind of science that often gets short shrift these days: “inductive” reasoning, collecting a vast library of observations first, in the hopes that they will suggest theories and future questions. It’s very much in the tradition of Victorian naturalists (Darwin was an inductivist). The idea is that the brain is so complicated and so little understood that we don’t yet know what to theorize about or where to look; the humbler and ultimately more fruitful approach is to look around. This is a high-tech version of going back to biology’s beetle-pinning roots.

Australian bush fires and data portability

In Policy, Technology on February 10, 2009 at 6:25 pm

Google Australia has created a flash map of the fires currently devastating southeast Australia, with fire locations and status updates. Green areas are safe; red means fires are still in progress. These are the worst fires in Australia’s history and what’s particularly scary is that they may have been set deliberately. More than 100 are reported to have lost their lives.

Some food for thought: Googler Paula Fox was able to provide the flash map because the Victoria Fire Department supports the open standard RSS. (RSS is a standardized data format for frequently updated information, designed to be read on many kinds of programs.) But to be useful for visualization, fire data needs geographical information; there exist adaptations such as GeoRSS to do this, but the fire department didn’t have any such thing.

From technologist Elias Bizannes:

1) If you output data, output it in some standard structured format (like RSS, KML, etc.).
2) If you want that data to be useful for visualisation, include both time and geographic (latitude/longitude) information. Otherwise you’re hindering the public’s ability to use it.
3) Let the public use your data. The Google team spent some time ensuring they were not violating anything by using this data. Websites should be clearer about their rights of usage, to enable mashers to work without fear.
4) Extend the standards. It would have helped a lot if the CFA site had extended its RSS with some custom elements (in their own namespace) for the structured data about the fires, for example an element containing “Get the hell out of here.”
5) Have all the fire departments use the same standards; it would make a world of difference: build the mashup using one method and it can be immediately useful for future uses.
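To make the geotagging point concrete, here is a rough sketch of what a GeoRSS-tagged feed item could look like, built with Python’s standard library (the fire name, status text, and extra elements are invented for illustration; this is not the CFA’s real schema):

```python
import xml.etree.ElementTree as ET

GEORSS = "http://www.georss.org/georss"
ET.register_namespace("georss", GEORSS)

def fire_item(title, status, lat, lon, pubdate):
    """Build one RSS <item> carrying a GeoRSS point, so a mashup
    author can drop the alert straight onto a map."""
    item = ET.Element("item")
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, "description").text = status
    ET.SubElement(item, "pubDate").text = pubdate
    # georss:point holds "latitude longitude", space-separated
    ET.SubElement(item, f"{{{GEORSS}}}point").text = f"{lat} {lon}"
    return item

xml = ET.tostring(
    fire_item("Bunyip Ridge fire", "Going - not yet under control",
              -37.98, 145.72, "Tue, 10 Feb 2009 18:25:00 +1100"),
    encoding="unicode")
```

An ordinary RSS reader just ignores the extra namespaced element, which is what makes this kind of extension safe to add.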

Natural disaster response needs data, and good data sharing protocols. US agencies aren’t always so good at that. During Katrina, it was the volunteer database Katrinalist that helped people find survivor information. But FEMA’s models were not made available in a way that would allow first responders to act quickly. We need to work on that.