Feb 24

Why Mass Surveillance Is Worse for Poor People

Since Edward Snowden revealed that the whole damn world is a surveillance state, a host of encryption and privacy services have popped up. But the people who have the luxury of using them, and the luxury of actually worrying about their privacy, are overwhelmingly well off, overwhelmingly white, and overwhelmingly male. Is privacy only for the privileged?

As we learn that the NSA, FBI, and even local law enforcement have their hands on any number of surveillance tools, it's getting ever more complicated to actually keep your communications safe from them. Anonymity tools like Tor and encryption services aren't always easy to use, and even default encryption on the iPhone requires you to have enough money to buy an iPhone and pay for its contract. Technological literacy is something that only the privileged can afford, and, beyond that, there is increasingly a concrete dollar amount that will afford you a modicum more of privacy.

Last week, news broke that AT&T would disable so-called "super cookies," which track users throughout the internet, for those who pay an extra $29 monthly. In the developing world, many people don't know of an internet beyond Facebook thanks to Facebook Zero, a service that provides access to the social network, but not the real internet, for free.

“You have to design with people and not just for people”

“It’s not just AT&T,” Daniel Kahn Gillmor of the American Civil Liberties Union said Monday at the New America Foundation’s cybersecurity event in Washington, DC. “If the only party you’re talking to is Facebook, then it becomes a central point for data collection and surveillance. For communities without phones, that’s the only way they’re going to get access. Access to the whole, worldwide internet could become the domain of people who have the ability to pay for it.”

Less obvious, however, is the fact that protecting yourself online requires a digital literacy that you don't learn when you're working two jobs or living on welfare. It's a problem that Google says it's constantly dealing with. The wording of the warnings Chrome gives you when you click on a nefarious link, for instance, has to be constantly workshopped to be useful to the largest number of people while still being technical enough to describe what's going on to the computer literate.

"Usability is fundamentally important, and there are very hard problems we're grappling with. We have large numbers of users with different backgrounds, disabilities, ages, and technological literacy," Tara Whalen, a staff privacy analyst at Google, said at the conference. "A specific example of this is with an SSL certificate. When it breaks, it's hard to explain the nuances: is this a risk? What went wrong?"
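To make the challenge concrete, here is a minimal Python sketch of the kind of translation Whalen describes, turning a raw certificate-verification failure into wording a non-expert might act on. It has nothing to do with Chrome's actual code; the friendly phrasings, the error-message substrings, and the test hostname are all illustrative assumptions.

    import socket
    import ssl

    # Rough, invented mapping from OpenSSL verification errors to plainer wording.
    # Real browsers workshop this language extensively, as Whalen describes.
    FRIENDLY = {
        "certificate has expired": "This site's security certificate is out of date, so we can't confirm it's the real site.",
        "self-signed certificate": "This site is vouching for itself instead of being verified by a trusted authority.",
        "hostname mismatch": "This certificate was issued for a different site than the one you asked for.",
    }

    def explain_tls_failure(host, port=443):
        ctx = ssl.create_default_context()
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host):
                    return "The connection is encrypted and the certificate checks out."
        except ssl.SSLCertVerificationError as err:
            reason = (err.verify_message or "").lower()
            for technical, friendly in FRIENDLY.items():
                if technical in reason:
                    return friendly
            return "Something is wrong with this site's security certificate."

    if __name__ == "__main__":
        # expired.badssl.com is a public test host that serves an expired certificate.
        print(explain_tls_failure("expired.badssl.com"))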

And that’s just for technology that people use every day, such as a web browser. How are you supposed to get people who haven’t been taught that “password” is not such a good password to use an encryption app, especially when the majority of primarily white, primarily male, primarily rich developers haven’t thought about making them accessible to everyone?

“If you look at some of the developer communities, the people who volunteer to make [encryption] tools, the diversity is not particularly large,” Whalen said. “The numbers of minorities are low, the number of women participating is low. Anyone in a marginalized group doesn’t have the resources to participate in a free labor project.”

"Bridging that gap is a challenge. You have to involve people in participatory design; you have to design with people and not just for people," she added.

This is, of course, not an entirely new phenomenon. Crime-ridden parts of cities are often under constant surveillance, whether it be police cameras or foot patrols. That problem, however, has extended to the digital realm.

The key here, according to Seda Gürses, a postdoc at New York University working on surveillance and privacy issues, is to push companies to build privacy into everything that they do. Asking users to take privacy into their own hands sounds good in theory, but simply doesn't work for everyone.

“There’s this word, it’s hard to say, but it’s responsibilization, which is encouraging people to manage risk themselves,” she said. “If you think there are risks, you are responsible for protecting yourself from it. This is problematic of course. Instead of burdening the users, we should ask the phone companies or whoever to give them secure phones. We should make sure the network is secured in a way it can’t be eavesdropped on.”

Right now, we’re seeing little of that happening. The digital divide is extending to become a privacy divide.

Motherboard RSS Feed

Feb 23

The Closest Thing to a Map of the Dark Net: Pastebin

For all the freaking out we do about the secrecy of the dark web (sites most often accessed with a tool called Tor), anyone with reasonable Google skills and enough motivation could, if he or she wanted to, find at least a handful of hidden service websites. So is the dark web actually all that hidden? Maybe not, and one researcher is in the process of making a map of the dark web, pulled from the normal internet.

The vast majority of hidden service websites (that is, the ones you need Tor to get to) have to walk a fine line. They want to remain somewhat hidden, but not so hidden that no one can actually find them. Therefore, there is evidence of how to find deep web sites littered all over the public, or “clear” internet.

Staffan Truvé, CEO of Recorded Future, a Sweden-based cyber threat research company, has tracked down where users talk about the dark web and direct each other to specific hidden sites, in an attempt to visualize what it looks like. In fact, the company persistently monitors many parts of the dark web.

"Some people are over-mystifying the dark web. It's not that magical. There's no sharp dividing line between the normal internet and the dark web," Truvé told me. "On sites like Pastebin, you can find lots of pointers to the darker sides of the web."

“We capture everything that’s posted on paste sites”

Recorded Future scrapes everything posted on Pastebin and other "paste" sites, where plain text can be posted anonymously. Pastebin is a popular place to find torrents and hacking data dumps, and back in December it was where links to the leaked files from the Sony hacks ended up. It's also a fantastic place to find links to dark web sites. The company also monitors Twitter and forums all around the normal internet for links to the dark web.
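As a rough illustration of what the first step of such a pipeline might look like (this is not Recorded Future's code), a short Python regular expression is enough to pull hidden-service addresses out of captured paste text:

    import re

    # Tor hidden-service hostnames are 16-character (v2) or 56-character (v3)
    # base32 strings followed by ".onion". This pattern is a simplification.
    ONION_RE = re.compile(r"\b([a-z2-7]{16}|[a-z2-7]{56})\.onion\b", re.IGNORECASE)

    def extract_onion_links(paste_text):
        """Return the distinct hidden-service hostnames mentioned in a paste."""
        return {m.group(0).lower() for m in ONION_RE.finditer(paste_text)}

    # A made-up paste; the .onion address below is fictional.
    sample = "new mirror up at exampleexample22.onion, clearnet copy at http://example.com"
    print(extract_onion_links(sample))  # {'exampleexample22.onion'}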

Just finished presenting at #TheSAS2015 about how to visualize the underbelly of internet. pic.twitter.com/yOguheJzLu

Staffan Truvé (@truve) February 16, 2015

Last week, Truvé gave a presentation on Recorded Future's work, called "Visualizing the Underbelly of the Internet," in which he said the dark web is not necessarily all that secretive.

“If you’re in this business to sell something, well, you need to advertise it somewhere,” he said. “There are really a lot of shades of gray as to what I’d say is a dark web site. There’s really obscure things on what we would consider ‘normal’ websites. I think the borderline is hacker forums and sites like Pastebin.”

Many parts of the dark web aren't hard to find or visualize, but they are volatile. Truvé says that 10 percent of dark web sites posted on Pastebin are deleted within 48 hours. He says that the majority of those deleted sites point to illegal services on the dark web, which get used up, presumably by criminals, then deleted very quickly.

“We capture everything that’s posted on paste sites, and then we go back in and check if the links are still active,” he said.
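The follow-up liveness check he describes could be sketched along these lines, assuming a local Tor client on its default SOCKS port (9050) and the requests library installed with SOCKS support; again, an illustration rather than the company's actual tooling:

    import requests

    # Route traffic through a local Tor client. "socks5h" makes Tor resolve the
    # .onion name itself, which is required for hidden services.
    TOR_PROXY = {
        "http": "socks5h://127.0.0.1:9050",
        "https": "socks5h://127.0.0.1:9050",
    }

    def onion_is_alive(onion_host, timeout=60):
        """Return True if the hidden service still answers, False otherwise."""
        try:
            resp = requests.get("http://" + onion_host, proxies=TOR_PROXY, timeout=timeout)
            return resp.status_code < 500
        except requests.RequestException:
            return False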

By persistently monitoring it, he says the company can tell what the dark web is talking about.

“We analyze all the contents and do standard linguistic analysis to try to determine what’s popular on the dark web,” he said. “Really, this is very similar to Google trends. We can tell what people are querying for.”
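A toy version of that trend analysis, far cruder than whatever Recorded Future actually runs, simply compares how often terms appear in one batch of captured pastes versus the previous one:

    import re
    from collections import Counter

    WORD_RE = re.compile(r"[a-z][a-z0-9]{2,}")

    def term_counts(pastes):
        counts = Counter()
        for text in pastes:
            counts.update(WORD_RE.findall(text.lower()))
        return counts

    def trending_terms(previous_week, this_week, top_n=10):
        """Terms whose raw counts grew the most between the two batches of pastes."""
        before, after = term_counts(previous_week), term_counts(this_week)
        growth = {term: after[term] - before[term] for term in after}
        return sorted(growth, key=growth.get, reverse=True)[:top_n]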

Mapping out the dark web doesn't necessarily make it any easier to crack down on what's there, which is a noted goal of the FBI and other law enforcement agencies. And Truvé's service isn't capturing the sites that aren't advertised anywhere on the normal internet.

He says it’s hard to know how many sites he’s missing out on.

“It kind of goes back to Donald Rumsfeld’s ‘unknown unknowns,'” he said. “There’s not really a good way to quantify it. We see the rest as being a law enforcement thing. We try to do a quantified analysis of the dark web, as best as we can.”

And, as for the encrypted parts of the dark web?

“We just ignore that content,” he said. “We don’t do code breaking.”

So, yes, many parts of the dark web aren't all that secret, but there's still a bit of mystique left.

Motherboard RSS Feed

Feb 23

‘DNA spellchecker’ means that genes aren’t all equally likely to mutate

A study that examined 17 million mutations in the genomes of 650 cancer patients concludes that large differences in mutation rates across the human genome are caused by the DNA repair machinery.

‘DNA spellchecker’ is preferentially directed towards more important parts of chromosomes that contain key genes.

The study illustrates how data from medical sequencing projects can answer basic questions about how cells work.

The work, performed by two scientists from the EMBL-CRG Systems Biology Unit in Barcelona, will be published online in Nature on 23rd February.

Copying the large book that is our genome without mistakes every time a cell divides is a difficult job. Luckily, our cells are well-equipped to proofread and repair DNA mistakes. Now, two scientists at the Centre for Genomic Regulation in Barcelona have published a study showing that mistakes in different parts of our genome are not equally well corrected. This means that some of our genes are more likely to mutate, and so contribute to disease, than others.

The scientists analysed 17 million ‘single nucleotide variants’ — mutations in just one nucleotide (letter) of the DNA sequence — by examining 650 human tumours from different tissues. These were ‘somatic’ mutations, meaning they are not inherited from parents or passed down to children, but accumulate in our bodies as we age. Such somatic mutations are the main cause of cancer. Many result from mutagens, such as tobacco smoke or ultraviolet radiation, and others come from naturally occurring mistakes in copying DNA as our tissues renew.

Ben Lehner and his team had previously shown that somatic mutations are much more likely in some parts of the human genome than in others, where they can damage genes and contribute to cancer. In a new paper published on 23rd February in Nature, they show that this is because genetic mistakes are better repaired in some parts of the genome than in others. This variation is generated by a particular DNA repair mechanism called "mismatch repair," a sort of spellchecker that helps fix errors in the genome after copying. Lehner and Supek show that the efficiency of this 'DNA spellchecker' varies depending on the region of the genome, with some parts of chromosomes getting more attention than others.

Turning the tables on mutation rates

The work presented by Lehner and Supek sheds new light on a previously unexplored question: what makes some parts of the human genome more vulnerable to damage than others? "We found that regions with genes switched on had lower mutation rates. This is not because less mistakes are happening in these regions but because the mechanism to repair them is more efficient," explains Ben Lehner, group leader, ICREA and AXA professor of risk prediction in age-related diseases at the EMBL-CRG Systems Biology unit in Barcelona. The 'mismatch repair' cellular machinery is extremely accurate when copying important regions containing genes that are key for cell functioning, but becomes more relaxed when copying less important parts. In other words, there appears to be a limited capacity for DNA repair in our cells, which is directed where it matters most.
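The underlying comparison is easy to sketch. The toy Python below is not the authors' pipeline; it takes a list of somatic SNV positions and two sets of invented genomic intervals, one for expressed genes and one for other regions, and reports mutations per megabase in each:

    def mutations_per_megabase(snv_positions, regions):
        """Count SNVs falling inside the (start, end) intervals, per megabase covered."""
        covered_bp = sum(end - start for start, end in regions)
        hits = sum(1 for pos in snv_positions
                   if any(start <= pos < end for start, end in regions))
        return hits / (covered_bp / 1_000_000)

    # Entirely invented coordinates, for illustration only.
    snvs = [120_000, 450_000, 460_000, 470_000, 900_000, 2_300_000]
    expressed_genes = [(100_000, 200_000), (2_200_000, 2_400_000)]
    other_regions = [(400_000, 500_000), (800_000, 1_000_000)]

    print("expressed regions:", mutations_per_megabase(snvs, expressed_genes))  # lower, as the paper reports
    print("other regions:    ", mutations_per_megabase(snvs, other_regions))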

The CRG researchers also found that the rate of mutation differs for around 10% of the human genome in cells originating from different tissues. In particular, liver, colorectal and lymphocyte malignancies present more mutations in some parts of our chromosomes, while breast, ovarian and lung cancers accumulate more mutations in other places. They found that genes that are important and switched on (expressed) in a particular tissue also exhibit fewer mutations in tumours of that tissue; the effect extends into the surrounding DNA. But what gives the important genes a higher resilience to damage?

"The difference is not in the number of new mutations but in the mechanism that keeps these mutations under control," comments Fran Supek, CRG postdoctoral researcher and first author of the paper. "By studying cancer cells, we now know more about maintaining DNA integrity, which is really important for healthy cells as well," he adds. Once the 'genomic spellchecker' has been disabled in a cell, the scientists observed that genetic information started decaying not only very rapidly, but also equally in all parts of the genome: neither the important nor the less important parts were repaired well anymore. DNA mismatch repair is known to be switched off in some tumours from the colon, stomach and uterus, producing 'hypermutator' cancers in those organs.

The accumulation of harmful changes in DNA is a normal process that occurs in all human cells every time they divide. This research therefore not only makes an important contribution to cancer research, but may also lead to insights into aging and genetic diseases.

Story Source:

The above story is based on materials provided by Center for Genomic Regulation. Note: Materials may be edited for content and length.


News — ScienceDaily

Feb 23

Will Robots Be Able to Help Us Die?

The robot stares down at the sickly old woman from its perch above her home care bed. She winces in pain and tries yet again to devise a string of commands that might trick the machine into handing her the small orange bottle just a few tantalizing feet away. But the robot is a specialized care machine on loan from the hospital. It regards her impartially from behind a friendly plastic face, assessing her needs while ignoring her wants.

If only she'd had a child, she thinks for the thousandth time, maybe then there'd be someone left to help her kill herself.

Hypothetical scenarios such as this inspired a small team of Canadian and Italian researchers to form the Open Roboethics Initiative (ORi). Based primarily out of the University of British Columbia (UBC), the organization is just over two years old. The idea is not that robotics experts know the correct path for a robot to take at every robo-ethical crossroad, but rather that robotics experts do not.

"Ethics is an emergent property of a society," said ORi board member and UBC professor Mike Van der Loos. "It needs to be studied to be understood."

To that end, ORi has been conducting a series of online polls to determine what people think about robot ethics, and where they disagree. Could it ever be ethical to replace a senior citizen's human contact with mostly robot contact? Should a minor be able to ride in a self-driving car alone? Should a robot be allowed to stop a burglar from breaking into its owners' otherwise empty home? The team hopes that by collecting the full spectrum of public opinion on these kinds of questions, they can help society avoid any foreseeable pitfalls.

"Privacy isn't just about a robot collecting and relaying information to third parties, but also about how it feels to have a social robot around you all the time."

Unsurprisingly, ORi’s earliest polls probed how people might want self-driving cars of the future to prioritize safety on the road in the event of a crash. When asked whether a car should favour its own passengers’ safety, or the overall minimization of risk, respondents were remarkably split. And when given the choice between killing its own passenger or a pedestrian child, two thirds of respondents thought a self-driving car should choose to kill the child.

"I bought the car," reasoned one anonymous participant. "It should protect me."

But not all robo-ethical quandaries have to involve life-or-death situations to be divisive and complex. In a poll about appropriate designs for bathing assistance robots, complete autonomy was associated with both a loss of user control for some users, and a gain in user privacy for others. In other words, a robot bath might not be as comfy as one from a human, but for many people that's a worthwhile trade to eliminate all human eyes from the bathroom.

University of Washington law professor Ryan Calo told Motherboard that such privacy issues could be even touchier than ORi's data suggests. Research has shown that the presence of a humanoid robot can have some of the same effects as the presence of a human companion. "Privacy," he said, "isn't just about a robot collecting and relaying information to third parties, but also about how it feels to have a social robot around you all the time."

"How do you deal with liability when you have a physical entity that can cause physical harm, running third-party code?"

The context of a situation can dramatically influence a person's idea of which principle ought to direct a robot's actions, too. Public health concerns might take precedence when deciding not to serve a drink to a diagnosed alcoholic, but personal liberty might be more important when handing a cheeseburger to a person with high blood pressure.

In fact, the team has found that people's reactions can vary widely based on fairly subtle variations in the moral structure of situations. "There are key issues for different people," said ORi executive and UBC master's student Shalaleh Rismani. "They can flip a situation completely. Ownership [of the robot] is big, but so is privacy and control… and safety."

Calo, for instance, has written about the potential ethical problems posed by open-source development of robot software: in other words, the ability for anyone to modify and contribute to a commercial robot's code. "How do you deal with liability when you have a physical entity that can cause physical harm, running third-party code?" Calo said.

The experts at ORi are the first to admit that polling is just a first step toward sensible robot policy. But they think it's a necessary one if we are to avoid, as Calo put it, "one or two engineers just deciding what they think": a future where a revolutionary tech product's most challenging ethical questions don't arise until it's ready to hit the market. From robo-assisted suicide to commercial drone use in urban areas, robots will create new ethical quandaries at precisely the same rate that engineers imbue them with new abilities.

ORi still has a great deal of work ahead of it. Today's high-tech industry still gives most robot ethics questions only passing attention, and funding has been hard to come by. Despite the intriguing nature of their questions, some polls still suffer from having too few participants. ORi wants to apply for funding through Elon Musk's Future of Life project, but for now such explicitly futurist organizations seem to be the most realistic candidates for support.

"The issues won't stop," Rismani insisted. "We want to keep going with this until there are no more questions left to ask."

Motherboard RSS Feed

Feb 22

The Beautiful Art of Meteorite Science

For most of human history, astronomers have had to learn about celestial bodies largely by observation. Before fancy rockets and orbital spacecraft were developed to break the bonds of gravity, distant worlds, and the materials that comprised them, seemed perpetually out of reach.

But even before the advent of spaceflight, nature sometimes threw scientists a bone in the form of meteorites. Like manna from heaven, these rocks contain multitudes of information about otherworldly places, and some ancient cultures understandably embraced these cosmic harbingers as mystical.

Now, with the technological advances of the 21st century, we've been able to extract even more tantalizing secrets from these rocky visitors. "We've come a long way since then," said Denton Ebel, curator of the American Museum of Natural History's Meteorites division, in a recent AMNH SciCafe presentation.

SciCafe: Imaging Space Rocks. Image: AMNH/YouTube.

Ebel, along with fellow AMNH meteorite specialists Ellen Crapster-Pregont and Amanda White, showed off some of the more visually stunning techniques for determining the composition of the meteorites. One of them was using differently colored composite maps to zero in on the individual details of the rock, as in the below Warhol riff on meteorite microscopy.

Meteorite clasts imaged in different colors. Image: AMNH/YouTube.

"We like to create the red, green, blue composite maps in order to better differentiate and see sort of the relationships between the different components in these meteorites," said Crapster-Pregont, while describing the sample pictured below.

Red-green-blue composite map of meteorite. Image: AMNH/YouTube.

"[T]his is an example of a red-green-blue magnesium-calcium-aluminum combination," she continued. "Here you can very clearly distinguish the chondrules, which are very red and magnesium-rich, from the calcium and aluminum, which are the blue and green."
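The recipe behind such composites is simple to sketch. The hypothetical NumPy snippet below uses random arrays as stand-ins for real magnesium, calcium, and aluminum X-ray maps and stacks them into the red, green, and blue channels of a single image:

    import numpy as np
    import matplotlib.pyplot as plt

    def rgb_composite(mg_map, ca_map, al_map):
        """Stack three elemental intensity maps into one RGB image, each channel scaled to 0-1."""
        def norm(channel):
            channel = channel.astype(float)
            span = channel.max() - channel.min()
            return (channel - channel.min()) / (span if span else 1.0)
        return np.dstack([norm(mg_map), norm(ca_map), norm(al_map)])

    # Random placeholders for real element maps of a meteorite thin section.
    rng = np.random.default_rng(0)
    mg, ca, al = (rng.random((256, 256)) for _ in range(3))

    plt.imshow(rgb_composite(mg, ca, al))
    plt.axis("off")
    plt.title("Mg = red, Ca = green, Al = blue")
    plt.show()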

But perhaps the most beautiful image presented during the lecture was this picture of an individual chondrule.

Individual chondrule. Image: AMNH/YouTube.

"This is one of those chondrules where the mineral grains, each one is changing the light a little bit because it's crystalline material, and it's anisotropic to light," said Ebel. "And in between those, though, you have black, which is glass, which is isotropic, [and] the fine-grained minerals between these chondrules and so forth, is dark because it's so fine-grained it scatters light and nothing comes through the microscope."

"We have taken optical microscopy here in the museum in a totally different direction with extraterrestrial materials," he added.

The whole lecture is worth a watch or a listen, if you are interested in decoding alien space rocks or appreciating their unique beauty. Meteor impacts may have the power to wipe out species and scar planets, but as the AMNH research demonstrates, nobody can argue they don't deliver some truly out-of-this-world aesthetics too.

Motherboard RSS Feed

Feb 22

MIT Engineers Beat the Complexity of Deep Parallel Processing

Engineers at MIT have developed an algorithm that promises to boost the speeds of multicore systems by nearly 50 percent, while upping their overall power efficiency by some 30 percent in the process. It doesn't entirely eliminate the lingering memory bottleneck that the cache hierarchy imposes on parallel processing, but the new scheme is poised to ease it considerably. Computers can get faster after all. Phew.

The MIT researchers, led by electrical engineering professor Daniel Sanchez, presented their work at this month's Institute of Electrical and Electronics Engineers (IEEE) International Symposium on High-Performance Computer Architecture, where they were nominated for the conference's best-paper award. As Sanchez and his group write in that paper, the scheme "uses a combination of hardware and software techniques to achieve performance close to idealized schemes at low overheads."

The problem here is a deep one. Individual processors have run up against physical limits, achieving scales so small that hardware begins to interfere with itself, in a sense. Noise and unpredictability dominate, and this sets a more or less fundamental limit on how fast a single core can go. Multicore designs, where a bunch of processor cores are packed together, seem like a natural solution, but they come with their own set of fundamental difficulties. It turns out that doubling the number of chips in a computer doesn't double its performance, and adding more cores to a system delivers diminishing returns.

The cache hierarchy problem has to do with how data is physically shuttled around the different cores of a multicore system. The key to faster parallel processing is better and more efficient sharing of tasks, which imposes a deep new complexity on how data is distributed around a given architecture. It seems natural to store data as close as possible to where it will be processed, but when sharing is involved, this quickly becomes a very, very difficult constraint to manage.

The group’s allocation scheme visualized. Image: MIT

"For systems to scale efficiently, data must be close to the computation that uses it. This requires keeping cached data in banks close to threads (to minimize on-chip traffic), while judiciously allocating cache capacity among threads (to minimize cache misses)," Sanchez and his group write in the new paper. "We find that to achieve good performance, the system must both manage cache capacity well and schedule threads to limit capacity contention. We call this computation and data co-scheduling. This is a complex, multi-dimensional optimization problem."

The fundamental problem is an old one. It's known as "place and route," and the basic idea is to find the arrangement that minimizes the physical distances between related computations on a given chip (or set of chips). The problem is NP-hard, a designation applied to problems for which no known algorithm can find exact solutions efficiently; in practice, the truly optimal placement can't be computed in any useful amount of time. So, engineers fudge approximate solutions.

These approximate solutions do pretty well, it turns out. In a 64-core system, they're able to come up with good placements in a matter of hours. The catch is that hours are practically millennia to a computer, which might churn through 50 million operations in the span of milliseconds. Sanchez's new algorithm gets comparable results in calculations that take about 25 milliseconds, allowing an architecture to remake itself on the fly.

Sanchez explains: "What we do is we first place the data roughly. You spread the data around in such a way that you don't have a lot of [memory] banks overcommitted or all the data in a region of the chip. Then you figure out how to place the [computational] threads so that they're close to the data, and then you refine the placement of the data given the placement of the threads. By doing that three-step solution, you disentangle the problem. So: distributing data and distributing computations."
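A toy Python sketch of that three-step shape, on an invented workload, might look like the following. It ignores cache capacities, contention, and the one-thread-per-core constraint, so it illustrates the idea rather than reproducing the group's scheduler:

    # Cores and cache banks sit at integer positions 0..N_TILES-1 on a line; the cost
    # of an access is the distance between a thread's tile and the bank holding its data.
    N_TILES = 8
    threads = {           # thread -> data blocks it touches (made-up workload)
        "t0": ["a", "b"],
        "t1": ["b", "c"],
        "t2": ["d"],
        "t3": ["a", "d"],
    }
    blocks = sorted({b for blks in threads.values() for b in blks})

    def nearest_tile(positions):
        centroid = sum(positions) / len(positions)
        return min(range(N_TILES), key=lambda t: abs(t - centroid))

    # Step 1: spread the data roughly, so no bank is overcommitted.
    block_bank = {b: i % N_TILES for i, b in enumerate(blocks)}

    # Step 2: place each thread on the tile closest to the data it uses.
    thread_tile = {t: nearest_tile([block_bank[b] for b in blks])
                   for t, blks in threads.items()}

    # Step 3: refine the data placement given where the threads landed.
    for b in blocks:
        users = [thread_tile[t] for t, blks in threads.items() if b in blks]
        block_bank[b] = nearest_tile(users)

    print("thread placement:", thread_tile)
    print("data placement:  ", block_bank)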

This scheme involves the introduction of a new on-board scheduler or monitor, which would take up about 1 percent of a given architecture’s real estate. Seems like a small price to pay for the future of fast computing.

Motherboard RSS Feed