This Website Streams Camera Footage from Users Who Didn’t Change Their Password

October 31, 2014 | Tech

Last week, I sat at my computer and watched a young man from Hong Kong relaxing on his laptop; an Israeli woman tidying the changing room in a clothes store; and an elderly woman in the UK watching TV.

All of these people were completely unaware that I was spying on them, thousands of miles away, through devices that were inadvertently broadcasting their private lives on the internet.

I found them on a website that claims to offer the direct feeds of hundreds of thousands of private cameras. There are 152 countries to choose from on the site, as diverse as Thailand, Sudan, and the Netherlands. The UK has 1,764 systems listed. The US has 8,532.

This particular website exposes IP cameras. These are external devices typically bought to keep an eye on valuables, act as a baby monitor, or make up a home or business security system. Some of these devices come with a default password that many users do not change, which is how this site is able to access them.

It's all in the name of raising awareness about computer security, the site's creator claims (never mind that the site carries ads). "This site has been designed in order to show the importance of the security settings," the page states.

Image: screenshot from the website

The website is one of the latest, and perhaps biggest, examples of a trend wherein security researchers risk people's personal privacy under the justification of exposing security issues. Although this approach can sometimes force a vendor to act and fix the problem, it can also harm the public at large.

Often when a researcher finds a vulnerability in a device or system, they will notify the affected company, then work with them towards a solution behind closed doors. For example, in May a researcher notified Google and Microsoft about a particular method of delivering malware by tricking users into thinking they were downloading a file from a trusted website. The problem was addressed before the researcher made his findings public.

Usually, these kinds of white hat hackers abide by strict guidelines. "Most responsible disclosure policies used by security researchers derive from the RFPolicy," Shane Macaulay, director of cloud security at IOActive, told me in an email. The procedures outline ethical ways to handle unresponsive vendors and to disclose to various security forums as a way to 'shame a vendor and get the word out,' as well as to normal press channels.

These principles aren't in any way binding; they're more suggestions on how to responsibly handle security issues between researchers and vendors, such as keeping in close contact and releasing information to the public at an appropriate time.

But some feel there are times when a different approach is needed. Earlier this year, two researchers released a critical USB vulnerability into the wild. The attack could load any USB device with undetectable and powerful malware, and there was no quick fix to sort it out.

I will grant that sometimes there are vulnerabilities that are intractable

So the researchers published the vulnerability details in order to push an entire industry—those who manufacture USB devices—to deal with the problem. This has a downside: with the code posted on GitHub, theoretically any entrepreneurial criminal could craft a new money-making scheme out of the research, threatening the security of an incalculable number of people.

Matthew Green, assistant research professor at the Department of Computer Science at Johns Hopkins University, told me in a phone interview that this sort of action is sometimes needed. "I will grant that sometimes there are vulnerabilities that are intractable: you tell people about them, and everybody knows about them, but nobody tries to fix them," he said. "In theory, in those cases, you need to do something that takes it to the next level."

But in the case of the IP camera site, he said he didn't think that hosting the feeds of hundreds of thousands of private cameras is the right way to go about it. "What is different about this is that there are actual victims; that they are individuals," Green said.

Setting up the website, Green said, "sounds a little irresponsible to me." That's if the creator's claims of making the site in order to raise important security issues are even genuine in the first place. "There are a lot of people who pull stunts, and try to make a name for themselves," Green added. The owner of the site did not respond to a request for comment.

Back in 2012, a similar thing happened, specifically with Trendnet cameras. The blog Console Cowboys detailed a critical flaw in these cameras, and someone else eventually created a Google Maps-style interface for tapping into the cameras at will, allegedly to raise awareness of the issue and force Trendnet to take action. In response, Trendnet notified its customers of an update that would fix the vulnerability.

Image: screenshot from the website

But this new site doesn't target a technical fault, and its creator doesn't seem like your regular white hat researcher. Although it lists the different brands of cameras being used (Foscam, Panasonic, Linksys, and IPCamera, as well as AvTech and Hikvision digital video recorders), the weakness doesn't necessarily lie with the manufacturers. At least in part, it's simply poor password management by users.

There are a couple of things that the camera manufacturers could do, such as forcing all customers to choose a new password when they set up their device for the first time, or shipping all of their cameras with unique passwords by default.
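
As a minimal sketch of the first of those fixes, here is what "force a new password at setup" could look like in firmware logic. This is illustrative only: the class and handler names are invented, and no particular vendor's cameras work exactly this way.

```python
DEFAULT_PASSWORD = "admin"  # the kind of well-known factory default at issue

class CameraConfig:
    def __init__(self):
        self.password = DEFAULT_PASSWORD
        self.password_changed = False

    def set_password(self, new_password):
        # Reject the factory default and trivially short replacements.
        if new_password == DEFAULT_PASSWORD or len(new_password) < 8:
            raise ValueError("choose a non-default password of 8+ characters")
        self.password = new_password
        self.password_changed = True

def handle_stream_request(config):
    # Refuse to serve any video until setup has forced a new password.
    if not config.password_changed:
        return "403: finish setup and choose a new password first"
    return "200: streaming"

cfg = CameraConfig()
print(handle_stream_request(cfg))    # refused while the default stands
cfg.set_password("correct-horse-9")
print(handle_stream_request(cfg))    # allowed
```

The point of the sketch is where the check lives: in the request handler rather than the setup wizard, so even a user who skips setup entirely never exposes a feed behind a default password.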

Foscam's COO Chase Rhymes told me that the company implemented the former over a year ago, once it became aware that its cameras were being accessed because of their default passwords. "But it was certainly not because of this website," Rhymes said. It was due to a baby monitor being broken into, back in 2013. Foscam was aware of the website before I contacted the company, because reporters from the Mail on Sunday had reached out when they found the site last month.

"All cameras being manufactured require the user, during the setup process, to change the password," Rhymes said in a phone interview. For cameras already in use, he claimed that an update was released that would force users to change the password. The company also claimed to have contacted customers and retailers by email.

Linksys, meanwhile, first heard of the site from me. The company is still trying to determine which Linksys IP cameras are referenced on the site, but it believes they are old, out-of-production models. Its newer cameras display a warning to users who have not changed their default password.

The real problem is that the people who are the victims—the people who are being observed—are not necessarily being notified that this is happening

According to the webcam site, if you discover your camera feed and wish to have it removed, you can email the site and it will disappear. If you don't want your camera to remain exposed in the long term, it recommends that you change your password. But how are you, the person on the other end of the camera, supposed to find out it's compromised in the first place?

"The real problem is that the people who are the victims—the people who are being observed—are not necessarily being notified that this is happening," Green said.

Even if this researcher—if we can call him that—really is trying to expose weak security practices, there's little doubt that this behavior is illegal under US law.

"It is a stunningly clear violation of the Computer Fraud and Abuse Act (CFAA)," Jay Leiderman, a US lawyer who has experience in hacking cases, told me in a phone interview.

It appears the site has changed providers since the Mail investigation; the reporters said they tracked it down to Moldova, but it now seems to be hosted by GoDaddy.com, with an IP address pointing to Moscow, Russia.

Legally, Leiderman said, it doesn't matter that no 'real' hacking is taking place and the cameras are accessed via their default passwords. "You put a password on a computer to keep it private, even if that password is just '1'," he said. "It's entry into a protected computer."

Sometimes there is a case for highlighting security weaknesses in a bold fashion. It can force companies into a corner and push them to address a problem that they may otherwise ignore. But websites like this, which expose the private lives of people—people who probably won't find out anyway—don't offer any solutions. The true motives of the site's creator remain unclear.

"I really think it's unlikely that this is going to result in widespread attention to the problem," Green concluded. "I think it's probably, on balance, going to be more damaging than helpful."


Projecting a robot’s intentions: New spin on virtual reality helps engineers read robots’ minds

October 29, 2014 | Robots

In a darkened, hangar-like space inside MIT’s Building 41, a small, Roomba-like robot is trying to make up its mind.

Standing in its path is an obstacle — a human pedestrian who’s pacing back and forth. To get to the other side of the room, the robot has to first determine where the pedestrian is, then choose the optimal route to avoid a close encounter.

As the robot considers its options, its “thoughts” are projected on the ground: A large pink dot appears to follow the pedestrian — a symbol of the robot’s perception of the pedestrian’s position in space. Lines, each representing a possible route for the robot to take, radiate across the room in meandering patterns and colors, with a green line signifying the optimal route. The lines and dots shift and adjust as the pedestrian and the robot move.

This new visualization system combines ceiling-mounted projectors with motion-capture technology and animation software to project a robot's intentions in real time. The researchers have dubbed the system "measurable virtual reality" (MVR), a spin on conventional virtual reality that's designed to visualize a robot's "perceptions and understanding of the world," says Ali-akbar Agha-mohammadi, a postdoc in MIT's Aerospace Controls Lab.

“Normally, a robot may make some decision, but you can’t quite tell what’s going on in its mind — why it’s choosing a particular path,” Agha-mohammadi says. “But if you can see the robot’s plan projected on the ground, you can connect what it perceives with what it does to make sense of its actions.”

Agha-mohammadi says the system may help speed up the development of self-driving cars, package-delivering drones, and other autonomous, route-planning vehicles.

“As designers, when we can compare the robot’s perceptions with how it acts, we can find bugs in our code much faster,” Agha-mohammadi says. “For example, if we fly a quadrotor, and see something go wrong in its mind, we can terminate the code before it hits the wall, or breaks.”

The system was developed by Shayegan Omidshafiei, a graduate student, and Agha-mohammadi. They and their colleagues, including Jonathan How, a professor of aeronautics and astronautics, will present details of the visualization system at the American Institute of Aeronautics and Astronautics’ SciTech conference in January.

Seeing into the mind of a robot

The researchers initially conceived of the visualization system in response to feedback from visitors to their lab. During demonstrations of robotic missions, it was often difficult for people to understand why robots chose certain actions.

“Some of the decisions almost seemed random,” Omidshafiei recalls.

The team developed the system as a way to visually represent the robots’ decision-making process. The engineers mounted 18 motion-capture cameras on the ceiling to track multiple robotic vehicles simultaneously. They then developed computer software that visually renders “hidden” information, such as a robot’s possible routes, and its perception of an obstacle’s position. They projected this information on the ground in real time, as physical robots operated.
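
To give a flavor of the kind of hidden state such a system renders, here is a toy sketch: a handful of candidate routes scored against the robot's estimate of a pedestrian's position, with the cheapest route singled out as the equivalent of the green line. This is an illustration only, not the lab's actual software; the path shapes and the cost function are assumptions.

```python
import math

def candidate_paths(start, goal, n=5, spread=2.0):
    # Simple two-segment detours through laterally offset midpoints.
    paths = []
    for i in range(n):
        offset = spread * (i - (n - 1) / 2)
        mid = ((start[0] + goal[0]) / 2, (start[1] + goal[1]) / 2 + offset)
        paths.append([start, mid, goal])
    return paths

def cost(path, obstacle, min_clearance=1.0):
    # Path length, heavily penalized if the route crowds the obstacle.
    length = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    clearance = min(math.dist(p, obstacle) for p in path)
    return length + (100.0 if clearance < min_clearance else 0.0)

start, goal = (0.0, 0.0), (10.0, 0.0)
pedestrian = (5.0, 0.3)  # the robot's perceived obstacle (the "pink dot")

paths = candidate_paths(start, goal)
best = min(paths, key=lambda p: cost(p, pedestrian))
for p in paths:
    label = "GREEN (optimal)" if p is best else "candidate"
    print(f"{label}: via waypoint {p[1]}, cost {cost(p, pedestrian):.2f}")
```

In the real system, each of these scored routes would be drawn on the floor by the projectors; printing them is the toy stand-in for that rendering step.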

The researchers soon found that by projecting the robots’ intentions, they were able to spot problems in the underlying algorithms, and make improvements much faster than before.

“There are a lot of problems that pop up because of uncertainty in the real world, or hardware issues, and that’s where our system can significantly reduce the amount of effort spent by researchers to pinpoint the causes,” Omidshafiei says. “Traditionally, physical and simulation systems were disjointed. You would have to go to the lowest level of your code, break it down, and try to figure out where the issues were coming from. Now we have the capability to show low-level information in a physical manner, so you don’t have to go deep into your code, or restructure your vision of how your algorithm works. You could see applications where you might cut down a whole month of work into a few days.”

Bringing the outdoors in

The group has explored a few such applications using the visualization system. In one scenario, the team is looking into the role of drones in fighting forest fires. Such drones may one day be used both to survey and to squelch fires — first observing a fire’s effect on various types of vegetation, then identifying and putting out those fires that are most likely to spread.

To make fire-fighting drones a reality, the team is first testing the possibility virtually. In addition to projecting a drone’s intentions, the researchers can also project landscapes to simulate an outdoor environment. In test scenarios, the group has flown physical quadrotors over projections of forests, shown from an aerial perspective to simulate a drone’s view, as if it were flying over treetops. The researchers projected fire on various parts of the landscape, and directed quadrotors to take images of the terrain — images that could eventually be used to “teach” the robots to recognize signs of a particularly dangerous fire.

Going forward, Agha-mohammadi says, the team plans to use the system to test drone performance in package-delivery scenarios. Toward this end, the researchers will simulate urban environments by creating street-view projections of cities, similar to zoomed-in perspectives on Google Maps.

“Imagine we can project a bunch of apartments in Cambridge,” Agha-mohammadi says. “Depending on where the vehicle is, you can look at the environment from different angles, and what it sees will be quite similar to what it would see if it were flying in reality.”

Because the Federal Aviation Administration has placed restrictions on outdoor testing of quadrotors and other autonomous flying vehicles, Omidshafiei points out that testing such robots in a virtual environment may be the next best thing. In fact, the sky's the limit when it comes to the types of virtual environments the new system can project.

“With this system, you can design any environment you want, and can test and prototype your vehicles as if they’re fully outdoors, before you deploy them in the real world,” Omidshafiei says.

This work was supported by Boeing.

Video: http://www.youtube.com/watch?v=utM9zOYXgUY



Why Did the Antares Rocket Explode?

October 29, 2014 | Tech

The much-anticipated launch of Orbital Sciences' Antares rocket, on a cargo resupply mission to the International Space Station, ended in disappointment last night when the unmanned rocket exploded about 10 seconds into flight.

The Antares explosion: RT/YouTube.

Fortunately, nobody was injured in the explosion, but damage to the Wallops Flight Facility is severe—not to mention the blow of losing the mission's payload of scientific equipment and crew supplies.

The damage to the facility and to Orbital's reputation as a safe ISS resupplier will likely take the company out of the ISS resupply game for several years. This will place future US-based missions solely on the shoulders of Orbital's competitor SpaceX, a company that—with respect to Orbital's impressive team—has already outpaced it on a few levels.

Both companies were awarded contracts from NASA in 2008, but SpaceX was much quicker off the mark when it came to getting vehicles to the ISS. The company's first cargo resupply mission successfully launched on October 8, 2012, around the same time that Orbital was getting around to running the first major tests of the Antares on the launchpad.

Since then, SpaceX has pulled off four successful resupply missions, with its fifth coming up this December. Last night's launch would have been Orbital's third resupply mission.

The other key difference is the technical approach the two companies took in developing their rockets and capsules, and how that ties into each company's broader outlook.

Orbital opted to go for already-established spacecraft designs, making the controversial decision to dust off a batch of Soviet NK-33 engines to propel the Antares into low-Earth orbit. These engines were originally intended to power the ill-fated Soviet moon program of the late 1960s and early 1970s, but were stockpiled when those missions were axed.

Orbital refurbished and modified them for 21st-century spaceflight—renaming them AJ-26s—but the fact remains that the linchpin of its propulsion design is a decades-old rocket engine. The AJ-26 was determined to be the cause of another Orbital explosion earlier this year, but whether that makes it more or less likely that the engines contributed to last night's failure is anyone's guess at this point.

SpaceX's approach, on the other hand, was to reinvent rocket designs—which ties into CEO Elon Musk's larger ambitions to pull off a manned Mars mission.

SpaceX's Falcon family of rockets is slowly evolving into a completely reusable launch system, whereas the Antares system remains totally expendable—both the capsule and the rocket burn up in the atmosphere. Musk is also investing in exotic concepts like the Grasshopper rocket, which can take off and land back on a launchpad, as part of his larger vision of a recyclable space ferry.

The Grasshopper in action. SpaceX/YouTube

The cause of last night's disappointing launch failure is open to speculation for the moment, but the effect is pretty clear: Orbital is going to be out of the picture for a while as the company recovers and rebuilds. American resupply missions will be up to SpaceX until another solution can be worked out.

It's a sad time not just for Orbital, but for space enthusiasts in general. But it also serves as a reminder that the last 60-odd years of refining rockets haven't made spaceflight accident-proof. Despite all our advances, it is still a difficult and impressive feat to blast a rocket filled with cargo to an orbiting space station. Orbital deserves credit for its successes with the Antares as much as it deserves scrutiny over last night's unexpected failure.


An International Space Station Resupply Mission Just Exploded Soon After Launch

October 28, 2014 | Tech

So far, Orbital Sciences has run two resupply missions to the International Space Station. Today, on what was supposed to be its third mission, its Antares rocket exploded soon after it launched from Wallops Island, Va., in what NASA mission control called a "catastrophic anomaly."

Orbital Sciences has joined SpaceX as one of NASA's commercial space resuppliers. For this mission, roughly 5,000 pounds of cargo, including several scientific experiments and supplies, were on board the Cygnus cargo spacecraft.

In fact, among the payload was a CubeSat owned by Planetary Resources, one of the companies most seriously looking into asteroid mining. The satellite, called Arkyd 3, was designed to test the company's communications equipment and was to be its first satellite in orbit.

Robotically assisted bypass surgery reduces complications after surgery, cuts recovery

October 28, 2014 | Robots

Robotically assisted coronary artery bypass grafting (CABG) surgery is a rapidly evolving technology that shortens hospital stays, reduces the need for blood products, and decreases recovery times, making the procedure safer, says a study presented at the Canadian Cardiovascular Congress.

“Robotically assisted CABG is a safe and feasible alternative approach to standard bypass surgery in properly selected patients. It is a less traumatic and less invasive approach than regular CABG,” says cardiac surgeon and researcher Dr. Richard Cook of the University of British Columbia. “It may reduce complications following surgery, and in the Canadian experience, has been associated with an extremely low mortality rate.”

For CABG, or bypass surgery, a surgeon uses a section of vein, usually from the patient's leg, or an artery from inside the patient's chest, to create a new route for oxygen-rich blood to reach the heart. It is performed to restore blood flow to the heart muscle that has been restricted by the build-up of plaque in the coronary arteries (atherosclerosis).

The robot offers surgeons several technical advantages, including a magnified 3D view of the patient's heart and the elimination of hand tremor, which makes for more precise incisions.

For this study, 300 patients (men and women 60 years or older) underwent robotically assisted CABG at three hospital sites. In addition to Vancouver General Hospital, the study was undertaken at the London Health Sciences Centre, led by Drs. Bob Kiaii and Michael Chu, and at Montreal's Sacred Heart Hospital, led by Dr. Hugues Jeanmart.

There were no deaths in this group of patients, with only one patient developing a deep wound infection after the procedure.

The doctors performed the surgery using the da Vinci Surgical System. It consists of a “surgeon console” where the surgeon views a high definition 3D image inside the patient’s body. When the surgeon’s fingers move the master controls, the system’s “patient-side cart” springs into action with three or four robotic arms mimicking the surgeon’s hand, wrist and finger movements with surgical instruments.

With traditional CABG, the average hospital stay is five to six days. With the robotically assisted surgery, that was cut to an average of four days in the group of patients having surgery at London Health Sciences Centre, the hospital with the greatest experience with robotically assisted cardiac surgery in Canada.

There was also less blood loss, which translated into a lower need for blood products. The more precise incisions also mean less cosmetic scarring.

Patients from the study reported being back to near normal levels of activity within a couple of weeks. With standard CABG, patients are asked to avoid driving or lifting any weights over 10 pounds for six weeks.

“Each year nearly 25,000 bypass surgeries are performed in Canada; it is the most common form of surgery for people with heart disease,” says Heart and Stroke Foundation spokesperson Dr. Beth Abramson, author of Heart Health for Canadians. “Surgery saves lives and helps improve quality of life. The safer we can make the surgery, the more lives we can save.”

She adds that bypass surgery doesn’t cure the underlying heart disease. “Health behaviour changes and medications as prescribed by your healthcare providers are critical to preventing further damage.”

Currently, 17 centres across Canada use this robotic technology for surgery, though primarily in the fields of urology and gynecology. Dr. Cook and his colleagues hope findings from this study will increase the use of robotically assisted heart surgery.

Story Source:

The above story is based on materials provided by Heart and Stroke Foundation of Canada. Note: Materials may be edited for content and length.



Happy Anniversary to the Early Internet’s First Network-Wide Crash

October 28, 2014 | Tech

By 1980, ARPANET, the US Defense Department's pre-internet internet, had spread across the United States and on into Europe. Over the course of about a decade, it had grown from a four-node network to a system supporting thousands of users, floods of email messages, and even something of a proto-Reddit collection of special-interest message groups.

America's not-quite-internet was growing up quickly and, thus, it was high time for something to go really, really wrong. And so it did, on October 27, 1980, with the first proper network crash in the history of the proto-internet.

The failure didn't leave any warships adrift, but the event, which left ARPANET disconnected for nearly four hours, was a milestone nonetheless. It was the result of a pair of subtle screwups having to do with the network's interface message processors (IMPs), which were basically what we now call routers: intermediate switching devices that process network traffic.

ARPANET's IMPs were in charge of taking communications from local computers and networks and translating them into the ARPANET standard. Different sites were based on different platforms involving different protocols and standards, and the IMPs smoothed over these differences so everything moving around the network was generic to ARPANET.

UCLA computer science professor Leonard Kleinrock and the first IMP. Image: Kleinrock

Imagine bouncing messages around between UNIX, Mac, and Windows-based environments, all with their own ways of dealing with information. IMPs would take all of that and just make it ARPANET-based and platform-agnostic.
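
As a loose sketch of that adapter role, the toy code below normalizes two invented host-specific message formats into one generic packet. The platform names and fields here are made up for illustration; real IMP framing was far more involved.

```python
# Toy adapter: host-specific formats in, one network-wide format out.
def normalize(host_message, platform):
    if platform == "host_a":
        dest, payload = host_message["to"], host_message["data"]
    elif platform == "host_b":
        dest, payload = host_message  # a (destination, data) tuple
    else:
        raise ValueError(f"unknown platform: {platform}")
    # Everything leaving the IMP looks identical to the rest of the network.
    return {"dest": dest, "payload": payload}

print(normalize({"to": "UCLA", "data": "hello"}, "host_a"))
print(normalize(("MIT", "hello"), "host_b"))
```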

One of the network's IMPs, IMP29, was dropping bits (slivers of binary information) as the result of a hardware failure. IMP29's job was to act as the communications pathway for another node, IMP50, and because of the bit dropping, IMP50 received a wonky status message with a bad timestamp, which it repeated across the network.

Every node was required to send out status messages at one-minute intervals, and the screwy timestamp on IMP50's message meant that every other node took in this message, prioritized it ahead of everything else, and then repeated the corrupted message. And then they kept repeating it, over and over and over.

The network's garbage collection software, which was responsible for deleting those status messages as they accumulated, was forced to deal with messages bearing multiple timestamps as a result. It didn't know how to do that, and the result was every node being forced to store every status message. The nodes' memories were quickly saturated.

Basically, ARPANET DDoS'd itself as more and more messages accumulated in a sort of feedback cycle

The result was "a naturally propagating, globally contaminating effect," in the words of Peter Neumann, chief scientist at the SRI International Computer Science Laboratory.

In a sense, the effect was similar to a distributed denial of service attack, or DDoS, which occurs when a network is flooded with traffic from various sources to the point that it's unable to function normally. To be clear, the crash wasn't due to an actual attack, but to a cascading series of technical failures. Still, it's an illustrative comparison.

ARPANET as of 1977. Image: Wikipedia

Basically, ARPANET DDoS'd itself as more and more messages accumulated in a sort of feedback cycle. Every status message was stamped with the highest priority code, so any other sort of message sent between nodes was ignored in favor of the junk status updates.

This also meant that any message sent to the nodes was ignored as well, and thus it was impossible to deal with the problem remotely. Every single node had to be shut down and restarted manually, and only then could the full network go back online.

If just a few IMPs were restarted, rather than every single one, the restarted IMP would receive a copy of the corrupted message from one of the nodes that wasn't restarted, and would once again go down. There was some trial and error in fixing the problem.

The IMPs supporting the 1980 ARPANET actually had an onboard system for detecting bit-dropping errors, but they'd all been deactivated, according to a subsequent report published in SIGSOFT Software Engineering Notes.

Bit dropping was usually a spurious occurrence, and a detection meant having to restart individual IMPs manually, so it didn't seem worth the trouble, at least until that bit dropping fouled a timestamp such that the entire network collapsed. The next generation of IMPs took care of the problem by including a new loader/dumper fault state that could be controlled off-site.

The easiest fix suggested in the SIGSOFT report is almost comically simple. When the faulty garbage collection utility checked message timestamps, it decided which message was "later" using a greater-than-or-equals comparison rather than a plain greater-than, thus allowing a whole bunch of messages to effectively share a timestamp and flood the network. Hindsight, eh?
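
For illustration, here is a toy reconstruction of that comparison bug, based only on the report's description rather than actual IMP code. With ">=", a duplicate status message whose timestamp merely equals the stored one looks "newer," so it is kept and echoed onward; a strict ">" lets it be dropped.

```python
def is_newer_buggy(incoming_ts, stored_ts):
    # The faulty check: ">=" treats an equal timestamp as newer,
    # so duplicate copies are kept and rebroadcast forever.
    return incoming_ts >= stored_ts

def is_newer_fixed(incoming_ts, stored_ts):
    # The suggested fix: a strict ">" recognizes an equal-timestamp
    # copy as old news, so it can be garbage-collected.
    return incoming_ts > stored_ts

def deliver(store, sender, ts, is_newer):
    """Store (and notionally rebroadcast) a status message if it looks newer."""
    if is_newer(ts, store.get(sender, -1)):
        store[sender] = ts
        return True   # kept: this copy would be echoed onward
    return False      # dropped: the echo dies here

store = {}
deliver(store, "IMP50", 100, is_newer_buggy)         # first copy: kept
print(deliver(store, "IMP50", 100, is_newer_buggy))  # True: duplicate lives on

store = {}
deliver(store, "IMP50", 100, is_newer_fixed)
print(deliver(store, "IMP50", 100, is_newer_fixed))  # False: duplicate dies
```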

