Protecting urban infrastructure against cyberterrorism

While working for the global management consulting company Accenture, Gregory Falco discovered just how vulnerable the technologies underlying smart cities and the “internet of things” — everyday devices that are connected to the internet or a network — are to cyberterrorism attacks.

“What happened was, I was telling sheiks and government officials all around the world about how amazing the internet of things is and how it’s going to solve all their problems and solve sustainability issues and social problems,” Falco says. “And then they asked me, ‘Is it secure?’ I looked at the security guys and they said, ‘There’s no problem.’ And then I looked under the hood myself, and there was nothing going on there.”

Falco is currently transitioning into the third and final year of his PhD within the Department of Urban Studies and Planning (DUSP) and is carrying out his research at the Computer Science and Artificial Intelligence Laboratory (CSAIL). His focus is cybersecurity for urban critical infrastructure, and the internet of things, or IoT, is at the center of his work. A washing machine, for example, that is connected to an app on its owner’s smartphone is considered part of the IoT. Billions of IoT devices lack traditional security software because they’re built with small amounts of memory and low-power processors. This makes them susceptible to cyberattacks and can give hackers a gateway to other devices on the same network.

Falco’s concentration is on industrial controls and embedded systems such as automatic switches found in subway systems.

“If someone decides to figure out how to access a switch by hacking another access point that is communicating with that switch, then that subway is not going to stop, and people are going to die,” Falco says. “We rely on these systems for our life functions — critical infrastructure like electric grids, water grids, or transportation systems, but also our health care systems. Insulin pumps, for example, are now connected to your smartphone.”

Citing real-world examples, Falco notes that Russian hackers were able to take down the Ukrainian capital city’s electric grid, and that Iranian hackers interfered with the computer-guided controls of a small dam in Rye Brook, New York.

Falco aims to help combat potential cyberattacks through his research. One arm of his dissertation, which he is working on with Professor Lawrence Susskind, a renowned expert in negotiation, looks at how best to negotiate with cyberterrorists. With CSAIL Principal Research Scientist Howard Shrobe, Falco also seeks to determine whether it is possible to predict which control-system vulnerabilities could be exploited in critical urban infrastructure. The final branch of Falco’s dissertation is a collaboration with NASA’s Jet Propulsion Laboratory, for which he has secured a contract to develop an artificial intelligence-powered automated attack generator that can identify all the possible ways someone could hack and destroy NASA’s systems.

“What I really intend to do for my PhD is something that is actionable to the communities I’m working with,” Falco says. “I don’t want to publish something in a book that will sit on a shelf where nobody would read it.”

$240 million investment in new lab with MIT to advance AI hardware, software, and algorithms

IBM and MIT today announced that IBM plans to make a 10-year, $240 million investment to create the MIT–IBM Watson AI Lab in partnership with MIT. The lab will carry out fundamental artificial intelligence (AI) research and seek to propel scientific breakthroughs that unlock the potential of AI. The collaboration aims to advance AI hardware, software, and algorithms related to deep learning and other areas; increase AI’s impact on industries, such as health care and cybersecurity; and explore the economic and ethical implications of AI on society. IBM’s $240 million investment in the lab will support research by IBM and MIT scientists.

The new lab will be one of the largest long-term university-industry AI collaborations to date, mobilizing the talent of more than 100 AI scientists, professors, and students to pursue joint research at IBM’s Research Lab in Cambridge, Massachusetts — co-located with the IBM Watson Health and IBM Security headquarters in Kendall Square — and on the neighboring MIT campus.

The lab will be co-chaired by Dario Gil, IBM Research VP of AI and IBM Q, and Anantha P. Chandrakasan, dean of MIT’s School of Engineering. (Read a related Q&A with Chandrakasan.) IBM and MIT plan to issue a call for proposals to MIT researchers and IBM scientists to submit their ideas for joint research to push the boundaries in AI science and technology in several areas, including:

AI algorithms: Developing advanced algorithms to expand capabilities in machine learning and reasoning. Researchers will create AI systems that move beyond specialized tasks to tackle more complex problems and benefit from robust, continuous learning. Researchers will invent new algorithms that can not only leverage big data when available, but also learn from limited data to augment human intelligence.

Physics of AI: Investigating new AI hardware materials, devices, and architectures that will support future analog computational approaches to AI model training and deployment, as well as the intersection of quantum computing and machine learning. The latter involves using AI to help characterize and improve quantum devices, and researching the use of quantum computing to optimize and speed up machine-learning algorithms and other AI applications.

Application of AI to industries: Given its location in IBM Watson Health and IBM Security headquarters in Kendall Square, a global hub of biomedical innovation, the lab will develop new applications of AI for professional use, including fields such as health care and cybersecurity. The collaboration will explore the use of AI in areas such as the security and privacy of medical data, personalization of health care, image analysis, and the optimum treatment paths for specific patients.

Advancing shared prosperity through AI: The MIT–IBM Watson AI Lab will explore how AI can deliver economic and societal benefits to a broader range of people, nations, and enterprises. The lab will study the economic implications of AI and investigate how AI can improve prosperity and help individuals achieve more in their lives.

In addition to IBM’s plan to produce innovations that advance the frontiers of AI, a distinct objective of the new lab is to encourage MIT faculty and students to launch companies that will focus on commercializing AI inventions and technologies that are developed at the lab. The lab’s scientists also will publish their work, contribute to the release of open source material, and foster an adherence to the ethical application of AI.

“The field of artificial intelligence has experienced incredible growth and progress over the past decade. Yet today’s AI systems, as remarkable as they are, will require new innovations to tackle increasingly difficult real-world problems to improve our work and lives,” says John Kelly III, IBM senior vice president, Cognitive Solutions and Research. “The extremely broad and deep technical capabilities and talent at MIT and IBM are unmatched, and will lead the field of AI for at least the next decade.”

Reducing power consumption of data center “caches” by 90 percent

Most modern websites store data in databases, and since database queries are relatively slow, most sites also maintain so-called cache servers, which store the results of common queries for faster access. A data center for a major web service such as Google or Facebook might have as many as 1,000 servers dedicated just to caching.
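To make the caching idea concrete, the sketch below shows the common cache-aside pattern such servers support: check the cache first, and query the database only on a miss. It is a minimal illustration in Python; the class, the key scheme, and the `database.execute` call are illustrative assumptions, not details of any system described in this article.

```python
# Minimal sketch of the cache-aside pattern a cache server supports.
# The CacheServer class and the database object are illustrative stand-ins.
import hashlib
import json

class CacheServer:
    """Stand-in for a memcached-style key-value cache (normally RAM-backed)."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)   # None signals a cache miss

    def set(self, key, value):
        self._store[key] = value

def cached_query(cache, database, sql, params):
    """Return a query result, consulting the cache before the database."""
    key = hashlib.sha1(json.dumps([sql, list(params)]).encode()).hexdigest()
    result = cache.get(key)
    if result is None:                       # miss: fall back to the slow database
        result = database.execute(sql, params)
        cache.set(key, result)               # save it for the next identical query
    return result
```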

Cache servers generally use random-access memory (RAM), which is fast but expensive and power-hungry. This week, at the International Conference on Very Large Databases, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are presenting a new system for data center caching that instead uses flash memory, the kind of memory used in most smartphones.

Per gigabyte of memory, flash consumes about 5 percent as much energy as RAM and costs about one-tenth as much. It also has about 100 times the storage density, meaning that more data can be crammed into a smaller space. In addition to costing less and consuming less power, a flash caching system could dramatically reduce the number of cache servers required by a data center.
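A rough back-of-the-envelope calculation, using only the ratios quoted above, shows why the savings add up. The RAM baseline figures below are assumptions chosen purely for illustration, not measurements from the paper.

```python
# Back-of-the-envelope comparison using the ratios quoted in the article.
# The baseline cache size, power draw, and price per gigabyte are assumed values.
CACHE_SIZE_GB = 100_000          # assumed total cache capacity for a large service
RAM_WATTS_PER_GB = 3.0           # assumed DRAM power draw per gigabyte
RAM_DOLLARS_PER_GB = 7.0         # assumed DRAM price per gigabyte

ram_power = CACHE_SIZE_GB * RAM_WATTS_PER_GB
ram_cost = CACHE_SIZE_GB * RAM_DOLLARS_PER_GB

flash_power = ram_power * 0.05   # flash uses about 5 percent of the energy per GB
flash_cost = ram_cost * 0.10     # and costs about one-tenth as much per GB

print(f"Power: {ram_power/1000:.0f} kW (RAM) vs {flash_power/1000:.1f} kW (flash)")
print(f"Cost:  ${ram_cost:,.0f} (RAM) vs ${flash_cost:,.0f} (flash)")
```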

The drawback to flash is that it’s much slower than RAM. “That’s where the disbelief comes in,” says Arvind, the Charles and Jennifer Johnson Professor in Computer Science and Engineering and senior author on the conference paper. “People say, ‘Really? You can do this with flash memory?’ Access time in flash is 10,000 times longer than in DRAM [dynamic RAM].”

But slow as it is relative to DRAM, flash access is still much faster than human reactions to new sensory stimuli. Users won’t notice the difference between a request that takes 0.0002 seconds to process — a typical round-trip travel time over the internet — and one that takes 0.0004 seconds because it involves a flash query.

Keeping pace

The more important concern is keeping up with the requests flooding the data center. The CSAIL researchers’ system, dubbed BlueCache, does that by using the common computer science technique of “pipelining.” Before a flash-based cache server returns the result of the first query to reach it, it can begin executing the next 10,000 queries. The first query might take 200 microseconds to process, but the responses to the succeeding ones will emerge at 0.02-microsecond intervals.
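The short calculation below illustrates why keeping thousands of requests in flight closes most of the speed gap, using the figures in the paragraph above. It is a simplified steady-state throughput model, not the BlueCache implementation.

```python
# Simplified model of pipelined flash reads, mirroring the numbers quoted above:
# each query takes ~200 microseconds, but with ~10,000 queries in flight the
# server completes one roughly every 0.02 microseconds.
LATENCY_US = 200        # time for one flash query, start to finish
IN_FLIGHT = 10_000      # queries the pipeline keeps outstanding at once

serial_interval = LATENCY_US                  # one result per 200 us, done one at a time
pipelined_interval = LATENCY_US / IN_FLIGHT   # one result per 0.02 us once the pipeline fills

print(f"Unpipelined: one result every {serial_interval} us "
      f"({1e6 / serial_interval:,.0f} results/second)")
print(f"Pipelined:   one result every {pipelined_interval} us "
      f"({1e6 / pipelined_interval:,.0f} results/second)")
```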

Even using pipelining, however, the CSAIL researchers had to deploy some clever engineering tricks to make flash caching competitive with DRAM caching. In tests, they compared BlueCache to what might be called the default implementation of a flash-based cache server, which is simply a data-center database server configured for caching. (Although slow compared to DRAM, flash is much faster than magnetic hard drives, which it has all but replaced in data centers.) BlueCache was 4.2 times as fast as the default implementation.

Joining Arvind on the paper are first author Shuotao Xu and his fellow MIT graduate student in electrical engineering and computer science Sang-Woo Jun; Ming Liu, who was an MIT graduate student when the work was done and is now at Microsoft Research; Sungjin Lee, an assistant professor of computer science and engineering at the Daegu Gyeongbuk Institute of Science and Technology in Korea, who worked on the project as a postdoc in Arvind’s lab; and Jamey Hicks, a freelance software architect and MIT affiliate who runs the software consultancy Accelerated Tech.

Pushing the boundaries of research on artificial intelligence

MIT and IBM jointly announced today a 10-year agreement to create the MIT–IBM Watson AI Lab, a new collaboration for research on the frontiers of artificial intelligence. Anantha Chandrakasan, the dean of MIT’s School of Engineering, who led MIT’s work in forging the agreement, sat down with MIT News to discuss the new lab.

Q: What does the new collaboration make possible?

A: AI is everywhere. It’s used in just about every domain you can think of and is central to diverse fields, from image and speech recognition, to machine learning for disease detection, to drug discovery, to financial modeling for global trade.

This new collaboration will bring together researchers working on the core algorithms and devices that make such applications possible, enabling the pursuit of jointly defined projects. We will focus on basic research and applications, but with new resources and colleagues and tremendous access to real-world data and computational power.

The project will support many different pursuits, from scholarship, to the licensing of technology, to the release of open-source material, to the creation of startups. We hope to use this new lab as a template for many other interactions with industry.

We’ll issue a call for proposals to all researchers at MIT soon; this new lab will hope to attract interest from all five schools. I’ll co-chair the lab alongside Dario Gil, IBM Research VP of AI and IBM Q, and Dario and I will name co-directors from MIT and IBM soon.

Q: What are the key areas of research that this lab will focus on?

A: The main areas of focus are AI algorithms, the application of AI to industries (such as biomedicine and cybersecurity), the physics of AI, and ways to use AI to advance shared prosperity.

The core AI theme will focus not only on advancing deep-learning algorithms and other approaches, but also on using AI to understand and enhance human intelligence. One of the goals is to build machine learning and AI systems that excel at both narrow tasks and the human skills of discovery and explanation. In terms of applications, there are some particular targets we have in mind, including being able to detect cancer (e.g., by using AI with imaging in radiology to automatically detect breast cancer) much earlier than we can now.

This new collaboration will also provide a framework for aggregating knowledge from different domains. For example, a method that we use for cancer detection might also be useful in detecting other diseases, or the tools we develop to enable this might end up being useful in a non-biomedical context.

The work on the physics of AI will include quantum computing and new kinds of materials, devices, and architectures that will support machine-learning hardware. This will require innovations not only in the way that we think about algorithms and systems, but also at the physical level of devices and materials at the nanoscale.

To that end, IBM will become a founding member of MIT.nano, our new nanotechnology research, fabrication, and imaging facility that is set to open in the summer of 2018.

Lastly, researchers will explore how AI can increase prosperity broadly. They will also develop approaches to mitigate data bias and to ensure that AI systems behave ethically when deployed.

Luqiao Liu’s lab synthesizes and tests manganese gallium samples

Assistant professor of electrical engineering Luqiao Liu is developing new magnetic materials, known as antiferromagnets, that can be operated at room temperature by reversing their electron spin and can serve as the basis for long-lasting, spintronic computer memory. Stephanie Bauman, an intern in the Materials Processing Center and Center for Materials Science and Engineering Summer Scholars program, spent her internship making and testing these new materials, which include manganese gallium samples.

“In our project we’re working on the area of spintronics, anti-ferromagnetic devices that switch electron spin controlled by a current,” said Bauman, a University of South Florida physics major. “I’m working with a lot of new equipment like the vibrating sample magnetometer and the sputterer to lay down thin films.”

“I’ve been working on a daily basis with Joe Finley, who is a graduate student here, and he’s been explaining a lot of things to me,” Bauman said. “It’s a very dense subject matter. And he does help me out a lot when we go to things like the X-ray diffraction room, and he shows me how to interpret the graphs to tell how thick each of the thin layers in the devices is. He’s really helpful and easy to work with.”

During a visit to the lab, where she synthesizes these thin films with a special machine called a sputter deposition chamber, Bauman said she always refers to a checklist to make sure she’s doing everything in the right order. To take a sample out of the machine, she follows a complicated set of steps, making sure its parts are correctly lined up and unhooking the sample holder in the main chamber. Because the chamber is kept under vacuum, she must bring it back to everyday atmospheric pressure before taking the sample out. “Now that I can see that it disengaged, I go ahead and move it all the way back up,” she said. With the sample holder on a moveable arm, she is able to rotate it out.

The sample moved across a gear arm out of the main chamber into a transfer chamber known as a load lock. “A very, very important part of this is to make sure you close the transfer valve again, otherwise you mess up the pressure in the main chamber,” she said. After double-checking that the transfer valve was closed, she brought the load lock back to sea-level pressure of 760 torr. Then she took out the sample holder.

“As you can see the sample is really tiny. It’s half a centimeter by half a centimeter, which is what we’re working with right now,” Bauman said. As she loosened the screws on the arms holding the sample in place, she noted that she had to be careful not to scratch the sample with the arms. Once it was safely removed, she placed the sample in a special holder labeled with the date it was made, its sample number for that day, and its thickness. That way, she noted, “we can refer back to that in our data so that we know what thickness levels that we’re testing.”

“Sometimes you end up playing tiddlywinks. I know that some younger people don’t really know what that game is, but it’s what it looks like when you push down on the arm, and the sample goes flying,” she cautioned.

Bringing optical communication onto silicon chips

The huge increase in computing performance in recent decades has been achieved by squeezing ever more transistors into a tighter space on microchips.

However, this downsizing has also meant packing the wiring within microprocessors ever more tightly together, leading to effects such as signal leakage between components, which can slow down communication between different parts of the chip. This delay, known as the “interconnect bottleneck,” is becoming an increasing problem in high-speed computing systems.

One way to tackle the interconnect bottleneck is to use light rather than wires to communicate between different parts of a microchip. This is no easy task, however, as silicon, the material used to build chips, does not emit light easily, according to Pablo Jarillo-Herrero, an associate professor of physics at MIT.

Now, in a paper published today in the journal Nature Nanotechnology, researchers describe a light emitter and detector that can be integrated into silicon CMOS chips. The paper’s first author is MIT postdoc Ya-Qing Bie, who is joined by Jarillo-Herrero and an interdisciplinary team including Dirk Englund, an associate professor of electrical engineering and computer science at MIT.

The device is built from a semiconductor material called molybdenum ditelluride. This ultrathin semiconductor belongs to an emerging group of materials known as two-dimensional transition-metal dichalcogenides.

Unlike conventional semiconductors, the material can be stacked on top of silicon wafers, Jarillo-Herrero says.

“Researchers have been trying to find materials that are compatible with silicon, in order to bring optoelectronics and optical communication on-chip, but so far this has proven very difficult,” Jarillo-Herrero says. “For example, gallium arsenide is very good for optics, but it cannot be grown on silicon very easily because the two semiconductors are incompatible.”

In contrast, the 2-D molybdenum ditelluride can be mechanically attached to any material, Jarillo-Herrero says.

Another difficulty with integrating other semiconductors with silicon is that the materials typically emit light in the visible range, but light at these wavelengths is simply absorbed by silicon.

Molybdenum ditelluride emits light in the infrared range, which is not absorbed by silicon, meaning it can be used for on-chip communication.

To use the material as a light emitter, the researchers first had to convert it into a P-N junction diode, a device in which one side, the P side, is positively charged, while the other, N side, is negatively charged.

Laboratory team scores big at international hacking event

They call themselves Lab RATs, in a nod to remote access trojans, which are malware that attempt to hijack a computer’s operations. Battling teams from around the world, a team of staff members from MIT Lincoln Laboratory’s Cyber Security and Information Sciences Division and Information Services Department made it all the way to the finals of this year’s DEF CON Capture the Flag (CTF) hacking competition.

The laboratory’s cyber researchers and analysts, joined by students from Rensselaer Polytechnic Institute and MIT, were pitted against other elite teams trying to breach each other’s computers and capture “flags” — which are actually code strings — embedded within the programming. Because DEF CON CTF is an attack-and-defend tournament, competitors not only had to infiltrate opponents’ systems to steal flags and earn points, they also accrued points by keeping their own services up and running against the onslaught of 14 other teams who came to DEF CON from Germany, Israel, Russia, China, Korea, and Hungary, as well as elsewhere in the U.S.

After the 52-hour contest was over, the Lab RATs had earned 10th place among the 15 teams that had qualified for the finals of DEF CON CTF, the world’s premier hacking competition. Teams chosen for the coveted finals slots emerged from more than 4,000 entrants who competed in qualifying events.

This year’s CTF was held in Las Vegas, and was part of the annual DEF CON hackers’ convention, which attracts not only amateur codebreakers but also cybersecurity professionals from academia, governments, and businesses worldwide.

This was the first year Lab RATs qualified for the finals of the competition, which they have entered for the past three years. The team meets and practices during non-work hours at the Beaver Works facility in Cambridge, Massachusetts, and membership fluctuates between 20 and 30 laboratory employees and six to eight MIT students.

“Participation in DEF CON CTF is realistic cybersecurity training,” says Lab RATs captain Andrew Fasano of the laboratory’s Cyber System Assessments Group. “You have to develop the tools and mindset to attack and defend computer systems in a high-pressure environment.”

This year’s DEF CON CTF competition was a humdinger, Fasano says. The Legitimate Business Syndicate, organizer of the 2017 CTF and a previous competitor at DEF CON CTF finals, was in the last year of a multiyear contract to devise the game and was determined to make its swan song an extreme challenge.

The Computer Science and Artificial Intelligence Laboratory could make it easier to telecommute to manufacturing jobs

Certain industries have traditionally not had the luxury of telecommuting. Many manufacturing jobs, for example, require a physical presence to operate machinery.

But what if such jobs could be done remotely? Last week researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) presented a virtual reality (VR) system that lets you teleoperate a robot using an Oculus Rift headset.

The system embeds the user in a VR control room with multiple sensor displays, making it feel like they’re inside the robot’s head. By using hand controllers, users can match their movements to the robot’s movements to complete various tasks.

“A system like this could eventually help humans supervise robots from a distance,” says CSAIL postdoc Jeffrey Lipton, who was the lead author on a related paper about the system. “By teleoperating robots from home, blue-collar workers would be able to tele-commute and benefit from the IT revolution just as white-collar workers do now.”

The researchers even imagine that such a system could help employ increasing numbers of jobless video-gamers by “gameifying” manufacturing positions.

The team used the Baxter humanoid robot from Rethink Robotics, but said that the system can work on other robot platforms and is also compatible with the HTC Vive headset.

Lipton co-wrote the paper with CSAIL Director Daniela Rus and researcher Aidan Fay. They presented the paper at the recent IEEE/RSJ International Conference on Intelligent Robots and Systems in Vancouver.

There have traditionally been two main approaches to using VR for teleoperation.

In a direct model, the user’s vision is directly coupled to the robot’s state. With these systems, a delayed signal could lead to nausea and headaches, and the user’s viewpoint is limited to one perspective.

In a cyber-physical model, the user is separate from the robot. The user interacts with a virtual copy of the robot and the environment. This requires much more data, and specialized spaces.

The CSAIL team’s system is halfway between these two methods. It solves the delay problem, since the user is constantly receiving visual feedback from the virtual world. It also solves the cyber-physical issue of being distinct from the robot: Once a user puts on the headset and logs into the system, they’ll feel as if they’re inside Baxter’s head.
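A minimal sketch of that kind of control loop appears below: the operator works inside a virtual control room, and only the resulting hand poses are mapped onto the robot. Every class and method name here is a hypothetical stand-in, not the CSAIL system’s actual API.

```python
# Illustrative sketch of a VR teleoperation loop in the spirit described above.
# The control_room and robot objects and their methods are hypothetical.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float

def scaled(p: Pose, k: float) -> Pose:
    """Scale a human-sized motion down to the robot's workspace."""
    return Pose(p.x * k, p.y * k, p.z * k)

def control_loop(control_room, robot, scale=0.5):
    """One operator-to-robot cycle, repeated for as long as the session runs."""
    while robot.is_connected():
        # The operator sees the robot's camera feeds on displays inside VR,
        # as if standing in a control room inside the robot's head.
        control_room.show_camera_feeds(robot.camera_frames())

        # Read the operator's hand-controller poses and mirror them on the robot.
        left, right = control_room.read_hand_controllers()
        robot.move_gripper("left", scaled(left, scale))
        robot.move_gripper("right", scaled(right, scale))
```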

Using computer science to reduce false positives and unnecessary surgeries

Every year 40,000 women die from breast cancer in the U.S. alone. When cancers are found early, they can often be cured. Mammograms are the best test available, but they’re still imperfect and often produce false positives that can lead to unnecessary biopsies and surgeries.

One common cause of false positives is so-called “high-risk” lesions that appear suspicious on mammograms and have abnormal cells when tested by needle biopsy. In this case, the patient typically undergoes surgery to have the lesion removed; however, the lesions turn out to be benign at surgery 90 percent of the time. This means that every year thousands of women go through painful, expensive, scar-inducing surgeries that weren’t even necessary.

How, then, can unnecessary surgeries be eliminated while still maintaining the important role of mammography in cancer detection? Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts General Hospital, and Harvard Medical School believe that the answer is to turn to artificial intelligence (AI).

As a first project to apply AI to improving detection and diagnosis, the teams collaborated to develop an AI system that uses machine learning to predict if a high-risk lesion identified on needle biopsy after a mammogram will upgrade to cancer at surgery.

When tested on 335 high-risk lesions, the model correctly diagnosed 97 percent of the breast cancers as malignant and reduced the number of benign surgeries by more than 30 percent compared to existing approaches.

“Because diagnostic tools are so inexact, there is an understandable tendency for doctors to over-screen for breast cancer,” says Regina Barzilay, MIT’s Delta Electronics Professor of Electrical Engineering and Computer Science and a breast cancer survivor herself. “When there’s this much uncertainty in data, machine learning is exactly the tool that we need to improve detection and prevent over-treatment.”

Trained on information about more than 600 existing high-risk lesions, the model looks for patterns among many different data elements that include demographics, family history, past biopsies, and pathology reports.
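A model of this general kind can be sketched in a few lines. The snippet below assumes scikit-learn, a hypothetical data file, and invented column names, and a random forest stands in for whatever learner the team actually used; it illustrates the approach rather than reproducing the published model.

```python
# Minimal sketch: a classifier trained on tabular lesion records to predict
# whether a high-risk lesion will upgrade to cancer at surgery.
# The file name, column names, and choice of random forest are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = pd.read_csv("high_risk_lesions.csv")          # hypothetical dataset
features = pd.get_dummies(
    data[["age", "family_history", "prior_biopsies", "lesion_type"]]
)
labels = data["upgraded_to_cancer"]                   # 1 if surgery found cancer

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, stratify=labels, random_state=0
)

model = RandomForestClassifier(n_estimators=500, class_weight="balanced")
model.fit(X_train, y_train)

# Rank lesions by predicted risk; low-risk cases become candidates for
# surveillance instead of surgery.
risk = model.predict_proba(X_test)[:, 1]
print("Predicted upgrade risk for held-out lesions:", risk[:10])
```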

“To our knowledge, this is the first study to apply machine learning to the task of distinguishing high-risk lesions that need surgery from those that don’t,” says collaborator Constance Lehman, professor at Harvard Medical School and chief of the Breast Imaging Division at MGH’s Department of Radiology. “We believe this could support women to make more informed decisions about their treatment, and that we could provide more targeted approaches to health care in general.”

The Institute has become one of the first universities to issue recipient-owned digital diplomas

In 1868, the fledgling Massachusetts Institute of Technology on Boylston Street awarded its first diplomas to 14 graduates. Since then, it has issued paper credentials to more than 207,000 undergraduate and graduate students in much the same way.

But this summer, as part of a pilot program, a cohort of 111 graduates became the first to have the option to receive their diplomas on their smartphones via an app, in addition to the traditional format. The pilot resulted from a partnership between the MIT Registrar’s Office and Learning Machine, a Cambridge, Massachusetts-based software development company.

The app is called Blockcerts Wallet, and it enables students to quickly and easily get a verifiable, tamper-proof version of their diploma that they can share with employers, schools, family, and friends. To ensure the security of the diploma, the pilot utilizes the same blockchain technology that powers the digital currency Bitcoin. It also integrates with MIT’s identity provider, Touchstone. And while digital credentials aren’t new — some schools and businesses are already touting their use of them — the MIT pilot is groundbreaking because it gives students autonomy over their own records.
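The core verification idea behind such blockchain-anchored credentials can be sketched briefly: hash the diploma when it is issued, record the hash on the blockchain, and let anyone re-hash the file later and compare. The snippet below is a deliberately simplified illustration; it omits the actual Blockcerts format (Merkle proofs, issuer signatures, revocation), and every name in it is hypothetical.

```python
# Simplified illustration of hash-based credential verification. This is not
# the Blockcerts implementation; it only shows the comparison at its heart.
import hashlib
import json

def credential_hash(credential: dict) -> str:
    """Hash the credential using a canonical JSON encoding."""
    canonical = json.dumps(credential, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify(credential: dict, anchored_hash: str) -> bool:
    """Re-hash the credential and compare it to the hash recorded on-chain."""
    return credential_hash(credential) == anchored_hash

# Hypothetical usage: the issuer publishes the hash in a blockchain transaction
# at graduation; an employer later re-computes it from the shared diploma file.
diploma = {"name": "Jane Doe", "degree": "SB, Computer Science", "year": 2017}
issued_hash = credential_hash(diploma)      # anchored on-chain by the issuer
print(verify(diploma, issued_hash))         # True if the file is untampered
```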

“From the beginning, one of our primary motivations has been to empower students to be the curators of their own credentials,” says Registrar and Senior Associate Dean Mary Callahan. “This pilot makes it possible for them to have ownership of their records and be able to share them in a secure way, with whomever they choose.”

The Institute is among the first universities to make the leap, says Chris Jagers, co-founder and CEO of Learning Machine.

“MIT has issued official records in a format that can exist even if the institution goes away, even if we go away as a vendor,” Jagers says. “People can own and use their official records, which is a fundamental shift.”