Archive 06.07.2017


China’s e-commerce dynamo JD makes deliveries via mobile robots

China’s second-biggest e-commerce company, JD.com (Alibaba is first), is testing mobile robots to make deliveries to its customers, and imagining a future with fully unmanned logistics systems.

 Story idea and images courtesy of RoboticsToday.com.au.

On the last day of a two-week-long shopping bonanza that recorded sales of around $13 billion, some deliveries were made using mobile robots designed by JD. It’s the first time that the company has used delivery robots in the field. The bots delivered packages to multiple Beijing university campuses such as Tsinghua University and Renmin University. 

JD has been testing delivery robots since November last year. At that time, the cost of a single robot was almost $88,000.

JD has been working on lowering the cost and increasing the robots’ capabilities since then. The white, four-wheeled UGVs can carry five packages at once and travel 13 miles on a charge. They can climb a 25° incline and find the shortest route from warehouse to destination.
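The article doesn’t say how JD’s robots plan those routes, but “shortest route from warehouse to destination” is the classic shortest-path problem. Below is a minimal sketch using Dijkstra’s algorithm on a hypothetical campus road graph; all node names and distances are invented for illustration.

```python
# Hypothetical route planner: Dijkstra's algorithm over a small road graph.
import heapq

def shortest_route(graph, start, goal):
    """graph: dict mapping node -> list of (neighbor, distance) edges.
    Returns the minimum-distance path from start to goal as a node list."""
    queue = [(0.0, start, [start])]   # (distance so far, node, path taken)
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (dist + edge, neighbor, path + [neighbor]))
    return None  # goal unreachable

# Invented example graph (distances in km):
campus = {
    "warehouse": [("gate", 0.5), ("ring_road", 0.8)],
    "gate": [("dormitory", 1.2)],
    "ring_road": [("dormitory", 0.4)],
}
print(shortest_route(campus, "warehouse", "dormitory"))
# ['warehouse', 'ring_road', 'dormitory']
```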

Once it reaches its destination, the robot sends a text message to notify the recipient of the delivery. Users can accept the delivery through face-recognition technology or by using a code.

The UGVs now cost $7,300 per unit, which JD figures can cut the cost of a delivery from just under $1 for a human courier to about 20 cents for a robot.

JD is also testing the world’s largest drone-delivery network, including flying drones carrying products weighing as much as 2,000 pounds.

“Our logistics systems can be unmanned and 100% automated in 5 to 8 years,” said Liu Qiangdong, JD’s chairman.

Simulated car demo using ROS Kinetic and Gazebo 8

By Tully Foote

We are excited to show off a simulation of a Prius in Mcity using ROS Kinetic and Gazebo 8. ROS enabled the simulation to be developed faster by using existing software and libraries. The vehicle’s throttle, brake, steering, and transmission are controlled by publishing to a ROS topic. All sensor data is published using ROS, and can be visualized with RViz.
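As a rough illustration of that interface, here is a minimal command-publisher sketch. The /prius topic name and the prius_msgs/Control message (with throttle, brake, steer and shift_gears fields) are assumptions drawn from the osrf/car_demo repository; check its README for the actual interface.

```python
#!/usr/bin/env python
# Sketch: stream throttle/brake/steering commands to the simulated Prius.
import rospy
from prius_msgs.msg import Control  # assumed message type from the demo package

rospy.init_node('prius_command_sketch')
pub = rospy.Publisher('/prius', Control, queue_size=1)  # assumed topic name
rate = rospy.Rate(10)  # publish commands at 10 Hz

while not rospy.is_shutdown():
    cmd = Control()
    cmd.throttle = 0.3                 # fraction of full throttle, 0.0 to 1.0
    cmd.brake = 0.0                    # fraction of full braking, 0.0 to 1.0
    cmd.steer = 0.1                    # normalized steering angle, -1.0 to 1.0
    cmd.shift_gears = Control.FORWARD  # assumed gear constant
    pub.publish(cmd)
    rate.sleep()
```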

We leveraged Gazebo’s capabilities to incorporate existing models and sensors. The world contains a new model of Mcity and a freeway interchange. There are also models from the Gazebo models repository, including dumpsters, traffic cones, and a gas station. On the vehicle itself there are a 16-beam lidar on the roof, 8 ultrasonic sensors, 4 cameras, and 2 planar lidars.

The simulation is open source and available on GitHub at osrf/car_demo. Try it out by installing nvidia-docker and pulling “osrf/car_demo” from Docker Hub. More information about building and running is available in the README in the source repository.

Talking Machines: Bias variance dilemma for humans and the arm farm, with Jeff Dean

In episode four of season three, Neil introduces us to the ideas behind the bias-variance dilemma (and how we can think about it in our daily lives). Plus, we answer a listener question about how to make sure your neural networks don’t get fooled. Our guest for this episode is Jeff Dean, Google Senior Fellow in the Research Group, where he leads the Google Brain project. We talk about a closet full of robot arms (the arm farm!), image recognition for diabetic retinopathy, and equality in data and the community.

Fun Fact: Geoff Hinton’s distant relative invented the word tesseract. (How cool is that. Seriously.)



See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

June 2017 fundings, acquisitions, and IPOs

June 2017 saw two robotics-related companies raise $50 million each, while 17 others raised a combined $248 million, for a monthly total of $348 million. Acquisitions also continued to be substantial, with SoftBank’s purchase of Google’s robotic properties Boston Dynamics and Schaft, plus two other acquisitions.

Fundings

  • Drive.ai raised $50 million in a Series B funding round, led by New Enterprise Associates, Inc. (NEA) with participation from GGV Capital and existing investors including Northern Light Venture Capital. Andrew Ng, who led AI projects at Baidu and Google (and is married to Drive.ai’s co-founder and president Carol Reiley), joined the board of directors and said:

    “The cutting-edge of autonomous driving has shifted squarely to deep learning. Even traditional autonomous driving teams have ‘sprinkled on’ some deep learning, but Drive.ai is at the forefront of leveraging deep learning to build a truly modern autonomous driving software stack.”

  • Aera Technology, renamed from FusionOps, a Silicon Valley software and AI provider, raised $50 million from New Enterprise Associates. Aera appears to be the first robotic process automation (RPA) provider whose software actuates in the physical world: Merck uses Aera to predict demand, determine product routing and interact with warehouse management systems to enact what’s needed.

    “The leap from transactional automation to cognitive automation is imminent and it will forever transform the way we work,” says Frederic Laluyaux, President and CEO of Aera. “At Aera, we deliver the technology that enables the Self-Driving Enterprise: a cognitive operating system that connects you with your business and autonomously orchestrates your operations.”

  • Swift Navigation, the San Francisco tech firm building centimeter-accurate GPS technology to power a world of autonomous vehicles, raised $34 million in a Series B financing round led by New Enterprise Associates (NEA), with participation from existing investors Eclipse and First Round Capital. Swift provides solutions to over 2,000 customers, including autonomous vehicles, precision agriculture, unmanned aerial vehicles (UAVs), robotics, maritime, transportation/logistics and outdoor industrial applications. By moving GPS positioning from custom hardware to a flexible software-based receiver, Swift Navigation delivers Real Time Kinematics (RTK) GPS (100 times more accurate than traditional GPS) at a fraction of the cost (about $2,000) of alternative RTK systems.
  • AeroFarms raised over $34 million of a $40 million Series D. The New Jersey-based indoor vertical farming startup now has nine operating indoor farms and grows leafy greens using aeroponics, misting the plants in a soil-free environment under LED lights and guided by growth algorithms. The round brings AeroFarms’ total fundraising to over $130 million since 2014, including a $40 million note from Goldman Sachs and Prudential.
  • Seven Dreamers Labs, a Tokyo startup commercializing the Laundroid laundry-folding robot, raised $22.8 million from KKR’s co-founders Henry Kravis and George Roberts, Chinese conglomerate Fosun International, and others. Laundroid is being developed with Panasonic and Daiwa House.
  • Bowery Farming, which raised $7.5 million earlier this year, raised an additional $20 million from General Catalyst, GGV Capital and GV (formerly Google Ventures). Bowery’s first indoor farm in Kearny, NJ, uses proprietary computer software, LED lighting and robotics to grow leafy greens without pesticides and with 95% less water than traditional agriculture.
  • Drone Racing League raised $20 million in a Series B investment round led by Sky, Liberty Media and Lux Capital, and new investors Allianz and World Wrestling Entertainment, plus existing investors Hearst Ventures, RSE Ventures, Lerer Hippeau Ventures, and Courtside Ventures.
  • Momentum Machines, the SF-based startup developing a hamburger-making robot, raised $18.4 million of a $21.8 million equity offering from existing investors Lemnos Labs, GV, K5 Ventures and Khosla Ventures. The company has been working on its first retail location since at least June of last year. There is still no scheduled opening date for the flagship, though it’s expected to be located in San Francisco’s South of Market neighborhood.
  • AEye, a startup developing solid-state LiDAR and other vision systems for self-driving cars, raised $16 million in a Series A round led by Kleiner Perkins Caufield & Byers, with participation from Airbus Ventures, Intel Capital, Tyche Partners and others.

    Luis Dussan, CEO of AEye, said: “The biggest bottleneck to the rollout of robotic vision solutions has been the industry’s inability to deliver a world-class perception layer. Quick, accurate, intelligent interpretation of the environment that leverages and extends the human experience is the Holy Grail, and that’s exactly what AEye intends to deliver.”

  • Snips, an NYC voice recognition AI startup, raised $13 million in a Series A round led by MAIF Avenir with PSIM Fund managed by Bpifrance, as well as previous investor Eniac Ventures and K-Fund 1 and Korelya Capital joining the round. Snips makes an on-device system that, the company claims, parses and understands voice queries better than Amazon’s Alexa.
  • Misty Robotics, a spin-out from Orbotix/Sphero, raised $11.5 million in Series A funding from Venrock, Foundry Group and others. Ian Bernstein, former Sphero co-founder and CTO, will be taking the role of Head of Product and is joined by five other autonomous robotics division team members. Misty Robotics will use its new capital to build out the team and accelerate product development. Sphero and Misty Robotics will have a close partnership and have signed co-marketing and co-development agreements.
  • Superflex, a spin-off from SRI, has raised $10.2 million in equity financing from 10 unnamed investors. Superflex is developing a powered suit that supports the wearer’s torso, hips and legs, designed for people with mobility difficulties and those working in physically demanding environments.
  • Nongtian Guanjia (FarmFriend), a Chinese drone/ag industry software startup, raised $7.36 million in a round led by Gobi Partners, with participation from existing investors GGV Capital, Shunwei Capital, the Zhen Fund and Yunqi Partners.
  • Carmera, an NYC-based auto tech startup, unstealthed this week with $6.4 million in funding led by Matrix Partners. The two-year-old company has been quietly collecting data for its 3D mapping solution, partnering with delivery fleets to install its sensor and data collection platform.
  • Cognata, an Israeli deep learning simulation startup, raised $5 million from Emerge, Maniv Mobility, and Airbus Ventures. Cognata recently launched a self-driving vehicle road-testing simulation package.

    “Every autonomous vehicle developer faces the same challenge—it is really hard to generate the numerous edge cases and the wide variety of real-world environments. Our simulation platform rapidly pumps out large volumes of rich training data to fuel these algorithms,” said Cognata’s Danny Atsmon.

  • SoftWear Automation, the Georgia Tech- and DARPA-sponsored startup developing sewing robots for apparel manufacturing, raised $4.5 million in a Series A round from CTW Venture Partners.
  • Knightscope, a startup developing robotic security technologies, raised $3 million from Konica Minolta. The capital is to be invested in Knightscope’s current Reg A+ “mini-IPO” offering of Series M Preferred Stock.
  • Multi Tower Co, a Danish medical device startup, raised around $1.12 million from a network of private and public investors, most notable of which were Syddansk Innovation, Rikkesege Invest, M. Blæsbjerg Holding and Dahl Gruppen Holding. The Multi Tower Robot, used to lift and move hospital patients, is being developed through Blue Ocean Robotics’ partnership program, RoBi-X, in a public-private partnership (PPP) between University Hospital Køge, Multi Tower Company and Blue Ocean Robotics.
  • Optimus Ride, an MIT spinoff developing self-driving technology, raised $1.1 million in financing from an undisclosed investor.

Acquisitions

  • SoftBank acquired Boston Dynamics and Schaft from Google’s parent company Alphabet for an undisclosed amount.
    • Boston Dynamics, a DARPA- and DoD-funded 25-year-old company, designs two- and four-legged robots for the military. Videos of BD’s robots WildCat, Big Dog, Cheetah and, most recently, Handle continue to be YouTube hits. Handle is a two-wheeled, four-legged hybrid robot that can stand, walk, run and roll at up to 9 mph.
    • Schaft, a Japanese participant in the DARPA Robotics Challenge, recently unveiled an updated version of its two-legged robot that can climb stairs, carry 125 pounds of payload, move in tight spaces and keep its balance throughout.
  • IPG Photonics, a laser component manufacturer/integrator of welding and laser-cutting systems, including robotic ones, acquired Innovative Laser Technologies, a Minnesota laser systems maker, for $40 million. 
  • Motivo Engineering, an engineering product developer, has acquired Robodondo, an ag tech integrator focused on food processing, for an undisclosed amount.

IPOs

  • None. Nada. Zip.

Peering into neural networks

Neural networks learn to perform computational tasks by analyzing large sets of training data. But once they’ve been trained, even their designers rarely have any idea what data elements they’re processing.
Image: Christine Daniloff/MIT

By Larry Hardesty

Neural networks, which learn to perform computational tasks by analyzing large sets of training data, are responsible for today’s best-performing artificial intelligence systems, from speech recognition systems, to automatic translators, to self-driving cars.

But neural nets are black boxes. Once they’ve been trained, even their designers rarely have any idea what they’re doing — what data elements they’re processing and how.

Two years ago, a team of computer-vision researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) described a method for peering into the black box of a neural net trained to identify visual scenes. The method provided some interesting insights, but it required data to be sent to human reviewers recruited through Amazon’s Mechanical Turk crowdsourcing service.

At this year’s Computer Vision and Pattern Recognition conference, CSAIL researchers will present a fully automated version of the same system. Where the previous paper reported the analysis of one type of neural network trained to perform one task, the new paper reports the analysis of four types of neural networks trained to perform more than 20 tasks, including recognizing scenes and objects, colorizing grey images, and solving puzzles. Some of the new networks are so large that analyzing any one of them would have been cost-prohibitive under the old method.

The researchers also conducted several sets of experiments on their networks that not only shed light on the nature of several computer-vision and computational-photography algorithms, but could also provide some evidence about the organization of the human brain.

Neural networks are so called because they loosely resemble the human nervous system, with large numbers of fairly simple but densely connected information-processing “nodes.” Like neurons, a neural net’s nodes receive information signals from their neighbors and then either “fire” — emitting their own signals — or don’t. And as with neurons, the strength of a node’s firing response can vary.
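A single node of the kind described here fits in a few lines of code. The weighted-sum-plus-threshold form below (a ReLU unit) is one common choice; the specific numbers are illustrative, not taken from the paper.

```python
import numpy as np

def node_response(inputs, weights, bias=0.0):
    """One network node: combine the neighbors' signals with learned
    weights, then fire with graded strength above zero or stay silent."""
    return max(0.0, float(np.dot(inputs, weights) + bias))

# Illustrative values: three incoming signals and their weights.
print(node_response(np.array([0.2, 0.9, 0.1]), np.array([1.5, -0.4, 2.0])))
```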

In both the new paper and the earlier one, the MIT researchers doctored neural networks trained to perform computer vision tasks so that they disclosed the strength with which individual nodes fired in response to different input images. Then they selected the 10 input images that provoked the strongest response from each node.
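That selection step is easy to sketch. Assume each node’s response to an image has already been reduced to a single firing strength (a pooling rule that is our assumption, not the paper’s exact recipe):

```python
import numpy as np

def top_images_per_node(strengths, k=10):
    """strengths: (n_images, n_nodes) array of each node's firing strength
    on each image. Returns a (k, n_nodes) array of image indices, the
    strongest-response image first for every node."""
    order = np.argsort(strengths, axis=0)  # ascending per node
    return order[-k:][::-1]                # keep the top k, strongest first

# Stand-in data: 500 images, 256 nodes.
strengths = np.random.rand(500, 256)
print(top_images_per_node(strengths).shape)  # (10, 256)
```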

In the earlier paper, the researchers sent the images to workers recruited through Mechanical Turk, who were asked to identify what the images had in common. In the new paper, they use a computer system instead.

“We catalogued 1,100 visual concepts — things like the color green, or a swirly texture, or wood material, or a human face, or a bicycle wheel, or a snowy mountaintop,” says David Bau, an MIT graduate student in electrical engineering and computer science and one of the paper’s two first authors. “We drew on several data sets that other people had developed, and merged them into a broadly and densely labeled data set of visual concepts. It’s got many, many labels, and for each label we know which pixels in which image correspond to that label.”

The paper’s other authors are Bolei Zhou, co-first author and fellow graduate student; Antonio Torralba, MIT professor of electrical engineering and computer science; Aude Oliva, CSAIL principal research scientist; and Aditya Khosla, who earned his PhD as a member of Torralba’s group and is now the chief technology officer of the medical-computing company PathAI.

The researchers also knew which pixels of which images corresponded to a given network node’s strongest responses. Today’s neural nets are organized into layers. Data are fed into the lowest layer, which processes them and passes them to the next layer, and so on. With visual data, the input images are broken into small chunks, and each chunk is fed to a separate input node.

For every strong response from a high-level node in one of their networks, the researchers could trace back the firing patterns that led to it, and thus identify the specific image pixels it was responding to. Because their system could frequently identify labels that corresponded to the precise pixel clusters that provoked a strong response from a given node, it could characterize the node’s behavior with great specificity.
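One way to express that characterization, in the spirit of the paper’s approach, is an intersection-over-union score between the pixels where a node fires strongly and the pixels labeled with a concept. The array shapes and the 99.5th-percentile firing threshold below are illustrative assumptions.

```python
import numpy as np

def node_concept_iou(activations, concept_masks, quantile=0.995):
    """activations: (n_images, H, W) maps of one node's response, upsampled
    to image resolution. concept_masks: (n_images, H, W) binary masks of a
    labeled concept's pixels. Returns the overlap between the node's
    strongest firings and the concept, pooled over the image set."""
    threshold = np.quantile(activations, quantile)  # keep only strong firings
    fired = activations > threshold
    intersection = np.logical_and(fired, concept_masks).sum()
    union = np.logical_or(fired, concept_masks).sum()
    return intersection / union if union else 0.0

# A node's label is then the concept whose masks it overlaps best:
# label = max(concepts, key=lambda c: node_concept_iou(maps, masks[c]))
```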

The researchers organized the visual concepts in their database into a hierarchy. Each level of the hierarchy incorporates concepts from the level below, beginning with colors and working upward through textures, materials, parts, objects, and scenes. Typically, lower layers of a neural network would fire in response to simpler visual properties — such as colors and textures — and higher layers would fire in response to more complex properties.

But the hierarchy also allowed the researchers to quantify the emphasis that networks trained to perform different tasks placed on different visual properties. For instance, a network trained to colorize black-and-white images devoted a large majority of its nodes to recognizing textures. Another network, when trained to track objects across several frames of video, devoted a higher percentage of its nodes to scene recognition than the same network did when trained to recognize scenes; in that case, many of its nodes were in fact dedicated to object detection.

One of the researchers’ experiments could conceivably shed light on a vexed question in neuroscience. Research involving human subjects with electrodes implanted in their brains to control severe neurological disorders has seemed to suggest that individual neurons in the brain fire in response to specific visual stimuli. This hypothesis, originally called the grandmother-neuron hypothesis, is more familiar to a recent generation of neuroscientists as the Jennifer-Aniston-neuron hypothesis, after the discovery that several neurological patients had neurons that appeared to respond only to depictions of particular Hollywood celebrities.

Many neuroscientists dispute this interpretation. They argue that shifting constellations of neurons, rather than individual neurons, anchor sensory discriminations in the brain. Thus, the so-called Jennifer Aniston neuron is merely one of many neurons that collectively fire in response to images of Jennifer Aniston. And it’s probably part of many other constellations that fire in response to stimuli that haven’t been tested yet.

Because their new analytic technique is fully automated, the MIT researchers were able to test whether something similar takes place in a neural network trained to recognize visual scenes. In addition to identifying individual network nodes that were tuned to particular visual concepts, they also considered randomly selected combinations of nodes. Combinations of nodes, however, picked out far fewer visual concepts than individual nodes did — roughly 80 percent fewer.
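A sketch of how that comparison might be scored: mix several nodes’ activation maps with random weights and rate the mixture against concept masks exactly as a single node would be rated. Shapes, weighting and threshold are again illustrative assumptions, not the paper’s protocol.

```python
import numpy as np

def mixture_concept_iou(node_maps, weights, concept_masks, quantile=0.995):
    """node_maps: (n_nodes, n_images, H, W) activation maps for a group of
    nodes; weights: (n_nodes,) random mixing weights; concept_masks:
    (n_images, H, W) binary masks for one concept."""
    mixed = np.tensordot(weights, node_maps, axes=1)  # weighted sum of maps
    fired = mixed > np.quantile(mixed, quantile)      # mixture's strongest responses
    intersection = np.logical_and(fired, concept_masks).sum()
    union = np.logical_or(fired, concept_masks).sum()
    return intersection / union if union else 0.0

# Drawing many random weight vectors and counting the concepts that clear
# an IoU cutoff reproduces the comparison: per the paper, mixtures matched
# roughly 80 percent fewer concepts than individual nodes did.
```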

“To my eye, this is suggesting that neural networks are actually trying to approximate getting a grandmother neuron,” Bau says. “They’re not trying to just smear the idea of grandmother all over the place. They’re trying to assign it to a neuron. It’s this interesting hint of this structure that most people don’t believe is that simple.”

The Drone Center’s Weekly Roundup: 7/3/17

The OR-3 autonomous security robot will begin patrolling parts of Dubai. Credit: Otsaw Digital

At the Center for the Study of the Drone

In a podcast at The Drone Radio Show, Arthur Holland Michel discusses the Center for the Study of the Drone’s recent research on local drone regulations, public safety drones, and legal incidents involving unmanned aircraft.

In a series of podcasts at the Center for a New American Security, Dan Gettinger discusses trends in drone proliferation and the U.S. policy on drone exports.

News

The U.S. Court of Appeals for the D.C. Circuit dismissed a lawsuit over the death of several civilians from a U.S. drone strike in Yemen, concurring with the decision of a lower court. In the decision, Judge Janice Rogers Brown argued that Congress had nevertheless failed in its oversight of the U.S. military. (The Hill)

Commentary, Analysis, and Art

At the Bulletin of the Atomic Scientists, Michael Horowitz argues that the Missile Technology Control Regime is poorly suited to manage international drone proliferation.

At War on the Rocks, Joe Chapa argues that debates over the ethics of drone strikes are often clouded by misconceptions.

At Phys.org, Julien Girault writes that Chinese drone maker DJI is looking at how its consumer drones can be applied to farming.

At IHS Jane’s Navy International, Anika Torruella looks at how the U.S. Navy is investing in unmanned and autonomous technologies.

Also at IHS Jane’s, Anika Torruella writes that the U.S. Navy does not plan to include large unmanned undersea vehicles as part of its 355-ship fleet goal.

At Defense One, Brett Velicovich looks at how consumer drones can easily be altered to carry a weapons payload.

At Aviation Week, James Drew considers how U.S. drone firm General Atomics is working to develop the next generation of drones.

At Popular Science, Kelsey D. Atherton looks at how legislation in California could prohibit drone-on-drone cage fights.

At the Charlotte Observer, Robin Hayes argues that Congress should not grant Amazon blanket permission to fly delivery drones.

At the MIT Technology Review, Bruce Y. Lee argues that though medicine-carrying drones may be expensive, they will save lives.

In a speech at the SMi Future Armoured Vehicles Weapon Systems conference in London, U.S. Marine Corps Colonel Jim Jenkins discussed the service’s desire to use small, cheap autonomous drones on the battlefield. (IHS Jane’s 360)

At the Conversation, Andres Guadamuz considers whether the works of robot artists should be protected by copyright.

Know Your Drone

A team at the MIT Computer Science and Artificial Intelligence Laboratory has built a multirotor drone that is also capable of driving around on wheels like a ground robot. (CNET)

Facebook conducted a test flight of its Aquila solar-powered Internet drone. (Fortune)

Meanwhile, China Aerospace Science and Technology Corporation conducted a 15-hour test flight of its Cai Hong solar-powered drone at an altitude of over 65,000 feet. (IHS Jane’s 360)

The Defense Advanced Research Projects Agency successfully tested autonomous quadcopters that were able to navigate a complex obstacle course without GPS. (Press Release)

French firm ECA group is modifying its IT180 helicopter drone for naval operations. (Press Release)

Italian firm Leonardo plans to debut its SD-150 rotary-wing military drone in the third quarter of 2017. (IHS Jane’s 360)

Researchers at MIT are developing a drone capable of remaining airborne for up to five days at a time. (TechCrunch)

Drones at Work

The government of Malawi and humanitarian agency Unicef have launched an air corridor to test drones for emergency response and medical deliveries. (BBC)

French police have begun using drones to search for migrants crossing the border with Italy. (The Telegraph)

Researchers from Missouri University have been testing drones to conduct inspections of water towers. (Missourian)

An Australian drug syndicate reportedly used aerial drones to run counter-surveillance on law enforcement officers during a failed bid to import cocaine into Melbourne. (BBC)

In a simulated exercise in New Jersey, first responders used a drone to provide temporary cell coverage to teams on the ground. (AUVSI)

The International Olympic Committee has announced that chipmaker Intel will provide drones for light shows at future Olympic games. (CNN)

The U.S. Air Force has performed its first combat mission with the new Block 5 variant of the MQ-9 Reaper. (UPI)

The police department in West Seneca, New York has acquired a drone. (WKBW)

Chinese logistics firm SF Express has obtained approval from the Chinese government to operate delivery drones over five towns in Eastern China. (GBTimes)

Portugal’s Air Traffic Accident Prevention and Investigation Office is leading an investigation into a number of close encounters between drones and manned aircraft in the country’s airspace. (AIN Online)

The U.S. Federal Aviation Administration and app company AirMap are developing a system that will automate low-altitude drone operation authorizations. (Drone360)

Police in Arizona arrested a man for allegedly flying a drone over a wildfire. (Associated Press)

Dubai’s police will deploy the Otsaw Digital O-R3, an autonomous security robot equipped with facial recognition software and a built-in drone, to patrol difficult-to-reach areas. (Washington Post)

The University of Southampton writes that Boaty McBoatface, an unmanned undersea vehicle, captured “unprecedented data” during its voyage to the Orkney Passage.

Five flights were diverted from Gatwick Airport when a drone was spotted flying nearby. (BBC)

Industry Intel

The U.S. Special Operations Command awarded Arcturus UAV a contract to compete in the selection of the Mid-Endurance Unmanned Aircraft System. AAI Corp. and Insitu are also competing. (DoD)

The U.S. Air Force awarded General Atomics Aeronautical a $27.6 million contract for the MQ-9 Gen 4 Predator primary datalink. (DoD)

The U.S. Army awarded AAI Corp. a $12 million contract modification for the Shadow v2 release 6 system baseline update. (DoD)

The U.S. Army awarded DBISP a $73,392 contract for 150 quadrotor drones made by DJI and other manufacturers. (FBO)

The Department of the Interior awarded NAYINTY3 a $7,742 contract for the Agisoft Photo Scan, computer software designed to process images from drones. (FBO)

The Federal Aviation Administration awarded Computer Sciences Corporation a $200,000 contract for work on drone registration. (USASpending)

The U.S. Navy awarded Hensel Phelps a $36 million contract to build a hangar for the MQ-4C Triton surveillance drone at Naval Station Mayport in Florida. (First Coast News)

The U.S. Navy awarded Kratos Defense & Security Solutions a $35 million contract for the BQM-177A target drones. (Military.com)

NATO awarded Leonardo a contract for logistic and support services for the Alliance Ground Surveillance system. (Shephard Media)

Clobotics, a Shanghai-based startup that develops artificial intelligence-equipped drones for infrastructure inspection, announced that it has raised $5 million in seed funding. (GeekWire)

AeroVironment’s stock fell despite fiscal fourth-quarter revenue surging to $124.4 million. (Motley Fool)

Ford is creating the Robotics and Artificial Intelligence Research team to study emerging technologies. (Ford Motor Company)

For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.

Building with robots and 3D printers: Construction of the DFAB HOUSE up and running

At the Empa and Eawag NEST building in Dübendorf, eight ETH Zurich professors, working within the Swiss National Centre of Competence in Research (NCCR) Digital Fabrication, are collaborating with business partners to build the three-storey DFAB HOUSE. It is the first building in the world to be designed, planned and built using predominantly digital processes.

Robots that build walls and 3D printers that print entire formworks for ceiling slabs – digital fabrication in architecture has developed rapidly in recent years. As part of the National Centre of Competence in Research (NCCR) Digital Fabrication, architects, robotics specialists, material scientists, structural engineers and sustainability experts from ETH Zurich have teamed up with business partners to put several new digital building technologies from the laboratory into practice. Construction is taking place at NEST, the modular research and innovation building that Empa and Eawag built on their campus in Dübendorf to test new building and energy technologies under real conditions. NEST offers a central support structure with three open platforms, where individual construction projects – known as innovation units – can be installed. Construction recently began on the DFAB HOUSE.

Digitally Designed, Planned and Built
The DFAB HOUSE is distinctive in that it was not only digitally designed and planned, but is also built using predominantly digital processes. With this pilot project, the ETH professors want to examine how digital technology can make construction more sustainable and efficient, and increase the design potential. The individual components were digitally coordinated based on the design and are manufactured directly from this data. The conventional planning phase is no longer needed. As of summer 2018, the three-storey building, with a floor space of 200 m², will serve as a residential and working space for Empa and Eawag guest researchers and partners of NEST.

Four New Building Methods Put to the Test
At the DFAB HOUSE, four construction methods are being transferred from research to architectural applications for the first time. Construction work began with the Mesh Mould technology, which received the Swiss Technology Award at the end of 2016. The result will be a double-curved load-bearing concrete wall that will shape the architecture of the open-plan living and working area on the ground floor. A “Smart Slab” will then be installed – a statically optimised and functionally integrated ceiling slab, for which the researchers used a large-format 3D sand printer to manufacture the formwork.

Smart Dynamic Casting technology is being used for the façade on the ground floor: the automated robotic slip-forming process can produce tailor-made concrete façade posts. The two upper floors, with individual rooms, are being prefabricated at ETH Zurich’s Robotic Fabrication Laboratory using spatial timber assemblies; cooperating robots will assemble the timber construction elements.

More Information in ETH Zurich Press Release and on Project Website
Detailed information about the building process, quotes as well as image and video material can be found in the extended press release by ETH Zurich. In addition, a project website for the DFAB HOUSE is currently in development and will soon be available at the following link: www.dfabhouse.ch. Until then, Empa’s website offers information about the project: https://www.empa.ch/web/nest/digital-fabrication

NCCR Investigators Involved with the DFAB HOUSE:
Prof. Matthias Kohler, Chair of Architecture and Digital Fabrication
Prof. Fabio Gramazio, Chair of Architecture and Digital Fabrication
Prof. Benjamin Dillenburger, Chair for Digital Building Technologies
Prof. Joseph Schwartz, Chair of Structural Design
Prof. Robert Flatt, Institute for Building Materials
Prof. Walter Kaufmann, Institute of Structural Engineering
Prof. Guillaume Habert, Institute of Construction & Infrastructure Management
Prof. Jonas Buchli, Institute of Robotics and Intelligent Systems
