ACM/IFIP/USENIX Middleware 2017

Las Vegas, Nevada

Dec 11-15, 2017

Tutorials

Middleware 2017 will feature four exciting tutorials! These will be held before the main conference on December 11th (Monday) and 12th (Tuesday). The exact schedule will be posted soon.


High Performance Network Middleware with Intel DPDK and OpenNetVM

Tim Wood, George Washington University

  • Abstract: Traditional computer networks have been built from hardware middleboxes such as routers, firewalls, and switches. These devices can quickly process network packets, but they provide very little programmability since they are based on single-purpose hardware. Recent advances in Network Function Virtualization (NFV) allow these network components to instead be run as software on commodity servers. NFV allows high-performance packet processing software to be dynamically deployed and easily adjusted in response to changes in network workloads. This tutorial will provide a hands-on introduction to NFV and enabling technologies such as Intel's Data Plane Development Kit (DPDK) and the OpenNetVM NFV platform. Participants will learn how to deploy network services that intercept, monitor, and transform data as it flows through the network, opening new possibilities in middleware research. (A minimal DPDK receive-loop sketch follows this tutorial's description.)

  • What to expect:
    • Intro to Network Function Virtualization (NFV) and Software Defined Networking (SDN)
    • High-performance network software with Intel DPDK and kernel bypass
    • Hands-on: Intro to DPDK
    • Composing network functions into chains with OpenNetVM
    • Hands-on: Build a load balancer

  • Who can attend: This tutorial is open to anyone curious about how emerging technologies will allow middleware research to extend across both network and server infrastructure. Attendees should have basic C programming and Linux shell experience. No prior knowledge of NFV or SDN is needed. The tutorial will give an introduction to developing high-performance network middleware, and will include a hands-on component where participants build a middlebox such as a network load balancer and test it on the CloudLab.us testbed. Attendees will need a laptop with an SSH client.

  • Speaker bio:

    Timothy Wood is an associate professor in the Department of Computer Science at George Washington University. Before joining GW, he received a doctoral degree in computer science from the University of Massachusetts Amherst and a bachelor's degree in electrical and computer engineering from Rutgers University. His research studies how new virtualization technologies can provide application-agnostic tools that improve performance, efficiency, and reliability in cloud computing data centers and software-based networks. His PhD thesis received the UMass CS Outstanding Dissertation Award, his students have voted him CS Professor of the Year, and he has won three best paper awards, a Google Faculty Research Award, and an NSF CAREER Award.
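
  • Code sketch: a rough illustration of the poll-mode, kernel-bypass receive loop at the heart of DPDK applications, loosely modeled on DPDK's basic forwarding sample. The port number, pool sizes, and minimal port configuration below are assumptions, exact APIs vary slightly across DPDK releases, and error handling is omitted; this is a sketch, not tutorial material.

      /* Minimal DPDK-style receive loop (sketch; assumes a NIC bound to DPDK as port 0). */
      #include <stdint.h>
      #include <rte_eal.h>
      #include <rte_ethdev.h>
      #include <rte_lcore.h>
      #include <rte_mbuf.h>
      #include <rte_mempool.h>

      #define NUM_MBUFS  8191
      #define MBUF_CACHE 250
      #define RX_DESC    1024
      #define BURST_SIZE 32

      static struct rte_eth_conf port_conf;   /* zero-initialized: default port settings */

      int main(int argc, char **argv)
      {
          /* Initialize the Environment Abstraction Layer (hugepages, poll-mode drivers, cores). */
          if (rte_eal_init(argc, argv) < 0)
              return -1;

          /* Packet-buffer (mbuf) pool shared with the NIC via DMA. */
          struct rte_mempool *pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS,
                  MBUF_CACHE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());

          /* Configure port 0 with a single RX queue and start it. */
          uint16_t port = 0;
          rte_eth_dev_configure(port, 1, 0, &port_conf);
          rte_eth_rx_queue_setup(port, 0, RX_DESC, rte_eth_dev_socket_id(port), NULL, pool);
          rte_eth_dev_start(port);

          /* Busy-poll the NIC: received packets never traverse the kernel network stack. */
          struct rte_mbuf *bufs[BURST_SIZE];
          for (;;) {
              uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
              for (uint16_t i = 0; i < nb_rx; i++) {
                  /* A network function (firewall, load balancer, monitor, ...)
                   * would inspect or rewrite bufs[i] here before forwarding it. */
                  rte_pktmbuf_free(bufs[i]);
              }
          }
          return 0;
      }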


Istio service mesh for more robust, secure, and easy-to-manage microservices

Edward Snible, IBM T. J. Watson Research Center
Fabio Oliveira, IBM T. J. Watson Research Center

  • Abstract: Writing reliable, loosely coupled, production-grade applications based on microservices can be challenging. As monolithic applications are decomposed into microservices, software teams have to worry about the challenges inherent in integrating services in distributed systems: they must account for service discovery, load balancing, fault tolerance, end-to-end monitoring, dynamic routing for feature experimentation, and compliance and security. Inconsistent attempts at solving these challenges, cobbled together from libraries, scripts, and Stack Overflow snippets, lead to solutions that vary wildly across languages and runtimes and have poor observability.

    Google, IBM, and Lyft joined forces to create the microservice mesh called Istio with the goal of providing a reliable substrate for microservice development and maintenance. Istio brings SDN concepts to microservices by transparently injecting a layer 7 proxy (the data plane) between a service and the network, and by introducing a control plane to define and enforce policies. Istio provides fine-grained control and observability while freeing developers from the complexity of building distributed systems.

    This tutorial will start with a quick overview of common best practices for building robust microservices, their embodiment in popular software components, and finally the approach taken by Istio. We will then introduce key Istio concepts, discuss real-world use cases, and illustrate through a hands-on session how Istio enables traffic management, observability, and policy enforcement. We will end with a discussion of promising research directions, such as automated resiliency testing and tools and techniques for advanced DevOps. (A toy sketch of weighted traffic routing follows this tutorial's description.)

  • Who can attend: Attendees should have some familiarity with microservices, Docker containers, and Kubernetes. An account with an IBM Bluemix Kubernetes cluster is recommended for those who want to follow along with the hands-on portion in real time.

  • References: A key blog post related to Istio that describes its core concepts.

  • Speaker bios:

    Ed Snible is a software engineer at the IBM T. J. Watson Research Center and has contributed to more than ten US patents in the field of software deployment automation and verification. Ed is passionate about the user experience of the DevOps engineers who deploy and maintain systems. His research interests include autotuning microservices for robustness.




    Fabio Oliveira is a Research Staff Member at the IBM T. J. Watson Research Center. He received a Ph.D. degree in Computer Science from Rutgers University in 2010. His research interests include cloud computing, systems management, distributed systems, and operational and DevOps analytics. His current research focuses on deriving meaningful insights from large volumes of metrics and monitoring data generated by complex microservice architectures.
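
  • Code sketch: a toy model of the per-request weighted traffic splitting (e.g., 90% of requests to v1, 10% to a v2 canary) that an Istio route rule expresses declaratively. The service names and weights below are purely illustrative; in Istio itself this decision is made by the Envoy sidecar proxy and is driven by rules pushed from the control plane, not by application code.

      /* Toy weighted routing decision, mimicking what an Istio route rule declares. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <time.h>

      /* Pick a backend version for one request given the canary's traffic share (0-100). */
      static const char *route(int canary_percent)
      {
          return (rand() % 100 < canary_percent) ? "reviews-v2" : "reviews-v1";
      }

      int main(void)
      {
          srand((unsigned)time(NULL));

          /* Simulate 1000 requests flowing through the routing decision. */
          int to_canary = 0;
          for (int i = 0; i < 1000; i++)
              if (strcmp(route(10), "reviews-v2") == 0)
                  to_canary++;

          printf("canary (reviews-v2) received %d of 1000 requests (~10%% expected)\n", to_canary);
          return 0;
      }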


SMACK stack 101: Building Fast Data stacks

Jörg Schad, Mesosphere

  • Abstract: In this workshop, participants will build their own microservice application and connect it to a Fast Data Pipeline consisting of Apache Spark, Cassandra, and Kafka. This pipeline will then be deployed on top of DC/OS. We will also look into the operational aspects of updating, scaling, and monitoring such data pipelines. (A minimal Kafka producer sketch follows this tutorial's description.)

  • What to expect:
    • Best practices for setting up Fast Data Pipelines
    • The different components of the SMACK stack and their respective alternatives
    • How to deploy such a stack in an efficient and fault-tolerant way
    • How to keep such stacks running:
      • Monitoring
      • Upgrades
      • Debugging

  • Who can attend: Some knowledge of the big data ecosystem is recommended but not required. We will provide access to a DC/OS cluster, but each participant should bring a laptop with a way to SSH into a remote Linux system. Apache Mesos and DC/OS are used by companies such as Twitter, Apple, Netflix, and Verizon to build their cluster infrastructure. Fast Data stacks similar to the one we will build in this workshop are used, for example, by ESRI and Uber.

  • Speaker bio:

    Jörg Schad is a software engineer at Mesosphere in Hamburg. In his previous life, he implemented distributed and in-memory databases and conducted research in the Hadoop and cloud areas. His speaking experience includes various meetups, international conferences, and lecture halls.
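
  • Code sketch: a minimal producer showing how a microservice might feed events into the Kafka layer of such a pipeline, using the librdkafka C client (assumed to be installed). The broker address, topic name, and payload are placeholders; configuration, error checking, and the Spark/Cassandra side are omitted.

      /* Minimal Kafka producer (librdkafka); broker, topic, and payload are placeholders. */
      #include <stdio.h>
      #include <string.h>
      #include <librdkafka/rdkafka.h>

      int main(void)
      {
          char errstr[512];

          /* Point the client at the cluster's Kafka brokers (placeholder address). */
          rd_kafka_conf_t *conf = rd_kafka_conf_new();
          rd_kafka_conf_set(conf, "bootstrap.servers", "localhost:9092",
                            errstr, sizeof(errstr));

          rd_kafka_t *producer = rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr));
          if (!producer) {
              fprintf(stderr, "failed to create producer: %s\n", errstr);
              return 1;
          }

          /* Publish one event; downstream Spark jobs would consume this topic. */
          const char *payload = "{\"user\":42,\"action\":\"click\"}";
          rd_kafka_topic_t *topic = rd_kafka_topic_new(producer, "events", NULL);
          rd_kafka_produce(topic, RD_KAFKA_PARTITION_UA, RD_KAFKA_MSG_F_COPY,
                           (void *)payload, strlen(payload), NULL, 0, NULL);

          /* Wait for delivery before shutting down. */
          rd_kafka_flush(producer, 10 * 1000);
          rd_kafka_topic_destroy(topic);
          rd_kafka_destroy(producer);
          return 0;
      }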


Trusted Execution of Software using Intel SGX

Christof Fetzer, TU Dresden, Germany
Pascal Felber, University of Neuchatel, Switzerland
Ruediger Kapitza, TU Braunschweig, Germany
Peter Pietzuch, Imperial College London, United Kingdom

  • Abstract: In this tutorial, we will provide an overview of the recent support in Intel CPUs for trusted execution of security-sensitive software using Software Guard Extensions (SGX). We will introduce the key concepts behind trusted execution and Intel SGX, and discuss the systems/middleware support required to create trustworthy applications in otherwise untrusted environments. Our goal is to convey the opportunities and challenges that the widespread availability of trusted execution features and SGX will bring, and how it will affect the middleware software stack found in today's data centres and cloud environments. (A minimal SGX host-application sketch follows this tutorial's description.)

    Trusted hardware features have recently hit the mainstream through their inclusion in commodity Intel CPUs as part of the SGX extensions to the x86 instruction set architecture. In both research and industry, we therefore witness a rising interest in how hardware trust mechanisms such as Intel SGX can be used to engineer the next generation of trustworthy cloud applications.

  • What to expect: The tutorial will be structured around lectures on particular topics related to trusted hardware, potentially with some hands-on sessions on the use of Intel SGX.

  • Speaker bios:

    Rüdiger Kapitza studied Computer Science at FAU Erlangen-Nürnberg and the University of Freiburg. He obtained his PhD from FAU Erlangen-Nürnberg in 2007. In 2010, he was a visiting scientist at IBM Research Zurich. Since 2012, he has been a professor at TU Braunschweig, where he leads the distributed systems group at the Institute of Operating Systems and Computer Networks. His research interests can be broadly summarized as systems research targeting fault-tolerant and secure distributed systems, with a special focus on Byzantine fault tolerance and selected aspects of trusted execution. Currently, he investigates the use of hardware-based trusted execution support in the context of the EU project SERECA, which aims to improve the trustworthiness of cloud computing.



    Peter Pietzuch is a Full Professor at Imperial College London, where he leads the Large-scale Distributed Systems (LSDS) group (http://lsds.doc.ic.ac.uk) in the Department of Computing. His research focuses on the design and engineering of scalable, reliable, and secure large-scale software systems, with a particular interest in performance, data management, and security issues. He has published papers in premier international venues, including SIGMOD, VLDB, ICDE, OSDI, USENIX ATC, SoCC, ICDCS, CCS, CoNEXT, NSDI, and Middleware. Before joining Imperial College London, he was a post-doctoral fellow at Harvard University. He holds PhD and MA degrees from the University of Cambridge.



    Pascal Felber received his M.Sc. and Ph.D. degrees in Computer Science from the Swiss Federal Institute of Technology in 1994 and 1998. He then worked at Oracle Corporation and Bell Labs (Lucent Technologies) in the USA before joining Institut EURECOM in France. Since October 2004, he has been a Professor of Computer Science at the University of Neuchâtel, Switzerland, working in the field of dependable, concurrent, and distributed computing. He has published over 150 research papers in various journals and conferences.




    Christof Fetzer received his diploma in Computer Science from the University of Kaiserslautern, Germany (Dec. 1992) and his Ph.D. from UC San Diego (March 1997). Dr. Fetzer joined AT&T Labs Research in August 1999 and was a principal member of technical staff there until March 2004. Since April 2004, he has headed the endowed chair (Heinz-Nixdorf endowment) in Systems Engineering in the Computer Science Department at TU Dresden. He is the chair of the Distributed Systems Engineering International Masters Program at the Computer Science Department. Prof. Dr. Fetzer has published over 150 research papers in the field of dependable distributed systems, has been a member of more than 50 program committees, and has recently won four best paper awards (DEBS 2013, LISA 2013, SRDS 2014, EuroSys 2017); his PhD students have won two best student paper awards (IEEE Cloud 2014, DSN 2015) as well as the EuroSys Roger Needham Award 2014. He currently coordinates two EU H2020 projects: SERECA and SecureCloud.
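
  • Code sketch: a minimal untrusted host application that loads a signed enclave and calls into it, roughly following the structure of the Intel SGX SDK samples. The enclave binary name, the enclave_u.h header, and the ecall_sum ECALL are hypothetical; the SDK generates the real ECALL stubs from an EDL interface file, which is omitted here.

      /* Minimal SGX host application (sketch in the style of the Intel SGX SDK samples). */
      #include <stdio.h>
      #include <sgx_urts.h>      /* untrusted runtime: enclave creation/teardown */
      #include "enclave_u.h"     /* hypothetical: ECALL stubs generated from Enclave.edl */

      int main(void)
      {
          sgx_enclave_id_t eid = 0;
          sgx_launch_token_t token = {0};
          int token_updated = 0;

          /* Load the signed enclave; its code and data are shielded by the CPU
           * from the OS, the hypervisor, and every other process on the host. */
          sgx_status_t ret = sgx_create_enclave("enclave.signed.so", 1 /* debug */,
                                                &token, &token_updated, &eid, NULL);
          if (ret != SGX_SUCCESS) {
              fprintf(stderr, "sgx_create_enclave failed: 0x%x\n", (unsigned)ret);
              return 1;
          }

          /* ECALL: transition into the enclave and run ecall_sum (hypothetical)
           * entirely inside the protected memory region. */
          int sum = 0;
          ret = ecall_sum(eid, &sum, 2, 3);
          if (ret == SGX_SUCCESS)
              printf("enclave computed 2 + 3 = %d\n", sum);

          sgx_destroy_enclave(eid);
          return 0;
      }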