Off-path Attacks on DNS Security
Amir Herzberg (Department of Computer Science, Bar Ilan University), Haya Shulman (Department of Computer Science, Bar Ilan University)

Abstract:
The Domain Name System (DNS) is critical to the operation of the Internet, and there have been extensive efforts to improve its security. Most notable are the DNS Security Extensions (DNSSEC), designed to ensure security even against Man-in-the-Middle attackers. We have investigated the security of DNS against the (weaker and more common) off-path attacker model. In this talk, we will present only some of our (off-path) attacks, specifically:
•  For the (currently typical) use of DNS without DNSSEC, we show effective DNS poisoning attacks which are deployable in typical network scenarios. This motivates deployment of DNSSEC (or comparable defenses).
•  We show off-path DNS poisoning attacks which take advantage of partial deployment scenarios of DNSSEC. Since partial deployment is essentially unavoidable, this motivates improvements to DNSSEC to prevent such attacks.
•  We show that fully-deployed DNSSEC is vulnerable to effective off-path Denial of Service (DoS) attacks.

Decompression-Free Inspection: DPI for Shared Dictionary Compression over HTTP
Anat Bremler-Barr (The Interdisciplinary Center Herzliya, Israel), Shimrit Tzur-David (The Interdisciplinary Center Herzliya, Israel), David Hay (The Hebrew University of Jerusalem, Israel), Yaron Koral (Tel Aviv University, Israel)

Abstract:
Deep Packet Inspection (DPI) is the most time- and resource-consuming procedure in contemporary security tools such as Network Intrusion Detection/Prevention Systems (NIDS/IPS), Web Application Firewalls (WAF), and Content Filtering Proxies. DPI consists of inspecting both the packet header and payload and alerting when signatures of malicious software appear in the traffic. These signatures are identified through pattern matching algorithms. Moreover, DPI and its corresponding pattern matching algorithms are also a crucial building block for other networking applications such as monitoring and HTTP load balancing. The portion of compressed traffic within overall Internet traffic is constantly increasing. This paper focuses on traffic compressed using Shared Dictionary Compression over HTTP (SDCH), a compression algorithm proposed by Google Inc. in 2008. Unlike traditional compression algorithms, SDCH takes advantage of inter-response redundancy (i.e., almost the same data being sent over and over again), which is typical of today's dynamic data. Currently, 22% of HTTP clients support SDCH, and this number is expected to grow substantially. SDCH works well with other compression algorithms (such as GZip), making it even more appealing. On the other hand, performing DPI on any compressed traffic is considered hard; therefore, today's security tools either do not inspect compressed data, alter HTTP headers to avoid compression, or decompress the traffic before inspecting it. We present a novel pattern matching algorithm that inspects SDCH-compressed traffic without decompressing it first. Our algorithm relies on inspecting the shared dictionary, which is common to all compressed traffic, offline, and marking auxiliary information on it to speed up the online DPI inspection. We show that our algorithm works at nearly the rate of the compressed traffic, implying a speed gain of SDCH's compression ratio (which is around 40%). We also discuss how to implement our algorithm in an entire system: how the NIDS should obtain the shared dictionary, how to deal with SDCH compression over GZip compression, and how to support regular expression matching.
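
For illustration, the decompression-free idea can be sketched as follows (a greatly simplified model of ours, not the paper's Aho-Corasick-based algorithm): an SDCH response is viewed as a stream of ADD (literal bytes) and COPY (dictionary offset, length) instructions; pattern occurrences that fall entirely inside copied dictionary regions are reported from offsets precomputed offline on the shared dictionary, and only literals and short windows around instruction boundaries are scanned online. The helper names are ours, and patterns are assumed to be plain byte strings of length at least two.

    def precompute_dictionary_offsets(dictionary, patterns):
        """Offline step: record every occurrence of every pattern in the shared dictionary."""
        occ = {}
        for p in patterns:
            occ[p] = []
            start = dictionary.find(p)
            while start != -1:
                occ[p].append(start)
                start = dictionary.find(p, start + 1)
        return occ

    def count_new_matches(window, patterns, new_start, start_before=None):
        """Count matches in `window` that end after position `new_start` (i.e., inside newly
        produced bytes); optionally count only matches that start before `start_before`
        (boundary-crossing matches)."""
        total = 0
        for p in patterns:
            s = window.find(p)
            while s != -1:
                if s + len(p) > new_start and (start_before is None or s < start_before):
                    total += 1
                s = window.find(p, s + 1)
        return total

    def inspect(stream, dictionary, patterns, occ):
        max_len = max(len(p) for p in patterns)
        tail = b""                    # last max_len-1 bytes of the (never materialized) output
        total = 0
        for instr in stream:
            if instr[0] == "ADD":                          # literal bytes: scan online
                chunk = instr[1]
                total += count_new_matches(tail + chunk, patterns, len(tail))
                tail = (tail + chunk)[-(max_len - 1):]
            else:                                          # ("COPY", offset, length)
                _, offset, length = instr
                for p in patterns:                         # fully inside the copy: offline marks
                    total += sum(1 for o in occ[p]
                                 if offset <= o and o + len(p) <= offset + length)
                prefix = dictionary[offset:offset + min(length, max_len - 1)]
                total += count_new_matches(tail + prefix, patterns, len(tail),
                                           start_before=len(tail))
                suffix = dictionary[max(offset, offset + length - (max_len - 1)):offset + length]
                tail = (tail + suffix)[-(max_len - 1):]
        return total

    # Example: the signature is detected although the copied bytes are never scanned in full.
    shared_dict = b"... malware attack signature corpus shared by server and clients ..."
    occ = precompute_dictionary_offsets(shared_dict, [b"attack"])
    stream = [("ADD", b"HTTP/1.1 200 OK "), ("COPY", shared_dict.find(b"attack"), 16)]
    assert inspect(stream, shared_dict, [b"attack"], occ) == 1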

MCA2: Multi Core Architecture for Mitigating Complexity Attacks
Yehuda Afek (Tel Aviv University), Anat Bremler-Barr (Interdisciplinary Center Herzliya), David Hay (Hebrew University), Yaron Koral (Tel Aviv University), Yotam Harchol (Hebrew University)

Abstract:
In this work we take advantage of emerging multi-core computers to design a general architecture for mitigating network-based complexity attacks. In complexity attacks, an attacker carefully crafts messages such that each consumes substantially more resources than a normal, average message. It then sends enough such heavy messages to bring the system, at best, to a crawl. In our architecture, called MCA2 (Multi-Core Architecture for Mitigating Complexity Attacks), cores quickly identify suspicious heavy messages and divert them to a fraction of the cores that, under attack, are dedicated to handling all the heavy messages, keeping the rest of the cores relatively unaffected and free to give the normal, legitimate traffic the good service it is supposed to get. To demonstrate the effectiveness of our method, we examine the cache-miss complexity attack against the Deep Packet Inspection (DPI) engine of SNORT. An attack in which 30% of the packets are carefully crafted heavy packets brings the system throughput down by more than 50%, while with MCA2 the system throughput drops by about 20% if we do not drop the heavy packets and by 10% with dropping (at 60% heavy packets the corresponding numbers are 70%, 40% and 23%).
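
A minimal sketch of the dispatching idea follows (our illustration; the heaviness test, pool split and drop policy in the paper are more involved): messages flagged as suspiciously heavy are diverted to a small dedicated set of per-core queues, so the cores serving normal traffic remain unaffected. HEAVY_THRESHOLD and the predicted_cost field are placeholders.

    from queue import Queue

    HEAVY_THRESHOLD = 10_000    # placeholder heaviness test, e.g. predicted cache misses per packet

    def looks_heavy(msg):
        return msg.get("predicted_cost", 0) > HEAVY_THRESHOLD

    def dispatch(messages, normal_queues, heavy_queues):
        """Divert suspicious heavy messages to a few dedicated per-core queues and spread
        normal traffic round-robin over the remaining cores."""
        i = j = 0
        for msg in messages:
            if looks_heavy(msg):
                heavy_queues[j % len(heavy_queues)].put(msg)
                j += 1
            else:
                normal_queues[i % len(normal_queues)].put(msg)
                i += 1

    # Example: 6 of 8 cores serve normal traffic, 2 cores absorb all heavy messages.
    normal_queues = [Queue() for _ in range(6)]
    heavy_queues = [Queue() for _ in range(2)]
    dispatch([{"predicted_cost": 50}, {"predicted_cost": 50_000}], normal_queues, heavy_queues)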

Max Percentile Replication for Peer-based VoD Systems
Yuval Rochman (Computer Science, Tel-Aviv University, Israel), Hanoch Levy (Computer Science, Tel-Aviv University, Israel), Eli Brosh (Vidyo, NJ, USA)

Abstract:
Peer-to-peer based (P2P) VoD systems have proven to be an effective solution for scalable video distribution. In P2P VoD, each peer contributes storage to replicate videos and assist video delivery. A fundamental question is how to optimally replicate video content across the peers so as to maximize their upload capacity. We study this question within the context of a large-scale P2P network where peers are grouped into different geographical regions, and downloading a video across regions is more expensive than within a region. Our analysis addresses the combined challenge of (1) optimizing the replica allocation (placement) with respect to an arbitrary stochastic demand distribution, and (2) finding an optimal assignment of video requests to peers. Our main result is that optimal replica placement in single- and multi-region environments is of max percentile nature. We derive optimal algorithms and show that they have low complexity and are thus very practical. We use numerical analysis and simulation to evaluate the system performance and study its behavior. Our results can be used to provide valuable insights on the design of P2P VoD systems.

Distributed Average Consensus and Coherence in Dynamic Networks
Stacy Patterson (Department of Electrical Engineering, Technion - Israel Institute of Technology), Bassam Bamieh (Department of Mechanical Engineering, University of California, Santa Barbara), Mihailo R. Jovanovic (Department of Electrical and Computer Engineering, University of Minnesota), Partha R. Mitra (Cold Spring Harbor Laboratory)

Abstract:
In the distributed average consensus problem, each node in a network has an initial value, and the objective is for all nodes to reach consensus at the average of these values using only communication with nearby nodes. Distributed average consensus algorithms have a wide variety of applications, including distributed optimization, sensor fusion, load balancing, and autonomous vehicle formation control. This talk centers on the analysis of distributed averaging algorithms for consensus and vehicle formation control in networks with dynamic characteristics such as stochastic packet loss, node failures, network partitions, and additive disturbances.  I will present an overview of distributed averaging algorithms and some recent results on the stability and robustness of these algorithms in dynamic networks.  We analyze the relationship between the algorithm performance and the size, dimension, and dynamic characteristics of the network, and we show that network dimension imposes fundamental limitations on performance in large networks.
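
For orientation, a standard discrete-time averaging update of the kind analyzed in this line of work (a generic textbook form rather than a result of the talk) is

    x_i(t+1) \;=\; x_i(t) \;+\; \epsilon \sum_{j \in N_i} \bigl( x_j(t) - x_i(t) \bigr),

where N_i is the set of neighbors of node i and \epsilon is a step size; on a connected network with a suitably small \epsilon, every x_i(t) converges to the average (1/n) \sum_j x_j(0). The dynamic settings above perturb exactly this kind of iteration with packet loss, failures, partitions, and additive disturbances.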

Network Coded Gossip with Correlated Data
Bernhard Haeupler (MIT), Asaf Cohen (Ben-Gurion University), Chen Avin (Ben-Gurion University), Muriel Médard (MIT)

Abstract:
We design and analyze gossip algorithms for networks with correlated data. In these networks, either the data to be distributed, the data already available at the nodes, or both, are correlated. This model is applicable for a variety of modern networks, such as sensor, peer-to-peer and content distribution networks. Although coding schemes for correlated data have been studied extensively, the focus has been on characterizing the rate region in static memory-free networks. In a gossip-based scheme, however, nodes communicate among each other by continuously exchanging packets according to some underlying communication model. The main figure of merit in this setting is the stopping time – the time required until nodes can successfully decode. While gossip schemes are practical, distributed and scalable, they have only been studied for uncorrelated data. We wish to close this gap by providing techniques to analyze network coded gossip in (dynamic) networks with correlated data. We give a clean framework for oblivious network models that applies to a multitude of network and communication scenarios, specify a general setting for distributed correlated data, and give tight bounds on the stopping times of network coded protocols in this wide range of scenarios.

Priority Based Enhancement of Online Power-Aware Routing in Wireless Sensor Network
Ronit Nossenson (Faculty of Computer Science, Jerusalem College of Technology, Israel)

Abstract:
We propose a new topology-based priority enhancement of online power-aware routing algorithms in wireless sensor networks. The priority assignment of nodes is derived from the network connectivity model: nodes that are critical to the network's connectivity structure are marked as high priority and route less traffic, so as to prolong the system lifetime.
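
For illustration, one natural way to identify nodes that are critical to the network's connectivity structure is articulation-point detection; the sketch below is our own example of such a priority rule and not necessarily the exact criterion derived from the connectivity model in the paper.

    def articulation_points(graph):
        """graph: dict mapping each node to an iterable of its neighbours (undirected)."""
        disc, low, points = {}, {}, set()
        timer = [0]

        def dfs(u, parent):
            disc[u] = low[u] = timer[0]
            timer[0] += 1
            children = 0
            for v in graph[u]:
                if v == parent:
                    continue
                if v in disc:
                    low[u] = min(low[u], disc[v])
                else:
                    children += 1
                    dfs(v, u)
                    low[u] = min(low[u], low[v])
                    if parent is not None and low[v] >= disc[u]:
                        points.add(u)          # u separates v's subtree from the rest
            if parent is None and children > 1:
                points.add(u)                  # DFS root with several subtrees

        for node in graph:
            if node not in disc:
                dfs(node, None)
        return points

    # Example: nodes 'b' and 'c' bridge parts of the sensor network and get high priority.
    topology = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd', 'e'], 'd': ['c', 'e'], 'e': ['c', 'd']}
    priority = {v: 'high' if v in articulation_points(topology) else 'normal' for v in topology}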

A priority queue powering technique
Yehuda Afek (Tel-Aviv University), Anat Bremler-Barr (The Interdisciplinary Center Herzliya), Liron Schiff (Tel-Aviv University)

Abstract:
Priority queues are the key and bottleneck element in implementing fair queueing with perfect fairness when dealing with millions of concurrent connections at high line rates. Here we present the Power Priority Queue, a priority-queue powering technique in which an n-element priority queue is constructed from 3 priority queues of square-root(n) entries each (or, alternatively, from 5 priority queues of n^(1/3) entries each), with the throughput of the construction equaling or beating that of a single size-n priority queue. Applying our powering technique to a TCAM-based priority queue results in scalable, perfect line-rate fair queueing of millions of concurrent connections at speeds of 100 Gbps. It sequentially accesses small TCAMs (1 Mb and below) at most 3 times, and the RAM a small constant number of times, per insert or dequeue operation. Additionally, our technique yields an O(n) sorting algorithm in a system equipped with an O(w·square-root(n))-entry TCAM, where n is the number of items and 2^w is the item space, improving on a previous result that used an O(n)-entry TCAM. Our technique opens new applications for TCAM memories.
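
The TCAM-based construction itself is beyond the scope of an abstract; as a software analogue of composing one large priority queue out of square-root(n)-sized pieces, the following two-level sketch (our simplification, assuming non-negative integer keys from a known key space, not the Power Priority Queue itself) keeps about square-root(n) buckets of about square-root(n) entries each plus a small top-level queue of bucket minima.

    import heapq, math

    class TwoLevelPQ:
        """Illustrative two-level min-priority queue: items are spread over ~sqrt(n) buckets,
        each a small heap, with a top-level heap advertising the minimum of every bucket."""

        def __init__(self, key_space, n):
            self.num_buckets = max(1, math.isqrt(n))
            self.bucket_width = max(1, (key_space + self.num_buckets - 1) // self.num_buckets)
            self.buckets = [[] for _ in range(self.num_buckets)]   # small per-bucket heaps
            self.top = []                                          # heap of (min_key, bucket)

        def insert(self, key):
            b = min(key // self.bucket_width, self.num_buckets - 1)
            if not self.buckets[b] or key < self.buckets[b][0]:
                heapq.heappush(self.top, (key, b))                 # advertise new bucket minimum
            heapq.heappush(self.buckets[b], key)

        def pop_min(self):
            while self.top:
                key, b = heapq.heappop(self.top)
                if self.buckets[b] and self.buckets[b][0] == key:  # skip stale advertisements
                    heapq.heappop(self.buckets[b])
                    if self.buckets[b]:
                        heapq.heappush(self.top, (self.buckets[b][0], b))
                    return key
            raise IndexError("pop from empty priority queue")

    # Example usage.
    pq = TwoLevelPQ(key_space=1_000_000, n=10_000)
    for k in (42, 7, 100_000, 7):
        pq.insert(k)
    assert [pq.pop_min() for _ in range(4)] == [7, 7, 42, 100_000]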

How Good is Bargained Routing?
Gideon Blocq (Department of Electrical Engineering, Technion, Israel Institute of Technology), Ariel Orda (Department of Electrical Engineering, Technion, Israel Institute of Technology)

Abstract:
Game theoretic models have been widely employed in many networking contexts. Research to date has mainly focused on non-cooperative networking games, where the selfish agents cannot reach a binding agreement on the way they would share the network infrastructure and the operating points are the Nash equilibria. These are typically inefficient, as manifested by large values of the Price of Anarchy (PoA). Many approaches have been proposed for mitigating this problem, however, under the standing assumption of a non-cooperative game. In a growing number of networking scenarios it is possible for the selfish agents to communicate and reach an agreement, i.e., play a cooperative game. Therefore, the degradation of performance should be considered at an operating point that is a cooperative game solution. Accordingly, our goal is to lay foundations for the application of cooperative game theory to fundamental problems in networking. We explain our choice of the Nash Bargaining Scheme (NBS) as the solution concept, and we introduce the Price of Selfishness (PoS), which considers the degradation of performance at an NBS. We focus on the fundamental load balancing game of routing over parallel links. First, we study the classical scenario of agents that consider the same performance objectives. While the PoA here can be very large, we establish that, under plausible assumptions, the PoS attains its minimum value, i.e., through bargaining, the selfish agents reach social optimality. We then extend our study to consider the heterogeneous case, where agents may consider vastly different performance objectives. We demonstrate that the PoS and PoA can be unbounded, yet we explain why both measures may now be unsuitable. Accordingly, we introduce the Price of Heterogeneity (PoH), as a proper extension of the PoA. We establish an upper bound on the PoH for a general class of heterogeneous performance objectives, and indicate that it provides incentives for bargaining also in this general case. We discuss network design guidelines that follow from our findings.
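
In the spirit of these measures (our paraphrase of the standard PoA definition and of the PoS as described above), with C(·) the social cost, OPT the social optimum and NBS the Nash Bargaining Scheme operating point:

    \mathrm{PoA} \;=\; \frac{\max_{N \in \text{Nash equilibria}} C(N)}{C(\mathrm{OPT})}, \qquad
    \mathrm{PoS} \;=\; \frac{C(\mathrm{NBS})}{C(\mathrm{OPT})}.

Both ratios are at least 1; the first result above says that, under plausible assumptions, the PoS equals 1 for homogeneous agents even when the PoA is very large.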

High Performance Exact Match Lookup Algorithm for Switching & Router Devices
Carmi Arad (Marvell Israel), Gil Levy (Marvell Israel)

Abstract:
Network devices commonly use exact match lookups for packet classification functions such as bridging, routing, access control lists and quality of service. An exact match lookup should support the network's bandwidth requirements while using memory efficiently. The common ways to measure the efficiency of an exact match technology for a given memory size are "First Miss" and "Table Utilization". The exact match memory consists of exact match entries that are compared with predefined fields extracted from the arriving packets. The memory utilization measurement is done by additive insertion of entries, starting from an empty memory. "First Miss" is defined as the number of successful insertions until the first insertion fails. "Table Utilization" is defined as the number of successful insertions out of N tries, where N is the maximum memory size in entries. Memory technologies commonly used for exact match lookup are BCAM and algorithmic lookup. BCAM (binary content-addressable memory) technology achieves optimal utilization for "First Miss" and "Table Utilization", but it is not scalable due to its silicon die size requirements. The suggested lookup algorithm is an n-way associative exact match lookup based on multiple, orthogonal hash functions. The exact match memory structure consists of N lines, n ways (columns) and n orthogonal hash functions. Insertion is done by applying the n hash functions in parallel and selecting an empty position in the most populated way. The search function performs a single access to each way and compares all n ways in parallel. The number of ways n is a parameter that affects the memory size and memory utilization. Based on multiple simulations, we can choose an adequate n that achieves the required memory utilization at "First Miss" with a smaller memory die size than BCAM. The multiple hash function approach provides high memory utilization, flexible table size selection, scalability and a simple way to use external memory extension.
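
A minimal software sketch of the structure described above (our illustration of the n-way, multiple-hash idea, not Marvell's implementation) follows; each way gets its own hash function, insertion probes all ways in parallel and uses a free candidate slot, and search compares the candidate slot of every way. For simplicity, the sketch takes the first free candidate rather than preferring the most populated way.

    import hashlib

    class MultiHashExactMatch:
        """n-way associative exact-match table with one independent hash function per way."""

        def __init__(self, lines, ways):
            self.lines = lines
            self.ways = ways
            self.table = [[None] * lines for _ in range(ways)]    # table[way][line]

        def _hash(self, way, key):
            # One independent ("orthogonal") hash function per way, derived by salting.
            digest = hashlib.blake2b(key, salt=bytes([way])).digest()
            return int.from_bytes(digest[:8], 'little') % self.lines

        def insert(self, key):
            candidates = [(way, self._hash(way, key)) for way in range(self.ways)]
            empty = [(way, line) for way, line in candidates if self.table[way][line] is None]
            if not empty:
                return False                    # insertion failure ("miss")
            way, line = empty[0]
            self.table[way][line] = key
            return True

        def lookup(self, key):
            # A single access per way; all ways are compared (conceptually in parallel).
            return any(self.table[way][self._hash(way, key)] == key
                       for way in range(self.ways))

    # Example: store and look up a MAC address for bridging.
    table = MultiHashExactMatch(lines=1024, ways=4)
    table.insert(b"00:11:22:33:44:55")
    assert table.lookup(b"00:11:22:33:44:55")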

Counter-Estimation Decoupling for Approximate Rates
Erez Tsidon (Electrical Engineering Dept., Technion, and Qualcomm), Iddo Hanniel (Mechanical Engineering Dept., Technion), Isaac Keslassy (Electrical Engineering Dept., Technion)

Abstract:
Network management applications require large numbers of counters in order to collect traffic characteristics for each network flow. However, these counters often barely fit into on-chip SRAM memories. Past papers have proposed using counter estimators instead, thus trading off counter precision for a lower number of bits. But these estimators do not achieve optimal estimation error, and cannot always scale to arbitrary counter values. In this paper, we introduce the CEDAR algorithm for decoupling the counter estimators from their estimation values, which are quantized into estimation levels and shared among many estimators. These decoupled and shared estimation values enable us to easily adjust them without needing to go through all the counters. We demonstrate how our CEDAR scheme achieves the min-max relative error, i.e., can guarantee the best possible relative error over the entire counter scale. We also explain how to use dynamic adaptive estimation values in order to support counter up-scaling and adjust the estimation error depending on the current maximal counter. Finally we implement CEDAR on FPGA and explain how it can run at line rate. We further analyze its performance and size requirements.
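
For illustration, the decoupling idea can be sketched as follows (our simplified example of shared estimation levels with probabilistic increments; CEDAR's actual level spacing, min-max error analysis and dynamic up-scaling are more refined): every counter stores only a small index into one shared array of estimation values, and an increment advances the index with probability inversely proportional to the gap between consecutive levels, so the expected growth per increment is one.

    import random

    class SharedEstimators:
        """Counters hold indices into a shared, monotonically increasing array of estimation
        values; the stored level is an (essentially) unbiased estimate of the true count."""

        def __init__(self, num_counters, levels):
            self.levels = levels                 # shared estimation values
            self.index = [0] * num_counters      # per-counter pointer into the shared array

        def increment(self, counter_id):
            i = self.index[counter_id]
            if i + 1 < len(self.levels):
                step = self.levels[i + 1] - self.levels[i]
                if random.random() < 1.0 / step:
                    self.index[counter_id] = i + 1

        def estimate(self, counter_id):
            return self.levels[self.index[counter_id]]

    # Example: 1000 counters sharing 16 exponentially spaced estimation levels.
    levels = [0] + [int(1.8 ** k) for k in range(1, 16)]
    est = SharedEstimators(num_counters=1000, levels=levels)
    for _ in range(500):
        est.increment(42)
    print(est.estimate(42))    # on average close to 500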

The Bloom Filter Paradox
Ori Rottenstreich  (Technion), Yossi Kanizo  (Technion), Isaac Keslassy (Technion)

Abstract:
Bloom filters and counting Bloom filters (CBFs) are widely used in networking device algorithms. They implement fast set representations to support membership queries with limited error.  Unlike Bloom filters, CBFs also support element deletions. In the first part of the talk, we uncover the Bloom paradox in Bloom filters: sometimes, it is better to disregard the query results of Bloom filters, and in fact not to even query them, thus making them useless. We analyze conditions under which the Bloom paradox occurs, and demonstrate that it depends on the a priori probability that a given element belongs to the represented set. In the second part of the talk, we introduce a new general method based on variable increments to improve the efficiency of CBFs and their variants. We demonstrate that this method can always achieve a lower false positive rate and a lower overflow probability bound than CBF in practical systems.
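
The paradox can be seen with a one-line Bayes calculation (a standard computation, stated here for intuition): if p is the a priori probability that the queried element belongs to the set and f is the filter's false-positive rate, then

    \Pr[x \in S \mid \text{filter answers ``yes''}] \;=\; \frac{p}{p + (1 - p)\, f},

so when p is much smaller than f, even a positive answer leaves membership unlikely, and acting on the filter's answers can cost more than ignoring the filter altogether.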

Crawling the Internet PoP level graph for IP geolocation
Yuval Shavitt (Tel-Aviv University), Noa Zilberman (Tel-Aviv University)

Abstract:
The geographical location of Internet IP addresses is important for academic research and for commercial and homeland security applications. We present a structural approach for extracting the topology and connectivity of the Internet at the PoP level from a set of IP-level traceroutes, and for assigning each PoP a geographical location. Then, using this PoP-level connectivity graph, we conduct an evaluation of the accuracy of geolocation databases, at both the IP and PoP levels, and show that the accuracy of these databases is significantly lower than claimed. Last, we present a method that crawls the Internet PoP-level graph to improve the accuracy of geolocation, combining information from both geolocation databases and delay measurements.

Optimal fast multiple-choice hashing
Yossi Kanizo (Technion), David Hay (Hebrew Univ.), Isaac Keslassy (Technion)

Abstract:
Hash tables form the core building block of many networking device operations, such as flow counter management, flow state keeping, elephant traps, virus signature scanning, and IP address lookup algorithms. However, classical hash table construction schemes do not fit the memory requirements of networking devices. In this talk we explore the design of optimal high-throughput, memory-efficient hashing schemes. Given some target hash table overflow rate, we first provide a lower bound on the total number of needed memory accesses, both for the static case, where elements are only inserted, and the dynamic case, where elements are both inserted and deleted. This lower bound is obtained using several different approaches. Then, we provide online schemes that match the lower bound in a given range. Surprisingly, while in the static case the overflow rate decreases exponentially, in the dynamic case it only decreases as 1/a, where a is the average number of memory accesses.
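
For intuition about the trade-off between memory accesses and overflow, here is the classical "d choices" baseline, in which each element gets d candidate buckets (one memory access per candidate) and is placed in the least loaded one; this is a textbook scheme for calibration, not one of the optimal schemes of the talk.

    def d_choice_insert(buckets, bucket_size, element, d=2):
        """Probe d candidate buckets, place the element in the least loaded one,
        and report overflow if all d candidates are full."""
        candidates = [hash((i, element)) % len(buckets) for i in range(d)]
        target = min(candidates, key=lambda b: len(buckets[b]))
        if len(buckets[target]) >= bucket_size:
            return False                  # overflow: element not stored
        buckets[target].append(element)
        return True

    # Tiny experiment: overflow rate when hashing n elements into n slots with d = 2 accesses.
    n, bucket_size = 10_000, 4
    buckets = [[] for _ in range(n // bucket_size)]
    overflows = sum(not d_choice_insert(buckets, bucket_size, ("elem", j)) for j in range(n))
    print("overflow rate:", overflows / n)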

Improving Consolidation of Virtual Machines with Risk-Aware Bandwidth Oversubscription in Compute Clouds
David Breitgand (IBM Haifa Research Lab), Amir Epstein (IBM Haifa Research Lab)

Abstract:
Current trends in virtualization, green computing, and cloud computing require ever increasing efficiency in consolidating virtual machines without degrading quality of service. In this work, we consider consolidating virtual machines on the minimum number of physical containers (e.g., hosts or racks) in a cloud where the physical network (e.g., a network interface or a top-of-rack switch link) may become a bottleneck. Since virtual machines do not simultaneously use the maximum of their nominal bandwidth, the capacity of the physical container can be multiplexed. We assume that each virtual machine has a probabilistic guarantee on realizing its bandwidth requirements, as derived from its Service Level Agreement with the cloud provider. Therefore, the problem of consolidating virtual machines on the minimum number of physical containers, while preserving these bandwidth allocation guarantees, can be modeled as a Stochastic Bin Packing (SBP) problem, where each virtual machine's bandwidth demand is treated as a random variable. We consider both offline and online versions of SBP. Under the assumption that the virtual machines' bandwidth consumption obeys a normal distribution, we show a 2-approximation algorithm for the offline version and improve the previously reported results by presenting a (2+ε)-competitive algorithm for the online version. We also observe that a dual polynomial-time approximation scheme (PTAS) for SBP can be obtained via reduction to the two-dimensional vector bin packing problem. Finally, we perform a thorough performance evaluation study using both synthetic and real data to evaluate the behavior of our proposed algorithms, showing their practical applicability.
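
For normally distributed, independent demands, the probabilistic capacity constraint of a container has a simple closed form: the sum of means plus the appropriate normal quantile times the standard deviation of the aggregate must not exceed the capacity. The sketch below (a plain First-Fit heuristic of ours using this feasibility test, not the paper's algorithms with proven approximation and competitive ratios) illustrates how the stochastic constraint is checked.

    from math import sqrt
    from statistics import NormalDist

    def first_fit_stochastic(vms, capacity, overflow_prob):
        """vms: list of (mu, sigma) bandwidth demands, assumed independent and normal.
        A container is feasible if sum(mu) + z * sqrt(sum(sigma^2)) <= capacity,
        where z is the (1 - overflow_prob) quantile of the standard normal."""
        z = NormalDist().inv_cdf(1.0 - overflow_prob)
        bins = []                               # each bin: [sum_mu, sum_var]
        for mu, sigma in vms:
            placed = False
            for b in bins:
                if b[0] + mu + z * sqrt(b[1] + sigma ** 2) <= capacity:
                    b[0] += mu
                    b[1] += sigma ** 2
                    placed = True
                    break
            if not placed:
                bins.append([mu, sigma ** 2])
        return bins

    # Example: pack VMs with mean 100 Mb/s and std 30 Mb/s into 1 Gb/s links, overflow <= 1%.
    allocation = first_fit_stochastic([(100, 30)] * 40, capacity=1000, overflow_prob=0.01)
    print(len(allocation), "containers used")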

Approximation Algorithm for Data Centers Placement
Danny Raz (Technion), Assaf Rappaport (Technion)

Abstract:
Data centers are becoming the hosting platform for a wide spectrum of composite applications. In recent years, large investments have been made in massive data centers supporting cloud services, by companies such as eBay, Facebook, Google, Microsoft, and Yahoo!. With an increasing trend towards more communication-intensive applications in data centers, the bandwidth usage within and between data centers is rapidly growing. Data center placement presents a challenging optimization problem involving several factors. We introduce a network design problem that combines a facility location problem with a connectivity problem. Consider the following scenario: given an email application that depends on an authentication service, we consider the problem of placing replicas of an object (e.g., the authentication service) at multiple locations in the data center. Replica placement deals with the actual number and network locations of the replicas. Clearly, we would like to minimize the network distance between an application server in a data center and the closest replica containing the desired content, and thus having more replicas helps. On the other hand, having more replicas is more expensive, so we need to model the cost and the benefit in a way that allows making appropriate decisions regarding the number and the network locations of the replicas. This problem is strongly related to a family of optimization problems generally referred to as facility location problems. As most of the existing algorithms neglect the cost of keeping the replicas across the network up to date, we believe that considering this factor can lead to more realistic solutions. A replica must be synchronized with the original content server in order to supply reliable and precise service to client requests. The synchronization traffic across the network depends on the number of replicas deployed in the network, the topology of the distributed update and the rate of updates to the content of the server. Moreover, we extend our model by adding a capacity constraint for each replica. We can model the scenario above as a Soft-Capacitated Connected Facility Location Problem, which is NP-hard in the general case. In this work we introduce a constant-factor approximation algorithm.

Time Synchronization Security using IPsec and MACsec
Tal Mizrahi (Marvell Israel)

Abstract:
Time synchronization protocols have been around for many years. The Network Time Protocol (NTP) provides end-to-end time synchronization between end stations in IP networks. The growing need for high accuracy gave birth to the IEEE 1588. This protocol achieves high precision through hop-by-hop synchronization, where intermediate nodes can participate in the synchronization mechanism, thus eliminating inaccuracies caused by multiple network hops. NTP was conceived with security in mind. It includes an inherent security protocol that provides a symmetric-key authentication scheme. The Autokey protocol is the latest extension to the NTP security architecture, using a public-key authentication scheme rather than the symmetric-key scheme. While NTP uses an end-to-end security scheme, where traffic is protected between the master and the slave, the IEEE 1588 introduces a security challenge due to its per-hop synchronization approach. An end-to-end security approach is generally more robust and secure, whereas the need for the participation of multiple intermediate nodes in the protocol is a real challenge from the security perspective. The IEEE 1588 was not designed with an inherent security mechanism. Annex K of the IEEE 1588-2008 standard defines an experimental security protocol for PTP, but it was never formalized into a properly defined security protocol. Currently, the IEEE 1588 does not have a standard security mechanism. As the IEEE 1588 is becoming commonly used and deployed, the market demand for a security mechanism has preceded its standardization. As a result, ad-hoc solutions have been proposed to secure PTP using existing security protocols, while the security standardization process still lingers.
This work focuses on two methods of securing IEEE 1588 using existing security protocols: IPsec and MACsec. We characterize the typical deployment scenarios, and perform a threat analysis of these scenarios.

Arbitrators in the Security Infrastructure
Ofer Hermoni (Department of Information Systems Engineering, Ben-Gurion University of the Negev, Israel), Niv Gilboa (Department of Communication Systems Engineering, Ben-Gurion University of the Negev, Israel), Shlomi Dolev (Department of Computer Science, Ben-Gurion University of the Negev, Israel)

Abstract:
Traditional public key infrastructure is an example of basing the security of communication among users and servers on trusting a Certificate Authority (CA). A traditional, centralized CA should only be involved in a setup stage for communication; otherwise the CA becomes an obvious bottleneck in the activity. Peer-to-peer assistance may replace the CA during the actual communication transactions. We introduce such assistants, which we call arbitrators. Arbitrators are semi-trusted entities that facilitate communication or business transactions. The communicating parties, users and servers, agree before a communication transaction on a set of arbitrators that they trust (reputation systems may support their choice). Then, the arbitrators receive resources, e.g., a deposit, and a service level agreement between the participants, such that the resources of a participant are returned if and only if the participant acts according to the agreement. We demonstrate the usage of arbitrators in the scope of conditional (positive) anonymity. A user may interact anonymously with a server as long as the terms for anonymous communication are honored. In case the server finds a violation of the terms, the server proves to the arbitrators that a violation took place and the arbitrators publish the identity of the user. Since the arbitrators may be corrupted, the scheme ensures that only a large enough set of arbitrators may reveal the identity of the user, which is the deposited resource in the case of conditional anonymity.
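
The requirement that only a large enough set of arbitrators can reveal the user's identity is naturally realized with threshold secret sharing; below is a minimal Shamir sharing sketch over a prime field (our illustration of the building block, not the paper's full scheme), where the deposited identity is assumed to be encoded as an integer.

    import secrets

    PRIME = 2 ** 127 - 1      # a Mersenne prime; identities are assumed encoded as integers < PRIME

    def share(secret, threshold, num_arbitrators):
        """Split `secret` into num_arbitrators shares; any `threshold` of them reconstruct it,
        while fewer reveal nothing."""
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, num_arbitrators + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 over the prime field."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    # Example: the user's identity is revealed only if at least 3 of the 5 arbitrators cooperate.
    shares = share(secret=123456789, threshold=3, num_arbitrators=5)
    assert reconstruct(shares[:3]) == 123456789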

Fully Dynamic Approximate Distance Oracles for Planar Graphs via Forbidden-Set Distance Labels
Ittai Abraham (Microsoft Research SVC), Shiri Chechik (Weizmann Institute), Cyril Gavoille (University of Bordeaux)

Abstract:
A distance oracle is a data structure that provides fast answers to distance queries. Recently, the problem of designing distance oracles capable of answering restricted distance queries, that is, estimating distances on a subgraph avoiding some forbidden vertices, has attracted a lot of attention. In this talk, we will consider forbidden-set distance oracles for planar graphs. I'll present an efficient compact distance oracle that is capable of handling any number of failures. In addition, we will consider the closely related notion of fully dynamic distance oracles. In the dynamic distance oracle problem, instead of getting the failures in the query phase, we need to handle an adversarial online sequence of update and query operations. Each query operation involves two vertices s and t whose distance needs to be estimated. Each update operation involves inserting/deleting a vertex/edge to/from the graph. I'll show that our forbidden-set distance oracle can be tweaked to give a fully dynamic distance oracle with improved bounds compared to the previously known fully dynamic distance oracles for planar graphs.

SINR Diagram with Interference Cancellation
Chen Avin (Ben Gurion University, Beer-Sheva, Israel), Asaf Cohen (Ben Gurion University, Beer-Sheva, Israel), Yoram Haddad (Ben Gurion University, and Jerusalem College of Technology, Israel), Erez Kantor (The Technion, Haifa, Israel), Zvi Lotker (Ben Gurion University, Beer-Sheva, Israel), Merav Parter (The Weizmann Institute of Science, Rehovot, Israel), and David Peleg (The Weizmann Institute of Science, Rehovot, Israel)

Abstract:
In this talk we study the reception zones of a wireless network in the SINR model with receivers that employ interference cancellation (IC). IC is a recently developed technique that allows a receiver to decode interfering signals and cancel them from the received signal in order to decode its intended message. We first derive the important topological properties of the reception zones and their relation to higher-order Voronoi diagrams and other geometric objects. We then discuss the computational issues that arise when seeking an efficient description of the zones. Our main fundamental result states that although potentially there are exponentially many possible cancellation orderings, and as a result, reception zones, in fact there are far fewer nonempty such zones. We prove a linear (and hence tight) bound on the number of zones and provide a polynomial-time algorithm to describe the diagram. Moreover, we introduce a novel parameter, the Compactness Parameter, which influences the tightness of our bounds. We then utilize these properties to devise a logarithmic-time algorithm to answer point-location queries for networks with IC.
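
For orientation, the reception condition underlying the model (the standard SINR-with-cancellation formulation; the notation is ours) is: a receiver at point p can decode station s_i after successively cancelling a set C of stronger interferers if, at every step, the signal currently being decoded, say s_j, satisfies

    \mathrm{SINR}_j(p) \;=\; \frac{P_j \, d(s_j, p)^{-\alpha}}{N \;+\; \sum_{k \neq j,\; s_k \notin C} P_k \, d(s_k, p)^{-\alpha}} \;\geq\; \beta,

where P_k are transmission powers, d(·,·) is distance, \alpha the path-loss exponent, N the background noise and \beta the reception threshold. The reception zone of s_i under IC is the set of points for which some cancellation ordering satisfies this chain of conditions, which is exactly what yields the exponentially many candidate orderings mentioned above.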

Improved Approximation for Orienting Mixed Graphs
Iftah Gamzu (Computer Science Division, The Open Univ., and Blavatnik School of Computer Science, Tel-Aviv Univ., Israel), Moti Medina (School of Electrical Engineering, Tel-Aviv Univ., Israel)

Abstract:
An instance of the maximum mixed graph orientation problem consists of a mixed graph and a collection of source-target vertex pairs. The objective is to orient the undirected edges of the graph so as to maximize the number of pairs that admit a directed source-target path. This problem has recently arisen in the study of biological networks, and it also has applications in communication networks. In this paper, we identify an interesting local-to-global orientation property. This property enables us to modify the best known algorithms for maximum mixed graph orientation and some of its special structured instances, due to Elberfeld et al. (CPM '11), and obtain improved approximation ratios. We further proceed by developing an algorithm that achieves an even better approximation guarantee for the general setting of the problem. Finally, we study several well-motivated variants of this orientation problem.

Robust Data Sharing with Key-Value Stores
Cristina Basescu (Vrije Universiteit Amsterdam), Christian Cachin (IBM Research - Zurich), Robert Haas (IBM Research - Zurich), Ittay Eyal (Technion), Alessandro Sorniotti (IBM Research - Zurich), Marko Vukolic (Eurecom), Ido Zachevsky (Technion)

Abstract:
A key-value store (KVS) offers functions for storing and retrieving values associated with unique keys. KVSs have become the most popular way to access Internet-scale "cloud" storage systems. We present an efficient wait-free algorithm that emulates multi-reader multi-writer storage from a set of potentially faulty KVS replicas in an asynchronous environment. Our implementation serves an unbounded number of clients that use the storage concurrently. It tolerates crashes of a minority of the KVSs and crashes of any number of clients. Our algorithm minimizes the space overhead at the KVSs and comes in two variants providing regular and atomic semantics, respectively. Compared with prior solutions, it is inherently scalable and allows clients to write concurrently. Because of the limited interface of a KVS, textbook-style solutions for reliable storage either do not work or incur a prohibitively large storage overhead. Our algorithm maintains two copies of the stored value per KVS in the common case, and we show that this is indeed necessary. If there are concurrent write operations, the maximum space complexity of the algorithm grows in proportion to the point contention. A series of simulations explore the behavior of the algorithm, and benchmarks obtained with KVS cloud-storage providers demonstrate its practicality.
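
For intuition, here is a bare-bones version of majority-based register emulation over KVSs (our simplification: versions are modeled as totally ordered keys and each KVS replica as a Python dict; the paper's space-efficient garbage collection, wait-freedom and atomicity machinery are omitted).

    def kvs_write(replicas, writer_id, seq, value):
        """Store the value under a fresh, totally ordered version key at every reachable
        replica; the write succeeds once a majority have acknowledged it."""
        version = (seq, writer_id)
        acks = 0
        for kvs in replicas:
            try:
                kvs[version] = value          # on a real KVS: put(key, value)
                acks += 1
            except Exception:
                pass                          # crashed or unreachable KVS replica
        return acks > len(replicas) // 2

    def kvs_read(replicas):
        """Collect the newest version from the reachable replicas and return its value."""
        best_version, best_value, responses = None, None, 0
        for kvs in replicas:
            try:
                if kvs:
                    v = max(kvs)              # on a real KVS: list keys, get the newest
                    if best_version is None or v > best_version:
                        best_version, best_value = v, kvs[v]
                responses += 1
            except Exception:
                pass
        assert responses > len(replicas) // 2, "too few reachable replicas"
        return best_value

    # Example: five KVS replicas modeled as dictionaries.
    replicas = [dict() for _ in range(5)]
    kvs_write(replicas, writer_id="w1", seq=1, value=b"hello")
    print(kvs_read(replicas))                 # b'hello'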

Opportunistic Scheduling in Heterogeneous Networks: Distributed Algorithms and System Capacity
Yosef Kampeas, Asaf Cohen and Omer Gurewitz

Abstract:
In this work, we design and analyze novel distributed scheduling algorithms for multi-user MIMO systems. In particular, we consider algorithms which do not require sending channel state information to a central processing unit, nor do they require communication between the users themselves; yet, we prove that their performance closely approximates that of a centrally controlled system, which is able to schedule the strongest user in each time slot. Possible applications include, but are not limited to, modern 4G networks such as 3GPP LTE, or random access protocols. The analysis is based on a novel application of the point-process approximation, enabling the examination of non-homogeneous cases, such as non-identically distributed users, or handling various QoS considerations, which to date have remained open.