


Proceedings of the Seminars Future Internet (FI) and Innovative Internet Technologies and Mobile Communications (IITM), Winter Semester 2013/2014, Munich, Germany.

Editors: Georg Carle, Daniel Raumer, Lukas Schwaighofer
Organization: Chair for Network Architectures and Services, Department of Computer Science, Technical University of Munich



Proceedings of the Seminars Future Internet (FI) and Innovative Internet Technologies and Mobile Communication Networks (IITM), Winter Semester 2013/2014

Editors: Georg Carle, Daniel Raumer, Lukas Schwaighofer
Chair of Network Architectures and Network Services (I8)
Technical University of Munich, Garching b. München, Germany

Cataloging-in-Publication Data: Seminars FI & IITM WS13/14. Proceedings for the seminars Future Internet (FI) and Innovative Internet Technologies and Mobile Communication (IITM), Munich, Germany.
ISBN:
ISSN: (print)
ISSN: (electronic)
DOI:
NET Series Editor: Georg Carle, Technische Universität München, Germany
© 2014, Technische Universität München, Germany

Foreword

We hereby present the proceedings of the seminars Future Internet (FI) and Innovative Internet Technologies and Mobile Communication (IITM), which took place in the 2013/2014 winter semester at the Faculty of Computer Science of the Technical University of Munich. Both seminars were held in German, but the participants were free to use English for both the paper and the talk. Accordingly, both English and German papers can be found in these proceedings. Some of the talks were recorded and are available on our media portal.

In the FI seminar, contributions on current topics in network research were presented. The following topics were covered:

- Measurement of the TCP extension Tail Loss Probe
- Minimization of drive tests in cellular networks
- Analysis of traceroutes and rDNS data in the Internet Census 2012
- Normal accidents and computer systems
- Human decisions and rationality
- Economic incentives for HTTPS authentication
- Monitoring: reasons and technical possibilities
- Monitoring: countermeasures

In the IITM seminar, talks on network technologies, including mobile communication networks, were presented. The following topics were covered:

- Self-organization for heterogeneous networks
- Energy consumption and possible optimizations
- Policy description languages: a survey

The recordings for both seminars can be accessed on our media portal.

We hope that you will find valuable insights in the papers of these seminars. If you are interested in our work, you can find further information on our homepage.

Munich, March 2014

Georg Carle, Daniel Raumer, Lukas Schwaighofer

Preface

We are very pleased to present you the interesting program of our main seminars on Future Internet (FI) and Innovative Internet Technologies and Mobile Communication (IITM) which took place in the winter semester 2013/2014. All seminar courses were held in German, but the authors were free to write their paper and give their talk in English. Some of the talks were recorded and published on our media portal.

In the seminar FI we dealt with issues and innovations in network research. The following topics were covered:

- Measuring TCP Tail Loss Probe Performance
- Minimization of Drive Tests in Mobile Communication Networks
- Analysis of Traceroutes and rDNS Data Provided by the Internet Census 2012
- Normal Accidents and Computer Systems
- Human Decisions and Rationality
- Economic Incentives in the HTTPS Authentication Process
- Getting to Know Big Brother
- Hiding from Big Brother

In the seminar IITM we dealt with different topics in the area of network technologies, including mobile communication networks. The following topics were covered:

- Self-Organization for Heterogeneous Networks
- Energy Consumption and Optimization
- Policy Description Languages: A Survey

We hope that you appreciate the contributions of these seminars. If you are interested in further information about our work, please visit our homepage.

Munich, March 2014


Seminar Organization

Chair holder: Georg Carle, Technical University of Munich, Germany (I8)

Seminar leaders:
- Daniel Raumer, Technical University of Munich, Germany
- Lukas Schwaighofer, Technical University of Munich, Germany

Supervisors:
- Nadine Herold, Technical University of Munich, staff member I8
- Ralph Holz, Technical University of Munich, staff member I8
- Heiko Niedermayer, Technical University of Munich, staff member I8
- Stephan Posselt, Technical University of Munich, staff member I8
- Lukas Schwaighofer, Technical University of Munich, staff member I8
- Tsvetko Tsvetkov, Technical University of Munich, staff member I8
- Matthias Wachs, Technical University of Munich, staff member I8

Seminar website:


Measuring TCP Tail Loss Probe Performance

Andre Ryll, B.Eng.
Supervisor: Lukas Schwaighofer, M.Sc.
Seminar Future Internet WS2013
Chair of Network Architectures and Network Services
Faculty of Computer Science, Technical University of Munich

ABSTRACT
This paper analyzes the performance of the TCP Tail Loss Probe algorithm, proposed by Dukkipati et al. in February 2013, under various virtual network conditions. To provide accurate and repeatable measurements under varying network conditions, the mininet virtual network is used. Variations include the available bandwidth, round trip time and number of lost tail segments. The tests are done by requesting HTTP data from an nginx web server. Results show that TLP is able to decrease the total transfer time in high-speed networks by 38% and the time until data is retransmitted by 81%. These improvements decrease significantly for higher-delay links.

Keywords
TCP, TLP, performance, measurements, comparison, mininet, virtual network, HTTP, iptables, netfilter

1. INTRODUCTION
Loss of data in a network transfer is a general challenge in all connection-oriented protocols. For internet traffic, TCP [1] is used as the transport layer for HTTP data. A number of specifications deal with the retransmission behavior of TCP (e.g. [2], [3], [4]). This list has lately been extended by the Tail Loss Probe (TLP) algorithm [5]. The TLP internet draft suggests a real-world improvement of the average response time by 7%, based on measurements on Google web servers over several weeks. This paper aims at precisely measuring the response time improvement under various well-defined laboratory conditions to examine benefits and drawbacks of TLP. Variations include link quality (bandwidth, delay) and the number of lost tail packets. The measurements are done on a single XUbuntu Linux machine with a kernel that supports TLP. To create a simulation with multiple virtual hosts, the mininet virtual network is used.
Simple HTTP data transfer is accomplished by an nginx web server and the lynx text browser. A user space C/C++ application in conjunction with iptables and netfilter queues makes it possible to precisely drop a specified number of packets at the end of a transfer. The remainder of this paper is organized as follows: Section 2 reviews the TCP protocol, the extensions to it which are essential for TLP, and the TLP algorithm itself. Section 3 describes the test setup for data acquisition. This includes mininet, iptables, the user space application and the measurement variations. Section 4 presents the results and the advantages of TLP. Finally, section 5 sums up the insights of the measurement results and outlines further options for analysis.

2. TCP
The Transmission Control Protocol (TCP) is a reliable, connection-oriented transport protocol (ISO OSI layer 4 [6]). As it is stream-based, it works with bytes (grouped in segments). Higher level protocols can transmit packets over TCP (e.g. HTTP), but TCP itself is not aware of packets. Its counterpart is the simpler User Datagram Protocol (UDP), which works connection-less and packet-based. This section aims at providing an overview of TCP and explains details which are important to understand TLP. To implement the reliability and retransmission capabilities, the TCP header includes, amongst others, the following important fields:

SYN flag: Synchronize sequence numbers; set once during connection setup to establish the initial (arbitrary) sequence number.
FIN flag: No more data from the sender.
ACK flag: The ACK field is valid; indicates that this segment acknowledges received data. Always set except in the first segment of a transmission.
ACK field: The next sequence number expected by the receiver.
SEQ field: The segment sequence number. The current sequence number minus the sequence number of the first (SYN) segment gives the offset of the segment's data in the stream.
Figure 1 shows a graphical TCP flow representation created by the network analyzer wireshark. In this example, a client requested a web page from a server. Flags, content length and transfer direction of a segment are indicated in the green area. The white area shows the sequence and acknowledgment numbers of every segment relative to the first captured segment (implicitly done by wireshark to simplify reading). The connection is established in the first three transfers (TCP 3-way handshake). Afterwards the client sends an HTTP GET request (in this case with a length of

200 bytes) with the desired resource name. The request is acknowledged and followed by the actual data transfer from the server. After the transfer is complete, the server wishes to terminate the connection (teardown) by issuing the FIN flag. The client acknowledges every segment and the connection teardown.

Figure 1: TCP flow. Green area: client (left, port 33043) requesting a web page via HTTP from the server (right, port 80). White area: relative sequence and acknowledgment numbers of the respective segment.

It is important to note that the client acknowledges every segment in this example. This is not required by the TCP specification. It is sufficient to acknowledge every second segment, given that the segments come in within a short time (RFC 1122 specifies 500 ms [7], Linux uses a dynamic approach with a maximum of 200 ms [8]). The example transfer is also loss-free. To handle data loss, several methods exist, which are outlined in the following. The original specification only retransmits segments if they have not been acknowledged after a specified time. This is called retransmission timeout (RTO). Several extensions have been made since TCP was initially specified to improve the retransmission behavior. These include, amongst others: duplicate ACKs, originally specified in [9], later obsoleted by [2]; selective ACKs, specified in [3]; and early retransmission, specified in [4].

2.1 Duplicate ACK (DACK)
Duplicate ACKs are acknowledgments from a receiver with the same ACK number, which is not equal to the last one expected by the sender. That means as long as the sender has unacknowledged (but sent) data, it expects the ACK number of the receiver to increase with every segment. This explanation is slightly simplified but sufficient for understanding this paper; for a full description see [2]. There are a number of cases which may lead to duplicate ACKs. First of all, a segment may be lost while more data follows.
As the receiver does not receive the segment it expects, it sends an ACK with the sequence number it actually expects. This is done for every segment received after the missing segment. Secondly, segments can arrive at the receiver in a different order than they were sent, due to different paths of the segments through the network. Although all data arrives at the receiver, this is an error condition, as TCP has to deliver in order. Lastly, duplicate ACKs may indeed acknowledge duplicate segments. This might for example be caused by a sudden increase in network delay: the transmitter's RTO fires and resends a segment, which then arrives twice at the receiver. To differentiate duplicate ACKs caused by loss from those caused by spurious retransmissions or out-of-order reception, the transmitter waits for three duplicate ACKs before retransmitting data. This mechanism is known as fast retransmit, as the transmitter does not wait for the retransmission timeout to fire but immediately resends the data. Fast retransmit is described in [2].

2.2 Selective ACK (SACK)
Duplicate ACKs can only inform the transmitter of the next expected sequence number. Although more segments after the lost one might have arrived at the receiver, the transmitter will need to retransmit data from the point where the first data was lost. To overcome this limitation, the selective ACK (SACK) option was added to the TCP header [3]. To use SACK, both communication partners need to support it. Every SACK-enabled host sets the SACK-permitted option in the SYN packet of the TCP handshake. If both hosts support this option, it can be used in further communication. If a segment is lost and SACK is allowed, the receiver still replies with duplicate ACKs, but the ACKs now carry more information about which following segments were successfully received. The SACK option specifies up to three contiguous blocks of data that have been received after one or more missing blocks (holes).
Each block uses a left edge (SLE), the sequence number of the first byte received in that block, and a right edge (SRE), one past the last byte received in that block. The transmitter can use the SACK information to precisely resend only data that has been lost, and avoids resending data which has been successfully received after a lost segment.

2.3 Early Retransmit (ER)
Selective ACKs provide additional information to the transmitter of data in case of a lost segment. Nevertheless, they do not speed up the time until a segment is resent; they only help to inform the sender which segments need to be resent. To resend a segment, the RTO or three duplicate ACKs (fast retransmit) are still required. The aim of the early retransmit (ER) algorithm [4] is to lower the number of duplicate ACKs needed to retransmit a segment. To achieve this, the ER algorithm tries to estimate the remaining number of segments which can be sent. This depends on how much data is available for sending and how much data is allowed to be transmitted before being acknowledged (the so-called window size). The ER algorithm does not depend on SACK, although it can be used with SACK to calculate a more precise estimate of the remaining number of segments. If the window size or the data left is too small to achieve at least three segments in flight, then fast retransmit will never occur, as there is no way to generate three duplicate ACKs. In this situation, ER reduces the number of duplicate ACKs required to trigger fast retransmit to one or two.

2.4 Forward ACK (FACK)
If the window size is large enough and there is enough data to send, the retransmission of data still requires at least three duplicate ACKs. To improve this behavior, the forward ACK (FACK) algorithm has been proposed [10]. FACK requires SACK in order to work. The FACK algorithm monitors the difference between the first unacknowledged segment and the largest SACKed block (the forward-most byte, hence its name). If the difference is larger than three times the maximum segment size of a TCP segment, the first unacknowledged segment is retransmitted. If exactly one segment is lost, this happens after receiving three duplicate ACKs; so for only one lost segment, FACK and fast-retransmit-based recovery trigger at the same time. The main advantage of FACK shows in situations with multiple lost segments: when three or more segments are lost, it requires only one duplicate ACK to start a recovery.

2.5 Tail Loss Probe (TLP)
All previous solutions to recover lost data are based on the reception of duplicate ACKs to retransmit data before the retransmission timeout (RTO) expires. In situations where the last segments of a transfer (the tail) are lost, there will be no duplicate ACKs. So far the only option to recover from such a loss is the RTO. The Tail Loss Probe (TLP) algorithm [5] proposes an improvement for such situations by issuing a probe segment before the RTO expires. If multiple segments are unacknowledged and the TLP timer expires, the last sent segment is retransmitted. This is the basic idea of the TLP algorithm. Further actions in response to the probe segment are handled by the previously described mechanisms. If exactly one segment at the tail is lost, the probe segment itself repairs the loss and a normal ACK is received. If two or three segments are outstanding, ER lowers the threshold for fast retransmit and the duplicate ACK of the probe segment triggers early retransmission.
If four or more packets are lost, the difference between the last unacknowledged segment and the SRE in the SACK of the probe segment will be large enough to trigger FACK fast recovery. In theory, TLP improves the response time to loss in all cases. Table 1 sums up the different options.

losses  after TLP  mechanism
AAAL    AAAA       TLP loss detection
AALL    AALS       ER
ALLL    ALLS       ER
LLLL    LLLS       FACK
>= 5    L..LS      FACK

Table 1: TLP recovery options. A: ACKed segment, L: lost segment, S: SACKed segment [5]

3. TEST ENVIRONMENT
To evaluate the performance of TLP, a network environment and a way to drop tail segments are required. As a physical test setup is hard to reconfigure and inflexible with respect to e.g. bandwidth limitation, a network simulation tool has been chosen. All tools are compatible with the Linux operating system, thus an XUbuntu machine with a kernel that supports TLP is used for the tests.

3.1 Tool Overview
Two network simulation tools have been investigated: the ns-3 network simulator and the mininet virtual network. ns-3 provides a lot of features for automated testing and data acquisition, although it requires some effort to write a test program. As ns-3 is a simulator, the complete network runs in an isolated application. All test programs and algorithms need to be implemented in C/C++ to make them available for measuring. The major drawback of ns-3 is that it cannot easily interface a recent Linux kernel that supports TLP; thus ns-3 could not be used for testing TLP performance. Mininet provides a lightweight virtual network by using functions built into the Linux kernel. Unlike ns-3, it is not a single application but a virtualization technique that allows creating separate network namespaces on a single host machine. They share the same file system, but processes are executed in their isolated space with specific network configurations. It allows creating everything from simple networks (e.g.
2 hosts, 1 switch) up to very complex topologies, only limited by the available processing power. Furthermore, it is easily reconfigurable, uses the underlying Linux kernel and can run any Linux program in a virtual host. The Linux traffic control interface can be used to specify delay, bandwidth and loss on a virtual connection. All these properties make it ideally suited for TLP performance analysis. The Linux traffic control interface is, however, not able to precisely drop segments at the end of a transfer. There are two options to achieve this. Mininet switches can be used together with an OpenFlow controller, which usually tells the switch how to forward packets by installing rules based on e.g. the MAC addresses in a packet. If a packet does not match a rule, it is forwarded to the OpenFlow controller, which inspects the packet and afterwards installs an appropriate rule in the virtual switch. By not installing any rule, this can be used to forward all packets passing the switch to the OpenFlow controller, which then determines if the packet is at the tail of a transfer and, if so, drops it. This mechanism has poor performance, because usually only very few packets are forwarded to the controller for learning and installing appropriate rules. So although this option works, it is not adequate for rapid tail loss generation. Another option to drop segments is the use of Linux iptables. As iptables is primarily used for firewall purposes, there is no built-in option to drop a configurable number of tail segments. Nevertheless, iptables can forward packets to a user space application which then decides to accept or to drop a packet. This is done by using the netfilter queue (NFQUEUE) as a target. Using the libnetfilter library, a user space application can process these packets and also access the complete packet content. This option works locally on a (virtual) machine and uses kernel interfaces, thus this approach is quite fast compared to the OpenFlow solution.
The user space application is written in C/C++ and provides good performance.

3.2 Setup
The final test setup uses mininet with two hosts and one switch on an XUbuntu machine, plus a netfilter user space application driven by iptables. The setup is depicted in figure 2; figure 3 shows the packet flow on h1 in detail. To configure this setup, the following steps are necessary. First of all, mininet must be started with a configuration of two hosts and one switch. This is done by the command:

mn --topo single,2 --link tc,bw=100,delay=2.5ms

This configures mininet with a link bandwidth of 100 MBit/s and a delay of 2.5 ms per link. Thus the round trip time is 10 ms. This setup reflects a common high-speed ethernet environment. The two hosts are named h1 and h2. A terminal to the two hosts can be opened via (entered in the mininet console):

xterm h1 h2

h1 serves as a web server, which is started by typing nginx in its command window. Furthermore, iptables needs to be configured to pass outbound HTTP traffic (TCP port 80) on interface h1-eth0 to a netfilter queue:

iptables -A OUTPUT -o h1-eth0 -p tcp --sport 80 -j NFQUEUE --queue-num 0

This forwards all TCP traffic leaving h1 to NFQUEUE 0. The iptables and filter setup on virtual host one (h1) is depicted in figure 3. The inbound traffic is passed directly to nginx, whereas the outbound traffic is either accepted directly (non-TCP) or forwarded to the NFQUEUE. The user space application is named tcpfilter and can be configured by command line arguments to drop a specified number of packets at the end of an HTTP transfer (e.g. two packets):

./tcpfilter 2

Implementation details of tcpfilter are explained in section 3.3. To request a web page from h1, the lynx web browser with the dump option is used on h2. It just requests the web page and dumps it to /dev/null:

lynx -dump /pk100.html > /dev/null

This is repeated several times in a shell script to automatically acquire a set of data. This finalizes the test setup.

Figure 2: Virtual network setup with mininet
Figure 3: h1 packet flow in detail
3.3 Tail Loss Application
The tail loss application tcpfilter is a custom application written in C/C++. It accesses netfilter queue 0 and processes its packets. For this purpose, the complete packet is copied to user space. After packet processing is done, it issues a verdict on every packet, which can either be ACCEPT or DROP. If DROP is selected, the kernel silently discards the packet. As this application works on the OUTPUT chain of iptables, the packet then never leaves the network interface. This simulates a packet loss. The number of dropped segments n_drop is specified as a command line argument to tcpfilter. The algorithm used to generate tail loss is shown in algorithm 1. As TCP is stream-based, there is no way of determining the last segment on the TCP level. Thus HTTP is used on the application layer to find the last segment of a transfer. The filter is initially in the idle state. As soon as an HTTP segment (TCP on port 80) is going to be sent, it checks the contents of that segment for the string Content-Length. This indicates the header of a new HTTP transfer. Wireshark analysis shows that the content length is always set by nginx for HTML data transfers. It is thus safe to use this field as a header indication. The content length is extracted from the header and saved to track the incoming data. Furthermore, the HTTP data length of this segment is saved to know the maximum data size of a segment. The filter is now in the transfer state. It is locked on to one transfer by saving its source and destination (IP and port) and the TCP identification. Data that does not belong to this transfer is accepted and not processed further. Data of the current transfer is tracked and accepted until it reaches the end of the transfer.
As soon as the total transfer size minus the size of the already transferred data is smaller than the number of segments to lose at the end times the maximum HTTP data size of a segment, the segments are saved in a linked list and no verdict is issued. After a TCP segment with the FIN flag set arrives for this transfer, the saved segments are processed. If the segment with the FIN flag also carries HTTP data, n_drop segments at the tail of the list are dropped, otherwise n_drop + 1 segments at the tail are dropped. The tcpfilter application thus always drops n_drop packets with HTTP data. After the FIN segment, the filter enters the idle state again and is ready to track the next transfer.

While the drop candidates are in the list, no verdict is issued, which may lead to a delay in sending packets. An analysis of the traffic shows that the window size for this transfer is large enough so that the sender transmits more than 20 segments before waiting for an acknowledgment. The time between the first segment entering the drop candidate list and the processing due to receiving the FIN flag is thus very short and should not affect the measurements.

Algorithm 1 Creating tail loss

  state <- idle
  for every packet p:
    if not isHttp(p) then accept(p)
    if exists(p.seq, droppedOnce) then remove(p.seq, droppedOnce); accept(p)
    if state = idle and exists(p, "Content-Length") then
      state <- xfer
      s.maxHttpSize <- p.httpSize
      s.srcDst <- p.srcDst
      s.totalLength <- extract(p, "Content-Length")
      s.transferred <- 0
    if state = xfer and p.srcDst = s.srcDst then
      bytesRemaining <- s.totalLength - s.transferred
      bytesToDrop <- n_drop * s.maxHttpSize
      if bytesRemaining < bytesToDrop then append p to candidates (no verdict)
      else accept(p); s.transferred <- s.transferred + p.httpSize
    if p.fin then
      while size(candidates) > n_drop do accept(front(candidates))
      while size(candidates) > 0 do
        enqueue(front(candidates).seq, droppedOnce)
        drop(front(candidates))
      state <- idle

During transfer processing, several nanosecond-accurate timestamps are taken. The first one, t_start, is taken during the transition from idle to transfer. The next ones are recorded for every drop candidate. The timestamp of the first segment that is finally selected to be dropped is saved as t_drop. The next timestamp, t_retransmit, is taken when the first dropped segment is sent again by the Linux kernel. Using t_drop and t_retransmit, the time until a retransmission is started (t_recover = t_retransmit - t_drop) can be accurately measured. Finally, the time when the segment with the FIN flag is retransmitted is recorded as t_end. t_recover and the total transfer time t_total = t_end - t_start are used to measure the improvements of the TLP algorithm.
The tcpfilter application outputs a comma-separated row with the recorded timestamps and flags to the standard output for every transfer. Experiments show that TLP is not always selected for retransmissions. To remove these transfers from the results, tcpfilter outputs the istlp flag. This flag indicates whether a retransmission occurred based on TLP or not. TLP retransmissions can easily be detected by checking the first retransmitted segment: if this segment is the last sent segment, TLP is used; all other recovery mechanisms retransmit the first lost segment first. The istlp flag is not valid for zero or one dropped segment, as the distinction cannot be made in this case.

3.4 Measurement Description
The following section outlines the measurements taken with the test setup to investigate the advantage of TLP in tail loss recovery time and total transfer time. For this purpose, all tests are done with a constant transfer size of 100 segments, which equals approximately 144 kB in the test setup. The transfer size roughly represents a single web page element, e.g. a graphic or an advertisement. The tcpfilter always drops tail segments, thus the number of segments has no effect on the result. 100 segments are chosen to allow the transmitter to calculate a precise value for the round trip time (RTT), which is used to calculate retransmission timeouts and probe timeouts. In total, three different options for the recovery algorithm are compared. One is the new TLP. The previous one, working with early retransmit, is denoted ER. To further compare the results, a dataset is acquired with all TCP extensions (SACK/ER/FACK/TLP) disabled, denoted plain. These extensions can be configured at runtime by using the sysctl interface. All options relevant for the tests are found under net.ipv4. For example, the following command disables TLP:

sysctl -w net.ipv4.tcp_early_retrans=2

The changes take effect immediately, so there is no need to restart mininet or the whole system. Table 2 shows the configuration for the different algorithms used.

Option              plain  ER  TLP
tcp_early_retrans
tcp_fack
tcp_sack

Table 2: TCP configuration in /proc/sys/net/ipv4

To evaluate the performance under various network conditions, three exemplary types are selected, as shown in table 3. They do not necessarily reflect real networks but cover a broad range of different conditions. To acquire the measurement dataset, all TCP configurations are tested with all network configurations. The tests increase the number of lost tail segments from 0 to 20 and record the time t_recover until the first segment is resent and the total transfer time t_total. Due to the usage of the mininet virtual network, there is no natural tail loss during the measurements.

Type        Bandwidth   RTT
high-speed  100 MBit/s  10 ms
mobile      7.2 MBit/s  100 ms
satellite   1 MBit/s    800 ms

Table 3: Network configurations

4. RESULTS
The results are acquired by repeating the measurements 100 times for the high-speed and mobile network configurations and 20 times for the satellite network. The reason for only acquiring 20 samples per number of tail losses in the satellite network is the high round trip time: it takes approximately one hour to obtain a dataset with 420 samples. Table 4 sums up the measurements in the different networks with a loss count of five segments. The previously default option ER in the Linux kernel is the baseline for comparisons. Tail Loss Probe performs best when the round trip time is low. The total transfer time is decreased by 38% in the high-speed network. On a mobile network the time is still 11% lower. The satellite network does not benefit significantly from TLP. Early retransmit does not improve the transfer time much compared to the plain TCP configuration. This is expected, as ER requires partial information about the received data and duplicate ACKs; both are not available at a tail drop. When comparing the time to the first retransmission t_recover, TLP reduces the value significantly, by 81% in the high-speed network. In the mobile network this reduction drops to 19%. A noticeable anomaly is the increase of the recovery time with the plain configuration in the mobile network. As this paper mainly deals with TLP, the evaluation of this anomaly is out of scope.

Table 4: Recovery algorithm comparison of t_total and t_recover for the plain, ER and TLP configurations in the three networks (transfer size: 100 packets, losses: 5). The ER configuration serves as baseline.

Figure 4: Number of tail losses and time until the first segment is retransmitted, for (a) the high-speed and (b) the mobile network. Plain (green), ER (blue), TLP (red). Transfer size 100 segments.
Figure 5: Number of losses and total transfer time; (c) satellite network. Plain (green), ER (blue), TLP (red). Transfer size 100 packets.
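A note on reading the percentages in Table 4: they are presumably relative changes against the ER baseline, i.e. for a measured time t and baseline time t_ER:

```python
def relative_change_pct(value, baseline):
    """Change of a measured time relative to the ER baseline, in percent.

    Negative values mean the configuration is faster than the baseline,
    e.g. a transfer that takes 0.62x the baseline time yields -38.0.
    """
    return 100.0 * (value - baseline) / baseline
```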

Figure 4 plots the recovery time versus the number of lost tail segments. The standard deviation is plotted for each measurement. For better readability the samples have been slightly shifted in the plot, but the number of losses is always an integer. The results show that the plain implementation and ER perform almost equally. TLP is faster by a factor of approximately 4.5. Of special interest is the behavior of TLP with a loss of exactly one segment. In this case TLP increases the retransmission timer to accommodate a potentially delayed ACK: TCP can concatenate two ACKs into a single one if data arrives within a short time, and to make this concatenation possible the TCP implementation in the Linux kernel waits up to 200 ms [8]. This is also the value by which the TLP retransmission timer is increased when only a single segment is in flight (cf. WCDelAckT in [5], Sec. 2.1). Although the data loss is repaired by the tail loss probe segment, it takes approximately twice the time until the transfer is complete (compared to multiple segments in flight). Figure 5 compares the total transfer time in the three network configurations. In the case of no loss, all implementations are equally fast. This also shows that the additional TLP code has no impact on lossless transfers. As noted previously, TLP performs badly with a single lost segment. An interesting trend in the mobile and satellite networks is the slight increase of the transfer time with the number of lost segments. The tcpfilter drops all segments at once, and after the duplicate ACK triggered by the tail loss probe segment ER should immediately resend all segments; thus the number of lost tail segments should not have such a significant impact. Furthermore, the increase in transfer time is not linear. The time increases after 1, 2, 4, 8 and probably 16 packets, which are all powers of 2.
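The timer behavior described above, 2*SRTT with a delayed-ACK allowance when only one segment is in flight, can be sketched as follows (times in seconds; the 10 ms floor and the 1.5*SRTT + WCDelAckT term follow the TLP draft [5] as I understand it):

```python
def tlp_pto(srtt, flight_size, wc_del_ack_t=0.2, min_pto=0.01):
    """Probe timeout (PTO) per the TLP draft: twice the smoothed RTT,
    raised to cover a worst-case delayed ACK (200 ms in the Linux kernel)
    when only a single segment is in flight."""
    pto = max(2 * srtt, min_pto)
    if flight_size == 1:
        pto = max(pto, 1.5 * srtt + wc_del_ack_t)
    return pto
```

For the satellite network (SRTT 0.8 s, several segments in flight) this yields the 1.6 s expectation quoted below; for a single segment in flight the delayed-ACK term dominates, which explains the roughly doubled completion time observed for exactly one lost tail segment.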
The reason for the increasing time is most likely the congestion control algorithm used, but this topic is out of the scope of this paper. The measurements in the satellite network also show that TLP has no substantial benefit on high-delay lines. The recovery time of 2.4 s is also higher than expected from the TLP paper. If the flight size is greater than one segment, TLP calculates the retransmission timer by multiplying the smoothed RTT by two. This would be 1.6 s for the satellite network. For the mobile network this is 0.2 s, while the measurements show an average of 0.33 s. In the high-speed network it should be 0.02 s, whereas 0.045 s is measured. So the Linux TLP implementation always calculates a higher retransmission timer than specified in the paper. This does not have to be an error in the TLP code but can also be a consequence of the RTT measurement implementation in the Linux kernel.

5. CONCLUSION
The results show that the Tail Loss Probe is an improvement to TCP communication in all tested cases; there is no situation where a non-TLP test result is better. The largest improvement in the time until the first segment is retransmitted (-81%) is recorded in the high-speed network. The total transfer time in such a network with a tail loss is decreased by 38%. The TLP draft reports real-world values from a test with the Google web servers of up to 10% improvement in response time. The values presented in this paper are not comparable to the TLP draft values, because in the draft the values are calculated over all transmissions, including those without any tail loss. To compare them, one would need to know how many of the transmissions encountered tail loss. It is important to note that TLP has no benefit for a single lost tail segment. Furthermore, in high-RTT networks TLP does not improve the transfer time, although especially in these networks a decrease in transfer time would be a great advantage. The tests covered only a small selection of possible measurements.
Further options are the transfer size (although this should have no effect with a constant number of tail drops), a generally lossy line, or variations of the transfer window size. Furthermore, this paper intentionally left out aspects of congestion control and TLP's interference with it; measurements in this domain require a much more complex user-space application.

6. REFERENCES
[1] J. Postel. Transmission Control Protocol. RFC 793 (Standard), September 1981. Updated by RFC 1122.
[2] M. Allman, V. Paxson, and E. Blanton. TCP Congestion Control. RFC 5681 (Draft Standard), September 2009.
[3] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow. TCP Selective Acknowledgment Options. RFC 2018 (Standards Track), October 1996.
[4] M. Allman, K. Avrachenkov, U. Ayesta, J. Blanton, and P. Hurtig. Early Retransmit for TCP and Stream Control Transmission Protocol (SCTP). RFC 5827 (Experimental), April 2010.
[5] N. Dukkipati, N. Cardwell, Y. Cheng, and M. Mathis. Tail Loss Probe (TLP): An Algorithm for Fast Recovery of Tail Losses. TCP Maintenance Working Group (Internet Draft), February 2013.
[6] J. D. Day and H. Zimmermann. The OSI Reference Model. Proceedings of the IEEE, 71(12), 1983.
[7] R. Braden. Requirements for Internet Hosts - Communication Layers. RFC 1122 (Standard), October 1989. Updated by RFC 1349.
[8] P. Sarolahti and A. Kuznetsov. Congestion Control in Linux TCP. In Proceedings of the FREENIX Track: 2002 USENIX Annual Technical Conference, pages 49-62, Berkeley, CA, USA, 2002. USENIX Association.
[9] M. Allman, V. Paxson, and W. Stevens. TCP Congestion Control. RFC 2581 (Proposed Standard), April 1999. Obsoleted by RFC 5681.
[10] M. Mathis and J. Mahdavi. Forward Acknowledgment: Refining TCP Congestion Control. In SIGCOMM, 1996.


Minimization of Drive Tests (MDT) in Mobile Communication Networks
Daniel Baumann
Supervisor: Tsvetko Tsvetkov
Seminar Future Internet WS2013/2014
Chair for Network Architectures and Services
Department of Computer Science, Technical University of Munich

ABSTRACT
Drive tests are used for collecting data about mobile networks. This data is needed for the configuration and maintenance of mobile networks. Executing drive tests requires human effort, and the measurements cover only a small slice of the network in time and location. The new idea is to use every device which is active in the network; this concept is referred to as Minimization of Drive Tests (MDT). It means that standard mobiles should be used for measurements to provide data for the operators. The main difference between the two kinds of tests is that MDT uses cheap mobiles whereas drive tests make use of sophisticated measurement equipment. This paper demonstrates that MDT can reduce drive tests, but that there are still use cases where MDT cannot replace them.

Keywords
Minimization of Drive Tests, Cellular Networks, Mobile, LTE, SON

1. INTRODUCTION
Mobile communication networks like GSM, UMTS, LTE and TETRA must be monitored and optimized in order to provide good network coverage and quality of service. For example, coverage holes can appear when a newly constructed building shadows a certain area. To detect and fix such problems, radio measurements are needed. These measurements can be done with highly developed equipment directly at the base station, or by Drive Tests (DTs) to cover the whole area. Drive tests are carried out by cars with measurement equipment, which collect the data in a cell as a snapshot of the cell coverage at a certain time. Furthermore, DTs are used to assess the mobile network performance [3, 4].
The data collected with the test equipment can then be post-processed and evaluated for the configuration and failure handling of the networks. Generating and analyzing this measurement data causes large Operational Expenditure (OPEX) and reflects the network state only at a defined time and location [4]. In 2008, Long Term Evolution (LTE) was published as part of 3rd Generation Partnership Project (3GPP) Release 8 [3]. Today, there is a collection of different technologies like GSM, UMTS, LTE, WLAN and many more. These heterogeneous networks lead to a new complexity with respect to communication and correct configuration. In order to address this problem, Self-Organizing Networks (SON) were introduced in Release 8, targeting the configuration, optimization and healing of cellular networks [3, 14]. SON should simplify the configuration and management of these networks. The 3GPP also studied and specified solutions in Release 9 under the name Minimization of Drive Tests (MDT) in order to reduce the OPEX for drive tests (DT) [4, 15]. MDT addresses the automation of the measurements and configurations. The main idea is to use every device which is logged into the network for collecting measurement data. This paper shows use cases and reasons for drive tests and compares the functions of MDT and drive tests. Note that in the following, the Base Transceiver Station (BTS) in GSM, the Node B in UMTS and the Evolved Node B (eNB) in LTE are referred to as Radio Access Network (RAN) node. Likewise, the Mobile services Switching Center (MSC), the Radio Network Controller (RNC) and the Mobility Management Entity (MME) are referred to as Core Network (CN) node. The paper is organized as follows. Section 2 describes reasons for drive tests and shows what kind of data the operator needs. Section 3 explains how this data is collected. Thereafter, Section 4 explains the idea and vision of how drive tests can be minimized.
Section 5 takes up the functionality of both and compares them with respect to the operator tasks. The last section summarizes the comparison of DT and MDT.

2. OPERATOR TASKS - REASONS FOR DT AND MDT
The main goal of network operators is to provide a network with maximum coverage and minimum usage of hardware. In [15] the 3GPP Technical Specification Group (TSG) RAN defines the main use cases for the minimization of drive tests: coverage optimization, mobility optimization, capacity optimization, parametrization for common channels and Quality of Service (QoS) verification. These use cases are briefly described below [3, 15].
1. Coverage optimization
Coverage is an aspect a user can easily observe and

which mainly influences the user experience. It is a parameter which the user can use to compare different operators. Sub-use cases:
- Coverage mapping: maps based on signal strength.
- Coverage hole detection: areas where call drops and radio link failures happen.
- Identification of weak coverage: areas where the signal level is below the level needed to maintain a planned performance requirement.
- Detection of excessive interference: due to a large overlap of cell coverage areas or unexpected signal propagation between the cells, excessive interference can occur, which degrades the network capacity.
- Overshoot coverage detection: a strong signal from another, far-away cell is detected in the serving cell.
- Uplink coverage verification: weak uplink coverage can cause call setup failures, call drops and bad uplink voice quality. Ideally, uplink and downlink coverage should be equal, but normally the downlink coverage of the BTS is larger than the uplink coverage.
2. Mobility optimization
Mobility optimization aims at minimizing handover failures, which happen when mobiles switch between different cells. It is useful to get measurements at the source and neighbor cells, linked to the location.
3. Capacity optimization
This use case aims at optimizing network capacity planning. The operator is, for example, interested in those parts of the network where the traffic is unevenly distributed. This data helps to determine locations for new base stations.
4. Parametrization for common channels
The network performance can also be degraded by configurations of the random access, paging or broadcast channels which are not optimized. An analysis of the connection setup delay, for example, helps to optimize the parameters of a BTS.
5. QoS verification
Aspects like data rate, delay, service response time, packet loss and interruptions are responsible for the QoS.
The QoS is not only influenced by the coverage but also by operator-specific scheduling of the packet-type connections, which can lead to bad data rates. The QoS is usually measured with key performance indicators (KPIs), which assess the network. The KPI session setup success rate is, for example, influenced by the performance indicators RRC establishment success rate, S1 link establishment success rate and E-RAB establishment success rate [2]. The drive tests are carried out in the following five scenarios [15]:
1. Deployment of new base stations
Drive tests are needed when new base stations are deployed. The new base station transmits radio waves in a test mode. Then UL/DL coverage measurements of the new cell and its neighbor cells are collected with the help of drive tests. Afterwards the results are used to improve the performance of the cell.
2. Construction of new highways/railways/major buildings
In areas where new highways, railways or major buildings are constructed, the coverage is probably reduced due to shadowing of the cell. In addition, new buildings increase the volume of network traffic. In order to analyze and reconfigure for this new usage profile, drive tests are needed.
3. Customer complaints
When customers inform the operator about bad coverage or bad voice or data quality, the operator also executes drive tests to detect the problem in the relevant area.
4. KPI alarms at the core network
The operators also monitor the network elements. In order to assess these elements, KPIs are used. Most KPIs are composed of several counters, which contain information like dropped calls or handover failures. If the number of failures increases in a certain area, the operator generally carries out drive tests to obtain detailed information [6].
5. Periodic drive tests
Periodic drive tests are additionally used to monitor particular cells.
They are needed in order to provide a continuously high quality level and to monitor the coverage and throughput of the network [3].

3. DRIVE TESTS
So far, drive tests have been the main source of measurement data from cellular networks. As shown in Figure 1, drive tests are usually carried out with measurement cars, which contain systems of scanners and test mobiles. The scanners can be configured to scan all technologies; this is used for detecting interference and for monitoring all accessible base stations. A scanner operates completely passively and is not recognized by the network. As a result, only unencrypted data can be collected, which is mostly signaling and broadcast messages. Additionally, test mobiles are needed because they are logged into the network and provide, for example, the details of the handover procedures. Furthermore, they are used for checking the speech quality and the transfer (up- and download) rates [13]. In the following, the functions of scanners and test mobiles are explained in more detail.

3.1 Scanner
Drive tests are mostly carried out with scanners like the TSMW from Rohde & Schwarz, illustrated in Figure 2. These scanners support measurements in different networks and positioning with Global Navigation Satellite Systems (GNSSs). The difference from a test mobile is that the scanner has a broadband RF front-end and a baseband processing system. This is the reason why the scanner can support

all types of technologies in the defined frequency range. A test mobile normally supports only a few technologies and cannot detect interference with other networks like DVB-T [13].

Figure 1: A measurement car from Rohde & Schwarz [9]

The following list contains the advantages of a scanner, taken from [13]:
1. Significantly higher measurement speed than test mobiles: high-speed measurements allow better statistics and lower the chance of missing important problems, such as interference.
2. Measurements are independent of the network: mobiles only measure channels which are provided by the BTS neighborhood list. The scanner can measure all available channels and allows the detection of hidden neighborhoods.
3. Use of only one unit for different networks and applications: scanners support a number of different technologies like GSM, UMTS, LTE, TETRA and DVB-T.
4. Spectrum analysis provides additional information: spectrum scans over multiple frequency ranges, for example between 80 MHz and 6 GHz, make it possible to detect in-band and external interference.
5. Independence of mobile chipsets: scanners have a broadband RF front-end and a baseband processing system which is independent of any mobile phone chipset and supports different technologies. Therefore, a scanner can be used as the reference system.
6. Higher level and time accuracy compared to mobile based measurements: the scanner uses GNSS signals for synchronizing the local clocks and achieves a more precise timing than normal user equipments (UEs).
7. Scanners are passive: scanners only listen on the broadcast channels and do not influence the network. This makes it possible, for example, to monitor networks at the border without roaming costs.

Figure 2: A scanner from Rohde & Schwarz [10]

3.2 Test Mobile
The main feature of the test mobile is that it works in the network and receives its messages. It therefore reflects the behavior of the user mobiles and is used for operational tests.
These tests include throughput, connection quality, video quality, voice quality, handover and network quality tests. An indicator for network quality could be the ratio of dropped to successful calls [11].

3.3 Data aggregation software
With real-time and post-processing tools like R&S ROMES and the Network Problem Analyzer (NPA), the data from the scanners and test mobiles can be analyzed [11]. The major analysis features are [12]:
1. Coverage Analysis: provides information on where weak coverage, coverage holes or overshoot coverage exist.
2. Interference Analysis: provides information on where interference exists and from which technology it originates.
3. Call Analysis: provides information about the number of successful, dropped or blocked calls.
4. Data Transaction Analysis: provides information about response times.
5. Throughput Analysis: provides information about the achievable data throughput.

6. Neighborhood Analysis: checks whether the neighborhood lists provided by the base stations match the broadcast information received at the scanner.
7. Handover Analysis: shows the duration of handovers and details of handover failures.
8. Spectrum Analysis: shows which frequency bands are used and how strong the signals are.

4. MINIMIZATION OF DRIVE TESTS
The problem that drive tests require human effort to collect measurement data and can only perform spot measurements has led to automated solutions which include the end users' UEs. This approach should provide measurement data for fault detection and optimization in all locations covered by the network. The feature for this evolution in the 3GPP standard is named MDT. It started in 2008 in the Next Generation Mobile Networks (NGMN) forum, where the automation of drive tests became an operator requirement. The 3GPP also recognized the need and followed up on the subject; one of its first studies was a 2009 3GPP Technical Report (TR) [3]. The 3GPP started two parallel work items: the MDT functionality in the UE and the MDT management. The MDT in the UE is handled by the 3GPP TSG RAN and the MDT management by the 3GPP TSG Service and System Aspects (SA) [3]. MDT should reduce the operational effort, increase network performance and quality and, at the same time, decrease maintenance costs [3]. In the TR, the 3GPP TSG RAN group defined the requirements and constraints for the MDT solution [15]:
1. The operator shall be able to configure the UE measurements independently from the network configuration.
2. The UE reports measurement logs at a particular event (e.g. radio link failure).
3. The operator shall have the possibility to configure the logging in geographical areas.
4. The measurements must be linked with information which makes it possible to derive the location information.
5. The measurements shall be linked to a time stamp.
6. The measurement terminal shall provide device type information, so that the right terminals can be selected for specific measurements.
7. The MDT shall be able to work independently from SON.
The solution shall take two constraints into account. First, the UE measurement logging is an optional feature. Second, to limit the impact on power consumption, the positioning components and UE measurements must be taken into account; the UE measurements should rely on measurements from the radio resource management as much as possible [15].

4.1 Architecture
At the beginning of MDT, the 3GPP TR discussed a user plane and a control plane solution [4]. In the user plane solution, the measurement data is reported by uploading it to a file server; the typical data connection is used and the transport is transparent for the radio access network (RAN) and the core network (CN) [3, 17]. In the control plane solution, the reporting is controlled by the Operations, Administration, and Maintenance (OAM) system. It is targeted via the eNB/RNC to the UE over RRC connections. The measurements can then be collected and combined at the eNB/RNC and sent back to the OAM system [3, 17]. The result of the study phase was that the control plane architecture is preferable, because the measurement results can be reused for any automated network parameter optimization; with respect to SON, redundant data handling is avoided [4]. The 3GPP Technical Specification (TS) defines two MDT types from the network signaling perspective and two from the radio configuration perspective [4]. These are described in the following.

Area and subscription based MDT
From the network signaling perspective, one type is the area based MDT and the other the subscription based MDT [4]. In area based MDT the data is collected in an area, which is defined as a list of cells or as a list of tracking/routing/location areas [18]. As shown in Figure 3, the MDT activation from the OAM is directly forwarded to the RAN node, which selects the UEs of the defined area.

Figure 3: Area based MDT according to [4]

The subscription based MDT addresses the measurement of one specific UE [18, 3]. It is carried out in the OAM by selecting a UE with a unique identifier. As shown in Figure 4, the OAM sends the MDT configuration parameters to the

HSS, which decides whether the MDT activation for the selected UE is allowed. The HSS sends the MDT configuration over the CN node and the RAN node to the UE [4].

Figure 4: Subscription based MDT according to [4]

Immediate and logged MDT
From the radio configuration perspective, one type is the logged MDT and the other the immediate MDT [4]. The immediate MDT allows measurements only in the connected state. It is possible to use several measurement triggers; the UE immediately reports the measurement results when the configured triggers are met or the reporting configuration matches [16]. The results are collected at the RAN node, which, as shown in Figure 5, notifies the Trace Collection Entity (TCE) that collects all MDT measurements. After the notification, the TCE uses a file transfer protocol to download the MDT log [3].

Figure 5: Immediate MDT according to [3]

To also support measurements in the idle state, the logged MDT is used. With logged MDT it is also possible to configure periodical triggers. If these triggers are met, the UE stores the measured information [16]. Additionally, this provides the possibility to store failures if the network is not reachable. However, the logged MDT is an optional capability; only the immediate MDT is mandatory for the UE [5]. The decoupling of measurements and reporting also reduces the battery consumption and the network signaling load.

Figure 6: Logged MDT according to [3]

Architecture Elements
The MDT data collection is initiated and controlled by the OAM system. The core network is used for MDT but has no specific MDT logic. The UE and the RAN collect the data and send it to the TCE, which stores the data for post-processing analysis [5]. This architecture and the usage of the MDT types are shown in Figure 7.

Figure 7: MDT architecture according to [4]
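As a rough illustration of the split between immediate and logged MDT described above, a UE-side measurement handler might look like this; all names are hypothetical and not taken from the 3GPP specifications:

```python
from typing import Dict, List

def handle_sample(rrc_connected: bool, sample: Dict, log: List[Dict]) -> List[Dict]:
    """Sketch of the immediate vs. logged MDT split (hypothetical names).

    Immediate MDT: in connected state, the sample is reported right away
    to the RAN node / TCE. Logged MDT: in idle state, the sample is stored
    locally and transferred later after a trigger, which decouples
    measurement from reporting (saving battery and signaling load).
    Returns the samples to report now.
    """
    if rrc_connected:
        return [sample]      # immediate MDT: report right away
    log.append(sample)       # logged MDT: store locally for later transfer
    return []
```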
As Figure 6 shows, the UE stores the measurements locally; after a defined trigger they are transferred to the RAN node and then to the TCE [3]. For the comparison one should keep in mind that coverage holes can only be detected with logged MDT, because immediate MDT needs a connection to the CN node.

4.2 Managing MDT
MDT can be configured as area or subscription based MDT, with the UEs in immediate or logged MDT mode. The MDT configuration is decided in the OAM, which initiates the defined MDT type. Each combination of MDT types allows different configurations. In the following, a subset of the MDT configuration parameters from [18] is presented:
1. List of measurements: defines the measurements which shall be collected, e.g. data volume, throughput, received signal code power, reference signal received power.
2. Reporting trigger: defines whether measurement reports for UMTS or LTE are created periodically or event based.

3. Report interval: defines the interval of periodical reporting, from 120 ms to 1 h.
4. Report amount: defines the number of measurement reports that should be created.
5. Event threshold: defines the threshold for event based reporting.
6. Logging interval: defines the logging periodicity in logged MDT.
7. Logging duration: defines how long an MDT configuration is valid.
8. Area scope: defines the geographical area in which MDT should be executed; this can be a number of cells or tracking/routing/location areas.
9. TCE ID: an identifier which can be resolved to the TCE IP address at the RAN node.
10. Anonymization of MDT data: defines whether the measurement results are saved with an IMEI-TAC 1 or without any user information.
11. Positioning method: defines whether positioning with GNSS or E-Cell ID should be used.

4.3 Location Information
For drive tests, walk tests and MDT, the location information is as important as the radio measurement itself; the analysis of the data is only successful with good location information for each measurement. In MDT 3GPP Release 10, a best effort location acquisition was defined: a measurement can be tagged with location information if it is available from some other location-enabled application [5], for example a map application which already uses the GNSS chip. Release 11 defines that the network can request the UE to use location information for an MDT session. The problem is that this approach can lead to higher battery consumption. J. Johansson et al. [5] note that the operator may choose to handle this through subscription agreements and only request the location of subscribers who have consented to it. Additionally, an approximate location can be estimated with RF fingerprint measurements, obtained from signal strength measurements of the neighboring cells [5]. Another point is the tagging of RAN-based measurements.
This should be done in post-processing by correlating the timestamps with those of the UE-based DL measurements, which already contain the location information tag.

1 The first eight digits of the International Mobile Station Equipment Identity

5. COMPARISON BETWEEN DT AND MDT
After the details concerning drive tests and the minimization of drive tests, DT and MDT are now compared from the perspective of technologies, operator use cases and the hardware used.

5.1 Technologies
MDT in 3GPP is defined for UMTS and LTE. However, there are other technologies like GSM, TETRA and DVB-T for which measurements are needed. For instance, GSM is used as a communication and signaling network for the railways as GSM-R, where there is also a need for coverage measurements. Furthermore, TETRA is now used in Germany by authorities and organizations with security functions like police, fire brigades and emergency rescue services. These organizations require a very robust network with good coverage, which in turn requires measurements. Measurements for these and other technologies can only be carried out by classical drive tests.

5.2 Use Cases
Section 2 lists the use cases for the operators from the perspective of MDT. In the next subsections these use cases are used to compare DT and MDT.

Coverage optimization
Coverage is one of the most important aspects of network performance and can be directly recognized by the end user [3]. With MDT it is possible to record the signal strength and the location of the mobile, which can then be used as a coverage indicator [3]. However, the main problem is that the signal strength reported by different mobiles at the same location and time can differ by much more than ±6 dB [13]. Another problem is that it is unknown whether the mobile is inside a bag or a car, which leads to signal loss. If logged MDT is used, it is not possible to know whether a network failure or the mobile itself was the cause.
Possibly, this problem could be solved by collecting a large amount of measurement data, which could then be merged and statistically evaluated. Another problem is how to obtain the location inside a building. In [3] it is suggested to use a GNSS chip as a location provider. However, good coverage is also needed inside big buildings like airports; this requires indoor measurements, for which a GNSS chip is not usable. With the help of DT and a scanner as measurement device it is possible to collect coverage measurements of high quality. The problem here is that they provide only a snapshot of the area where the DT was executed.

Mobility optimization
Another use case for optimizing the mobile network is mobility optimization. The aim is handover failure rates that are as small as possible. A handover failure can happen if the mobile and the network do not recognize that the user travels from one cell to another, so that the user loses the signal of the old cell. Another problem could be that the load is too high and the handover takes too much time; in most cases the call is then dropped. Operators can optimize their network if