|Day 1 - Monday, May 5, 2008|
Delay-tolerant Networking (DTN) has been used to enable opportunistic communication where IP-based end-to-end communication would fail. In these communication scenarios the DTN routing layer uses bundles, which consist of a semantically self-contained set of resources, to forward messages on a hop-by-hop basis. We take advantage of the self-contained nature of the messages and add a small amount of application-layer information to allow intermediate nodes to understand the application data carried in the bundles. Based on this, we create a cooperative storage infrastructure which allows content caching and retrieval from the nodes forming the DTN network. With some of the nodes providing access to the Internet, resources can be fetched and packaged for DTN delivery. To deploy our cooperative storage on networks operating on the existing DTN mechanisms, we do not require modifications to the bundle routing. Instead, we propose that nodes willing to cooperate support a minimal Application Hints extension. This extension allows intermediate nodes to investigate whether a passing bundle contains a resource potentially interesting beyond its routing-layer lifetime. Such a resource could be, e.g., a popular web page packaged with the referenced pictures.
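The abstract does not fix a concrete format for the hint; as a rough illustration, the extension could be modeled as a small metadata record that a cooperating node inspects without ever parsing the payload. All field names below are hypothetical, not the authors' actual wire format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ApplicationHint:
    """Hypothetical fields for an Application Hints extension (illustrative)."""
    content_type: str          # e.g. "text/html" for a packaged web page
    content_id: str            # stable identifier used for cache lookups
    useful_after_expiry: bool  # worth keeping beyond the routing-layer lifetime?

@dataclass
class Bundle:
    payload: bytes
    lifetime_s: int                         # routing-layer bundle lifetime
    hint: Optional[ApplicationHint] = None  # nodes without the extension ignore it

def should_cache(bundle: Bundle) -> bool:
    """A cooperating intermediate node inspects only the hint, never the payload."""
    return bundle.hint is not None and bundle.hint.useful_after_expiry
```

Because the hint is optional, routing is unchanged: nodes that do not implement the extension simply forward the bundle as before.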
Automotive traffic monitoring using probe vehicles with Global Positioning System receivers promises significant improvements in cost, coverage, and accuracy. Current approaches, however, raise privacy concerns because they require participants to reveal their positions to an external traffic monitoring server. To address this challenge, we propose a system based on virtual trip lines and an associated cloaking technique. Virtual trip lines are geographic markers that indicate where vehicles should provide location updates. These markers can be placed to avoid particularly privacy-sensitive locations. They also allow aggregating and cloaking several location updates based on trip line identifiers, without knowing the actual geographic locations of these trip lines. Thus they facilitate the design of a distributed architecture, where no single entity has complete knowledge of probe identities and fine-grained location information. We have implemented the system with GPS smartphone clients and conducted a controlled experiment with 20 phone-equipped drivers circling a highway segment. Results show that even with this low number of probe vehicles, travel time estimates can be provided with less than 15% error, and applying the cloaking techniques reduces travel time estimation accuracy by less than 5% compared to a standard periodic sampling approach.
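A minimal sketch of the aggregation-and-cloaking idea, assuming a simple k-update release rule: the aggregator sees only opaque trip line identifiers and speeds, never coordinates or probe identities. The threshold and interfaces are illustrative, not the system's actual design:

```python
from collections import defaultdict
from statistics import median

K_ANONYMITY = 3  # assumed cloaking parameter: release nothing below k updates

class TripLineAggregator:
    """Aggregates speed updates keyed only by trip line ID.

    Mirrors the distributed-architecture idea: this entity never learns
    geographic locations or probe identities (sketch, not the authors' API)."""

    def __init__(self):
        self.updates = defaultdict(list)  # trip_line_id -> [speed_kmh, ...]

    def add(self, trip_line_id, speed_kmh):
        self.updates[trip_line_id].append(speed_kmh)

    def release(self, trip_line_id):
        speeds = self.updates[trip_line_id]
        if len(speeds) < K_ANONYMITY:
            return None  # cloaked: too few probes to release safely
        return median(speeds)
```

A separate entity that knows trip line geometry (but not probe identities) can then turn released medians into travel time estimates.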
Really Simple Syndication (RSS) is the de facto solution for news publishing. It is an XML-based file format that defines a news feed and the properties of the feed. The file also contains a set of news items. The publishing process is quite simple. The author selects a set of news items and writes them into a file following the RSS formatting rules. This is normally done automatically as a side product by a content management application when the author adds a new entry to his/her web page. The client application then downloads the resulting file from the server and analyzes its content. If there are new items, they are processed and shown to the user. All old news items are discarded. The client repeats the downloading process, for example, every 10 minutes, depending on the settings of the client application. This pull-style mechanism results in a great amount of news duplication. Every downloaded file contains some news that the client already downloaded during the previous iteration. Delivering the news with a push-style mechanism would result in a much better situation: traffic is only generated when there is something new to deliver. Numerous proprietary solutions exist, some dating back to the mid-1990s; most of them vanished during the dot-com crash. There are also some solutions tied to particular middleware. The aim is to revisit the push concept using modern technologies and make the solution platform independent. Ideally, the solution could be used on any platform and operating system, from mobile phones to workstations. One very interesting protocol is the Session Initiation Protocol (SIP), and especially its extension for specific event notifications. The client subscribes to a server with SIP methods and gives a profile that includes the preferences for news delivery and the feeds it is interested in. The server then sends news items to the client as requested.
The system does not require changes to existing news servers; instead, a proxy server is used to collect the traditional RSS feed and deliver the news items over SIP. Client software was developed for the Nokia N800 Internet tablet. Moreover, there are no changes to the standardized SIP protocol; it is merely used as a transport protocol. SIP is the basis for IMS, and many high-end mobile phones support it, which makes SIP a very attractive transport. The main goal is to replace the current HTTP transport for RSS feeds with SIP signaling. Tests show that the classic RSS delivery method can result in up to a 99% duplication rate when analysing some common feeds at a 30-minute download interval; shorter intervals would result in even more duplicate news items. The software that was developed to test the hypothesis removed the duplication and generated 7.5 times less traffic during the tests. These improvements are especially beneficial for mobile devices, because traffic is reduced and less time is spent pointlessly parsing news items that are already known, dramatically lowering bandwidth usage and energy consumption. There is also added value in delivering the news in real time rather than asking for it, say, every 30 minutes.
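The 99% figure is consistent with a simple back-of-envelope model of periodic polling; the sketch below assumes a fixed feed length and a steady publication rate, which real feeds only approximate:

```python
def duplication_rate(feed_size, new_items_per_hour, poll_interval_min):
    """Fraction of items in each poll that the client has already seen,
    assuming the feed always carries `feed_size` items and new ones
    arrive at a steady rate (a simplification of real feeds)."""
    new_per_poll = min(feed_size, new_items_per_hour * poll_interval_min / 60)
    return 1 - new_per_poll / feed_size
```

For instance, a 50-item feed publishing one item per hour, polled every 30 minutes, yields 0.5 new items per poll, i.e. a 99% duplication rate, matching the order of magnitude reported above; a push mechanism would send only the 0.5 new items.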
To improve the availability and cost-effectiveness of computer-based services, we need to find ways to automate the maintenance and management of the systems. To enhance the current levels of availability, the entire system may need to react to changes in milliseconds, which is faster than any expert human maintainer could. In addition to faster reaction times, removing the maintenance personnel from the cycle would also help to reduce cost. More urgently still, even if cost were not an issue, there is a global shortage of expert maintenance personnel. IBM's autonomic computing initiative gives us the target of a system that has all kinds of self-* properties. The system could be self-aware and, based on this awareness, able to optimise and even reconfigure itself. The eventual goal is a system that is capable of managing itself. The maintenance personnel would still need to give guidelines for organisation and management, but should not need to perform basic maintenance tasks. During 2007, we had a project in which a simple proof-of-concept prototype was created to demonstrate an automatic service deployment mechanism in a distributed environment. The prototype architecture had a 'gateway' machine for client connections. It forwarded client requests to the management node, which handled service deployment and starting. Once everything was ready, the client got the access information via the gateway node. The management node is the key in this architecture. It makes autonomous decisions about the locations of the requested services. It must also monitor the system to detect failures and inconsistencies. The goal is naturally to improve the availability of the services. The management node could, if intelligent enough, make the decisions fully without human intervention, or it could serve merely as an alerting mechanism for the maintenance personnel.
Techniques like Ajax offer a tempting alternative not only for user interface designers, but also for application developers wishing to distribute their program code to the client side. How far each application can or should be extended to the clients is an interesting balancing question, for not all clients have equal properties. When it comes to either performance or capability to present applications, the lowest common denominator is not easy to find. For the first time ever, users are voluntarily downgrading to browsers with fewer features than the earlier generation. These mobile browsers feature less display area, less processing power, and lower network bandwidth with increased latency, plus the completely new factor of a limited battery lifetime. Simultaneously, the use of Ajax involves browser capabilities that have become common enough only during the past few years. A natural question is then whether mobile devices can support Ajax to the extent needed by the applications. In our presentation we focus on handheld devices. More specifically, our tests concentrate on the browser performance of the least capable hardware platforms that are still able to support Ajax. This subset is called the converged mobile devices, and it is loosely defined as the set of devices that combine features from mobile phones, personal digital assistants (PDAs), cameras, tablet computers, and other electronic aids. 80 million such devices were sold in 2006 alone. Our work is an introduction to a more thorough evaluation of a large number of Ajax applications on 3 different converged devices using 5 different browsers in all. Our evaluation is based on (i) how correctly the browser is able to render the applications, and (ii) how long it takes to render the applications. A research paper on the same subject has recently been accepted to the Second International Workshop on Improved Mobile User Experience.
Our demonstration at Rutgers will offer a sneak preview of the more complete presentation to be held in Sydney later in May 2008.
|Day 2 - Tuesday, May 6, 2008|
|Day 3 - Wednesday, May 7, 2008|
Ambient lifestyle feedback systems are embedded computer systems designed to motivate changes in a person's lifestyle by reflecting an interpretation of targeted behavior back to the person. Other interactive systems, including "serious games", have been applied for the same purpose in areas such as nutrition, health and energy conservation, but they suffer from drawbacks such as inaccurate self-reporting, burdens placed on the user, and lack of effective feedback. Ambient lifestyle feedback systems overcome these challenges by relying on passive observation, a calm presentation style and emotionally engaging feedback content. In this presentation, we propose an ambient lifestyle feedback system concept and provide insights from the design and implementation of two prototype systems, Virtual Aquarium and Mona Lisa Bookshelf. In particular, we discuss the theory and practice of effective feedback design by drawing on elementary behavioral psychology and small-scale user studies. The work is aimed at aiding in the design of ambient persuasive technologies and ambient interaction in general.
Recently, the cross-layer approach has frequently been invoked as the sound way to improve the quality of video communication over wireless channels. It is hard to resist the theoretical appeal of this approach, but how exactly can it be engaged with existing protocols and infrastructures, and what exactly do new cross-layer design solutions do to improve the quality of video service? This is a challenging problem, if only judged by the volume of literature devoted to the topic over the last several years. In our research, we analyze the fundamental issues in IEEE 802.11 wireless networks for real-time uplink video applications over WiFi. Such applications are characterized by many wireless devices transmitting video at various PHY rates over a relatively congested channel. Today's off-the-shelf 802.11 equipment easily suffers catastrophic failures and poor video quality under congested conditions. We analyze, both theoretically and in real implementations, the value of a dynamic and distributed control strategy that remedies these problems in a pragmatic way: frame-by-frame control of MAC-layer parameters while simultaneously exploiting the prevalent source-rate adaptation in the APP layer. In the process, we define the essential components of cross-layer control over wireless.
Masquerading is the act of changing a wireless card's MAC address from its factory-set address to that of another card. Existing spoof detection techniques that use only the MAC sequence number do not consider the 802.11e QoS amendment. The rapid adoption and deployment of the MAC QoS extensions adds a multitude of packet types and internal service queues, which, along with differing vendor implementations, has made practical masquerading detection nearly impossible using MAC sequence analysis alone. We will show how such methods can easily incur very high false positive rates on actual data traffic, and we propose a lightweight solution for foolproof masquerading detection.
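To see why QoS queues break sequence-only detection, consider a naive detector that flags any large backward jump in sequence numbers. A single legitimate 802.11e card whose access categories keep separate counters (vendor behaviour varies) already trips it. This is a toy illustration of the failure mode, not the detection method proposed in the talk:

```python
def naive_spoof_alerts(seq_numbers, max_gap=3):
    """Flags a frame whenever its sequence number falls behind the previous
    one by more than max_gap -- a common pre-802.11e heuristic.
    (Wraparound at 4096 is ignored for brevity.)"""
    alerts, prev = 0, None
    for seq in seq_numbers:
        if prev is not None and seq < prev - max_gap:
            alerts += 1
        prev = seq
    return alerts

# One legitimate 802.11e card with two access categories, each keeping its
# own sequence counter; interleaved frames look like backward jumps:
interleaved = [100, 1000, 101, 1001, 102, 1002]
```

Here every return from the high-numbered queue to the low-numbered one raises a false alarm, even though no masquerading occurred.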
Privacy can be defined as an individual's right to control the flow of information about themselves. However, modern mobile computing systems leak personally identifiable information and also information that can be used to track the location of the user. We have systematically investigated the ways in which sensitive or personally identifiable information leaks from laptops with network connectivity. To mitigate this leakage, we have investigated privacy protection mechanisms that do not require trusting the network for their operation. The focus is on approaches that are deployable and usable; that is, we attempt to minimize the changes to user experience on common legacy operating systems. We present the approaches and analyze the ramifications when only a single host is modified or when two communicating hosts have implemented the same privacy protection mechanism. The fundamental lesson of this work has been that privacy needs to be considered at the system-wide level.
We describe a simple protocol that allows two parties communicating on a point-to-point wireless link to establish a common secret key using fundamental properties of the wireless medium, without letting an adversary infer any information about the key. The established key can then be used to encrypt communication between Alice and Bob using standard symmetric key algorithms such as Rijndael, DES, etc. The protocol allows Alice and Bob to regularly refresh their keys. It resists cryptanalysis of the generated key by an eavesdropping adversary Eve and, unlike key-agreement schemes proposed in the prior literature, does not require that Alice and Bob share an authenticated channel. Our algorithm detects and resists a man-in-the-middle attack by an active adversary. We numerically evaluate the performance of our algorithm and validate it through a measurement-based study using 802.11 radios.
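One common ingredient of such channel-based schemes is quantizing reciprocal channel measurements (e.g. RSSI) into bits, discarding samples inside a guard band so that Alice's and Bob's bit strings agree with high probability. The sketch below shows only this quantization step with illustrative thresholds; it is not the full protocol from the talk:

```python
def channel_bits(rssi, q_plus, q_minus):
    """Quantize a list of channel samples (e.g. RSSI in dBm) into key bits.
    Samples above q_plus become 1, samples below q_minus become 0, and
    samples inside the guard band [q_minus, q_plus] are discarded.
    Returns the bit list and the indices of the kept samples."""
    bits, kept = [], []
    for i, r in enumerate(rssi):
        if r > q_plus:
            bits.append(1)
            kept.append(i)
        elif r < q_minus:
            bits.append(0)
            kept.append(i)
        # in-band samples are dropped: both sides must agree on them
    return bits, kept
```

Alice and Bob each run this on their own measurements, exchange only the kept index lists (which reveal nothing about the bit values), and keep the intersection as the shared key material.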
Compared to general-purpose systems, embedded systems have different limitations and requirements. Therefore, a virtual machine monitor (VMM) for embedded systems should support different properties. For instance, since the number of guest OSes is limited and fixed, the design can be made simpler and smaller. In addition, because of the limited hardware resources and the requirement of real-time scheduling, VMMs for embedded systems should provide functions to support precise cooperation between multiple guest OSes. We are now developing a simple virtualization layer for embedded systems, and adding features to let it support cooperative scheduling and synchronization between multiple OSes.
A great number of people spend one or more hours each day driving between home and the office. These daily roadway commutes are highly predictable and regular, and provide a great opportunity to form virtual mobile communities. However, even though these commuters are already physically present in the same location, they are limited in their ability to communicate with each other. We will present a framework for building such communities, which we call Vehicular Social Networks (VSNs), to facilitate better communication between commuters driving on highways. As a proof of concept, we present the design of RoadSpeak, a VSN-based system which allows drivers to automatically join VSNs along popular highways and roadways, and to communicate with each other by means of voice chat messages.
|Day 4 - Thursday, May 8, 2008|
We have constructed several middleware systems for context-aware computing in pervasive computing environments. They maintain contextual information, e.g., the locations of users and objects, measured by sensing systems, and provide services according to changes in the real world. We have proposed query mechanisms for contextual information by extending the model checking technique, and we have evaluated our systems in several real spaces, e.g., museums, schools, and hospitals.
Conserving critical resources such as energy and minimizing network cost are two primary challenges posed by ubiquitous applications on mobile devices. Adaptive middleware helps us save energy and minimize network cost for resource-hungry applications. In this presentation, we use a reconfiguration-service-enabled middleware model that provides adaptive decisions for context-based applications. To illustrate our model we consider a popular application on mobile devices, namely YouTube, and show how such services can benefit from our adaptive middleware. YouTube is one of the most popular video sharing websites on the Internet today. Users upload, view, and share videos on YouTube. It uses Adobe's Flash Video (FLV) format for video delivery. Users can upload videos in a variety of formats (e.g., AVI, MPEG, and WMV), and YouTube converts them to FLV before posting them. Users also have the choice to transcode the media files to FLV before uploading. For downloading, YouTube utilizes Adobe's progressive download technology, which allows playback to begin while the download is in progress. The challenge in using YouTube services on mobile devices is that they consume more energy and network resources. Power consumption during upload and download of videos on YouTube involves computational and communication costs. While uploading videos, the computational cost is mainly due to memory access and (optional) format conversion, whereas the communication cost is due to network transmission, which depends linearly on the file size and bandwidth. The file size varies with different media formats; for example, a 34.5 MB AVI file can be transcoded to an FLV file of only 1.7 MB. Under the same network conditions, the communication cost of transferring the FLV file is less than that of the AVI file, but the conversion from AVI to FLV incurs additional computational cost.
In addition, the transmission cost per bit varies with different network access types and their bandwidth. Hence, there is a tradeoff between computational and communication cost. Correspondingly, while downloading videos, local playback and transmission from the YouTube server incur computation and communication cost, respectively. The reconfiguration-service-enabled middleware can help YouTube applications perform video transcoding before/after upload and playback during/after download, depending on variations in network conditions, battery lifetime, and user preferences. For example, if the network bandwidth is very low during the upload of an AVI file, and the battery lifetime is sufficient, the mobile device can choose to convert the AVI to FLV locally and then upload it. In this talk, we will present the possible use case scenarios and the decision model for the YouTube application powered by adaptive middleware. We will conclude the talk with a discussion of future research directions for adaptive middleware supporting context-aware applications.
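The upload-side tradeoff can be sketched as a comparison of two rough energy totals: transmit the original file as-is, or pay a per-megabyte transcoding cost and transmit the much smaller FLV. The per-megabyte coefficients below are placeholders, not measured device figures:

```python
def upload_plan(size_mb, flv_size_mb, e_tx_j_per_mb, e_cpu_j_per_mb):
    """Choose between uploading the original file and transcoding to FLV
    first, by comparing rough per-megabyte energy costs (illustrative
    coefficients; a real decision model would also weigh bandwidth,
    battery level, and user preferences)."""
    direct = size_mb * e_tx_j_per_mb
    transcode = size_mb * e_cpu_j_per_mb + flv_size_mb * e_tx_j_per_mb
    return ("transcode", transcode) if transcode < direct else ("direct", direct)
```

With the file sizes quoted above (34.5 MB AVI vs. 1.7 MB FLV) and any transmission cost noticeably larger than the transcoding cost, local conversion wins; on a fast, cheap link with a nearly empty battery, the same comparison flips.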
We study decentralized strategies for facilitating data collection in circular wireless sensor networks, which rely on the stochastic diversity of data storage. The goal is to allow for reduced-delay collection by a mobile data collector (MDC) who accesses the network at a random position and random time. We consider a two-phase data collection: the push phase is mechanized through source packet dissemination strategies based on network coding; the pull phase is based on polling of additional packets from their original source nodes. Dissemination is performed by a set of relays which form a circular route to exchange source packets. The storage nodes within the transmission range of the route's relays linearly combine and store overheard relay transmissions. The MDC first collects a set of packets in its physical proximity and, using a message-passing decoder, attempts to recover all original source packets from this set. Whenever the decoder stalls, a source packet which restarts decoding is polled (doped) from its original source. The number of doped packets can be surprisingly small and, hence, the push-pull doping strategy may have the least collection delay when the density of source nodes is sufficiently large. Furthermore, Ideal Soliton fountain encoding is a good linear combining strategy at the storage nodes whenever doping is employed.
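The stall-and-dope behaviour of a peeling (message-passing) decoder can be illustrated on the combination structure alone, treating each collected packet as the set of source indices XORed into it. This is a toy model of the push-pull idea, not the paper's exact decoder:

```python
def peel_with_doping(coded, n_sources):
    """coded: list of sets of source indices (XOR combinations the MDC
    collected). Repeatedly peels degree-1 combinations; whenever the
    decoder stalls, one missing source packet is doped (polled directly).
    Returns the number of doped packets needed to recover everything."""
    coded = [set(c) for c in coded]
    recovered, doped = set(), 0
    while len(recovered) < n_sources:
        progress = True
        while progress:                        # standard peeling rounds
            progress = False
            for c in coded:
                undecoded = c - recovered
                if len(undecoded) == 1:        # degree 1 after reduction
                    recovered |= undecoded
                    progress = True
        if len(recovered) < n_sources:         # stalled: dope one source
            missing = min(set(range(n_sources)) - recovered)
            recovered.add(missing)
            doped += 1
    return doped
```

In the second test below, a chain of degree-2 combinations is undecodable on its own, but a single doped packet unlocks the whole chain, which is the effect that keeps the doping count small.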
In recent years, there has been wide interest in future directions of the Internet. At least two major directions are receiving considerable attention. First, the clean-slate direction aims at a completely new design which would ultimately replace the waist of the Internet hourglass with a more flexible design. The second direction aims at improving the existing design gradually, since replacing the prevailing design of the Internet can be a very complex and time-consuming task. However, it is argued that the architecture of the Internet is ossified, which means that the existing core technologies are difficult to refine or change. Both of these directions share the important relation between the communication substrate and the applications running over it. Together they are the basic asset whose value cannot be underestimated and which should be protected. That is why both directions see security as the primary requirement. In either case, it is inevitable that evolutionary events will take place. However, these events might require a catalyst or a capability of the environment to get initiated. Recently, Dovrolis approached the discussion of future directions of the Internet through a biological metaphor. The evolution of living species is determined by a few fundamental mechanisms, such as genetic heritage, mutations, and natural selection. Dovrolis concludes that evolutionary research can result in less costly and more robust designs than the clean-slate approach. Lee et al. have presented a model of a network service selection problem. Their work is motivated by two visions. First, diverse, ubiquitous wireless access to the Internet can be catalyzed by an economic model different from the prevailing one. Second, an open market for new wireless devices and applications can be created through such a wireless infrastructure, in which anyone can provide and sell access to the Internet.
Moreover, when the foundations of the Internet were established, several assumptions were made. From the security point of view, the two most prominent were the explicit trust in the end-points themselves and trust in other end-points. Neither of these assumptions holds anymore. In addition, there exist external forces that are driving the Internet towards a more closed, inflexible environment for new applications. The lack of trust in end-nodes by their end-users causes several side effects. End-users might be prohibited from accessing the network if their end-point is not trusted by the network itself. This clearly decreases the openness of the Internet. In addition, a typical end-user might not have much say in how trustworthy their end-point is in terms of the evaluation criteria specified by external authorities. For example, the approach taken by the Trusted Computing Group (TCG) proposes policies that are enforced at the edges of access networks. Moreover, end-users cannot trust other end-users or end-points in the current global networking environment, where malicious entities are increasing their forces. This significantly decreases the amount of collaboration and knowledge sharing in the Internet. As a result, we are far from the ideal situation where new applications and ideas can be shared and experimented with freely if the actors choose to do so. In today's Internet, the sender has all the power in the communication with other end-points, increasing the risk of unsolicited messages. Only occasionally is the recipient of the communication queried before the actual communication takes place. In addition, end-users are typically forced to use a certain service regardless of how unreliable it is or whether it matches their expectations. Clark et al. have proposed a trust-to-trust argument in accordance with the end-to-end argument.
In the trust-to-trust argument, the end-user should be able to select a trustworthy service to which a function can be delegated. Since trust is very much a subjective concept, each actor has their own perspective on reliable and trustworthy execution of their requirements. It follows that end-users trust different servers and services, and thus these different servers or services represent different end-points of the application. In order to enable a competitive, evolving environment for connectivity and services, end-points need to unilaterally assess the trust, function, and reliability of a service. Taking the previous considerations into account, we first aim at clarifying the set of existing mechanisms that can be utilized to enable selection among different, trusted services by the end-user. Second, based on this preliminary research, we aim at eliciting the requirements for the missing components, which are then implemented as a prototype system enabling end-user selection of services and delegation of functions to them.
As the interest of the automation industry rises towards different networked systems, wireless communication also becomes an option for data transmission in networked control systems. Wireless provides huge opportunities for efficient and flexible measurement in industrial systems, but it also involves threats that need further research, in connection with variable transmission delays and packet losses (reliability) that are due to the shared medium and random retransmission times. Thus we need to develop new theory to deal with integrated wireless communications and control, but obviously we are simultaneously obliged to develop simulation platforms for testing and verifying the theory. One such proposed tool is PiccSIM, a platform for the modeling, design, simulation and implementation of networked control systems; it integrates the control design tools available in MATLAB with the network simulator Ns2, a de facto standard simulator for wired/wireless networks. The key features of the platform are: 1) support for the powerful control design and implementation tools provided by MATLAB, Simulink and xPC Target, enabling automatic code generation from Simulink models for real-time execution, 2) real-time control of a real or simulated process over a user-specified network, 3) capability to emulate any wired/wireless network readily available in Ns2, 4) an easy-to-use network configuration tool, and 5) accessibility over the Internet, i.e. support for remote experimenting.
|Day 5 - Friday, May 9, 2008|
In sensor networks, analyzing power consumption before actual deployment is a crucial step for maximizing service lifetime. Due to the large scale of such networks, this estimation is feasible only by simulation. We have developed IPEN, an instruction-level power estimator for sensor networks. IPEN is an accurate, fine-grained power estimation tool built on an instruction-level simulator. It is independent of the operating system, so many different kinds of sensor node software can be simulated for estimation. For the simulator, we have generated a power model of the sensor node. IPEN has shown excellent power estimation accuracy, with less than 5% estimation error compared to a real sensor network implementation. With IPEN's high-precision instruction-level energy prediction, users can accurately estimate a sensor network's energy consumption and optimize their software at a fine grain. In this talk, I will also review other simulators used in wireless sensor networks.
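At its core, an instruction-level estimate reduces to summing executed-instruction counts weighted by a per-class energy model obtained from measurements. The classes and energy values below are placeholders for illustration, not IPEN's measured model:

```python
def estimate_energy_uj(instr_counts, energy_model_nj):
    """Instruction-level energy estimate: total energy is the sum over
    instruction classes of (executed count x per-instruction energy in nJ).
    Returns the total in microjoules."""
    total_nj = sum(instr_counts[op] * energy_model_nj[op] for op in instr_counts)
    return total_nj / 1000
```

Because the counts come from an instruction-level simulator rather than from OS instrumentation, the same estimate works for any software running on the modeled node, which is what makes the approach OS-independent.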
A wide range of transmit power control (TPC) algorithms have been proposed in the recent literature to reduce interference and increase capacity in 802.11 wireless networks. However, few of them have made it into practice. In many cases this gap is attributed to the lack of suitable hardware support in wireless cards to implement these algorithms. Even if fine-grained power control mechanisms were made available by wireless card vendors, characteristics like multipath and shadowing make the implementation of power control challenging in practical settings, especially in indoor environments. I will show that fine-grained power control cannot be effectively used by such algorithms in a systematic manner. I will then describe a technique to estimate the appropriate number and choice of power values that are adequate to implement a robust power control mechanism in typical indoor environments.