Research Interests

  • Broadband pricing, Internet economics, e-commerce, online social networks
  • Technology diffusion, user adoption behaviors, pricing & incentive engineering
  • Network security, collaborative defense strategies
  • Network architecture design

Research Areas

A. TUBE: Mobile Broadband Pricing by Timing

Charging different prices for Internet access at different times of day induces users to spread their bandwidth consumption across the day. The potential impact on ISP (Internet service provider) revenue, congestion management, and consumer behavior can be significant, especially given that wireless usage is doubling every year. Users are consuming more high-bandwidth data applications, with usage concentrated in a few peak hours of the day. Usage-based overage charges alone do not solve this problem, so ISPs keep incurring costs in proportion to these peaks. We review pricing schemes in practice today and analyze why they do not solve the ISP's problem of growing data traffic. To address it, we develop an efficient way to compute the cost-minimizing time-dependent prices for an ISP, using both a static session-level model and a dynamic session model with stochastic arrivals. Our representation of the optimization problem yields a formulation that remains computationally tractable for large-scale problems. We also conducted surveys in the US and India, the results of which demonstrate that users are willing to defer Internet usage in exchange for a lower monthly bill. Numerical simulations further illustrate the use and limitations of time-dependent pricing. The result of this research is a complete end-to-end system called TUBE, which is undergoing a real-world trial on the Princeton University campus. Plans for large-scale trials with partner ISPs across the world are currently underway. A free "Personalized Bandwidth Management" app, DataWiz, for iOS and Android was a by-product of this work and has had 130K+ downloads worldwide (Media mentions). [Learn More…]
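
As a rough illustration of the static version of this optimization (the notation below is mine for exposition, not the exact TUBE formulation), the ISP chooses a discount d_t for each time period t so as to balance its congestion cost against the rewards it pays out:

    \min_{d_1,\ldots,d_T \,\ge\, 0} \;\; \sum_{t=1}^{T} \Big[ \, C\big(x_t(\mathbf{d})\big) \;+\; d_t\, x_t(\mathbf{d}) \, \Big]

Here x_t(\mathbf{d}) is the traffic that lands in period t once users shift their deferrable demand in response to the discounts, and C(\cdot) is a convex congestion/capacity cost. With a convex cost and a suitably structured demand-shifting model, the problem stays tractable even for fine-grained time slots and large user populations.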

B. Mission-aware Resilient Networks

Network clouds require security, reliability, and performance guarantees for their smooth operation. In particular, the participating entities, i.e., the cloud servers and nodes, need to make decisions that maximize the overall utility of the system even when the outcome is not the best one for them individually. The resources the network allocates to each node depend on its requirements and its priority level in the nodal hierarchy of the cloud. We use the Nash bargaining solution (NBS) within a cooperative game-theoretic framework to understand how to allocate resources in a cloud so as to maximize the overall mission objective. The resource allocation problem is solved in a distributed manner by introducing the notion of congestion prices. However, the NBS is not resilient to attacks, so we also propose a reputation scoring system that allows nodes to detect misbehavior by their peers and take strategic actions. [Learn More…]
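
As a rough sketch of the underlying allocation problem (illustrative notation, not the exact formulation), the weighted Nash bargaining allocation solves

    \max_{\mathbf{x}} \;\; \sum_{i} w_i \log\big( U_i(x_i) - U_i^{0} \big) \quad \text{s.t.} \quad \sum_{i \in \mathcal{S}_\ell} x_i \le C_\ell \;\; \forall \ell

where U_i^{0} is node i's disagreement utility, w_i encodes its priority in the nodal hierarchy, and C_\ell is the capacity of resource \ell. Dualizing the capacity constraints yields per-resource congestion prices, which is what allows each node to solve its own subproblem and makes the distributed implementation mentioned above possible.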

C. Fairness & Efficiency of Resource Allocation

This project aims to explore the inherent tradeoff between fairness and efficiency in resource allocation by generalizing the notion of dominant resource fairness.
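
For context, dominant resource fairness (DRF) repeatedly grants the next task to the user with the smallest dominant share, i.e., the largest fraction of any single resource that the user currently holds. The toy sketch below illustrates the baseline being generalized (the example numbers are the standard two-user illustration, not results from this project):

```python
# Toy dominant resource fairness (DRF): repeatedly allocate one task to the
# user whose dominant share (max fraction of any resource held) is smallest.

def drf(capacities, demands, max_rounds=10000):
    """capacities: {resource: total}; demands: {user: {resource: per-task need}}."""
    used = {r: 0.0 for r in capacities}
    alloc = {u: 0 for u in demands}                       # tasks granted per user

    def dominant_share(u):
        return max(alloc[u] * demands[u][r] / capacities[r] for r in capacities)

    for _ in range(max_rounds):
        # consider users in increasing order of dominant share; grant the first
        # user whose next task still fits within the remaining capacity
        for u in sorted(demands, key=dominant_share):
            if all(used[r] + demands[u][r] <= capacities[r] for r in capacities):
                for r in capacities:
                    used[r] += demands[u][r]
                alloc[u] += 1
                break
        else:
            return alloc                                  # no task fits anywhere
    return alloc

# 9 CPUs and 18 GB of memory; user A's tasks need <1 CPU, 4 GB>, user B's <3 CPU, 1 GB>.
print(drf({"cpu": 9, "mem": 18},
          {"A": {"cpu": 1, "mem": 4}, "B": {"cpu": 3, "mem": 1}}))
# -> {'A': 3, 'B': 2}: both users end up with a dominant share of 2/3
```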


D. Online Social Networks

This project explores the issue of identifying consumer opinions and trends that are useful in marketing and business. We consider how to use data from online social networks like Twitter to predict the success of a movie or the political inclination of a news outlet, and how representative such data are of a larger network. Such analytics help to quantify bias, predict outcomes, and derive business insights for e-commerce and networked systems.


[Back to top]

Network Economics of IT/IS

Networked systems and architectures have a ubiquitous presence in today's world. The Internet, electrical power grids, and facilities management networks are just a few examples of such systems that permeate our daily lives. The potential for both economic and technological growth offered by these networked systems has attracted businesses to invest in them, spurring further research. This kind of networking research is a fast-emerging multidisciplinary field that draws from diverse disciplines, such as computer science, economics, and sociology - a convergence driven by the fact that the success or failure of a technology depends not only on its scientific merits but also on many complex socio-economic factors.

Over the years, many network technologies have met their technical specifications and yet failed to become economically viable. For example, the failure of widespread deployment of QoS architectures and solutions in today's Internet is most commonly attributed to economic reasons, such as the lack of user demand, high operational costs (compared to over-provisioning), and inter-provider settlement issues. A similar problem is being encountered in the migration from IPv4 to IPv6. The lesson is that in order to develop technologies that are eventually successful, researchers need to make design choices that account for different economic aspects, such as supply- and demand-side uncertainties, consumer preferences, weight of incumbency, competition strategies, and costs. However, it is often unclear which of these factors has a truly significant impact. Developing models that include economic as well as technical factors is therefore of major importance. Such models can also be very useful to business strategists, policy makers, and social planners alike for analyzing questions regarding corporate and social welfare.

The goal of this research is to identify how various economic factors influence design choices and trade-offs in networked systems and architectures, as well as their deployment and adoption, and thus to build a framework for reasoning about choices and decision-making. This framework needs to incorporate models that address specific questions network technology providers commonly face, such as how a new network technology can compete better against an incumbent, or how a new technical capability (e.g., virtualization) will affect network profitability.

These questions are clearly far-reaching and multi-faceted, and in my research I concentrate on three basic issues that are important from a network provider's perspective. The first is understanding how a provider's decisions and actions impact the process of technology adoption, and in particular the migration from an incumbent to an entrant technology. A second, related issue is choosing the type of network architecture on which the entrant technology should be deployed, and the third is understanding how the range of functionalities included as part of the network architecture affects its profitability and potential for future innovation. These three topics form the core of my current research work:

A. Network Technology Adoption

New network technologies constantly seek to displace incumbents. The Internet itself competed against alternative packet data technologies before finally displacing the phone network as the de facto communication infrastructure. More recently, the shortcomings of the Internet have come under increased scrutiny, raising calls for new architectures to succeed it. However, these new technologies will face a formidable incumbent in the Internet, and their eventual success will depend not just on technical superiority but also on economic factors and on their ability to win over the installed base. The failure of widespread adoption of QoS solutions and the ongoing problems in migrating from IPv4 to IPv6 remind us of the importance of understanding how social and economic factors influence the adoption of networked systems.

In this work, we develop a model for analyzing the competition between network technologies and identify the extent to which different factors, in particular converters (gateways), affect the outcome. Converters can help entrants overcome the influence of the incumbent's installed base by enabling cross-technology interoperability. However, they have development, deployment, and operations costs, and can introduce performance degradations and functionality limitations, so that if, when, why, and how they help is often unclear. To this end, we propose and solve a model for the adoption of competing network technologies by individual, heterogeneous users. The model incorporates a utility function that captures key aspects of users' adoption decisions, such as quality, externality, and price. Its solution reveals a number of interesting and at times unexpected behaviors, including the possibility for converters to hurt their own technology and to reduce the overall market penetration of both technologies. Converters can also prevent convergence to a stable state, something that never arises in their absence. These findings were tested for robustness, e.g., against different utility functions and adoption models, and found to remain valid across a broad range of scenarios. Identifying these behaviors and understanding why they arise will allow network providers to gauge the potential impact of various economic and technical factors and help them devise better competition strategies. The work also has implications for potential policy intervention by market regulators.
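
As an illustrative instance of such a utility function (the symbols here are mine for exposition, not the paper's exact notation), a user with quality valuation \theta derives from entrant technology A a utility of the form

    U_A(\theta) = \theta\, q_A + \beta \big( x_A + \alpha\, x_B \big) - p_A

where q_A and p_A are the entrant's quality and price, x_A and x_B are the current adoption levels of the two technologies, \beta weighs the externality benefit, and \alpha \in [0,1] captures the (imperfect) interoperability that converters provide with the incumbent's installed base. Users adopt whichever technology offers them the highest non-negative utility, and the adoption levels evolve toward a fixed point; as noted above, with converters that fixed point may fail to exist.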

B. Economic Evaluation of Shared Versus Separate Network Architectures

Advances in network technologies have made it possible for the Internet to evolve from a simple data network into a carrier of various new services like voice and video. Today, the Internet is poised to reach even areas that are either traditionally not networked or accessible only through dedicated services, e.g., health care, facility monitoring, and surveillance. As networks improve and such new services emerge, questions that affect service deployments and network choices arise. The Internet is arguably a successful example of a network shared by many services. However, combining heterogeneous services on the same network need not always be the right answer, as it comes at the cost of increased complexity. It often calls for upgrading the network with features required by the new services, a cost that must frequently be borne by services with no need for those features. On the other hand, technologies such as virtualization make deploying new services on separate (dedicated) networks increasingly viable, making the question of whether to add a new service to an existing network or to a new network "slice" a more realistic one.

This work therefore aims to create an analytical framework that helps providers understand the trade-offs between these network infrastructure choices. In doing so, we account for various factors such as (dis)economies of scope and scale in deployment and operational costs, demand uncertainties for new services, and the ability offered by virtualization technologies to reprovision resources in response to excess demand. In particular, we demonstrate how reprovisioning can influence which network choice is more effective, and provide insight into when and why this happens. This kind of model is new to the networking community, and it gives network providers a design guideline that accounts for the potential impact of technological and economic factors on their network choice.
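
One stylized way to see the basic trade-off (a simplified comparison, not the paper's full model): for two services with uncertain demands D_1, D_2, standalone deployment costs K_1, K_2, an economies-of-scope factor \epsilon, and an upgrade cost F incurred only when the services share infrastructure,

    \Pi_{\text{shared}} = \mathbb{E}\big[R_1(D_1) + R_2(D_2)\big] - (1-\epsilon)(K_1 + K_2) - F, \qquad \Pi_{\text{separate}} = \mathbb{E}\big[R_1(D_1)\big] + \mathbb{E}\big[R_2(D_2)\big] - K_1 - K_2

where R_i is the demand-dependent revenue of service i given the capacity available to it. Which profit is larger depends on how the scope economies \epsilon compare with the feature-upgrade cost F, and on how much reprovisioning across virtualized slices recovers of the shared network's ability to absorb excess demand.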

C. Trade-offs between Minimalist versus Functionality-rich Network Architectures

Providers of network platforms need to invest in developing and incorporating functionalities into their network's architecture. In the context of the Internet, infrastructure providers create the platform with some built-in functionalities and capabilities, which in turn enables the deployment of innovative value-added services by service providers like Google and Yahoo. The end-users who join the Internet benefit from using these services. The infrastructure provider facilitates the interaction between the end users and service providers through its platform, and therefore gets to choose the fees charged to these two sides and the investment in its platform's functionalities so as to maximize its profit.

Although the minimalist design of the Internet has worked well in the past, its limitations have long been recognized. The lack of manageability and security functionalities in the platform has spurred much research into the design of new architectures. But from the viewpoint of an infrastructure provider, there is a natural trade-off between creating a functionality-rich architecture and a minimalist one. A network with very little built-in functionality will have few services running on it, because the costs of developing the additional functionalities must be borne by the service providers. The limited number of services will in turn make the network less attractive to users. The fees that the infrastructure provider charges to the two sides will therefore have to be kept relatively low, adversely affecting the platform's profit. On the other hand, if the infrastructure provider invests very heavily in creating a functionality-rich architecture, then to recover these expenses it will need to charge higher fees to service providers and/or end-users, resulting in fewer subscriptions and lower profits. Moreover, the high fees charged to service providers may discourage them from innovating better functionalities on their own and push them toward the built-in ones, irrespective of their quality. Thus, a network's functionality richness may plausibly end up stifling the quality of innovation.

It is therefore important to analyze these trade-offs in an analytical framework to better understand if and when network architectures with richer functionality should be created, and what factors influence that choice. To this end, we develop a two-sided market model that captures the interactions between network platform providers, users, and service providers, and use it to understand how a platform provider decides on its fee structure and its level of investment in platform functionalities. Additionally, we study how this decision compares with the choice a regulator/social planner would make to maximize the social welfare generated by the platform.
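
A compact way to express the provider's side of this problem (again, illustrative notation): the platform chooses a user-side fee p_u, a service-side fee p_s, and an investment I in built-in functionality to solve

    \max_{p_u,\, p_s,\, I} \;\; \Pi \;=\; p_u\, N_u(p_u, N_s) \;+\; p_s\, N_s(p_s, N_u, I) \;-\; c(I)

where N_u and N_s are the participation levels of end-users and service providers (each increasing in the size of the other side and decreasing in its own fee, with N_s also depending on I), and c(I) is the cost of developing the functionality. The social planner's benchmark replaces \Pi with the total surplus generated across both sides of the platform.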

[Back to top]

Earlier Research

I am also keenly interested in social networks, user-provided networks, network pricing, bundling of network goods and services, and network neutrality issues. I have been involved in a few ongoing collaborations on these topics.

In the past, I have also worked on network manageability issues, network security, and optical networks. I worked with Prof. Roch Guerin on the NSF project titled "A Framework for Manageability in Future Routing Systems" (NSF grant CNS-0627004), a joint effort with the University of Minnesota (Prof. Zhi-Li Zhang) and the University of Massachusetts (Prof. Lixin Gao). It addresses fundamental questions on building manageability into routing systems for future Internet architectures. Its goals are two-fold:

i) develop a framework for specifying, understanding, and evaluating what features should/could be "designed-in" into routing systems in support of manageability; and

ii) evaluate design choices and trade-offs thereof in terms of performance and manageability.

Message-Efficient Dissemination for Loop-Free Centralized Routing

With steady improvement in the reliability and performance of communication devices, routing instabilities now contribute to many of the remaining service degradations and interruptions in modern networks. This has led to a renewed interest in centralized routing systems that, compared to distributed routing, can provide greater control over routing decisions and better visibility of the results. One benefit of centralized control is the opportunity to readily eliminate transient routing loops, which arise frequently after network changes because of inconsistent routing states across devices. Translating this conceptual simplicity into a solution with tolerable message complexity is non-trivial. Addressing this issue is the focus of this work. We identify when and why avoiding transient loops might require a significant number of messages in a centralized routing system, and demonstrate that this is the case under many common failure scenarios. We also establish that minimizing the number of required messages is NP-hard, and propose a greedy heuristic that we show to perform well under many conditions. The results can facilitate the deployment and evaluation of centralized architectures by leveraging their strengths without incurring unacceptable overhead.
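
The sketch below is a toy illustration of the kind of loop check such an approach relies on (it is not the heuristic from this work): pending next-hop updates toward a destination are applied one at a time, and an update is deferred whenever it would create a forwarding loop in the mixed old/new state.

```python
# Toy loop-free update ordering for one destination: greedily apply any pending
# next-hop change that keeps the intermediate forwarding state loop-free.

def has_loop(next_hop, dest):
    """True if following next-hop pointers from any node revisits a node."""
    for start in next_hop:
        seen, node = set(), start
        while node is not None and node != dest:
            if node in seen:
                return True
            seen.add(node)
            node = next_hop.get(node)
    return False

def greedy_update_order(old_nh, new_nh, dest):
    """Order the updates so that no intermediate state contains a loop."""
    current = dict(old_nh)
    pending = {r for r in new_nh if old_nh.get(r) != new_nh[r]}
    order = []
    while pending:
        progress = False
        for r in sorted(pending):
            trial = dict(current)
            trial[r] = new_nh[r]
            if not has_loop(trial, dest):
                current, progress = trial, True
                order.append(r)
                pending.remove(r)
                break
        if not progress:
            # no single update is safe; a real system would need grouped updates
            raise RuntimeError("stuck: no loop-free single-step update exists")
    return order

# Initially A -> B -> D and C -> A; the new routes are A -> C -> D.
# Updating A before C would create the transient loop A -> C -> A.
old = {"A": "B", "B": "D", "C": "A"}
new = {"A": "C", "B": "D", "C": "D"}
print(greedy_update_order(old, new, dest="D"))   # ['C', 'A']
```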

 

During my summer internships at Lucent-Bell Labs, Holmdel, in 2006 and at Intel Research, Santa Clara, in 2007, I worked on issues related to network security.

At Bell Labs, I worked on "Performance characterization and improvement of the Intrusion detection system, SNORT" [PDF] (non-proprietary parts only).

Project description:

SNORT is one of the most widely deployed intrusion detection systems (IDS); it uses advanced pattern matching algorithms to search data packets for known virus signatures. However, the number of virus signatures has kept growing at a troubling rate, and as a result, more efficient IDS implementations are needed to process data packets at line rate. This work aimed to improve the performance of SNORT by introducing new data structures that use memory more efficiently and enable faster pattern matching. The motivation to reduce memory requirements was twofold: (1) to keep the structures small enough to fit into the cache and thereby avoid cache misses, and (2) to free memory for the growing number of signature matching rules. SNORT's performance was analyzed by varying packet sizes and rule-set sizes to observe their effect on the maximum bandwidth at which SNORT's performance remained satisfactory. We implemented new algorithms in SNORT's code and evaluated their time and memory requirements, concluding that there is significant scope for improving SNORT's performance. (Mentors: Thomas Woo and Tian Bu)
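
The toy sketch below illustrates the data-structure side of the problem (it is not SNORT's actual engine, which uses Aho-Corasick-style automata that examine each byte only once): signatures are stored in a trie that is scanned at every payload offset, and the per-node memory layout is what determines whether the whole structure fits in cache.

```python
# Toy multi-pattern signature matching with a trie; the per-node memory
# footprint is what decides whether the structure stays cache-resident.

class TrieNode:
    __slots__ = ("children", "rule")        # __slots__ keeps per-node overhead small
    def __init__(self):
        self.children = {}                  # next byte -> child node
        self.rule = None                    # signature id if a pattern ends here

def build_trie(signatures):
    root = TrieNode()
    for rule_id, pattern in signatures.items():
        node = root
        for byte in pattern:
            node = node.children.setdefault(byte, TrieNode())
        node.rule = rule_id
    return root

def scan(payload, root):
    """Return (offset, rule_id) for every signature occurrence in the payload."""
    matches = []
    for i in range(len(payload)):
        node = root
        for byte in payload[i:]:
            node = node.children.get(byte)
            if node is None:
                break
            if node.rule is not None:
                matches.append((i, node.rule))
    return matches

sigs = {1: b"cmd.exe", 2: b"/etc/passwd"}
print(scan(b"GET /etc/passwd HTTP/1.0", build_trie(sigs)))   # [(4, 2)]
```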

At Intel Research, I worked on the messaging and membership management of collaborating network threat detectors in "Distributed Detection and Inference Systems".

Project description:

Distributed Detection & Inference (DDI) is a collaborative worm detection system. It builds on the idea of collaborative intrusion detection systems: end hosts run probabilistic graphical models and exchange randomized gossip messages to share state among peer detectors, which lets them corroborate the likelihood of an attack. This ability allows the system to detect slow-spreading worms with few false positives. The participating end-hosts can be local and/or global detectors. Local detectors issue local infection reports with a timeout and a dissemination scope, indicating whether the end-host is infected. Global detectors collect these reports and issue system-wide alarms once the infection evidence has been corroborated. One of the main requirements of such a system is scalability. Therefore, a low-overhead gossip-based messaging and membership service was designed. It leveraged scoped gossip to disseminate membership information, piggybacked on the local reports. Membership updates were performed probabilistically by maintaining a random partial membership view at each DDI node. A bootstrapping method was implemented to enable a DDI node to obtain a random view of the network from pre-configured rendezvous points holding full membership information; defaulting to bootstrapping helped the system recover from network partitions and node isolation. This work identified how the system parameters, e.g., timeout and scope, help a DDI network avoid partitions and how DDI systems can recover. The work contributed toward developing a scalable collaborative intrusion detection system.
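
A minimal sketch of the membership side (an assumed design for illustration, not the DDI codebase): each detector keeps a small random partial view of peers, merges peer identifiers piggybacked on incoming reports with a fixed acceptance probability, and ages entries out by timeout.

```python
# Minimal gossip-style partial-view membership: probabilistic admission of
# gossiped peers, timeout-based aging, and sampling for piggybacked gossip.

import random
import time

class MembershipView:
    def __init__(self, max_size=20, entry_ttl=300.0, accept_prob=0.5):
        self.max_size = max_size          # bound on the partial view
        self.entry_ttl = entry_ttl        # seconds before an entry is aged out
        self.accept_prob = accept_prob    # chance of admitting a newly heard peer
        self.entries = {}                 # peer_id -> last time it was heard of

    def merge(self, gossiped_peers, now=None):
        """Merge peer ids piggybacked on an incoming local report."""
        now = time.time() if now is None else now
        for peer in gossiped_peers:
            if peer in self.entries:
                self.entries[peer] = now                  # refresh a known peer
            elif random.random() < self.accept_prob:
                self.entries[peer] = now                  # probabilistic admission
        self._age_out(now)
        while len(self.entries) > self.max_size:          # evict at random if full
            self.entries.pop(random.choice(list(self.entries)))

    def _age_out(self, now):
        self.entries = {p: t for p, t in self.entries.items()
                        if now - t < self.entry_ttl}

    def sample(self, k=3):
        """Pick peers to piggyback on the next outgoing report."""
        peers = list(self.entries)
        return random.sample(peers, min(k, len(peers)))
```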

My contributions to the project during the internships included:

  • Design and implementation of messaging & membership modules
  • Design of a more modular approach to DDI architecture
  • Implementation of a membership module with probabilistic addition of view elements into the local partial views, including view aging using timeouts
  • Exploration of the effect and interaction of the various membership management parameters
  • Exploration of conditions leading to system partition