Privacy and Trust in the Social Web

Introduction

The “social revolution” introduced by “classic” Online Social Network (OSN) websites (e.g., Facebook, MySpace, Twitter) and, later, by media sharing websites (e.g., Flickr, YouTube, and Instagram), has led the Web as we used to know it to rapidly evolve, incorporating more and more social aspects. In this Social Web vision, users and their resources are linked together via multiple and different kinds of relationships, crossing the boundaries of the specific services used and their related technologies.

In recent years, user interactions have usually been represented by social graphs, describing the online relationships between individuals, and interest graphs, describing the network of people who share interests but do not necessarily know each other personally (e.g., followers on Twitter, purchased items on e-commerce websites, searches on the Web).

Nowadays, the strict separation between these two notions is becoming outdated. In fact, we can already see “interest graph aspects” in applications based on social graphs (e.g., the possibility for a Facebook user to receive other users’ public updates even if they are not in his/her social graph and he/she does not know them personally), and “social graph aspects” in interest-based applications (e.g., the possibility for users to restrict their searches and data sharing to particular ‘circles’ in Google+). Moreover, in both cases, not only are users connected to each other but, depending on the specific context of the social/interest graph, resources can also be involved in relationships with users (and with other resources).

The Current Scenario

From the above discussion, it emerges that the traditional representation of a social network as a graph composed only of symmetric user-to-user relationships is no longer sufficient to express the complexity of social interactions. On the contrary, the concept of multiple types of social relationships is emerging as one of the key ingredients for the generation of so-called augmented/multi-level social graphs (Atzori, Iera, & Morabito, 2011; Breslin & Decker, 2007; Kazienko, Musial, & Kajdanowicz, 2011). These can be, within the same social network, graphs connecting users and resources via different “actions” (e.g., a user x “likes” a user y’s resource; a user x “shares” a resource with a user y; a user y “follows” a user x based on his/her interests). They can also be graphs spanning different social networks, merging the different relationships that a user has on different social networks for different purposes (e.g., a user x can be a “friend of” a user y on one social network and a simple “follower” of y on another).
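
To make the notion concrete, the sketch below (our own illustration in Python; all class, relation, and node names are hypothetical) represents such an augmented graph as a labeled multigraph whose nodes are users or resources and whose edges carry an explicit relationship type:

    from collections import defaultdict

    class AugmentedSocialGraph:
        """Labeled multigraph: nodes are users or resources, and each
        edge carries a relationship type such as 'friend_of', 'follows',
        'likes', or 'shares' (the types here are illustrative)."""
        def __init__(self):
            self.edges = defaultdict(set)  # (source, relation) -> set of targets

        def add_edge(self, source, relation, target):
            self.edges[(source, relation)].add(target)

        def neighbors(self, source, relation):
            return self.edges[(source, relation)]

    g = AugmentedSocialGraph()
    g.add_edge("alice", "friend_of", "bob")   # user-to-user relationship
    g.add_edge("carol", "follows", "alice")   # asymmetric, interest-based
    g.add_edge("alice", "likes", "photo_42")  # user-to-resource action
    print(g.neighbors("alice", "likes"))      # {'photo_42'}

Keeping the relationship type on each edge is what allows the same pair of nodes to be connected in several ways at once, which is precisely the multi-level aspect discussed above.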

This trend is also witnessed by the initiatives of major social network players. For instance, see the Open Graph protocol (The Open Graph Protocol) developed by Facebook, or the OpenSocial public specification (Open Social), which is followed by Google and MySpace (together with a number of other social networks).

Privacy and the Social Web

It clearly emerges that, in such a social scenario, there is a high risk of being exposed to various privacy attacks. In fact, especially in the Social Web, not only is users’ personal information exposed to privacy risks, but these risks may also propagate to anyone else, or to any other resource, that is part of the user’s augmented/multi-level social graph.

After an initial phase in which privacy mechanisms were sparse or absent, with the majority of user profiles and resources accessible to other members, several research efforts have been carried out to mitigate these problems, leading to the development of tools that help users be more privacy-aware. Notable examples are: relationship-based access control mechanisms (Carminati & Ferrari, 2010), tools supporting privacy preference specification (Fang & Le Fevre, 2010; Liu, Gummadi, Krishnamurthy, & Mislove, 2011), as well as the more expressive and complex privacy settings recently adopted by commercial OSNs, such as Facebook and Google+.

Despite the relevance of these proposals, current solutions to prevent information disclosure in OSNs and, more generally, in the Social Web unfortunately present two main shortcomings:

  • they are ineffective for a large, decentralized system like the World Wide Web, where it is easy to aggregate information, and it is often possible to infer “private” information that is not explicitly accessible;

  • the complexity of privacy mechanisms de facto discourages users from exploiting their full potential, leaving a large amount of information publicly available.

Trust as a Means to Protect Privacy in the Social Web

Trust can be defined as “the extent to which a given party is willing to depend on something or somebody in a given situation with a feeling of relative security, even though negative consequences are possible” (Josang, Ismail, & Boyd, 2007). In the Social Web, in order to interact with others (even strangers), users are willing to risk the negative consequences connected to the possible misuse of disclosed information, because of the benefits in terms of social interaction to which they aspire. Measuring this extent is, especially in virtual communities, an important factor in evaluating the degree of uncertainty associated with possible future interactions and, consequently, a means to provide users with access control mechanisms that take this risk into account.

Trust modeling and computation have been explored in the context of OSNs (Maheswaran, Hon, & Ghunaim, 2007; Borzymek & Sydow, 2010; DuBois, Golbeck, & Srinivasan, 2011; Nepal, Sherchan, & Paris, 2011), but most of the proposed techniques are based on probabilistic approaches and on the concept of trust transitivity among users (Jøsang, 2006; Liu, Wang, & Orgun, 2011). There is currently considerable debate about how useful such approaches are. For instance, according to social psychology, trust decays slowly over the first few hops from a source participant, and then decays quickly until the trust value approaches the minimum. In addition, there is still no conceptual model on top of which privacy tools, as well as conscious user decisions about information sharing or friendship dynamics, can be built, even in recent approaches that try to address these issues (Falcone & Castelfranchi, 2010; Adali, Wallace, Qian, & Vijayakumar, 2011; Tang, Gao, & Liu, 2012).
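
As a purely illustrative example of the decay profile suggested by social psychology (not a model taken from the cited works), a sigmoid-shaped attenuation keeps trust close to its direct value for the first few hops and then drops it quickly toward a minimum; all parameter values below are arbitrary assumptions:

    import math

    def attenuated_trust(direct_trust, hops, knee=2.5, steepness=2.0, minimum=0.0):
        """Illustrative hop-based attenuation: trust stays close to the
        direct value for the first few hops, then falls quickly toward
        `minimum` (a sigmoid profile; all parameters are arbitrary)."""
        if hops <= 0:
            return direct_trust
        # Logistic attenuation factor in (0, 1): close to 1 before the
        # `knee` hop count, close to 0 after it.
        factor = 1.0 / (1.0 + math.exp(steepness * (hops - knee)))
        return minimum + (direct_trust - minimum) * factor

    # Example: a direct trust of 0.9 propagated along longer and longer paths.
    for d in range(1, 7):
        print(d, round(attenuated_trust(0.9, d), 3))  # 0.858, 0.658, 0.242, ...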

Characteristics of a Trust Model for the Social Web

We believe that a suitable model to compute trust in the Social Web, for access control, privacy preservation and, more generally, privacy-aware decision making purposes, should have the following main characteristics:

multi-dimensional: trust computation should be based not only on different topological aspects of the social network, but also on a variety of other dimensions, such as, for instance, users’ actions and characteristics.

based on controlled transitivity: the majority of proposals that have appeared so far evaluate trust among users who are not directly connected regardless of the distance between them (i.e., the depth of the paths). On the contrary, it is necessary to make clear how and to what extent trust is transitive along a social trust path, also using multi-dimensional social pattern discovery to drive the definition of innovative methods for transitive trust computation.

time-dependent: in judging how much an action (or opinion) should impact a trust relationship, we should consider the frequency with which a user has performed this kind of action (or received this opinion). For instance, the impact on the trust relationship value should be greater if the system verifies that a tagging action causing a privacy leakage is repeated over time, rather than being an isolated event, as this indicates that the user is consciously misbehaving (see the sketch after this list).

privacy-aware: trust computation usually requires the availability of personal information and/or the logging of some user actions. Such activities should be carried out in a privacy-preserving way.
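
As an illustration of the time-dependent characteristic (a minimal sketch under our own assumptions; the event types, constants, and function names are hypothetical), the penalty applied to a trust value can grow with the number of repetitions of the same harmful action, so that an isolated event weighs less than a recurring pattern:

    from collections import Counter

    def trust_penalty(events, action="privacy_leak_tag",
                      base_penalty=0.05, repeat_factor=1.5):
        """Illustrative time-dependent weighting: the penalty grows with
        the number of times the same harmful action is repeated, so that
        isolated events matter less than recurring ones (all constants
        are arbitrary assumptions)."""
        count = Counter(e["type"] for e in events)[action]
        if count == 0:
            return 0.0
        # Each repetition amplifies the penalty multiplicatively,
        # capped at 1.0 (complete loss of trust).
        return min(1.0, base_penalty * (repeat_factor ** (count - 1)))

    history = [{"type": "privacy_leak_tag"}, {"type": "share"},
               {"type": "privacy_leak_tag"}, {"type": "privacy_leak_tag"}]
    print(trust_penalty(history))  # 0.1125, larger than a single occurrence (0.05)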

A Concrete Proposal

Our idea is to build a Multi-dimensional and Event-based Trust Layer on top of any social environment via an augmented social graph able to aggregate all information gathered from the Social Web concerning users and their resources (e.g., actions, opinions, user profile attributes), in order to evaluate users’ trust relationships.

To keep track of the evolution of the augmented graph and to evaluate trust accordingly, we believe that a workable solution is to exploit Complex Event Processing (CEP) systems (Luckham, 2002), which are able to detect interesting events or event patterns in data streams and react to them when critical situations arise.

The idea is, therefore, to:

  • gather from the augmented social graph all the events that change the social interactions on the graph (i.e., edge creation/deletion/modification),
  • encode them into streams,
  • evaluate a set of meaningful event patterns over these streams,
  • specify a set of customizable trust rules that assign a given trust value to the involved users when some meaningful event patterns occur.

Let us consider, for example, Facebook as a target scenario. A domain expert can define a trust rule stating that a user x becomes “untrusted” with respect to y after a certain number of “de-tagging” actions have been executed by y on images tagged by x.
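
A minimal sketch of how such a rule could be encoded (assuming a simple event format with type, actor, and tagger fields, and an arbitrary threshold; this is not the actual rule language of a CEP engine):

    from collections import defaultdict

    DETAG_THRESHOLD = 3  # illustrative rule parameter

    class DetaggingTrustRule:
        """Sketch of the de-tagging rule described above: after a given
        number of 'detag' events performed by y on images tagged by x,
        x is marked as untrusted with respect to y."""
        def __init__(self, threshold=DETAG_THRESHOLD):
            self.threshold = threshold
            self.counts = defaultdict(int)  # (tagger x, detagger y) -> count
            self.untrusted = set()          # pairs (x, y): x untrusted w.r.t. y

        def on_event(self, event):
            if event["type"] != "detag":
                return
            key = (event["tagger"], event["actor"])  # (x, y)
            self.counts[key] += 1
            if self.counts[key] >= self.threshold:
                self.untrusted.add(key)

    rule = DetaggingTrustRule()
    for _ in range(3):
        rule.on_event({"type": "detag", "actor": "y", "tagger": "x"})
    print(("x", "y") in rule.untrusted)  # True

In a full CEP deployment, the same logic would be expressed as a pattern over the event stream (e.g., “N detag events with the same (tagger, actor) pair within a time window”), where the window also captures the time-dependent aspect discussed earlier.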

[Figure 1. Architecture of the proposed Trust Layer: (a) Complex Event Processing engine; (b) event log.]

According to our proposal, trust rules are therefore monitored in the Trust Layer by a Complex Event Processing engine (see Figure 1, component (a)), to immediately detect changes on the augmented social graph that imply a new trust value for the involved users. Note that a real-time estimation of users’ trust values might be fundamental in scenarios where trust is a key parameter in the decision process. However, if we consider the huge number of possible changes in a social environment, the CEP-based architecture might imply a high overhead due to the continuous event monitoring and evaluation of trust rules. As an alternative, an event log based architecture can be exploited (see Figure 1, component (b)), over which trust rules are periodically evaluated.
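
The event log alternative can be sketched as follows (again under our own assumptions, reusing the on_event interface of the DetaggingTrustRule above): events are appended to a log as they arrive and replayed through the trust rules in periodic batches, trading timeliness for a lower monitoring overhead:

    class EventLog:
        """Append-only log standing in for component (b): events are
        accumulated here instead of being evaluated as they arrive."""
        def __init__(self):
            self.events = []

        def append(self, event):
            self.events.append(event)

    def evaluate_batch(log, rules):
        """Replay all pending events through every trust rule and clear
        the log; a scheduler would invoke this periodically (e.g., once
        per hour) instead of reacting to every single event."""
        batch, log.events = log.events, []
        for event in batch:
            for rule in rules:
                rule.on_event(event)

    log = EventLog()
    log.append({"type": "detag", "actor": "y", "tagger": "x"})
    evaluate_batch(log, [DetaggingTrustRule()])  # rule from the previous sketch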

Conclusions

Preventing the disclosure of users’ personal information, and consequently privacy attacks, is fundamental in a highly interactive social environment like the Social Web. We are convinced that, without requiring users to master complex privacy settings tools, we can use trust to automatically tune those settings, by dynamically analyzing multi-level user interactions. To this end, we have discussed an architecture to build a multi-dimensional and event-based trust layer on top of the Social Web. To make this proposal effective, a variety of research issues should be addressed, such as, for instance, the efficiency and privacy guarantees connected to trust computation, or the methods to identify interesting trust patterns and corresponding trust rules. Some preliminary results can be found in (Carminati, Ferrari, & Viviani, 2012).

References

Adali, S., Wallace, W. A., Qian, Y., & Vijayakumar, P. (2011). A Unified Framework for Trust in Composite Networks. Proceedings of the 13th AAMAS Workshop on Trust in Agent Societies.

Atzori, L., Iera, A., & Morabito, G. (2011). SIoT: Giving a Social Structure to the Internet of Things. IEEE Communications Letters, 15(11), 1193–1195.

Borzymek, P., & Sydow, M. (2010). Trust and distrust prediction in social network with combined graphical and review-based attributes. KES-AMSTA’10 Proceedings of the 4th KES international conference on Agent and multi-agent systems: technologies and applications, Part I, (p. 122-131).

Breslin, J., & Decker, S. (2007). The Future of Social Networks on the Internet: The Need for Semantics. IEEE Internet Computing, 11, 86-90.

Carminati, B., & Ferrari, E. (2010). Privacy-aware Access Control in Social Networks: Issues and Solutions. In Privacy and Anonymity in Information Management Systems, Advanced Information and Knowledge Processing (p. 181-195). London: Springer.

Carminati, B., Ferrari, E., & Viviani, M. (2012). A Multi-dimensional and Event-based Model for Trust Computation in the Social Web. SocInfo 2012: The 4th International Conference on Social Informatics, 5–7 December 2012. Proceedings (To appear).

Cook, K. S., & Rice, E. (2006). Social Exchange Theory. In Handbook of Social Psychology (p. 53-76).

DuBois, T., Golbeck, J., & Srinivasan, A. (2011). Predicting Trust and Distrust in Social Networks. Privacy, Security, Risk and Trust (PASSAT), 2011 IEEE 3rd International Conference on Social Computing (SocialCom), (p. 418-424).

Falcone, R., & Castelfranchi, C. (2010). Trust and Transitivity: A Complex Deceptive Relationship. Proceedings of the 12th AAMAS Workshop on Trust in Agent Societies.

Fang, L., & Le Fevre, K. (2010). Privacy Wizards for Social Networking Sites. International Conference on World Wide Web (WWW 2010), (p. 351-360).

Jøsang, A. (2006). Exploring different types of trust propagation. iTrust’06 Proceedings of the 4th international conference on Trust Management.

Josang, A., Ismail, R., & Boyd, C. (2007). A survey of trust and reputation systems for online service provision. Decision Support Systems, 43(2), 618-644.

Kazienko, P., Musial, K., & Kajdanowicz, T. (2011). Multidimensional Social Network in the Social Recommender System. IEEE Transactions on Systems, Man, and Cybernetics, 41(4), 746–759.

Liu, G., Wang, Y., & Orgun, M. A. (2011). Trust transitivity in complex social networks. AAAI Conference on Artificial Intelligence.

Liu, Y., Gummadi, K. P., Krishnamurthy, B., & Mislove, A. (2011). Analyzing Facebook Privacy Settings: User Expectations vs. Reality. IMC ’11 Proceedings of the 2011 ACM SIGCOMM Conference on Internet Measurement, (p. 61-70).

Luckham, D. (2002). The Power of Events: An Introduction to Complex Event Processing in Distributed Enterprise Systems. Addison-Wesley Longman Publishing Co.

Maheswaran, M., Hon, C. T., & Ghunaim, A. (2007). Towards a Gravity-Based Trust Model for Social Networking Systems. Distributed Computing Systems Workshops, 2007. ICDCSW ’07. 27th International Conference on, (p. 24).

Nepal, S., Sherchan, W., & Paris, C. (2011). STrust: A Trust Model for Social Networks. Trust, Security and Privacy in Computing and Communications (TrustCom), 2011 IEEE 10th International Conference on, (p. 841-846).

Open Social. (n.d.). Retrieved from http://docs.opensocial.org

Squicciarini, A. C., Heng, X., & Xiaolong, Z. (2011). CoPE: Enabling collaborative privacy management in online social networks. Journal of the American Society for Information Science and Technology, 62(3), 521–534.

Tang, J., Gao, H., & Liu, H. (2012). mTrust: discerning multi-faceted trust in a connected world. WSDM ’12 Proceedings of the fifth ACM International Conference on Web Search and Data Mining.

Taylor, H., Yochem, A., Phillips, L., & Martinez, F. (2009). Event-Driven Architecture: How SOA Enables the Real-Time Enterprise. Addison-Wesley Professional.