Arkalakis, I., Diamantaris, M., Moustakas, S., Ioannidis, S., Polakis, J., & Ilia, P. (2024). Abandon All Hope Ye Who Enter Here: A Dynamic, Longitudinal Investigation of Android’s Data Safety Section. 33rd USENIX Security Symposium, USENIX Security 2024, Philadelphia, PA, USA, August 14-16, 2024.
@inproceedings{arkalakis_sec24,
author = {Arkalakis, Ioannis and Diamantaris, Michalis and Moustakas, Serafeim and Ioannidis, Sotiris and Polakis, Jason and Ilia, Panagiotis},
title = {Abandon All Hope Ye Who Enter Here: {A} Dynamic, Longitudinal Investigation of Android's Data Safety Section},
booktitle = {33rd {USENIX} Security Symposium, {USENIX} Security 2024, Philadelphia, PA, USA, August 14-16, 2024},
publisher = {{USENIX} Association},
year = {2024},
file = {arkalakis_sec24.pdf}
}
Users’ growing concerns about online privacy have led to increased platform support for transparency and consent in the web and mobile ecosystems. To that end, Android recently mandated that developers disclose what user data their applications collect and share; that information is made available in Google Play’s Data Safety section. In this paper, we provide the first large-scale, in-depth investigation of the veracity of the Data Safety section and its use in the Android application ecosystem. We build an automated analysis framework that dynamically exercises and analyzes applications so as to uncover discrepancies between the applications’ behavior and the data practices reported in their Data Safety section. Our study of almost 5K applications uncovers a pervasive trend of incomplete disclosure, as 81% misrepresent their data collection and sharing practices in the Data Safety section. At the same time, 79.4% of the applications with incomplete disclosures do not ask the user to provide consent for the data they collect and share, and 78.6% of those that ask for consent disregard the users’ choice. Moreover, while embedded third-party libraries are the most common offender, Data Safety discrepancies can be traced back to the application’s core code in 41% of the cases. Crucially, Google’s documentation contains various "loopholes" that facilitate incomplete disclosure of data practices. Overall, we find that in its current form, Android’s Data Safety section does not effectively achieve its goal of increasing transparency and allowing users to provide informed consent. We argue that Android’s Data Safety policies require considerable reform, and that automated validation mechanisms like our framework are crucial for ensuring the correctness and completeness of applications’ Data Safety disclosures.
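At its core, a discrepancy check of the kind the abstract describes reduces to comparing declared data practices against those observed at runtime. A minimal sketch (the data-type labels are invented for illustration and are not the framework's actual taxonomy):

```python
# Minimal sketch of a Data Safety discrepancy check: compare the practices an
# app declares against those observed during dynamic analysis. The data-type
# labels below are illustrative assumptions, not the paper's actual taxonomy.
declared_collected = {"email_address", "device_id"}
observed_collected = {"email_address", "device_id", "precise_location", "contacts"}

# Data types collected at runtime but absent from the Data Safety section
# constitute an incomplete disclosure.
undisclosed = observed_collected - declared_collected
assert undisclosed == {"precise_location", "contacts"}
```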
Solomos, K., Ilia, P., Karami, S., Nikiforakis, N., & Polakis, J. (2022). The Dangers of Human Touch: Fingerprinting Browser Extensions through User Actions. 31st USENIX Security Symposium, USENIX Security 2022, Boston, MA, USA, August 10-12, 2022, 717–733.
@inproceedings{solomos_sec22,
author = {Solomos, Konstantinos and Ilia, Panagiotis and Karami, Soroush and Nikiforakis, Nick and Polakis, Jason},
title = {The Dangers of Human Touch: Fingerprinting Browser Extensions through User Actions},
booktitle = {31st {USENIX} Security Symposium, {USENIX} Security 2022, Boston, MA, USA, August 10-12, 2022},
pages = {717--733},
publisher = {{USENIX} Association},
year = {2022},
file = {solomos_sec22.pdf}
}
Browser extension fingerprinting has garnered considerable attention recently due to the twofold privacy loss that it incurs. Apart from facilitating tracking by augmenting browser fingerprints, the list of installed extensions can be directly used to infer sensitive user characteristics. However, prior research was performed in a vacuum, overlooking a core dimension of extensions’ functionality: how they react to user actions. In this paper, we present the first exploration of user-triggered extension fingerprinting. Guided by our findings from a large-scale static analysis of browser extensions, we devise a series of user action templates that enable dynamic extension-exercising frameworks to comprehensively uncover hidden extension functionality that can only be triggered through user interactions. Our experimental evaluation demonstrates the effectiveness of our proposed technique, as we are able to fingerprint 4,971 unique extensions, 36% of which are not detectable by state-of-the-art techniques. To make matters worse, we find that ≈67% of the extensions that require mouse or keyboard interactions lack appropriate safeguards, rendering them vulnerable to pages that simulate user actions through JavaScript. To assist extension developers in protecting users from this privacy threat, we build a tool that automatically includes origin checks for fortifying extensions against invasive sites.
Lin, X., Ilia, P., Solanki, S., & Polakis, J. (2022). Phish in Sheep’s Clothing: Exploring the Authentication Pitfalls of Browser Fingerprinting. 31st USENIX Security Symposium, USENIX Security 2022, Boston, MA, USA, August 10-12, 2022, 1651–1668.
@inproceedings{lin_sec22,
author = {Lin, Xu and Ilia, Panagiotis and Solanki, Saumya and Polakis, Jason},
title = {Phish in Sheep's Clothing: Exploring the Authentication Pitfalls of Browser Fingerprinting},
booktitle = {31st {USENIX} Security Symposium, {USENIX} Security 2022, Boston, MA, USA, August 10-12, 2022},
pages = {1651--1668},
publisher = {{USENIX} Association},
year = {2022},
file = {lin_sec22.pdf}
}
As users navigate the web, they face a multitude of threats; among them, attacks that result in account compromise can be particularly devastating. In a world fraught with data breaches and sophisticated phishing attacks, web services strive to fortify user accounts by adopting new mechanisms that identify and prevent suspicious login attempts. More recently, browser fingerprinting techniques have been incorporated into the authentication workflow of major services as part of their decision-making process for triggering additional security mechanisms (e.g., two-factor authentication). In this paper we present the first comprehensive and in-depth exploration of the security implications of real-world systems relying on browser fingerprints for authentication. Guided by our investigation, we develop a tool for automatically constructing fingerprinting vectors that replicate the process of target websites, enabling the extraction of fingerprints from users’ devices that exactly match those generated by target websites. Subsequently, we demonstrate how phishing attackers can replicate users’ fingerprints on different devices to deceive the risk-based authentication systems of high-value web services (e.g., cryptocurrency trading) and completely bypass two-factor authentication. To gain a better understanding of whether attackers can carry out such attacks, we study the evolution of browser fingerprinting practices in phishing websites over time. While attackers do not generally collect all the necessary fingerprinting attributes, that is unfortunately not the case for attackers targeting certain financial institutions, where we observe an increasing number of phishing sites capable of pulling off our attacks. To address the significant threat posed by our attack, we have disclosed our findings to the vulnerable vendors.
Karami, S., Kalantari, F., Zaeifi, M., Maso, X. J., Trickel, E., Ilia, P., Shoshitaishvili, Y., Doupé, A., & Polakis, J. (2022). Unleash the Simulacrum: Shifting Browser Realities for Robust Extension-Fingerprinting Prevention. 31st USENIX Security Symposium, USENIX Security 2022, Boston, MA, USA, August 10-12, 2022, 735–752.
@inproceedings{karami_sec22,
author = {Karami, Soroush and Kalantari, Faezeh and Zaeifi, Mehrnoosh and Maso, Xavier J. and Trickel, Erik and Ilia, Panagiotis and Shoshitaishvili, Yan and Doup{\'{e}}, Adam and Polakis, Jason},
title = {Unleash the Simulacrum: Shifting Browser Realities for Robust Extension-Fingerprinting Prevention},
booktitle = {31st {USENIX} Security Symposium, {USENIX} Security 2022, Boston, MA, USA, August 10-12, 2022},
pages = {735--752},
publisher = {{USENIX} Association},
year = {2022},
file = {karami_sec22.pdf}
}
Online tracking has garnered significant attention due to the privacy risk it poses to users. Among the various approaches, techniques that identify which extensions are installed in a browser can be used for fingerprinting browsers and tracking users, but also for inferring personal and sensitive user data. While preventing certain fingerprinting techniques is relatively simple, mitigating behavior-based extension-fingerprinting poses a significant challenge as it relies on hiding actions that stem from an extension’s functionality. To that end, we introduce the concept of DOM Reality Shifting, whereby we split the reality users experience while browsing from the reality that webpages can observe. To demonstrate our approach we develop Simulacrum, a prototype extension that implements our defense through a targeted instrumentation of core Web API interfaces. Despite being conceptually straightforward, our implementation highlights the technical challenges posed by the complex and often idiosyncratic nature and behavior of web applications, modern browsers, and the JavaScript language. We experimentally evaluate our system against a state-of-the-art DOM-based extension fingerprinting system and find that Simulacrum readily protects 95.37% of susceptible extensions. We then identify trivial modifications to extensions that enable our defense for the majority of the remaining extensions. To facilitate additional research and protect users from privacy-invasive behaviors we will open-source our system.
Solomos, K., Ilia, P., Nikiforakis, N., & Polakis, J. (2022). Escaping the Confines of Time: Continuous Browser Extension Fingerprinting Through Ephemeral Modifications. Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, CCS 2022, Los Angeles, CA, USA, November 7-11, 2022, 2675–2688.
@inproceedings{solomos_ccs22,
author = {Solomos, Konstantinos and Ilia, Panagiotis and Nikiforakis, Nick and Polakis, Jason},
title = {Escaping the Confines of Time: Continuous Browser Extension Fingerprinting Through Ephemeral Modifications},
booktitle = {Proceedings of the 2022 {ACM} {SIGSAC} Conference on Computer and Communications Security, {CCS} 2022, Los Angeles, CA, USA, November 7-11, 2022},
pages = {2675--2688},
publisher = {{ACM}},
year = {2022},
doi = {10.1145/3548606.3560576},
file = {solomos_ccs22.pdf}
}
Browser fingerprinting continues to proliferate across the web. Critically, popular fingerprinting libraries have started incorporating extension-fingerprinting capabilities, thus exacerbating the privacy loss they can induce. In this paper we propose continuous fingerprinting, a novel extension fingerprinting technique that captures a critical dimension of extensions’ functionality that allowed them to elude all prior behavior-based techniques. Specifically, we find that ephemeral modifications are prevalent in the extension ecosystem, effectively rendering such extensions invisible to prior approaches that are confined to analyzing snapshots that capture a single moment in time. Accordingly, we develop Chronos, a system that captures the modifications that occur throughout an extension’s life cycle, enabling it to fingerprint extensions that make transient modifications that leave no visible traces at the end of execution. Specifically, our system creates behavioral signatures that capture nodes being added to or removed from the DOM, as well as changes being made to node attributes. Our extensive experimental evaluation highlights the inherent limits of prior snapshot-based approaches, as Chronos is able to identify 11,219 unique extensions, increasing coverage by 66.9% over the state of the art. Additionally, we find that our system captures a unique modification event (i.e., mutation) for 94% of the extensions, while also being able to resolve 97% of the signature collisions across extensions that affect existing snapshot-based approaches. Our study more accurately captures the extent of the privacy threat presented by extension fingerprinting, which warrants more attention by privacy-oriented browser vendors that, up to this point, have focused on deploying countermeasures against other browser fingerprinting vectors.
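The limitation of snapshot-based approaches described above can be illustrated with a minimal Python sketch (the event model and signature format are invented for illustration, not Chronos's actual implementation): an extension whose DOM changes are undone before the snapshot is taken leaves no trace in the final state, yet remains visible in the mutation log.

```python
# Minimal sketch of snapshot-based vs. continuous (mutation-log) fingerprinting.
# The event model and signature format here are illustrative assumptions,
# not Chronos's actual implementation.

def final_snapshot(events):
    """Apply add/remove mutations in order; return the surviving node set."""
    nodes = set()
    for action, node in events:
        if action == "add":
            nodes.add(node)
        elif action == "remove":
            nodes.discard(node)
    return frozenset(nodes)

def mutation_signature(events):
    """Keep the full ordered mutation sequence as the behavioral signature."""
    return tuple(events)

# A hypothetical extension that injects a banner and removes it again
# before the page finishes loading (an "ephemeral modification").
ephemeral_ext = [("add", "div#promo-banner"), ("remove", "div#promo-banner")]
no_ext = []

# Snapshot-based approaches see identical end states...
assert final_snapshot(ephemeral_ext) == final_snapshot(no_ext)
# ...while the mutation log still separates the two cases.
assert mutation_signature(ephemeral_ext) != mutation_signature(no_ext)
```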
Karami, S., Ilia, P., & Polakis, J. (2021). Awakening the Web’s Sleeper Agents: Misusing Service Workers for Privacy Leakage. 28th Annual Network and Distributed System Security Symposium, NDSS 2021, Virtually, February 21-25, 2021.
@inproceedings{karami_ndss21,
author = {Karami, Soroush and Ilia, Panagiotis and Polakis, Jason},
title = {Awakening the Web's Sleeper Agents: Misusing Service Workers for Privacy Leakage},
booktitle = {28th Annual Network and Distributed System Security Symposium, {NDSS} 2021, virtually, February 21-25, 2021},
publisher = {The Internet Society},
year = {2021},
doi = {10.14722/ndss.2021.23104},
file = {karami_ndss21.pdf}
}
Service workers are a powerful technology supported by all major modern browsers that can improve users’ browsing experience by offering capabilities similar to those of native applications. While they are gaining significant traction in the developer community, they have not received much scrutiny from security researchers. In this paper, we explore the capabilities and inner workings of service workers and conduct the first comprehensive large-scale study of their API use in the wild. Subsequently, we show how attackers can exploit the strategic placement of service workers for history-sniffing in most major browsers, including Chrome and Firefox. We demonstrate two novel history-sniffing attacks that exploit the lack of appropriate isolation in these browsers, including a non-destructive cache-based version. Next, we present a series of use cases that illustrate how our techniques enable privacy-invasive attacks that can infer sensitive application-level information, such as a user’s social graph. We have disclosed our techniques to all vulnerable vendors, prompting the Chromium team to explore a redesign of their site isolation mechanisms for defending against our attacks. We also propose a countermeasure that can be incorporated by websites to protect their users, and develop a tool that streamlines its deployment, thus facilitating adoption at a large scale. Overall, our work presents a cautionary tale on the severe risks of browsers deploying new features without an in-depth evaluation of their security and privacy implications.
Chen, Q., Ilia, P., Polychronakis, M., & Kapravelos, A. (2021). Cookie Swap Party: Abusing First-Party Cookies for Web Tracking. WWW ’21: The Web Conference 2021, Virtual Event / Ljubljana, Slovenia, April 19-23, 2021, 2117–2129.
@inproceedings{chen_www21,
author = {Chen, Quan and Ilia, Panagiotis and Polychronakis, Michalis and Kapravelos, Alexandros},
title = {Cookie Swap Party: Abusing First-Party Cookies for Web Tracking},
booktitle = {{WWW} '21: The Web Conference 2021, Virtual Event / Ljubljana, Slovenia, April 19-23, 2021},
pages = {2117--2129},
publisher = {{ACM} / {IW3C2}},
year = {2021},
doi = {10.1145/3442381.3449837},
file = {chen_www21.pdf}
}
As a step towards protecting user privacy, most web browsers perform some form of third-party HTTP cookie blocking or periodic deletion by default, while users typically have the option to select even stricter blocking policies. As a result, web trackers have shifted their efforts to work around these restrictions and retain or even improve the extent of their tracking capability. In this paper, we shed light on the increasingly common practice of relying on first-party cookies that are set by third-party JavaScript code to implement user tracking and other potentially unwanted capabilities. Although, unlike third-party cookies, first-party cookies are not automatically sent by the browser to third parties on HTTP requests, this tracking is possible because any included third-party code runs in the context of the parent page, and thus can fully set or read existing first-party cookies—which it can then leak to the same or other third parties. Previous works that survey user privacy on the web in relation to cookies, third-party or otherwise, have not fully explored this mechanism. To address this gap, we propose a dynamic data flow tracking system based on Chromium to track the leakage of first-party cookies to third parties, and use it to conduct a large-scale study of the Alexa top 10K websites. In total, we found that 97.72% of the websites have first-party cookies that are set by third-party JavaScript, and that on 57.66% of these websites there is at least one such cookie that contains a unique user identifier that is diffused to multiple third parties. Our results highlight the privacy-intrusive capabilities of first-party cookies, even when a privacy-savvy user has taken mitigative measures such as blocking third-party cookies, or employing popular crowd-sourced filter lists such as EasyList/EasyPrivacy and the Disconnect list.
Karami, S., Ilia, P., Solomos, K., & Polakis, J. (2020). Carnus: Exploring the Privacy Threats of Browser Extension Fingerprinting. 27th Annual Network and Distributed System Security Symposium, NDSS 2020, San Diego, California, USA, February 23-26, 2020.
@inproceedings{karami_ndss20,
author = {Karami, Soroush and Ilia, Panagiotis and Solomos, Konstantinos and Polakis, Jason},
title = {Carnus: Exploring the Privacy Threats of Browser Extension Fingerprinting},
booktitle = {27th Annual Network and Distributed System Security Symposium, {NDSS} 2020, San Diego, California, USA, February 23-26, 2020},
publisher = {The Internet Society},
year = {2020},
doi = {10.14722/ndss.2020.24383},
file = {karami_ndss20.pdf}
}
With users becoming increasingly privacy-aware and browser vendors incorporating anti-tracking mechanisms, browser fingerprinting has garnered significant attention. Accordingly, prior work has proposed techniques for identifying browser extensions and using them as part of a device’s fingerprint. While previous studies have demonstrated how extensions can be detected through their web accessible resources, there exists a significant gap regarding techniques that indirectly detect extensions through behavioral artifacts. In fact, no prior study has demonstrated that this can be done in an automated fashion. In this paper, we bridge this gap by presenting the first fully automated creation and detection of behavior-based extension fingerprints. We also introduce two novel fingerprinting techniques that monitor extensions’ communication patterns, namely outgoing HTTP requests and intra-browser message exchanges. These techniques comprise the core of Carnus, a modular system for the static and dynamic analysis of extensions, which we use to create the largest set of extension fingerprints to date. We leverage our dataset of 29,428 detectable extensions to conduct a comprehensive investigation of extension fingerprinting in realistic settings and demonstrate the practicality of our attack. Our experimental evaluation against a state-of-the-art countermeasure confirms the robustness of our techniques as 87.92% of our behavior-based fingerprints remain effective. Subsequently, we aim to explore the true extent of the privacy threat that extension fingerprinting poses to users, and present a novel study on the feasibility of inference attacks that reveal private and sensitive user information based on the functionality and nature of their extensions. 
We first collect over 1.44 million public user reviews of our detectable extensions, which provide a unique macroscopic view of the browser extension ecosystem and enable a more precise evaluation of the discriminatory power of extensions as well as a new deanonymization vector. We also automatically categorize extensions based on the developers’ descriptions and identify those that can lead to the inference of personal data (religion, medical issues, etc.). Overall, our research sheds light on previously unexplored dimensions of the privacy threats of extension fingerprinting and highlights the need for more effective countermeasures that can prevent our attacks.
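The inference step described above can be sketched in a few lines: map fingerprinted extensions to categories derived from their store descriptions and flag the sensitive ones. The extension names and category mapping below are fabricated for illustration, not Carnus's actual dataset or categorization.

```python
# Toy illustration of the inference attack described above: detected extensions
# are mapped to categories, and sensitive categories reveal personal traits.
# Extension names and categories are fabricated for the example.
extension_categories = {
    "prayer-times-helper": "religion",
    "glucose-tracker":     "medical",
    "coupon-clipper":      "shopping",
}

SENSITIVE = frozenset({"religion", "medical"})

def infer_sensitive(detected, categories, sensitive=SENSITIVE):
    """Return the sensitive traits implied by a fingerprinted extension set."""
    return {categories[e] for e in detected if categories.get(e) in sensitive}

assert infer_sensitive({"prayer-times-helper", "coupon-clipper"},
                       extension_categories) == {"religion"}
```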
Solomos, K., Ilia, P., Ioannidis, S., & Kourtellis, N. (2020). Clash of the Trackers: Measuring the Evolution of the Online Tracking Ecosystem. 4th Network Traffic Measurement and Analysis Conference, TMA 2020, Berlin, Germany, June 10, 2020. arXiv:1907.12860.
@inproceedings{solomos_tma20,
author = {Solomos, Konstantinos and Ilia, Panagiotis and Ioannidis, Sotiris and Kourtellis, Nicolas},
title = {Clash of the Trackers: Measuring the Evolution of the Online Tracking Ecosystem},
booktitle = {4th Network Traffic Measurement and Analysis Conference, {TMA} 2020, Berlin, Germany, June 10, 2020},
publisher = {{IFIP}},
year = {2020},
note = {arXiv:1907.12860},
file = {solomos_tma20.pdf}
}
Websites are constantly adapting the methods they use to track online visitors, and the intensity with which they do so. However, the wide-ranging enforcement of the GDPR since May 2018 has forced websites serving EU-based visitors to eliminate, or at least reduce, such tracking activity unless they obtain proper user consent. It is therefore important to record and analyze the evolution of this tracking activity, to assess the overall "privacy health" of the Web ecosystem and whether it has improved after GDPR enforcement. This work makes a significant step in that direction. In this paper, we analyze the ecosystem of third parties embedded in top websites, which amass the majority of online tracking, through six time snapshots taken a few months apart over the last two years. We perform this analysis in three ways: 1) by looking into the network activity that third parties impose on each publisher hosting them, 2) by constructing a bipartite "publisher-to-tracker" graph connecting third parties with their publishers, and 3) by constructing a "tracker-to-tracker" graph connecting third parties that are commonly found on the same publishers. We record significant changes over time in the number of trackers, the traffic induced on publishers (incoming vs. outgoing), the embeddedness of trackers in publishers, and the popularity and mixture of trackers across publishers. We also report how these measures compare with the Alexa ranking of publishers. At the last level of our analysis, we dig deeper and look into the connectivity of trackers with each other and how this relates to potential cookie-synchronization activity.
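The two graph constructions mentioned in the abstract can be sketched as follows. The publisher and tracker names are invented, and the co-occurrence count is only a simple proxy for the analysis the paper performs.

```python
from collections import defaultdict
from itertools import combinations

# Illustrative sketch of the "publisher-to-tracker" and "tracker-to-tracker"
# graphs; publisher/tracker names are made up, not from the paper's dataset.
publisher_to_trackers = {
    "news.example": {"trackerA", "trackerB"},
    "shop.example": {"trackerB", "trackerC"},
    "blog.example": {"trackerA", "trackerB"},
}

# "publisher-to-tracker" bipartite edges.
bipartite_edges = {(pub, t) for pub, ts in publisher_to_trackers.items() for t in ts}

# "tracker-to-tracker" projection: connect trackers co-located on a publisher,
# weighted by how many publishers they share (a rough proxy for potential
# cookie-synchronization opportunities).
cooccurrence = defaultdict(int)
for ts in publisher_to_trackers.values():
    for a, b in combinations(sorted(ts), 2):
        cooccurrence[(a, b)] += 1

assert cooccurrence[("trackerA", "trackerB")] == 2  # co-embedded on two sites
```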
Lin, X., Ilia, P., & Polakis, J. (2020). Fill in the Blanks: Empirical Analysis of the Privacy Threats of Browser Form Autofill. CCS ’20: 2020 ACM SIGSAC Conference on Computer and Communications Security, Virtual Event, USA, November 9-13, 2020, 507–519.
@inproceedings{lin_ccs20,
author = {Lin, Xu and Ilia, Panagiotis and Polakis, Jason},
title = {Fill in the Blanks: Empirical Analysis of the Privacy Threats of Browser Form Autofill},
booktitle = {{CCS} '20: 2020 {ACM} {SIGSAC} Conference on Computer and Communications Security, Virtual Event, USA, November 9-13, 2020},
pages = {507--519},
publisher = {{ACM}},
year = {2020},
doi = {10.1145/3372297.3417271},
file = {lin_ccs20.pdf}
}
Providing functionality that streamlines the more tedious aspects of website interaction is of paramount importance to browsers as it can significantly improve the overall user experience. Browsers’ autofill functionality exemplifies this goal, as it alleviates the burden of repetitively typing the same information across websites. At the same time, however, it also presents a significant privacy risk due to the inherent disparity between the browser’s interpretation of a given web page and what users can visually perceive. In this paper we present the first, to our knowledge, comprehensive exploration of the privacy threats of autofill functionality. We first develop a series of new techniques for concealing the presence of form elements that allow us to obtain sensitive user information while bypassing existing browser defenses. Alarmingly, our large-scale study of the Alexa top 100K reveals the widespread use of such deceptive techniques for stealthily obtaining user-identifying information, as they are present in at least 5.8% of the forms that are autofilled by Chrome. Subsequently, our in-depth investigation of browsers’ autofill functionality reveals a series of flaws and idiosyncrasies, which we exploit through novel attack vectors that target specific aspects of browsers’ behavior. By chaining these together we are able to demonstrate a novel invasive side-channel attack that exploits browsers’ autofill preview functionality for inferring sensitive information even when users choose not to use autofill. This attack affects all major Chromium-based browsers and allows attackers to probe users’ autofill profiles for over a hundred thousand candidate values (e.g., credit card and phone numbers). Overall, while the preview mode is intended as a protective measure for enabling more informed decisions, ultimately it creates a new avenue of exposure that circumvents a user’s choice to not divulge their information. 
In light of our findings, we have disclosed our techniques to the affected vendors, and have also created a Chrome extension that can prevent our attacks and mitigate this threat until our countermeasures are incorporated into browsers.
Drakonakis, K., Ilia, P., Ioannidis, S., & Polakis, J. (2019). Please Forget Where I Was Last Summer: The Privacy Risks of Public Location (Meta)Data. 26th Annual Network and Distributed System Security Symposium, NDSS 2019, San Diego, California, USA, February 24-27, 2019.
@inproceedings{drakonakis_ndss19,
author = {Drakonakis, Kostas and Ilia, Panagiotis and Ioannidis, Sotiris and Polakis, Jason},
title = {Please Forget Where {I} Was Last Summer: The Privacy Risks of Public Location (Meta)Data},
booktitle = {26th Annual Network and Distributed System Security Symposium, {NDSS} 2019, San Diego, California, USA, February 24-27, 2019},
publisher = {The Internet Society},
year = {2019},
doi = {10.14722/ndss.2019.23151},
file = {drakonakis_ndss19.pdf}
}
The exposure of location data constitutes a significant privacy risk to users as it can lead to de-anonymization, the inference of sensitive information, and even physical threats. In this paper we present LPAuditor, a tool that conducts a comprehensive evaluation of the privacy loss caused by public location metadata. First, we demonstrate how our system can pinpoint users’ key locations at an unprecedented granularity by identifying their actual postal addresses. Our evaluation on Twitter data highlights the effectiveness of our techniques, which outperform prior approaches by 18.9%-91.6% for homes and 8.7%-21.8% for workplaces. Next, we present a novel exploration of automated private information inference that uncovers “sensitive” locations that users have visited (pertaining to health, religion, and sex/nightlife). We find that location metadata can provide additional context to tweets and thus lead to the exposure of private information that might not match the users’ intentions. We further explore the mismatch between user actions and information exposure and find that older versions of the official Twitter apps follow a privacy-invasive policy of including precise GPS coordinates in the metadata of tweets that users have geotagged at a coarse-grained level (e.g., city). The implications of this exposure are further exacerbated by our finding that users are considerably privacy-cautious with regard to exposing precise location data. When users can explicitly select what location data is published, there is a 94.6% reduction in tweets with GPS coordinates. As part of current efforts to give users more control over their data, LPAuditor can be adopted by major services and offered as an auditing tool that informs users about sensitive information they (indirectly) expose through location metadata.
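The key-location idea can be sketched as follows. This is a hedged illustration only: coarse coordinate rounding stands in for proper spatial clustering, and the data points and night-hour threshold are invented, not LPAuditor's actual method.

```python
from collections import Counter

# Minimal sketch of identifying a "home" candidate from geotagged posts:
# cluster coordinates (here by rounding, a stand-in for real spatial
# clustering) and pick the cluster dominating late-night activity.
# Data and thresholds are invented for illustration.
posts = [
    # (lat, lon, hour_of_day)
    (41.8712, -87.6478, 23), (41.8713, -87.6479, 1), (41.8711, -87.6480, 22),
    (41.8790, -87.6298, 10), (41.8791, -87.6297, 14),  # daytime cluster (work?)
]

def cluster_key(lat, lon, precision=3):
    return (round(lat, precision), round(lon, precision))

night = Counter(cluster_key(la, lo) for la, lo, h in posts if h >= 21 or h <= 5)
home_cluster, _ = night.most_common(1)[0]
assert home_cluster == (41.871, -87.648)
```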
Solomos, K., Ilia, P., Ioannidis, S., & Kourtellis, N. (2019). TALON: An Automated Framework for Cross-Device Tracking Detection. 22nd International Symposium on Research in Attacks, Intrusions and Defenses, RAID 2019, Chaoyang District, Beijing, China, September 23-25, 2019, 227–241.
@inproceedings{solomos_raid19,
author = {Solomos, Konstantinos and Ilia, Panagiotis and Ioannidis, Sotiris and Kourtellis, Nicolas},
title = {{TALON:} An Automated Framework for Cross-Device Tracking Detection},
booktitle = {22nd International Symposium on Research in Attacks, Intrusions and Defenses, {RAID} 2019, Chaoyang District, Beijing, China, September 23-25, 2019},
pages = {227--241},
publisher = {{USENIX} Association},
year = {2019},
isbn = {978-1-939133-07-6},
file = {solomos_raid19.pdf}
}
Although digital advertising fuels much of today’s free Web, it typically does so at the cost of online users’ privacy, due to the continuous tracking and leakage of users’ personal data. In search of new ways to optimize the effectiveness of ads, advertisers have introduced advanced paradigms such as cross-device tracking (CDT), to monitor users’ browsing on multiple devices and screens and deliver (re)targeted ads on the most appropriate screen. Unfortunately, this practice leads to greater privacy concerns for the end-user. Going beyond the state of the art, we propose a novel methodology for detecting CDT and measuring the factors affecting its performance, in a repeatable and systematic way. This new methodology is based on emulating realistic browsing activity of end-users from different devices, and thus triggering and detecting cross-device targeted ads. We design and build Talon, a CDT measurement framework that implements our methodology and allows experimentation with multiple parallel devices, experimental setups, and settings. By employing Talon, we perform several critical experiments, and we are able not only to detect and measure CDT with an average AUC score of 0.78-0.96, but also to provide significant insights about the behavior of CDT entities and the impact on users’ privacy. In the hands of privacy researchers, policy makers, and end-users, Talon can be an invaluable tool for raising awareness and increasing transparency on the tracking practices used by the ad ecosystem.
Papadopoulos, P., Ilia, P., Polychronakis, M., Markatos, E. P., Ioannidis, S., & Vasiliadis, G. (2019). Master of Web Puppets: Abusing Web Browsers for Persistent and Stealthy Computation. 26th Annual Network and Distributed System Security Symposium, NDSS 2019, San Diego, California, USA, February 24-27, 2019.
@inproceedings{papadopoulos_ndss19,
author = {Papadopoulos, Panagiotis and Ilia, Panagiotis and Polychronakis, Michalis and Markatos, Evangelos P. and Ioannidis, Sotiris and Vasiliadis, Giorgos},
title = {Master of Web Puppets: Abusing Web Browsers for Persistent and Stealthy Computation},
booktitle = {26th Annual Network and Distributed System Security Symposium, {NDSS} 2019, San Diego, California, USA, February 24-27, 2019},
publisher = {The Internet Society},
year = {2019},
doi = {10.14722/ndss.2019.23070},
file = {papadopoulos_ndss19.pdf}
}
The proliferation of web applications has essentially transformed modern browsers into small but powerful operating systems. Upon visiting a website, user devices run implicitly trusted script code, the execution of which is confined within the browser to prevent any interference with the user’s system. Recent JavaScript APIs, however, provide advanced capabilities that not only enable feature-rich web applications, but also allow attackers to perform malicious operations despite the confined nature of JavaScript code execution. In this paper, we demonstrate the powerful capabilities that modern browser APIs provide to attackers by presenting MarioNet: a framework that allows a remote malicious entity to control a visitor’s browser and abuse its resources for unwanted computation or harmful operations, such as cryptocurrency mining, password-cracking, and DDoS. MarioNet relies solely on already available HTML5 APIs, without requiring the installation of any additional software. In contrast to previous browser-based botnets, the persistence and stealthiness characteristics of MarioNet allow the malicious computations to continue in the background of the browser even after the user closes the window or tab of the initially visited malicious website. We present the design, implementation, and evaluation of our prototype system, which is compatible with all major browsers, and discuss potential defense strategies to counter the threat of such persistent in-browser attacks. Our main goal is to raise awareness about this new class of attacks, and inform the design of future browser APIs so that they provide a more secure client-side environment for web applications.
Papadopoulos, P., Ilia, P., & Markatos, E. P. (2019). Truth in Web Mining: Measuring the Profitability and the Imposed Overheads of Cryptojacking. Information Security - 22nd International Conference, ISC 2019, New York City, NY, USA, September 16-18, 2019, Proceedings, 11723, 277–296.
@inproceedings{papadopoulos_isc19,
author = {Papadopoulos, Panagiotis and Ilia, Panagiotis and Markatos, Evangelos P.},
title = {Truth in Web Mining: Measuring the Profitability and the Imposed Overheads of Cryptojacking},
booktitle = {Information Security - 22nd International Conference, {ISC} 2019, New York City, NY, USA, September 16-18, 2019, Proceedings},
series = {Lecture Notes in Computer Science},
volume = {11723},
pages = {277--296},
publisher = {Springer},
year = {2019},
doi = {10.1007/978-3-030-30215-3\_14},
file = {papadopoulos_isc19.pdf}
}
In recent years, we have been observing a new paradigm of attacks, the so-called cryptojacking attacks. Given the lower-risk/lower-effort nature of cryptojacking, the number of such incidents in 2018 was nearly double that of ransomware attacks. Apart from the cryptojackers, web-cryptomining library providers have also enabled benign publishers to use this mechanism as an alternative monetization scheme for the web in an era of declining ad revenues. In spite of the buzz around web-cryptomining, it is not yet known how profitable web-cryptomining is and what actual cost it imposes on the user side. In this paper, we respond to this exact question by measuring the overhead imposed on the user with regard to power consumption, resource utilization, network traffic, device temperature and user experience. We compare those overheads, along with the profitability of web-cryptomining, to the ones imposed by advertising, to examine whether web-cryptomining can become a viable alternative revenue stream for websites. Our results show that web-cryptomining can reach the profitability of advertising under specific circumstances, but users need to sustain a significant cost on their devices.
Tsirantonakis, G., Ilia, P., Ioannidis, S., Athanasopoulos, E., & Polychronakis, M. (2018). A Large-scale Analysis of Content Modification by Open HTTP Proxies. 25th Annual Network and Distributed System Security Symposium, NDSS 2018, San Diego, California, USA, February 18-21, 2018.
@inproceedings{tsirantonakis_ndss18,
author = {Tsirantonakis, Giorgos and Ilia, Panagiotis and Ioannidis, Sotiris and Athanasopoulos, Elias and Polychronakis, Michalis},
title = {A Large-scale Analysis of Content Modification by Open {HTTP} Proxies},
booktitle = {25th Annual Network and Distributed System Security Symposium, {NDSS} 2018, San Diego, California, USA, February 18-21, 2018},
publisher = {The Internet Society},
year = {2018},
doi = {10.14722/ndss.2018.23244},
file = {tsirantonakis_ndss18.pdf}
}
Open HTTP proxies offer a quick and convenient solution for routing web traffic towards a destination. In contrast to more elaborate relaying systems, such as anonymity networks or VPN services, users can freely connect to an open HTTP proxy without the need to install any special software. Therefore, open HTTP proxies are an attractive option for bypassing IP-based filters and geo-location restrictions, circumventing content blocking and censorship, and in general, hiding the client’s IP address when accessing a web server. Nevertheless, the consequences of routing traffic through an untrusted third party can be severe, while the operating incentives of the thousands of publicly available HTTP proxies are questionable. In this paper, we present the results of a large-scale analysis of open HTTP proxies, focusing on determining the extent to which user traffic is manipulated while being relayed. We have designed a methodology for detecting proxies that, instead of passively relaying traffic, actively modify the relayed content. Beyond simple detection, our framework is capable of macroscopically attributing certain traffic modifications at the network level to well-defined malicious actions, such as ad injection, user fingerprinting, and redirection to malware landing pages. We have applied our methodology on a large set of publicly available HTTP proxies, which we monitored for a period of two months, and identified that 38% of them perform some form of content modification. The majority of these proxies can be considered benign, as they do not perform any harmful content modification. However, 5.15% of the tested proxies were found to perform modification or injection that can be considered as malicious or unwanted. Specifically, 47% of the malicious proxies injected ads, 39% injected code for collecting user information that can be used for tracking and fingerprinting, and 12% attempted to redirect the user to pages that contain malware. 
Our study reveals the true incentives of many of the publicly available web proxies. Our findings raise several concerns, as we uncover multiple cases where users can be severely affected by connecting to an open proxy. As a step towards protecting users against unwanted content modification, we built a service that leverages our methodology to automatically collect and probe public proxies, and generates a list of safe proxies that do not perform any content modification, on a daily basis.
Solomos, K., Ilia, P., Ioannidis, S., & Kourtellis, N. (2018). Automated Measurements of Cross-Device Tracking. Information and Operational Technology Security Systems - First International Workshop, IOSec 2018, CIPSEC Project, Heraklion, Crete, Greece, September 13, 2018, 11398, 73–80.
@inproceedings{solomos_iosec18,
author = {Solomos, Konstantinos and Ilia, Panagiotis and Ioannidis, Sotiris and Kourtellis, Nicolas},
title = {Automated Measurements of Cross-Device Tracking},
booktitle = {Information and Operational Technology Security Systems - First International Workshop, IOSec 2018, {CIPSEC} Project, Heraklion, Crete, Greece, September 13, 2018},
series = {Lecture Notes in Computer Science},
volume = {11398},
pages = {73--80},
publisher = {Springer},
year = {2018},
doi = {10.1007/978-3-030-12085-6\_7},
file = {solomos_iosec18.pdf}
}
Although digital advertising fuels much of today’s free Web, it typically does so at the cost of online users’ privacy, due to continuous tracking and leakage of users’ personal data. In search of new ways to optimize the effectiveness of ads, advertisers have introduced new paradigms such as cross-device tracking (CDT), to monitor users’ browsing on multiple screens and deliver (re)targeted ads on the appropriate screen. Unfortunately, this practice comes with even more privacy concerns for the end-user. In this work, we design a methodology for triggering CDT by emulating realistic browsing activity of end-users, and then detecting and measuring it by leveraging advanced machine learning tools.
Ilia, P., Carminati, B., Ferrari, E., Fragopoulou, P., & Ioannidis, S. (2017). SAMPAC: Socially-Aware collaborative Multi-Party Access Control. Proceedings of the Seventh ACM Conference on Data and Application Security and Privacy, CODASPY 2017, Scottsdale, AZ, USA, March 22-24, 2017, 71–82.
@inproceedings{ilia_codaspy17,
author = {Ilia, Panagiotis and Carminati, Barbara and Ferrari, Elena and Fragopoulou, Paraskevi and Ioannidis, Sotiris},
title = {{SAMPAC:} Socially-Aware collaborative Multi-Party Access Control},
booktitle = {Proceedings of the Seventh {ACM} Conference on Data and Application Security and Privacy, {CODASPY} 2017, Scottsdale, AZ, USA, March 22-24, 2017},
pages = {71--82},
publisher = {{ACM}},
year = {2017},
doi = {10.1145/3029806.3029834},
file = {ilia_codaspy17.pdf}
}
According to the current design of content sharing services, such as Online Social Networks (OSNs), typically (i) the service provider has unrestricted access to the uploaded resources and (ii) only the user uploading the resource is allowed to define access control permissions over it. This results in a lack of control from other users that are associated, in some way, with that resource. To cope with these issues, in this paper, we propose a privacy-preserving system that allows users to upload their resources encrypted, and we design a collaborative multi-party access control model allowing all the users related to a resource to participate in the specification of the access control policy. Our model employs a threshold-based secret sharing scheme, and by exploiting users’ social relationships, makes the trusted friends of the associated users responsible for partially enforcing the collective policy. Through replication of the secret shares and delegation of the access control enforcement role, our model ensures that resources are available in a timely manner when requested. Finally, our experiments demonstrate that the performance overhead of our model is minimal and that it does not significantly affect user experience.
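The threshold-based secret sharing the abstract mentions can be illustrated with a minimal Shamir secret sharing sketch. This is a hypothetical, simplified illustration of the general technique, not the paper's implementation; the prime, share format, and function names are all assumptions made here for demonstration.

```python
import secrets

PRIME = 2**127 - 1  # a Mersenne prime; the field must be larger than the secret

def split(secret, n, k):
    """Split `secret` into n shares such that any k of them reconstruct it."""
    # random polynomial of degree k-1 with the secret as constant term
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    # each share is the polynomial evaluated at x = 1..n
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Recover the secret via Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME-2, PRIME) is the modular inverse (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split(1234, n=5, k=3)
assert reconstruct(shares[:3]) == 1234   # any 3 shares suffice
assert reconstruct(shares[2:]) == 1234
```

In a SAMPAC-like setting, the shares would be distributed to trusted friends of the users associated with a resource, so that no single party can decrypt it unilaterally.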
Chariton, A. A., Degkleri, E., Papadopoulos, P., Ilia, P., & Markatos, E. P. (2017). CCSP: A compressed certificate status protocol. 2017 IEEE Conference on Computer Communications, INFOCOM 2017, Atlanta, GA, USA, May 1-4, 2017, 1–9.
@inproceedings{chariton_infocom17,
author = {Chariton, Antonios A. and Degkleri, Eirini and Papadopoulos, Panagiotis and Ilia, Panagiotis and Markatos, Evangelos P.},
title = {{CCSP:} {A} compressed certificate status protocol},
booktitle = {2017 {IEEE} Conference on Computer Communications, {INFOCOM} 2017, Atlanta, GA, USA, May 1-4, 2017},
pages = {1--9},
publisher = {{IEEE}},
year = {2017},
doi = {10.1109/INFOCOM.2017.8057065},
file = {chariton_infocom17.pdf}
}
Trust in SSL-based communications is provided by Certificate Authorities (CAs) in the form of signed certificates. Checking the validity of a certificate involves three steps: (i) checking its expiration date, (ii) verifying its signature, and (iii) ensuring that it is not revoked. Currently, such certificate revocation checks are done either via Certificate Revocation Lists (CRLs) or Online Certificate Status Protocol (OCSP) servers. Unfortunately, despite the existence of these revocation checks, sophisticated cyber-attackers may trick web browsers into trusting a revoked certificate, believing that it is still valid. Consequently, the web browser will communicate (over TLS) with web servers controlled by cyber-attackers. Although frequently updated, nonced, and timestamped certificates may reduce the frequency and impact of such cyber-attacks, they impose a very large overhead on the CAs and OCSP servers, which now need to timestamp and sign, on a regular basis, all the responses for every certificate they have issued. To mitigate this overhead and provide a solution to the described cyber-attacks, we present CCSP: a new approach to provide timely information regarding the status of certificates, which capitalizes on a newly introduced notion called signed collections. In this paper, we present the design, preliminary implementation, and evaluation of CCSP in general, and signed collections in particular. Our preliminary results suggest that CCSP (i) reduces space requirements by more than an order of magnitude, (ii) lowers the number of signatures required by 6 orders of magnitude compared to OCSP-based methods, and (iii) adds only a few milliseconds of overhead in the overall user latency.
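The three-step validity check described in the abstract can be sketched schematically. This is a deliberately simplified, hypothetical model: the dictionary fields, helper names, and lookup-based "signature verification" are illustrative assumptions, not the paper's (or any TLS stack's) actual logic, which performs full cryptographic verification and chain building.

```python
from datetime import datetime, timezone

def is_certificate_valid(cert, trusted_keys, revoked_serials, now=None):
    """Toy model of the three checks: expiration, signature, revocation."""
    now = now or datetime.now(timezone.utc)
    # (i) expiration: must be within the validity window
    if not (cert["not_before"] <= now <= cert["not_after"]):
        return False
    # (ii) signature: modeled as a lookup against a trusted CA key;
    #     real validators verify an actual cryptographic signature
    if cert["issuer_signature"] != trusted_keys.get(cert["issuer"]):
        return False
    # (iii) revocation: serial must not appear in CRL/OCSP data
    if cert["serial"] in revoked_serials:
        return False
    return True
```

Schemes like CRLs, OCSP, and the paper's CCSP all target step (iii): how the `revoked_serials` information reaches the client, how fresh it is, and at what cost to the CA.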
Chariton, A. A., Degkleri, E., Papadopoulos, P., Ilia, P., & Markatos, E. P. (2016). DCSP: Performant Certificate Revocation: A DNS-based Approach. Proceedings of the 9th European Workshop on System Security.
@inproceedings{chariton_eurosec16,
author = {Chariton, Antonios A. and Degkleri, Eirini and Papadopoulos, Panagiotis and Ilia, Panagiotis and Markatos, Evangelos P.},
title = {{DCSP:} Performant Certificate Revocation: {A} DNS-based Approach},
year = {2016},
isbn = {9781450342957},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
doi = {10.1145/2905760.2905767},
booktitle = {Proceedings of the 9th European Workshop on System Security},
articleno = {1},
numpages = {6},
location = {London, United Kingdom},
series = {EuroSec '16},
file = {chariton_eurosec16.pdf}
}
Trust in SSL-based communication on the Internet is provided by Certificate Authorities (CAs) in the form of signed certificates. Checking the validity of a certificate involves three steps: (i) checking its expiration date, (ii) verifying its signature, and (iii) making sure that it is not revoked. Currently, certificate revocation checks (i.e., step (iii) above) are done either via Certificate Revocation Lists (CRLs) or Online Certificate Status Protocol (OCSP) servers. Unfortunately, both current approaches tend to incur such a high overhead that several browsers (including almost all mobile ones) choose not to check certificate revocation status, thereby exposing their users to significant security risks. To address this issue, we propose DCSP: a new low-latency approach that provides up-to-date and accurate certificate revocation information. DCSP capitalizes on the existing scalable and high-performance infrastructure of DNS. DCSP minimizes end user latency while, at the same time, requiring only a small number of cryptographic signatures by the CAs. Our design and initial performance results show that DCSP has the potential to perform an order of magnitude faster than the current state-of-the-art alternatives.
Ilia, P., Polakis, I., Athanasopoulos, E., Maggi, F., & Ioannidis, S. (2015). Face/Off: Preventing Privacy Leakage From Photos in Social Networks. Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, Denver, CO, USA, October 12-16, 2015, 781–792.
@inproceedings{ilia_ccs2015,
author = {Ilia, Panagiotis and Polakis, Iasonas and Athanasopoulos, Elias and Maggi, Federico and Ioannidis, Sotiris},
title = {Face/Off: Preventing Privacy Leakage From Photos in Social Networks},
booktitle = {Proceedings of the 22nd {ACM} {SIGSAC} Conference on Computer and Communications Security, Denver, CO, USA, October 12-16, 2015},
pages = {781--792},
publisher = {{ACM}},
year = {2015},
doi = {10.1145/2810103.2813603},
file = {ilia_ccs2015.pdf}
}
The capabilities of modern devices, coupled with the almost ubiquitous availability of Internet connectivity, have resulted in photos being shared online at an unprecedented scale. This is further amplified by the popularity of social networks and the immediacy they offer in content sharing. Existing access control mechanisms are too coarse-grained to handle cases of conflicting interests between the users associated with a photo; stories of embarrassing or inappropriate photos being widely accessible have become quite common. In this paper, we propose to rethink access control when applied to photos, in a way that allows us to effectively prevent unwanted individuals from recognizing users in a photo. The core concept behind our approach is to change the granularity of access control from the level of the photo to that of a user’s personally identifiable information (PII). In this work, we consider the face as the PII. When another user attempts to access a photo, the system determines which faces the user does not have the permission to view, and presents the photo with the restricted faces blurred out. Our system takes advantage of the existing face recognition functionality of social networks, and can interoperate with the current photo-level access control mechanisms. We implement a proof-of-concept application for Facebook, and demonstrate that the performance overhead of our approach is minimal. We also conduct a user study to evaluate the privacy offered by our approach, and find that it effectively prevents users from identifying their contacts in 87.35% of the restricted photos. Finally, our study reveals the misconceptions about the privacy offered by existing mechanisms, and demonstrates that users are positive towards the adoption of an intuitive, straightforward access control mechanism that allows them to manage the visibility of their face in published photos.
Polakis, I., Ilia, P., Tzermias, Z., Ioannidis, S., & Fragopoulou, P. (2015). Social Forensics: Searching for Needles in Digital Haystacks. 4th International Workshop on Building Analysis Datasets and Gathering Experience Returns for Security, BADGERS@RAID 2015, Kyoto, Japan, November 5, 2015, 54–66.
@inproceedings{polakis_badgers2015,
author = {Polakis, Iasonas and Ilia, Panagiotis and Tzermias, Zacharias and Ioannidis, Sotiris and Fragopoulou, Paraskevi},
title = {Social Forensics: Searching for Needles in Digital Haystacks},
booktitle = {4th International Workshop on Building Analysis Datasets and Gathering Experience Returns for Security, BADGERS@RAID 2015, Kyoto, Japan, November 5, 2015},
pages = {54--66},
publisher = {{IEEE}},
year = {2015},
doi = {10.1109/BADGERS.2015.017},
file = {polakis_badgers2015.pdf}
}
The use of online social networks and other digital communication services has become a prevalent activity of everyday life. As such, users’ social footprints contain a massive amount of data, including exchanged messages, location information and photographic coverage of events. While digital forensics has been evolving for several years with a focus on recovering and investigating data from digital devices, social forensics is a relatively new field. Nonetheless, law enforcement agencies have realized the significance of employing online user data for solving criminal investigations. However, collecting and analyzing massive amounts of data scattered across multiple services is a challenging task. In this paper, we present our modular framework designed for assisting forensic investigators in all aspects of these procedures. The data collection modules extract the data from a user’s social network profiles and communication services, by taking advantage of stored credentials and session cookies. Next, the correlation modules employ various techniques for mapping user profiles from different services to the same user. The visualization component, specifically designed for handling data representing activities and interactions in online social networks, provides dynamic "viewpoints" of varying granularity for analyzing data and identifying important pieces of information. We conduct a case study to demonstrate the effectiveness of our system and find that our automated correlation process achieves significant coverage of users across services.
Polakis, I., Ilia, P., Maggi, F., Lancini, M., Kontaxis, G., Zanero, S., Ioannidis, S., & Keromytis, A. D. (2014). Faces in the Distorting Mirror: Revisiting Photo-based Social Authentication. Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, Scottsdale, AZ, USA, November 3-7, 2014, 501–512.
@inproceedings{polakis_ccs2014,
author = {Polakis, Iasonas and Ilia, Panagiotis and Maggi, Federico and Lancini, Marco and Kontaxis, Georgios and Zanero, Stefano and Ioannidis, Sotiris and Keromytis, Angelos D.},
title = {Faces in the Distorting Mirror: Revisiting Photo-based Social Authentication},
booktitle = {Proceedings of the 2014 {ACM} {SIGSAC} Conference on Computer and Communications Security, Scottsdale, AZ, USA, November 3-7, 2014},
pages = {501--512},
publisher = {{ACM}},
year = {2014},
doi = {10.1145/2660267.2660317},
file = {polakis_ccs2014.pdf}
}
In an effort to hinder attackers from compromising user accounts, Facebook launched a form of two-factor authentication called social authentication (SA), where users are required to identify photos of their friends to complete a log-in attempt. Recent research, however, demonstrated that attackers can bypass the mechanism by employing face recognition software. Here we demonstrate an alternative attack that employs image comparison techniques to identify the SA photos within an offline collection of the users’ photos. In this paper, we revisit the concept of SA and design a system with a novel photo selection and transformation process, which generates challenges that are robust against these attacks. The intuition behind our photo selection is to use photos that fail software-based face recognition, while remaining recognizable to humans who are familiar with the depicted people. The photo transformation process creates challenges in the form of photo collages, where faces are transformed so as to render image matching techniques ineffective. We experimentally confirm the robustness of our approach against three template-matching algorithms that solve 0.4% of the challenges, while requiring four orders of magnitude more processing effort. Furthermore, when the transformations are applied, face detection software fails to detect even a single face. Our user studies confirm that users are able to identify their friends in over 99% of the photos with faces unrecognizable by software, and can solve over 94% of the challenges with transformed photos.
Ilia, P., Oikonomou, G. C., & Tryfonas, T. (2013). Cryptographic Key Exchange in IPv6-Based Low Power, Lossy Networks. Information Security Theory and Practice. Security of Mobile and Cyber-Physical Systems, 7th IFIP WG 11.2 International Workshop, WISTP 2013, Heraklion, Greece, May 28-30, 2013. Proceedings, 7886, 34–49.
@inproceedings{ilia_wistp13,
author = {Ilia, Panagiotis and Oikonomou, George C. and Tryfonas, Theo},
title = {Cryptographic Key Exchange in IPv6-Based Low Power, Lossy Networks},
booktitle = {Information Security Theory and Practice. Security of Mobile and Cyber-Physical Systems, 7th {IFIP} {WG} 11.2 International Workshop, {WISTP} 2013, Heraklion, Greece, May 28-30, 2013. Proceedings},
series = {Lecture Notes in Computer Science},
volume = {7886},
pages = {34--49},
publisher = {Springer},
year = {2013},
doi = {10.1007/978-3-642-38530-8\_3},
file = {ilia_wistp13.pdf}
}
The IEEE 802.15.4 standard for low-power radio communications defines techniques for the encryption of layer 2 network frames but does not discuss methods for the establishment of encryption keys. The constrained nature of wireless sensor devices poses many challenges to the process of key establishment. In this paper, we investigate whether any of the existing key exchange techniques developed for traditional, application-centric wireless sensor networks (WSN) are applicable and viable for IPv6 over Low power Wireless Personal Area Networks (6LoWPANs). We use Elliptic Curve Cryptography (ECC) to implement and apply the Elliptic Curve Diffie Hellman (ECDH) key exchange algorithm and we build a mechanism for generating, storing and managing secret keys. The mechanism has been implemented for the Contiki open source embedded operating system. We use the Cooja simulator to investigate a simple network consisting of two sensor nodes in order to identify the characteristics of the ECDH technique. We also simulate a larger network to examine the solution’s performance and scalability. Based on those results, we draw our conclusions, highlight open issues and suggest further work.
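The shared-secret derivation underlying the paper's ECDH mechanism can be sketched with classic finite-field Diffie-Hellman, which follows the same algebraic pattern. This is an illustrative assumption on my part: the paper uses elliptic-curve groups on constrained Contiki nodes, whereas this sketch uses the RFC 3526 group 14 modular group purely to show the key-agreement idea.

```python
import secrets

# RFC 3526 MODP group 14 prime (2048-bit) and generator; in ECDH the group
# operation is elliptic-curve point multiplication instead of modular exponentiation.
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E08"
    "8A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B"
    "302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9"
    "A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE6"
    "49286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8"
    "FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D"
    "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C"
    "180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF695581718"
    "3995497CEA956AE515D2261898FA051015728E5A8AACAA68FFFFFFFF"
    "FFFFFFFF", 16)
G = 2

def keypair():
    """Generate a private exponent and the corresponding public value."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# each sensor node generates a key pair and transmits only the public part
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

# both sides derive the same shared secret from the peer's public value
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b
```

On 6LoWPAN-class hardware the cost of these group operations dominates, which is why the paper measures the ECDH variant's runtime and scalability in Cooja rather than assuming it is free.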
Refereed Journal Articles
Pachilakis, M., Chariton, A. A., Papadopoulos, P., Ilia, P., Degkleri, E., & Markatos, E. P. (2020). Design and Implementation of a Compressed Certificate Status Protocol. ACM Transactions on Internet Technology (TOIT), 20(4), 34:1–34:25.
@article{pachilakis_toit20,
author = {Pachilakis, Michalis and Chariton, Antonios A. and Papadopoulos, Panagiotis and Ilia, Panagiotis and Degkleri, Eirini and Markatos, Evangelos P.},
title = {Design and Implementation of a Compressed Certificate Status Protocol},
journal = {{ACM} Transactions on Internet Technology {(TOIT)}},
volume = {20},
number = {4},
pages = {34:1--34:25},
year = {2020},
doi = {10.1145/3392096},
file = {pachilakis_toit20.pdf}
}
Trust in Secure Sockets Layer–based communications is traditionally provided by Certificate (or Certification) Authorities (CAs) in the form of signed certificates. Checking the validity of a certificate involves three steps: (i) checking its expiration date, (ii) verifying its signature, and (iii) ensuring that it is not revoked. Currently, such certificate revocation checks (i.e., step (iii) above) are done either via Certificate Revocation Lists (CRLs), or Online Certificate Status Protocol (OCSP) servers. Unfortunately, despite the existence of these revocation checks, sophisticated cyber-attackers can still trick web browsers to trust a revoked certificate, believing that it is still valid. Although frequently updated, nonced, and timestamped certificates can reduce the frequency and impact of such cyber-attacks, they add a huge burden to the CAs and OCSP servers. Indeed, CAs and/or OCSP servers need to timestamp and sign on a regular basis all the responses, for every certificate they have issued, resulting in a very high overhead. To mitigate this and provide a solution to the described cyber-attacks, we present CCSP: a new approach to provide timely information regarding the status of certificates, which capitalizes on a newly introduced notion called Signed Collections. In this article, we present in detail the notion of Signed Collections and the complete design, implementation, and evaluation of our approach. Performance evaluation shows that CCSP (i) reduces space requirements by more than an order of magnitude, (ii) lowers the number of signatures required by six orders of magnitude compared to OCSP-based methods, and (iii) adds only a few milliseconds of overhead in the overall user latency.