With the increasing personalization of the Web, many websites allow users to create their own personal accounts. This has resulted in Web users often having many accounts on different websites, to which they need to authenticate in order to gain access. Unfortunately, there are several security problems connected to the use and re-use of passwords, the most prevalent authentication method currently in use, including eavesdropping and replay attacks. Several alternative methods have been proposed to address these shortcomings, including the use of hardware authentication devices. However, these more secure authentication methods are often not adapted for mobile Web users who use different devices in different places and in untrusted environments, such as public Wi-Fi networks, to access their accounts. We have designed a method for comparing, evaluating and designing authentication solutions suitable for mobile users and untrusted environments. Our method leverages the fact that mobile users often bring their own cell phones, and also takes into account different levels of security adapted for different services on the Web. Another important trend in the authentication landscape is that an increasing number of websites use third-party authentication. This is a solution where users have an account on a single system, the identity provider, and this one account can then be used with multiple other websites. In addition to requiring fewer passwords, these services can also in some cases implement authentication with higher security than passwords can provide. How websites select their third-party identity providers has privacy and security implications for end users. To better understand the security and privacy risks with these services, we present a data collection methodology that we have used to identify and capture third-party authentication usage on the Web. We have also characterized the third-party authentication landscape based on our collected data, outlining which types of third-parties are used by which types of sites, and how usage differs across the world. Using a combination of large-scale crawling, longitudinal manual testing, and in-depth login tests, our characterization and analysis has also allowed us to discover interesting structural properties of the landscape, differences in the cross-site relationships, and how the use of third-party authentication is changing over time. Finally, we have also outlined what information is shared between websites in third-party authentication, defined risk classes based on shared data, and profiled privacy leakage risks associated with websites and their identity providers sharing data with each other. Our findings show how websites can strengthen the privacy of their users based on how these websites select and combine their third-parties and the data they allow to be shared.
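The crawler behind this data collection is not reproduced in the abstract, but the core detection idea can be sketched. The minimal Python example below, with an invented detect_idps helper and hypothetical provider URL patterns, flags pages that reference well-known identity-provider login endpoints; the actual methodology additionally relies on longitudinal manual testing and in-depth login tests.

```python
import re
import requests  # any HTTP client would do

# Hypothetical URL patterns for a few popular identity providers; the
# thesis's real detection rules and provider list are not reproduced here.
IDP_PATTERNS = {
    "Google":   r"accounts\.google\.com/o/oauth2",
    "Facebook": r"facebook\.com/(dialog/oauth|login)",
    "Twitter":  r"api\.twitter\.com/oauth",
    "GitHub":   r"github\.com/login/oauth",
}

def detect_idps(url: str) -> set:
    """Fetch a landing page and report which identity providers it references."""
    html = requests.get(url, timeout=10).text
    return {name for name, pattern in IDP_PATTERNS.items()
            if re.search(pattern, html)}

# Example: detect_idps("https://example.com") might return {"Google"}.
```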
Simulations are frequently used techniques for training, performance assessment, and prediction of future outcomes. In this thesis, the term “human-centered simulation” is used to refer to any simulation in which humans and human cognition are integral to the simulation’s function and purpose (e.g., simulation-based training). A general problem for human-centered simulations is to capture the cognitive processes and activities of the target situation (i.e., the real-world task) and recreate them accurately in the simulation. The prevalent view within the simulation research community is that cognition consists of internal, decontextualized computational processes of individuals. However, contemporary theories of cognition emphasize the importance of the external environment, the use of tools, and social and cultural factors in cognitive practice. Consequently, there is a need for research on how such contemporary perspectives can be used to describe human-centered simulations, re-interpret theoretical constructs of such simulations, and direct how simulations should be modeled, designed, and evaluated. This thesis adopts distributed cognition as a framework for studying human-centered simulations. Training and assessment of emergency medical management in a Swedish context using the Emergo Train System (ETS) simulator was adopted as a case study. ETS simulations were studied and analyzed using the distributed cognition for teamwork (DiCoT) methodology with the goal of understanding, evaluating, and testing the validity of the ETS simulator. Moreover, to explore distributed cognition as a basis for simulator design, a digital re-design of ETS (DIGEMERGO) was developed based on the DiCoT analysis. The aim of the DIGEMERGO system was to retain core distributed cognitive features of ETS, to increase validity and outcome reliability, and to provide a digital platform for emergency medical studies. DIGEMERGO was evaluated in three separate studies: first, a usefulness, usability, and face-validation study that involved subject-matter experts; second, a comparative validation study using an expert-novice group comparison; and finally, a transfer-of-training study based on self-efficacy and management performance. Overall, the results showed that DIGEMERGO was perceived as a useful, immersive, and promising simulator – with mixed evidence for validity – that demonstrated increased general self-efficacy and management performance following simulation exercises. This thesis demonstrates that distributed cognition, using DiCoT, is a useful framework for understanding, designing and evaluating simulated environments. In addition, the thesis conceptualizes and re-interprets central constructs of human-centered simulation in terms of distributed cognition. In doing so, the thesis shows how distributed cognitive processes relate to validity, fidelity, functionality, and usefulness of human-centered simulations. This thesis thus provides a new understanding of human-centered simulations that is grounded in distributed cognition theory.
Development of new functionality and smart systems for different types of vehicles is accelerating with the advent of new emerging technologies such as connected and autonomous vehicles. To ensure that these new systems and functions work as intended, flexible and credible evaluation tools are necessary. One example of this type of tool is a driving simulator, which can be used for testing new and existing vehicle concepts and driver support systems. When a driver in a driving simulator operates it in the same way as in actual traffic, the simulator provides a realistic evaluation of whatever is being investigated. Two advantages of a driving simulator are (1) that the same situation can be repeated several times over a short period of time, and (2) that driver reactions can be studied during dangerous situations that could result in serious injuries if they occurred in the real world. An important component of a driving simulator is the vehicle model, i.e., the model that describes how the vehicle reacts to its surroundings and driver inputs. To increase the simulator realism or the computational performance, it is possible to divide the vehicle model into subsystems that run on different computers connected in a network. A subsystem can also be replaced with hardware using so-called hardware-in-the-loop simulation, and can then be connected to the rest of the vehicle model through a specified interface. The technique of dividing a model into smaller subsystems running on separate nodes that communicate through a network is called distributed simulation. This thesis investigates if and how a distributed simulator design might facilitate the maintenance and new development required for a driving simulator to keep up with the increasing pace of vehicle development. For this purpose, three different distributed simulator solutions have been designed, built, and analyzed with the aim of constructing distributed simulators, including external hardware, where the simulation achieves the same degree of realism as a traditional driving simulator. One of these simulator solutions has been used to create a parameterized powertrain model that can be configured to represent any of a number of different vehicles. Furthermore, the driver's driving task is combined with the powertrain model to monitor deviations. After the powertrain model was created, subsystems from a simulator solution and the powertrain model were transferred to a Modelica environment. The goal is to create a framework for requirement testing that guarantees sufficient realism, also for a distributed driving simulation. The results show that the distributed simulators we have developed work well overall, with satisfactory performance. It is important to manage the vehicle model and how it is connected to the distributed system. In the distributed driveline simulator setup, the network delays were so small that they could be ignored, i.e., they did not affect the driving experience. However, if the delays are gradually increased, a driver in the distributed simulator will change their behavior. The impact of communication latency on a distributed simulator also depends on the simulator application: different usages of the simulator, i.e., different simulator studies, will place different demands on it. We believe that many simulator studies could be performed using a distributed setup. One remaining issue is how modifications to the system affect the vehicle model and the desired behavior.
This leads to the need for a methodology for managing model requirements. In order to detect model deviations in the simulator environment, a monitoring aid has been implemented that notifies test managers when a model behaves strangely or is driven outside its validated region. Since the availability of distributed laboratory equipment can be limited, the possibility of using Modelica (an equation-based, object-oriented modeling language) for simulating subsystems is also examined. The Modelica implementation of the model has also been extended with requirements management, and a framework is proposed for automatically evaluating the model requirements in a supporting tool.
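The abstract does not specify how the monitoring aid is implemented. A minimal sketch of the idea, assuming the validated region can be approximated by per-signal intervals (the signal names and bounds below are invented for illustration):

```python
# Hypothetical per-signal bounds approximating a model's validated region.
VALIDATED_REGION = {
    "vehicle_speed_mps": (0.0, 60.0),
    "engine_speed_rpm": (600.0, 6500.0),
}

def check_sample(sample: dict) -> list:
    """Return one warning per signal that lies outside its validated range."""
    warnings = []
    for signal, (lo, hi) in VALIDATED_REGION.items():
        value = sample.get(signal)
        if value is not None and not lo <= value <= hi:
            warnings.append(f"{signal}={value:.1f} outside validated [{lo}, {hi}]")
    return warnings

# Called once per simulation step; a non-empty result would trigger a
# notification to the test manager.
print(check_sample({"vehicle_speed_mps": 72.3, "engine_speed_rpm": 3000.0}))
```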
Probabilistic topic models have proven to be an extremely versatile class of mixed-membership models for discovering the thematic structure of text collections. There are many possible applications, covering a broad range of areas of study: technology, natural science, social science and the humanities. In this thesis, a new efficient parallel Markov Chain Monte Carlo inference algorithm is proposed for Bayesian inference in large topic models. The proposed methods scale well with the corpus size and can be used for other probabilistic topic models and other natural language processing applications. The proposed methods are fast, efficient, scalable, and will converge to the true posterior distribution. In addition, in this thesis a supervised topic model for high-dimensional text classification is also proposed, with emphasis on interpretable document prediction using the horseshoe shrinkage prior in supervised topic models. Finally, we develop a model and inference algorithm that can model agenda and framing of political speeches over time with a priori defined topics. We apply the approach to analyze the evolution of immigration discourse in the Swedish parliament by combining theory from political science and communication science with a probabilistic topic model.
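The parallel samplers themselves are not given in the abstract, but they accelerate the standard collapsed Gibbs sampler for Latent Dirichlet Allocation. Its serial form, which the thesis's algorithms parallelize, can be sketched as follows (a minimal illustration, not the proposed method):

```python
import numpy as np

def lda_gibbs(docs, V, K, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Serial collapsed Gibbs sampling for LDA. docs: list of word-id lists;
    V: vocabulary size; K: number of topics."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), K))              # document-topic counts
    nkw = np.zeros((K, V))                      # topic-word counts
    nk = np.zeros(K)                            # topic totals
    z = [[int(rng.integers(K)) for _ in doc] for doc in docs]
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]; ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                     # remove the current assignment
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = int(rng.choice(K, p=p / p.sum()))
                z[d][i] = k                     # resample and re-add
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return ndk, nkw
```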
More and more services are moving to the cloud, attracted by the promise of unlimited resources that are accessible anytime and are managed by someone else. However, hosting every type of service in large cloud datacenters is not possible or suitable, as some emerging applications have stringent latency or privacy requirements while also handling huge amounts of data. Therefore, in recent years, a new paradigm has been proposed to address the needs of these applications: the edge computing paradigm. Resources provided at the edge (e.g., for computation and communication) are constrained, hence resource management is of crucial importance. The incoming load to the edge infrastructure varies both in time and space. Managing the edge infrastructure so that the appropriate resources are available at the required time and location is called orchestration. This is especially challenging in the case of sudden load spikes and when the impact of the orchestration itself has to be limited. This thesis enables edge computing orchestration with increased resource-awareness by contributing methods, techniques, and concepts for edge resource management. First, it proposes methods to better understand the edge resource demand. Second, it provides solutions on the supply side for orchestrating edge resources with different characteristics in order to serve edge applications with satisfactory quality of service. Finally, the thesis includes a critical perspective on the paradigm, by considering sustainability challenges. To understand the demand patterns, the thesis presents a methodology for categorizing the large variety of use cases that are proposed in the literature as potential applications for edge computing. The thesis also proposes methods for characterizing and modeling applications, as well as for gathering traces from real applications and analyzing them. These different approaches are applied to a prototype from a typical edge application domain: Mixed Reality. The important insight here is that application descriptions or models that are not based on a real application may not give an accurate picture of the load. This can drive incorrect decisions about what should be done on the supply side and thus waste resources. Regarding resource supply, the thesis proposes two orchestration frameworks for managing edge resources and successfully dealing with load spikes while avoiding over-provisioning. The first utilizes mobile edge devices, while the second leverages the concept of spare devices. Then, focusing on the request-placement part of orchestration, the thesis formalizes placement for applications structured as chains of functions (so-called microservices) as an instance of the Traveling Purchaser Problem and solves it using Integer Linear Programming. Two different energy metrics influencing request-placement decisions are proposed and evaluated. Finally, the thesis explores further resource awareness. Sustainability challenges that should be highlighted more within edge computing are collected. Among those related to resource use, the strategy of sufficiency is promoted as a way forward: aiming to use only the resources that are needed (no more, no less), with the goal of reducing overall resource usage. Different tools for adopting this strategy are proposed, and their use is demonstrated through a case study.
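The Traveling Purchaser Problem formulation itself is not given in the abstract. As a much-simplified illustration of request placement for a function chain, the sketch below enumerates placements of a chain on a few nodes and scores each by latency plus an energy metric; the node names, costs, and hop penalty are invented, and the thesis instead solves a richer ILP model:

```python
from itertools import product

# Toy instance: place a chain of functions (microservices) on edge nodes.
chain = ["decode", "detect", "render"]
nodes = {"edge-a": {"latency_ms": 5, "energy": 3},
         "edge-b": {"latency_ms": 9, "energy": 1},
         "cloud":  {"latency_ms": 40, "energy": 2}}
HOP_MS = 4  # extra latency whenever consecutive functions change node

def cost(placement):
    """Score a placement by summed node latency and energy, plus hop latency."""
    per_node = sum(nodes[n]["latency_ms"] + nodes[n]["energy"] for n in placement)
    hops = sum(a != b for a, b in zip(placement, placement[1:]))
    return per_node + HOP_MS * hops

best = min(product(nodes, repeat=len(chain)), key=cost)
print(dict(zip(chain, best)), cost(best))
```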
This thesis presents Machine Psychology as an interdisciplinary paradigm that integrates learning psychology principles with an adaptive computer system for the development of Artificial General Intelligence (AGI). By synthesizing behavioral psychology with a formal intelligence model, the Non-Axiomatic Reasoning System (NARS), this work explores the potential of operant conditioning paradigms to advance AGI research. The thesis begins by introducing the conceptual foundations of Machine Psychology, detailing its alignment with the theoretical constructs of learning psychology and the formalism of NARS. It then progresses through a series of empirical studies designed to systematically investigate the emergence of increasingly complex cognitive behaviors as NARS interacts with its environment. Initially, operant conditioning is established as a foundational principle for developing adaptive behavior with NARS. Subsequent chapters explore increasingly sophisticated cognitive capabilities, all studied with NARS using experimental paradigms from operant learning psychology: generalized identity matching, functional equivalence, and arbitrarily applicable relational responding. Throughout this research, Machine Psychology is demonstrated to be a promising framework for guiding AGI research, allowing manipulation of both environmental contingencies and the system's intrinsic logical processes. The thesis contributes to AGI research by showing how using operant psychological paradigms with NARS can enable cognitive abilities similar to human cognition. These findings set the stage for AGI systems that learn and adapt more like humans, potentially advancing the creation of more general and flexible AI.
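NARS itself cannot be reduced to a few lines, but the operant-conditioning setup used in the experiments can be illustrated with a toy learner whose response strengths are shaped by their consequences. The learner below is a stand-in for the experimental paradigm only, not for NARS's non-axiomatic reasoning:

```python
import random

actions = ["press_lever", "pull_chain", "wait"]
strength = {a: 1.0 for a in actions}        # operant response strengths

def emit():
    """Emit an action with probability proportional to its strength."""
    return random.choices(actions, weights=[strength[a] for a in actions])[0]

def reinforce(action, reward):
    """Strengthen (or weaken) the emitted response; a floor avoids zeroing out."""
    strength[action] = max(0.1, strength[action] + reward)

# Contingency: only lever-pressing is reinforced.
for trial in range(500):
    a = emit()
    reinforce(a, 1.0 if a == "press_lever" else -0.05)

print(max(strength, key=strength.get))      # "press_lever" comes to dominate
```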
Services are prone to change in the form of expected and unexpected variations and disruptions, all the more so given the increasing interconnectedness and complexity of service systems today. These changes require service systems to be resilient and designed to adapt, to ensure that services continue to work smoothly. This thesis problematises the prevailing view and assumptions underpinning the current understanding of resilience in services. Drawing on literature from service management, service design, systems thinking and social-ecological resilience theory, this work investigates how service design can foster resilience in service systems. Supported by empirical input from three research projects in healthcare, the findings show that service design can contribute to the adaptability and transformability of service systems through its holistic, human-centred, participatory and experimental approaches. Through the analysis, this research identifies key intervention points for cultivating service systems resilience through service design, including the design of service interactions, processes, enabling structures and multi-level governance. The study makes two important contributions. First, it extends the understanding of service systems resilience as the collective capacity for intentional action in responding to ongoing change, coordinated across scales in order to create value. This is supported by offering alternative assumptions about resilience in services. Second, it positions service design as an enabler of service resilience by explicitly linking design practice(s) to processes that contribute to resilience. By extending the understanding of service systems resilience, this thesis lays the groundwork for future research at the intersection of service design, systemic change and resilience.
This thesis addresses the need to balance the utility of facial recognition systems against the protection of personal privacy in machine learning and biometric identification. As advances in deep learning accelerate their evolution, facial recognition systems enhance security capabilities but also risk invading personal privacy. Our research identifies and addresses critical vulnerabilities inherent in facial recognition systems, and proposes innovative privacy-enhancing technologies that anonymize facial data while maintaining its utility for legitimate applications. Our investigation centers on the development of methodologies and frameworks that achieve k-anonymity in facial datasets; leverage identity disentanglement to facilitate anonymization; exploit the vulnerabilities of facial recognition systems to underscore their limitations; and implement practical defenses against unauthorized recognition systems. We introduce novel contributions such as AnonFACES, StyleID, IdDecoder, StyleAdv, and DiffPrivate, each designed to protect facial privacy through advanced adversarial machine learning techniques and generative models. These solutions not only demonstrate the feasibility of protecting facial privacy in an increasingly surveilled world, but also highlight the ongoing need for robust countermeasures against the ever-evolving capabilities of facial recognition technology. Continuous innovation in privacy-enhancing technologies is required to safeguard individuals from the pervasive reach of digital surveillance and protect their fundamental right to privacy. By providing open-source, publicly available tools and frameworks, this thesis contributes to the collective effort to ensure that advancements in facial recognition serve the public good without compromising individual rights. Our multi-disciplinary approach bridges the gap between biometric systems, adversarial machine learning, and generative modeling to pave the way for future research in the domain and to support AI innovation in which technological advancement and privacy are balanced.
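As a minimal illustration of the k-anonymity goal (not of AnonFACES itself, which pairs such grouping with generative models to synthesize natural-looking faces), the sketch below partitions face embeddings into groups of at least k and releases only group centroids; it assumes the input has at least k rows:

```python
import numpy as np

def k_anonymize(X: np.ndarray, k: int, seed: int = 0) -> np.ndarray:
    """Replace each embedding with its group centroid so that no released
    vector is attributable to fewer than k inputs. Grouping is random here;
    grouping visually similar faces would lose less utility."""
    idx = np.random.default_rng(seed).permutation(len(X))
    cut = len(X) - len(X) % k                   # full groups of size k
    groups = [idx[i:i + k] for i in range(0, cut, k)]
    if len(X) % k:                              # fold the remainder into the last group
        groups[-1] = np.concatenate([groups[-1], idx[cut:]])
    out = X.astype(float)
    for g in groups:
        out[g] = X[g].mean(axis=0)
    return out
```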
Vast amounts of data are continually being generated by a wide variety of data producers. This data ranges from quantitative sensor observations produced by robot systems to complex unstructured human-generated texts on social media. With data being so abundant, the ability to make sense of these streams of data through reasoning is of great importance. Reasoning over streams is particularly relevant for autonomous robotic systems that operate in physical environments. They commonly observe this environment through incremental observations, gradually refining information about their surroundings. This makes robust management of streaming data and their refinement an important problem. Many contemporary approaches to stream reasoning focus on the issue of querying data streams in order to generate higher-level information, relying on well-known database approaches. Other approaches apply logic-based reasoning techniques, which rarely consider the provenance of their symbolic interpretations. In this work, we integrate techniques for logic-based stream reasoning with the adaptive generation of the state streams over which the reasoning is performed. This combination addresses both the challenge of reasoning over uncertain streaming data and the problem of robustly managing streaming data and their refinement. The main contributions of this work are (1) a logic-based temporal reasoning technique based on path checking under uncertainty that combines temporal reasoning with qualitative spatial reasoning; (2) an adaptive reconfiguration procedure for generating and maintaining the data streams over which spatio-temporal stream reasoning is performed; and (3) the integration of these two techniques into a stream reasoning framework. The proposed spatio-temporal stream reasoning technique is able to reason with intertemporal spatial relations by leveraging landmarks. Adaptive state-stream generation allows the framework to adapt to situations in which the set of available streaming resources changes. Management of streaming resources is formalised in the DyKnow model, which introduces a configuration life-cycle to adaptively generate state streams. The DyKnow-ROS stream reasoning framework is a concrete realisation of this model that extends the Robot Operating System (ROS). DyKnow-ROS has been deployed on the SoftBank Robotics NAO platform to demonstrate the system's capabilities in a case study on run-time adaptive reconfiguration. The results show that the proposed system - by combining reasoning over and reasoning about streams - can robustly perform stream reasoning, even when the availability of streaming resources changes.
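DyKnow's actual formula syntax and its treatment of uncertainty are not shown in the abstract. A bare-bones illustration of path checking, evaluating a temporal formula over a finite state stream with an invented tuple-based syntax, might look as follows:

```python
def holds(phi, trace, i=0):
    """Check formula phi on the suffix of `trace` (a list of state dicts) from i."""
    op = phi[0]
    if op == "prop":                            # atomic proposition, e.g. a fluent
        return bool(trace[i].get(phi[1], False))
    if op == "not":
        return not holds(phi[1], trace, i)
    if op == "and":
        return holds(phi[1], trace, i) and holds(phi[2], trace, i)
    if op == "next":
        return i + 1 < len(trace) and holds(phi[1], trace, i + 1)
    if op == "until":                           # phi[1] holds until phi[2] does
        return any(holds(phi[2], trace, j) and
                   all(holds(phi[1], trace, m) for m in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(f"unknown operator: {op}")

stream = [{"near_goal": True}, {"near_goal": True}, {"at_goal": True}]
print(holds(("until", ("prop", "near_goal"), ("prop", "at_goal")), stream))  # True
```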
In this thesis we study the worst-case complexity of constraint satisfaction problems and some of their variants. We use methods from universal algebra: in particular, algebras of total functions and partial functions that are respectively known as clones and strong partial clones. The constraint satisfaction problem parameterized by a set of relations Γ (CSP(Γ)) is the following problem: given a set of variables restricted by a set of constraints based on the relations in Γ, is there an assignment to the variables that satisfies all constraints? We refer to the set Γ as a constraint language. The inverse CSP problem over Γ (Inv-CSP(Γ)) asks the opposite: given a relation R, does there exist a CSP(Γ) instance with R as its set of models? When Γ is a Boolean language, we use the term SAT(Γ) instead of CSP(Γ) and Inv-SAT(Γ) instead of Inv-CSP(Γ). Fine-grained complexity is an approach in which we zoom inside a complexity class and classify the problems in it based on their worst-case time complexities. We start by investigating the fine-grained complexity of NP-complete CSP(Γ) problems. An NP-complete CSP(Γ) problem is said to be easier than an NP-complete CSP(Δ) problem if the worst-case time complexity of CSP(Γ) is not higher than the worst-case time complexity of CSP(Δ). We first analyze the NP-complete SAT problems that are easier than monotone 1-in-3-SAT (which can be represented as SAT(R) for a certain relation R), and find that there exists a continuum of such problems. For this, we use the connection between constraint languages and strong partial clones and exploit the fact that CSP(Γ) is easier than CSP(Δ) when the strong partial clone corresponding to Γ contains the strong partial clone of Δ. An NP-complete CSP(Γ) problem is said to be the easiest with respect to a variable domain D if it is easier than every other NP-complete CSP(Δ) problem over that domain. We show that for every finite domain there exists an easiest NP-complete problem among the ultraconservative CSP(Γ) problems, i.e., the special class of CSP problems where the constraint language contains all unary relations. We additionally show that no NP-complete CSP(Γ) problem can be solved in sub-exponential time (i.e., in 2^o(n) time, where n is the number of variables), given that the exponential-time hypothesis is true. Moving to classical complexity, we show that for any Boolean constraint language Γ, Inv-SAT(Γ) is either in P or coNP-complete. This is a generalization of an earlier dichotomy result, which was previously known to hold only for ultraconservative constraint languages. We show that Inv-SAT(Γ) is coNP-complete if and only if the clone corresponding to Γ contains essentially unary functions only. For arbitrary finite domains our results are not conclusive, but we manage to prove that the inverse k-coloring problem is coNP-complete for each k > 2. We exploit weak bases to prove many of these results. A weak base of a clone C is a constraint language that corresponds to the largest strong partial clone that contains C. It is known that for many decision problems X(Γ) parameterized by a constraint language Γ (such as Inv-SAT), there are strong connections between the complexity of X(Γ) and weak bases. This fact can be exploited to achieve general complexity results. The Boolean domain is well-suited for this approach, since we have a fairly good understanding of Boolean weak bases.
In the final result of this thesis, we investigate the relationships between the weak bases in the Boolean domain based on their strong partial clones and completely classify them according to set inclusion. To avoid a tedious case analysis, we introduce a technique that allows us to discard a large number of cases from further investigation.
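To make the parameterization concrete, the toy sketch below decides a SAT(Γ) instance by brute force, with Γ containing the monotone 1-in-3 relation mentioned above; the thesis studies the worst-case complexity of such problems, not this exponential algorithm:

```python
from itertools import product

# Γ as a set of named Boolean relations; "1in3" is monotone 1-in-3-SAT.
GAMMA = {"1in3": {(1, 0, 0), (0, 1, 0), (0, 0, 1)}}

def sat(instance, n_vars, gamma=GAMMA):
    """Return a satisfying assignment of the SAT(Γ) instance, or None."""
    for assignment in product((0, 1), repeat=n_vars):
        if all(tuple(assignment[v] for v in scope) in gamma[rel]
               for rel, scope in instance):
            return assignment
    return None

# Exactly one of x0,x1,x2 is true, and exactly one of x1,x2,x3 is true:
print(sat([("1in3", (0, 1, 2)), ("1in3", (1, 2, 3))], 4))  # e.g. (0, 0, 1, 0)
```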