Moderator: Peter Freeman, Chair, CCICADA Advisory Board
Panelists: Dennis Egan (Rutgers), Ryan Manning (USCG CG-FAC), Scott Tousley (DHS S&T Cyber Security Division), Robin Dillon-Merrill (Georgetown)
This panel will review issues in cyber security from the technical to the human. Panelists will give specific examples of work that has been done and speculate about research and implementation questions and challenges in this general area. Sample questions might be: What is the role of information sharing in protecting against future cyber-attacks? What does it mean to be “cyber ready” and how do we improve cyber readiness? What is the role of education for good cyber behavior and how early does it start? What are best practices for, and challenges in, retraining today’s workforce in good cyber behavior? What are the specific challenges faced in different domains, specifically the maritime, from the rapidly increasing dependence on cyber-physical systems? How do we protect against hybrid attacks that might start with cyber but aim at physical damage, or vice versa? To what extent is cyber security a technical question or a question of human error?
Moderator: James Wojtowicz (Rutgers)
Panelists: Kostas Bekris (Rutgers), Hao Tang (BMCC), Zhigang Zhu (CCNY), Syed Mohammad (DHS S&T), Carey Schwartz (DARPA), Darby LaJoye (TSA)
This panel will review the role of modeling and simulation in homeland security, with some emphasis on applications to large gathering places, and panelists will give specific examples of work that has been done and speculate about research and implementation questions and challenges in this general area. Sample questions might be: What might we learn from modeling and simulation and how would the lessons learned be applied? What can be done to make the models and simulations as realistic as possible? Venues are very different, but are there general tools of modeling and simulation that can be used across many types of venues? How do malls, convention centers, transportation facilities, etc. differ in terms of the security needs that can be modeled and simulated, and how are they similar? How can models help with different scenarios such as an evacuation situation, an active shooter, or an overcrowding situation caused by transportation delays? How do we protect the public, pre-screening area at an airport? How does today’s large amount of data assist with making better models and simulations, and what challenges does the availability of so much data pose for models and simulations?
Moderator: Georges Grinstein (UMASS)
Panelists: Stephen Dennis (DHS S&T), Eduard Hovy (CMU), Marty Martinez (USCG CGIS), Russell Holmes (USCG NAVCEN), Victor Harrison (Object Management Group – OMG)
This panel will discuss the general question of how to gain insight from large (or small) amounts of data: going from data to hypotheses, from data to leads, from data to conclusions. Panelists will describe specific examples of successful use of data science in homeland security (such as identifying hoax callers or finding under-age children engaged in sex trafficking) and speculate about research and implementation questions and challenges in this general area. Sample questions include: What is the role of information sharing, and what are the technological and the human factors that make information sharing challenging? How do we combine data from multiple media (auditory, visual, sensor, etc.) to gain insight? How might large amounts of data affect our ability to get early warning of changing trends or anomalies?
Moderator: Fred Roberts (Rutgers)
Panelists: Bruce Davidson (OSAI), Terri Adams-Fuller (Howard), Richard Fenton (Ilitch Holdings), Joe Borkoski, Sr. (Regal Decision Systems), Daniel DeLorenzi (MetLife Stadium), Sly Servance (Washington Nationals), Kyle Wolf (DHS PSA), Moises Grimes (Monumental Sports & Entertainment)
This panel will explore challenges in keeping people safe in large gathering places such as sports stadiums and airports. Panelists will describe specific examples of changes in large venue security resulting from research, review resources coming out of DHS work that have contributed to those changes, and speculate about research and implementation questions and challenges in this general area. Sample questions to address might be: What is the role of technology in large venue security and what new technologies might be relevant? How does training of employees and exercising of security plans enter into the equation? What about insider threats? How do you use randomness in patron screening, background checks, sweeps, and other aspects of large venue security? How do you design security procedures to minimize the chance of being accused of profiling? What new vulnerabilities are introduced by increased screening? Speakers will also address federal government liability protection for venues through the SAFETY Act, challenges for venues in obtaining SAFETY Act certification/designation, and resources that will aid in applying for such protection.
Moderator: Midge Cozzens (Rutgers)
Panelists: Faculty: Asamoah Nkwanta (Morgan State), Talitha Washington (Howard), Martene Stanberry (Tenn. State); Students: Oluseyi Adekanye (Howard), Moussa Doumbia (Howard), Omoikhefe Eboreime (TSU), Dexter Harris (Morgan State), Aaron Rowen (RPI)
Panelists will address the impact of CCICADA’s research and education programs on their institution and on their own careers, and speculate about ideas for implementing new educational programs that will aid in the development of the homeland security workforce of the future.
Moderator: Paul Kantor (Rutgers)
Panelists: Al Wallace (RPI), Endre Boros (Rutgers), Arch Turner (DHS S&T – tentative), Robert O’Connor (NSF), Mark Montezemolo (USCIS)
This panel will discuss ways in which data and modeling can lead to decision support with an emphasis on decisions about resource allocation or re-allocation, and panelists will speculate about research and implementation questions and challenges in this general area. Sample questions might be: How do you take immigration data about people applying for different kinds of immigration forms and use that to allocate/reallocate your workforce? How do you take data about past needs for different kinds of Coast Guard vessels at different boat stations and use that to reallocate your vessels to different places? How do you take data about past natural disasters such as storms or oil spills and use that to allocate resources to alternative mitigation and response strategies for future disasters? To what extent can we measure the cost of errors in data that is used in decision-support algorithms? And should you reallocate resources to cleaning data because that is less expensive than the errors caused by using bad data? Are there metrics we can use to decide how well we are doing with security, metrics we can use to make decisions about how to allocate resources?
Many learning algorithms struggle with large data sets and miss information simply for computational reasons. A larger and mostly hidden problem is that many algorithms learn (unnoticed and unwanted) patterns that are not present in the data. Using such classifiers, we may jump to conclusions that are unjustifiable on the basis of our existing data sets. We demonstrate this issue with a small example and show some results about the existence of learning algorithms that always guarantee a “justifiable” classifier.
CCICADA Best Practices Resource Guide & BPATS in Developing SAFETY Act Applications: Comerica Park (Detroit Tigers), Little Caesars Arena (Detroit Red Wings & Detroit Pistons)
This talk will discuss resources that helped Ilitch Holdings with SAFETY Act certification for Comerica Park, home of the Detroit Tigers, and two SAFETY Act applications for the new Little Caesars Arena, home of the Detroit Red Wings and Detroit Pistons. General security principles used in the new arena design will be described, as will the process of implementing protective measures.
A Systems Framework for Using Remotely Sensed Data in Damage Assessment during Large-Scale Extreme Events
Remotely sensed data, such as satellite imagery and airborne LiDAR data, have frequently been used in damage assessment during large-scale extreme events. Compared with traditional on-the-ground damage assessment approaches, remote-sensing-based damage assessment offers many advantages, such as safety, efficiency, and fewer constraints from accessibility issues. Nevertheless, remote-sensing-based damage assessment also faces growing challenges in processing an ever-increasing volume and variety of data and in addressing deep uncertainty in infrastructure system performance based on partially observed condition data. This presentation will discuss a systems framework to address these challenges. The framework is underpinned by machine learning methods, mechanistic models, and system modeling techniques. In the presentation, we will use assessment of building structures and natural gas infrastructure systems as driving examples to illustrate and evaluate the framework.
Estimating Detector Effectiveness in Operational Settings: A Bayesian Approach
When small samples are used to estimate the performance of a Walk-Through Metal Detector (WTMD), it is difficult to compare observations made with different numbers of tests on different machines. Standard tests for metal detectors, whether hand-held (wand) or walk-through, are expressed in terms of specific test objects and a requirement, typically of the form: “the object is to be detected at least 19 times in 20 trials.” Complementary requirements for suppressing false positives take various forms, depending on what is to be “not detected.” An example is the case of “no object at all,” for which the requirement may be “20 trials with no alarms.” Practitioners use much shorter tests to assess whether a machine is working on the day of the event, and they have no principled guidance about how to interpret that information, particularly if the information from each test is logged and thus might be aggregated to detect that a specific machine is slipping out of calibration. This note develops a principled Bayesian analysis that can convert a test of any size into a common language of expected odds ratio, which can then be used to compare tests with different sample sizes. Some technical issues of forming a “reasonable prior” are discussed. This research has the potential to make “quick checks,” conducted in operational settings, into a rigorous and accumulating source of useful information about detection devices and procedures.
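The conjugate Beta-Binomial update behind such an analysis can be sketched as follows. This is an illustrative sketch only: the uniform Beta(1, 1) default prior and the closed-form expected-odds formula are assumptions for the example, not the note's actual choice of prior.

```python
def posterior_detection_odds(detections, trials, a=1.0, b=1.0):
    """Update a Beta(a, b) prior on the detection probability p with
    `detections` successes in `trials` independent tests. Returns the
    posterior mean of p and the expected odds E[p / (1 - p)].

    For a Beta(a', b') posterior, E[p / (1 - p)] = a' / (b' - 1),
    which is finite only when b' > 1."""
    a_post = a + detections
    b_post = b + (trials - detections)
    mean = a_post / (a_post + b_post)
    expected_odds = a_post / (b_post - 1) if b_post > 1 else float("inf")
    return mean, expected_odds
```

With the standard “19 detections in 20 trials” requirement and a uniform prior, the posterior is Beta(20, 2) and the expected odds are 20:1; a much shorter day-of-event “quick check” maps onto the same odds scale, which is what makes tests of different sizes comparable.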
Immigration Data Science Challenges at DHS
Recent administrations and Congress have placed increasing pressure on DHS for better immigration data integration, including pressure for cradle-to-grave tracking of immigration enforcement data. The complexities of merging data from multiple systems (e.g., EID, CLAIMS 3, TECS…) and across multiple agencies (e.g., ICE, CBP, USCIS, EOIR, DOS) share common issues familiar to the data science community. These include antiquated data storage structures, dissimilar units of measurement, a lack of common identifiers, multitudes of many-to-many relationships, incomplete administrative records, and transactional data systems. This talk will explore some of these common issues in relation to immigration research, with examples from current integration efforts, and show how these limitations hinder efforts at more robust predictive modeling.
Combating Search and Rescue (SAR) Hoax Calls
Discussion will include an overview of the prevalence of SAR hoax calls received by the Coast Guard; the role of the Coast Guard Investigative Service (CGIS) in investigating them; penalties for making a SAR hoax (false distress) call to the Coast Guard; and the challenges posed by, and ongoing initiatives for, investigating and prosecuting the persons responsible for such calls.
Cultural Demystification within Countering Violent Extremism (CVE) – Arab & Muslim Communities: Communication + Understanding = Trust?
Nawar Shora’s training presentations have been successfully delivered over the past 16 years to local, state, and federal law enforcement, the intelligence community, and academic and private institutions. The course, based on his book, The Arab American Handbook, covers a range of topics to assist with Countering Violent Extremism. Content includes basics on Arabs, Arab-Americans, and Islam, as well as an interactive discussion of pop culture, history, geography, social and behavioral norms and mores, and radicalization to violence.
Mathematical Models and Emerging Infectious Diseases
The number of outbreaks of emerging infectious diseases is growing and zoonotic diseases, which can be spread between animals and humans, continue to represent well over half of all infections in humans. In this talk, we will highlight some of the 25 deadliest diseases in human history and the top 10 most deadly diseases in Africa. In case studies, we will introduce mathematical models of Ebola and malaria, and illustrate how they might be used to mitigate outbreaks of these diseases and other pathogens.
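A minimal compartmental model of the kind such case studies build on can be sketched as a discrete-time SIR (susceptible-infected-recovered) system. The parameter values below are illustrative only and are not fitted to Ebola or malaria data; malaria in particular would require a vector-host extension.

```python
def simulate_sir(beta, gamma, s0, i0, days, dt=0.1):
    """Forward-Euler integration of the classic SIR model.
    `beta` is the transmission rate, `gamma` the recovery rate
    (so R0 = beta / gamma); s, i, r are population fractions."""
    s, i, r = s0, i0, 1.0 - s0 - i0
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return s, i, r
```

With beta = 0.3 and gamma = 0.1 per day (R0 = 3), starting from 1% infected, the model predicts that most of the population is eventually infected, which is why interventions that push R0 below 1 are the usual mitigation target.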
Learning and Utilization of Crowd Analytics for Security and Navigation with 3D Semantic Models
The CUNY-Rutgers joint NIST Global City Teams Challenge (GCTC) 2016 Action Cluster Smart Transportation Hub aims to collaborate with NJ Transit and the Port Authority to create and test a Smart Transportation Hub (Smart T-Hub) with minimal changes to the current cyber-physical infrastructure of their stations. The SAT-Hub testbed is leading to project-inspired discovery in at least the following areas: (1) 3D semantic facility-model-based localization with smartphone images, which offers a renovation- or reconstruction-free infrastructure approach to user localization and also provides a way of updating the facility model. (2) Optimal sensor placement and rapid calibration using a 3D semantic model in 3D space, which will advance our understanding of deploying smart sensors in a complex and large-scale dynamic environment. (3) Deep-learning-based human crowd analysis, which will take deep learning beyond typical image classification and understanding tasks into complex 3D environments. (4) Human-in-the-loop traveling guidance with multi-facet inputs, including information on physical 3D models, crowd and traffic flows, security alerts, transportation schedules, and user preferences. The research is supported by the DHS Summer Research Team (SRT) Program and the NSF Emerging Frontiers in Research and Innovation (EFRI) Program.
Climate and Malaria Transmission Incidences in the United States, 1970-2004
About 1,500 malaria cases are diagnosed each year in the United States. However, almost all of the cases are imported rather than locally transmitted. Since 1950, 21 outbreaks of locally transmitted cases have been reported in the United States. We have used available monthly malaria data from the Centers for Disease Control and Prevention (CDC) and climate data from different platforms (satellite, in situ, and model) to establish the climate factors favorable for malaria outbreaks. Our analyses revealed that most of the cases occurred in California (1986 and 1988), Florida (1994), and Texas (2004) during the summer months, when monthly surface temperature is ~18–19 °C and relative humidity is ~85–87%. The ultimate aim is to develop a malaria incidence forecasting system based on reported cases and climate conditions.
Walk-Through Metal Detectors and Stadium Contraband
Walk-through metal detectors (WTMDs) are used in a variety of applications, including at large stadiums, as a tool to counter possible security threats. With WTMDs at large stadium venues being the focal point of this research, we examined WTMD effectiveness in detecting real stadium contraband items. Contraband items were loaned to us by a sports stadium, as was the use of current field-used WTMDs. Experiments were performed to identify factors affecting detection of contraband items, including the height and orientation of the contraband object passing through and the speed of the person walking through the metal detector. In total, about 5,900 experiments were performed. In addition, we observed security screening of patrons with WTMDs at a large stadium venue. Data from these field observations, along with conversations with stadium security personnel, helped us validate the types of contraband items actually found.
Optical Character Recognition (OCR) Using Principal Component Analysis (PCA)
Optical Character Recognition (OCR) has received increasing attention and application over the years because of the need to convert text characters in images into machine-readable text. In this project, OCR is performed using Principal Component Analysis (PCA), a statistical/linear-algebra method that uses orthogonal transformations to decompose data with potentially correlated components into a linearly uncorrelated set of principal components. A first test of this approach was performed using MATLAB, focusing on the English capital letters and numerals. An average performance of 23.3% was achieved using 35 sample images (30 for training, 5 for testing) in their natural scenes for each character. Methods for improving this result have been proposed.
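The core PCA step, extracting a leading principal component from centered training vectors and projecting new samples onto it, can be sketched with power iteration. This is an illustrative stand-in for the project's MATLAB implementation, which presumably uses a full eigendecomposition over many components; the 2-D toy data stand in for flattened character images.

```python
import math
import random

def leading_component(data, iters=200, seed=0):
    """Power iteration for the first principal component of `data`,
    a list of equal-length feature vectors (e.g. flattened character
    images). Returns the feature mean and a unit direction vector."""
    n, d = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - mean[j] for j in range(d)] for row in data]
    rng = random.Random(seed)
    v = [rng.random() + 0.1 for _ in range(d)]  # non-degenerate start
    for _ in range(iters):
        # One multiplication by the (unnormalized) covariance X^T X.
        xv = [sum(row[j] * v[j] for j in range(d)) for row in centered]
        w = [sum(centered[i][j] * xv[i] for i in range(n)) for j in range(d)]
        norm = math.sqrt(sum(c * c for c in w)) or 1.0
        v = [c / norm for c in w]
    return mean, v

def project(x, mean, v):
    """Coordinate of sample `x` along the principal component; characters
    can then be classified by nearest projected training example."""
    return sum((xi - mi) * vi for xi, mi, vi in zip(x, mean, v))
```

On data whose variance lies almost entirely along one axis, the recovered direction aligns with that axis, which is exactly the dimensionality reduction PCA-based OCR relies on.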
Risk Information System (IRIS) Viewer
It is very much in every coastal dweller’s financial and survival interest to know the exact elevation of her home and to understand its exposure to flooding. However, accurate structural elevation data are costly to acquire. In addition to the cost of acquisition, structural elevation data, as recorded on elevation certificates, are abstract and difficult to visualize in a manner that might better inform flooding risks. Consequently, flood risk mitigation is difficult to implement consistently at the community scale, as many building structures lack elevation certificates and flooding risks are often downplayed in local debates. Advances in reality-capture technologies such as laser scanning from mobile platforms have made it possible to rapidly collect high-resolution 3D data at the street level. Although these data provide great detail about building structures, visualization and manipulation of large 3D point clouds are still not as efficient as those of 2D images. In this research, we present a lightweight web-based 3D data visualization and exploration platform that provides capabilities to visualize, explore, and interact with city-scale 3D data on a variety of computing devices. We demonstrate the use of this system to extract crucial structural elevation information to support county-scale flood plain management. We will also highlight the capability of the tool in supporting real-time flood risk visualization and real-time evacuation planning.
A Deep Learning Approach with Focus of Attention for Facial Action Unit Detection
Facial Action Unit (AU) detection is an essential process in facial analysis. With a robust AU detector, facial expression and facial action problems can be solved more effectively. AU detection is the process of finding basic facial actions defined by the Facial Action Coding System (FACS). In this work, we propose a deep learning based approach for facial action unit (AU) detection by enhancing and cropping regions of interest of face images. The approach is implemented by adding two groups of novel layers, the enhancing layers and the cropping layers, to a pretrained convolutional neural network (CNN) model. For the enhancing layers (denoted E-Net), we have designed an attention map based on facial landmark features and apply it to a pre-trained neural network to conduct enhanced learning. For the cropping layers (denoted C-Net), we crop facial regions around the detected landmarks and design individual convolutional layers to learn deeper features for each facial region. We then combine the E-Net and the C-Net to construct the Enhancing and Cropping Net (EAC-Net), which can learn both feature-enhancing and region-cropping functions effectively.
The EAC-Net integrates three important elements, i.e., transfer learning, attention coding, and region-of-interest processing, making our AU detection approach more efficient and more robust to facial position and orientation changes. Our approach shows a significant performance improvement over state-of-the-art methods when tested on the popular BP4D and DISFA AU datasets. We have also studied the performance of the proposed EAC-Net under two very challenging conditions: (1) faces with partial occlusion and (2) faces with large head pose variations. Experimental results show that the EAC-Net learns facial AU correlations effectively and predicts AUs reliably even with only half of a face visible, especially the lower half. Furthermore, our EAC-Net model also works well under very large head poses, significantly outperforming a baseline approach. In addition, experiments have shown that the EAC-Net works much better without face alignment than with face alignment as pre-processing, in terms of both computational efficiency and AU detection accuracy.
In addition to the performance improvement in metrics, our approach makes the following technical contributions: (1) We propose an AU detection approach that is more robust to face position and orientation changes. (2) No facial preprocessing such as normalization needs to be applied to the input images, which not only saves preprocessing time but also maintains the original facial expressions. (3) Although face landmarks are used in our EAC-Net, they do not need to be located very accurately, i.e., the approach is robust to landmark detection errors. (4) We have found that the lower half of the face can deliver rich facial action information; by applying our approach, we can even predict the facial actions on the “unseen” upper half from the lower half alone. As future work, we will try to find more responsive areas for the enhancing and cropping nets, as we currently locate the positions manually. In addition, we will explore integrating more temporal information into the EAC-Net framework to address in-the-wild video AU detection.
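The idea of a landmark-driven attention map used to reweight feature maps can be illustrated with a simple Gaussian-bump construction. Note this is an illustrative sketch only: the Gaussian form, the max-pooling across landmarks, and the `sigma` value are assumptions for the example, not the exact attention map defined in the EAC-Net paper.

```python
import math

def attention_map(h, w, landmarks, sigma=2.0):
    """Build an h x w attention map with a Gaussian bump centered on
    each (x, y) landmark; the map would multiply CNN feature maps so
    that regions near landmarks are emphasized during learning."""
    m = [[0.0] * w for _ in range(h)]
    for lx, ly in landmarks:
        for y in range(h):
            for x in range(w):
                d2 = (x - lx) ** 2 + (y - ly) ** 2
                m[y][x] = max(m[y][x], math.exp(-d2 / (2.0 * sigma ** 2)))
    return m
```

The map peaks at 1.0 on each landmark and decays with distance, so features far from any facial landmark contribute little, which is the intuition behind the enhancing layers.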
Garda – Robust Gesture-Based Authentication for Mobile Systems
Touchscreens, the dominant input type for mobile phones, require unique authentication solutions. We have proposed gesture passwords as an alternative ubiquitous authentication technique. Gesture authentication relies on recognition, wherein raw data is collected from user input and recognized by measuring the similarity between gestures with different algorithms.
We have designed, implemented, and evaluated a novel secure, robust, and usable multi-expert recognizer for gesture passwords: Garda. In addition to Garda, we also implemented and analyzed 12 other approaches to building a recognizer, including Dynamic Time Warping, Longest Common Subsequence, Edit Distance on Real Sequence, Support Vector Machines, and Hidden Markov Models. We evaluated the 13 recognizers against three criteria: authentication accuracy and resistance to both brute-force and imitation attacks. Garda achieved the lowest error rate (0.015) in authentication accuracy and the lowest error rate (0.040) under imitation attacks, and resisted all brute-force attacks. Additionally, Garda has a short and stable processing time for authentication (0.15 second) when running on a mobile device. More information and related work can be found at http://securegestures.org. The original work presenting Garda formally appeared in CHI ’17, May 06–11, 2017, Denver, CO, USA, DOI: http://dx.doi.org/10.1145/3025453.3025879
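Dynamic Time Warping, one of the baseline recognizers evaluated above, scores two traces by the cost of their best monotone alignment. A minimal one-dimensional sketch follows; real gesture passwords would be sequences of 2-D touch coordinates, and this simple version omits the normalization and thresholding a recognizer needs.

```python
def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two numeric sequences:
    each element may align with one or more elements of the other
    sequence, and the total absolute difference is minimized."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Because DTW tolerates local stretching, a gesture drawn slightly slower in one segment still matches its enrollment template, which is why it is a natural baseline for gesture authentication.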
The Stadium Simulator is a robust piece of software designed to simulate a large number of scenarios involving patrons moving through a physical security setup. The simulator is designed with queuing theory in mind, focusing on the delays and lines induced by individual components, or groups of components, of any security setup. Components communicate by passing messages, allowing them to request patrons from, and send patrons to, other components. Each component records statistics about its own state, and these statistics can be aggregated to produce statistics about the simulation as a whole. Statistics can be graphed against each other, allowing the user to follow changes in the simulation in real time.
To keep the simulator flexible, separate elements of a security setup are implemented as different components in the simulation, allowing users to program new components easily. Moreover, the simulator accepts a configuration file, which allows the user to fine-tune the simulator’s behavior to match the physical situation being simulated.
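A single screening component of the kind described can be sketched as a queue with random arrivals and service times. The exponential distributions below are an illustrative assumption (the textbook M/M/1 setup), not necessarily the distributions the Stadium Simulator's components use.

```python
import random

def simulate_lane(arrival_rate, service_rate, n_patrons, seed=0):
    """Simulate one screening lane with exponential interarrival and
    service times; returns the mean time patrons wait in line."""
    rng = random.Random(seed)
    t_arrive = 0.0   # arrival time of the current patron
    free_at = 0.0    # time at which the lane next becomes free
    waits = []
    for _ in range(n_patrons):
        t_arrive += rng.expovariate(arrival_rate)
        start = max(t_arrive, free_at)      # wait if the lane is busy
        waits.append(start - t_arrive)
        free_at = start + rng.expovariate(service_rate)
    return sum(waits) / len(waits)
```

With arrivals at 1 patron/min and service at 2 patrons/min, the simulated mean wait should sit near the M/M/1 prediction Wq = λ / (μ(μ − λ)) = 0.5 min, and pushing the arrival rate toward the service rate makes the line grow sharply, which is the behavior venue planners need to see before game day.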
Randomization and Stadium Security
Stadium venue security can be enhanced by adding randomization. Randomization helps to harden venues as well as to confuse adversaries and those who may be conducting counter-surveillance. Randomization can be added to stadium security in a variety of ways. This research will focus on patron security, presenting some early concepts that may be useful to practitioners. The insider threat, including both employees and the media, will also be touched on.
Inference of a Dyadic Measure and its Simplicial Geometry from Binary Feature Data and Application to Data Quality
The CCICADA Data Quality Team performed a data quality analysis study for a customer. The team conducted extensive interviews with domain experts, acquired domain knowledge and analyzed 4 sample data sets. They defined a set of 30 complex data quality constraints and produced statistics summarizing the compliance of each sample data set with the new data quality constraints. The new constraints are providing some of the requirements for a new implementation of a critical operations database.
In this research the statistics about the compliance of the sample data set to the new data quality constraints were used to develop and demonstrate a general method for inferring and visualizing non-parametric multiscale statistical measures for data sets summarized by binary features. The data quality constraints provided an example of binary features. (The statistics were abstract and conveyed no proprietary information about the data or the data quality constraints.) In addition, a method for summarizing the topological dimensions of a binary features data set was developed and demonstrated on the set of data quality statistics. The CHOMP open source software from Rutgers was used in this demonstration. The abstract topological dimensions were sufficient to distinguish the sample data sets, which arose from different operational centers.
The results of the research could potentially be applied to many other types of data sets. The results could potentially be used to automatically summarize, compare, and distinguish data quality results for queries and whole subsets of data sets. They may be helpful in identifying root causes of differences in data quality arising from data in different centers with different operational practices.
Name Variation in Community Question Answering Systems
Community question answering systems are forums where users can ask and answer questions in various categories. Examples are Yahoo! Answers, Quora, and Stack Overflow. A common challenge with such systems is that a significant percentage of asked questions are left unanswered. In this paper, we propose an algorithm to reduce the number of unanswered questions in Yahoo! Answers by reusing the answer to the past resolved question on the site most similar to the unanswered one. Semantically similar questions can be worded differently, making it difficult to find questions with shared needs. For example, “Who is the best player for the Reds?” and “Who is currently the biggest star at Manchester United?” have a shared need but are worded differently; both “Reds” and “Manchester United” refer to the Manchester United football club. In this research, we focus on question categories that contain a large number of named entities and entity name variations. We show that in these categories, entity linking can be used to identify past resolved questions that share a need with a given question: named entities are disambiguated, and questions are matched based on the disambiguated entities and knowledge base information related to them. We evaluated our algorithm on a new dataset constructed from Yahoo! Answers. The dataset contains annotated question pairs, (Qgiven, [Qpast, Answer]). We carried out experiments on several question categories and show that an entity-based approach gives good performance when searching for similar questions in entity-rich categories.
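A minimal version of matching questions by their disambiguated entities is a set-overlap score over linked entity identifiers. This is an illustrative sketch, not the paper's actual ranking function, and the entity IDs in the example are hypothetical placeholders rather than identifiers from the dataset.

```python
def entity_similarity(entities_a, entities_b):
    """Jaccard overlap between the sets of knowledge-base entity IDs
    linked in two questions; 1.0 means the questions mention exactly
    the same entities, 0.0 means no entity in common."""
    a, b = set(entities_a), set(entities_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)
```

Because “the Reds” and “Manchester United” link to the same knowledge-base entity, the two example questions above would share an entity ID and score above zero despite their different wording, which is what lets entity linking surface past resolved questions that lexical matching misses.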
MetricVis: A Visualization Tool to Identify Officer Experience
Law enforcement agencies often lack an effective method to quickly identify the types of cases officers encounter across the entire police force. In many police departments, case files in the Record Management System (RMS) cannot be explored interactively and visually. Through collaboration with a partner law enforcement agency, we have developed a visual analytics system that allows end-users to explore the category and frequency of cases encountered by officers in the department over different temporal periods. MetricVis allows the selection, filtering, ranking, and correlation of case data, which can identify activity patterns for individuals or groups of officers. Initial feedback from our partner agency is that the visual representations provided by MetricVis can lead to improved training, decision-making, and resource allocation by commanding officers.
Interactive Graphical Illustrations of Processed Data
Data visualization, displaying data patterns, trends, and correlations, is one of the most effective tools for a politician running a campaign, a scientist communicating findings, a stock broker making quick decisions, or a news commentator exploring real-time reactions to current events on Twitter. Multi-component visual exploration conveys ideas and tells the stories behind collected data; it is a great tool for making data-driven decisions by communicating information clearly and effectively through graphical means.
In our talk we will present several interactive graphical illustrations emphasizing the role that mathematics plays in the preprocessing of the raw data.
Mitigating the Perception of Bias in Security Protocol and Enforcement
The National Center for Spectator Sports Safety and Security asserts that, on a global scale, over $2 billion is spent on “sports security efforts” each year (Hall et al., 2012). Integral to the success of these security measures is the extent to which customers are willing to participate in security protocols and procedures. This willingness is often directly related to how invasive these procedures are to the actual spectator experience, and the extent to which security efforts give the appearance (and reassurance) of being purposeful. It is important, then, for organizations that seek to ensure the physical well-being of their customers to also consider their emotional well-being, and to eliminate unfair treatment and biased practices. Security processes should appear as impartial as possible.
This research takes an interdisciplinary approach to analyzing factors that cause patrons to perceive bias during enforcement of security protocols. It explores the mechanisms that amplify perceptions of bias, and offers some recommendations for mitigating the appearance of partiality in security protocols and practices.
Randomization Incentives and Satisfaction of the Patrons
Terrorist activities around the globe, in venues ranging from large-scale sites such as airports and stadiums to smaller ones such as clubs and restaurants, have been on the rise. These unfortunate and tragic incidents have necessitated stricter security measures, more complex procedures, and commensurate staff training. For example, the Department of Homeland Security has allocated $3.7B of its budget to screening operations in order to maintain aviation security and allocate screening resources more effectively based on risk. The objective is to enhance safety and minimize risk by utilizing state-of-the-art technology and methods. The increasing terror threat has affected not only public venues such as airports but also large professional organizations such as NFL and NBA teams. These organizations have paid more attention to safety measures by working toward certification or designation under the Support Anti-Terrorism by Fostering Effective Technologies Act (SAFETY Act) in order to protect their arenas more comprehensively.
The increasing number of tragic incidents has arguably made consumers and patrons more sensitive to the notion of security. At the extreme, such concerns may deter attendance at high-profile events and negatively impact overall attendance. Yet, as much as patrons like to be secure, they do not equally appreciate the security processes they must undergo when checking into venues. These venues sell more than a ticket; they sell unique, emotional experiences, and strict security measures such as bag checks, electronic screening, pat-downs, and long lines can detract from that experience. Furthermore, patron perception of randomized selection and security processes can be mixed, owing to profiling concerns or lack of staff training. Hence, there are concrete customer-satisfaction and, ultimately, revenue consequences for venue owners. Yet a review of the literature shows that none of the existing scales for measuring customer satisfaction directly accounts for the perception of security or the security measures employed by service providers. Further complicating the issue, it is easy to quantify the costs of improvements in security but not their benefits for venue owners.
With this research project, we apply the principles of entrepreneurial marketing to an area where they have not previously been applied but are badly needed. Building upon a review of the literature and transcripts of over a dozen interviews (conducted by the CCICADA project team on Economics and Randomization of Security Processes) with practitioners from professional organizations that deal with large-scale security issues, we identify and develop a framework for reframing the perception of security processes at large venues through the joint application of randomization and marketing incentives. In particular, randomized security processes can be utilized in addition to, or perhaps as a substitute for, existing security processes. Marketing incentives can then be applied to reframe how such random screening is perceived, shifting sentiment from a negative one associated with loss of time, hassle, and privacy concerns to a positive one associated with luck, lotteries, winning, and economic savings.
We develop a model and demonstrate that, under certain conditions, reframing randomization to provide clear patron benefits while aiding security processes can not only reduce average queue/waiting time but also increase patron satisfaction and the net profits of venue owners. These secondary security processes are also expected to improve security by reducing the pressure to screen very rapidly. We conduct a sensitivity analysis to show that the benefits are robust to changes in several key parameters. The potential economic impact is estimated for venues of varying size. Consistent with effectuation theory, the randomized incentives primarily utilize an organization's existing means: they can be integrated into existing operations, do not require significant resources, and can be cash-flow positive within weeks.
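To illustrate why randomly diverting some patrons to a secondary lane can reduce average waiting time, consider a toy M/M/1 queueing sketch. This is not the authors' model, and the arrival and service rates below are hypothetical; it only shows the qualitative effect.

```python
def mm1_wait(lam, mu):
    """Mean time in system for an M/M/1 queue (requires lam < mu for stability)."""
    assert lam < mu, "queue must be stable"
    return 1.0 / (mu - lam)

def avg_wait_with_split(lam, mu_main, mu_side, p):
    """Randomly divert a fraction p of arrivals to a secondary screening lane;
    the remaining (1 - p) stay in the standard lane."""
    w_main = mm1_wait((1 - p) * lam, mu_main)
    w_side = mm1_wait(p * lam, mu_side)
    return (1 - p) * w_main + p * w_side

lam = 0.9        # arrivals per minute (hypothetical)
mu_main = 1.0    # standard-lane service rate
mu_side = 0.5    # slower, more thorough secondary lane

baseline = mm1_wait(lam, mu_main)                      # everyone in one lane
split = avg_wait_with_split(lam, mu_main, mu_side, 0.2)
print(baseline, split)
```

Even though the secondary lane is slower per patron, unloading the heavily utilized main lane lowers the average wait, which is one mechanism behind the queue-time reduction claimed above.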
Fidelity of Counter-terrorism Best Practices: the Importance of Adhering to Theoretical Foundations in the Absence of Evidence
The concept of ‘evidence-based practices’ requires that the employed initiatives or strategies have been found, through research, to be effective in meeting their outcome goals. This approach is consistently and broadly used to evaluate efforts in the general law enforcement and criminal justice field. In the homeland security arena, it is very difficult to evaluate counter-terrorism strategies in this way; the frequency of events is (thankfully) extremely small, and experimental or even quasi-experimental designs are at best challenging and more likely infeasible. It is suggested, however, that two characteristics of a strategy can be reviewed to provide some evaluation of good or best practices: a) adherence to a theoretical foundation and b) fidelity of implementation.
This poster presentation will define the key terms and characteristic components. The counter-terrorism strategies of deterrence and randomized deterrence will be evaluated in the context of theory and fidelity as case examples. Limitations and future research ideas will also be presented.
An Efficient Acceleration of Solving Heat and Mass Transfer Equations with Boundary Conditions of the Third Kind in a Solid Cylinder Using Programmable Graphics Hardware
With recent developments in computing technology, increased effort has gone into the simulation of scientific methods and phenomena in engineering fields. One such case is the simulation of heat and mass transfer equations, which is becoming increasingly important for analyzing various scenarios in engineering applications. Analyzing heat and mass transfer phenomena in a thermal environment requires simulation; however, the numerical solution of heat and mass transfer equations is very time-consuming.
This research work applies one of the acceleration techniques developed in the graphics community, which exploits the graphics processing unit (GPU), to the numerical solution of heat and mass transfer equations. The NVIDIA Compute Unified Device Architecture (CUDA) programming model provides a convenient way to apply parallel computing on the GPU. This work shows a substantial performance improvement when numerically solving the heat and mass transfer equations for a solid capillary porous cylinder with boundary conditions of the third kind on the GPU. The simulation is implemented using the CUDA platform on an NVIDIA Quadro FX 4800 graphics card. Our experimental results show a dramatic performance improvement when the GPU performs the heat and mass transfer simulation, with a maximum observed speedup of more than 7x. The GPU is therefore a good approach to accelerating heat and mass transfer simulation.
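The reason this problem maps well onto a GPU is that each grid node's explicit finite-difference update is independent of the others within a time step, so each node can be assigned to one CUDA thread. The serial Python sketch below illustrates the per-node stencil for 1D radial heat conduction in a solid cylinder with a third-kind (Robin) boundary condition; the physical parameters are hypothetical, and the actual work uses CUDA rather than Python.

```python
# Hypothetical parameters for radial heat conduction in a solid cylinder
alpha, k, h = 0.1, 1.0, 2.0    # diffusivity, conductivity, surface transfer coeff.
T_env = 20.0                   # ambient temperature for the third-kind BC
N, R = 50, 1.0                 # radial grid nodes, cylinder radius
dr = R / (N - 1)
dt = 0.2 * dr * dr / alpha     # step size chosen for explicit-scheme stability

def step(T):
    """One explicit finite-difference step of dT/dt = alpha*(T'' + T'/r).
    Each node update reads only its neighbors, so on a GPU every node
    becomes one thread."""
    new = T[:]
    for i in range(1, N - 1):
        r = i * dr
        d2 = (T[i + 1] - 2 * T[i] + T[i - 1]) / dr**2
        d1 = (T[i + 1] - T[i - 1]) / (2 * dr * r)
        new[i] = T[i] + alpha * dt * (d2 + d1)
    # Symmetry at the axis r = 0 (T' = 0, so the equation reduces to 2*alpha*T'')
    new[0] = T[0] + alpha * dt * 4 * (T[1] - T[0]) / dr**2
    # Third-kind (Robin) boundary at r = R: -k * dT/dr = h * (T - T_env)
    new[N - 1] = (T[N - 2] + (h * dr / k) * T_env) / (1 + h * dr / k)
    return new

T = [100.0] * N     # uniform initial temperature
for _ in range(2000):
    T = step(T)
```

The CUDA version replaces the interior loop with a kernel launched over all nodes, double-buffering `T` and `new` between time steps.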