IEEE Conference on Technologies for Homeland Security (HST ’12)


Recently I attended the twelfth annual IEEE Conference on Technologies for Homeland Security (HST ’12), held right here in our neck of the woods, Waltham, Massachusetts. The conference aims to bring together innovators from leading universities, research laboratories, Homeland Security Centers of Excellence, small businesses, system integrators, and the end user community to provide a forum to discuss ideas, concepts, and experimental results. I gave a poster presentation on our Semantic Technologies for Civil Information Management in Complex Emergencies within the Attack and Disaster Preparation, Recovery, and Response area, as well as a paper presentation on our development of A Social Agent Dynamic Honeynet for Attack Modeling within the Cyber Security track. Both presentations generated lively debates and discussions on the challenges of applying technology solutions to these problem spaces.

With regard to our social agent honeynet research, we presented initial findings from an effort to develop an agent-based dynamic honeynet that simulates user interactions with social networks for the purposes of developing attack models. You can check out our demo here. Our solution allows security professionals to create networks simulating user activity for companies and government entities by providing a set of parameters. Our research pointed to the importance of instantiating a social dimension in our virtual agents, giving each agent the ability to interact with a variety of social networks. For this purpose, we developed influence models that learn patterns from actual users’ activity on social networks to improve the effectiveness of the social agents.
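To make the influence-model idea concrete, here is a minimal sketch, in Python, of one way an agent could mimic observed activity patterns: learn an hourly posting profile from real users’ timestamps and sample the agent’s next posting hour from it. The function names and the fabricated observation times are illustrative assumptions, not our actual implementation.

```python
import random
from collections import Counter
from datetime import datetime

# Hypothetical sketch: learn an hourly activity profile from observed user
# timestamps, then sample posting hours for a social agent from that profile.

def learn_hourly_profile(timestamps):
    """Count how often the observed users post in each hour of the day."""
    counts = Counter(ts.hour for ts in timestamps)
    total = sum(counts.values())
    return {hour: counts.get(hour, 0) / total for hour in range(24)}

def sample_posting_hour(profile):
    """Draw an hour for the agent's next post, weighted by the learned profile."""
    hours, weights = zip(*profile.items())
    return random.choices(hours, weights=weights, k=1)[0]

# Example usage with fabricated observation times
observed = [datetime(2012, 5, 1, h) for h in (8, 9, 9, 12, 18, 18, 21)]
profile = learn_hourly_profile(observed)
print(sample_posting_hour(profile))
```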

One of the questions from the audience was why use agents to collect attack data when regular users, in the course of interacting with social networks, get attacked enough as it is? Our response was that a deception network enables us to feed false information to the adversary as needed, track adversarial movements to learn attack patterns and attributes, and use the information collected during the attempted infiltration to build more robust defenses and develop more targeted offensive operations. Additionally, deception networks force our adversaries to expend resources attacking our fake network. Another line of questioning asked whether we were wasting the time of people who decided to follow our fake agents, since about 50% of our agents’ followers were real and 50% were found to be malicious. This generated a lively debate, in which someone else in the audience suggested that identifying these people might be useful for preventative defense: perhaps they are more vulnerable users who would be more likely to click on spam, which Twitter and others might want to know. A further question was how we know that the users following our agents are malicious. This is fairly straightforward, because those users attempted to pass us links associated with known bad actors. As a future effort we plan to automatically parse the tweets and check whether the embedded links are already on a blacklist, which would trigger alerts. We also maintain what we believe to be the world’s largest intelligence database on botnets to cross-reference our malicious entities. You can check out that project here.
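A minimal sketch of that planned alerting step might look like the following, assuming tweets arrive as plain text and the blacklist is a set of known-bad domains; the domain names and alert handling below are placeholders, not our production pipeline.

```python
import re

# Hypothetical sketch: extract links embedded in a tweet and flag any that
# appear on a blacklist of known-bad domains.

URL_PATTERN = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)
BLACKLISTED_DOMAINS = {"malware-example.com", "phish-example.net"}  # placeholder entries

def check_tweet(text):
    """Return the blacklisted domains found in a tweet's embedded links."""
    domains = {m.group(1).lower() for m in URL_PATTERN.finditer(text)}
    return domains & BLACKLISTED_DOMAINS

hits = check_tweet("great deal here http://malware-example.com/free")
if hits:
    print("ALERT: blacklisted link(s) detected:", ", ".join(sorted(hits)))
```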

There were several ideas that came out of the collaboration at this conference related to our agents. One idea was to use our agents to collect and harvest social media artifacts for the purpose of understanding Arab Spring-like events. Additionally, our agents could potentially interact with users to explore the shaping of opinion, engaging with them beyond just posting information to Twitter and following other users. We will definitely be exploring these avenues in the near future, so keep your eyes peeled for developments in this space.

One of the most interesting presentations I attended was from Laurin Buchanan of Secure Decisions, who was involved in the CAMUS project, Mapping Cyber Assets to Missions and Users. This project was very relevant to our Commander’s Learning Agent (CLEARN) and Cyber Incident Mission Impact Assessment (CIMIA) work, an existing capability developed as part of an AFRL SBIR Phase II Enhancement that automatically learns the commander’s mission, brings in contextual knowledge, and assigns priorities to the resources supporting that mission in Air Operations planning and execution support. CLEARN/CIMIA monitors the workflow of operations personnel using the Joint Operation Planning and Execution System (JOPES), the Air Mobility Command (AMC) Global Decision Support System (GDSS), the Consolidated Air Mobility Planning System (CAMPS), and the Global Air Transportation Execution System (GATES) to learn the resources necessary for each mission and to recommend workarounds when one or more of those resources become unavailable.
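To illustrate the workaround idea, here is a minimal sketch, assuming a learned mapping from missions to the resources they depend on and a table of acceptable substitutes; the mission identifier, resource names, and substitution table are hypothetical, not the fielded CLEARN/CIMIA logic.

```python
# Hypothetical sketch: given the resources a mission is learned to depend on
# and a set of acceptable substitutes, recommend alternatives when a resource
# becomes unavailable.

MISSION_RESOURCES = {
    "airlift-042": ["GDSS", "CAMPS", "GATES"],
}

SUBSTITUTES = {
    "GDSS": ["GDSS backup node"],
    "GATES": ["manual cargo manifesting"],
}

def recommend_workarounds(mission, unavailable):
    """Suggest substitutes for any unavailable resources the mission depends on."""
    affected = [r for r in MISSION_RESOURCES[mission] if r in unavailable]
    return {r: SUBSTITUTES.get(r, []) for r in affected}

print(recommend_workarounds("airlift-042", {"GDSS"}))
```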

Our semantic wiki work also generated interest during the poster session. One presentation that was interesting and tangentially related was SPAN (Smart Phone Ad Hoc Networks) by MITRE, which utilizes mobile ad hoc network technology to provide a resilient backup framework for communication when all other infrastructure is unavailable. I thought it was pretty neat that this was also an open source project. This research was interesting given our own work using mobile devices for data collection in austere environments during operations and exercises in the PACOM AOR under our MARCIMS (Marine Corps Civil Information Management System) project. Pretty cool to see all of the developments in this area.

Mobile App for Risk Based Route Planning

Mobile devices such as the iPod Touch and iPhone have helped turn the “every soldier a sensor” vision into reality. Inspired by the rapid-transition success of TIGR, we built an Android app, RouteRisk, for risk-based route planning to investigate the design issues involved in supporting server infrastructure, Web services, and soldier-sourced tactical data input requirements. http://www.youtube.com/watch?v=Xz9U1wc7UYM

Current path planning systems such as the US Army’s Battlespace Terrain Reasoning and Awareness – Battle Command (BTRA-BC) involve time-intensive terrain analysis computations and require an expert user with GIS experience and knowledge of terrain analysis. These systems do not provide an easy-to-use, web-accessible interface for the boots on the ground. As a planning and re-planning system, RouteRisk calculates risk and recommends routes based on soldier-sourced data provided through tactical intelligence and route planning systems like TIGR (Tactical Ground Reporting), DCGS-A (Distributed Common Ground System – Army), and BFT (Blue Force Tracker). And when new intelligence is discovered, like a previously unreported poppy field spotted by a soldier on patrol or an S2, that intelligence gets pushed out to all units, because the servers and smartphones are connected through the cloud.
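To show what risk-aware route recommendation can look like in code, here is a minimal sketch that runs Dijkstra’s algorithm over a road graph whose edge costs blend travel time with a risk score, so a higher risk weight trades speed for lower exposure. The graph, node names, and blend factor are illustrative assumptions, not RouteRisk’s actual routing service.

```python
import heapq

# Hypothetical sketch: Dijkstra over a road graph with edge costs that blend
# travel time and risk, so "safest" routes trade speed for lower exposure.

def plan_route(graph, start, goal, risk_weight=0.5):
    """graph[node] -> list of (neighbor, travel_time, risk); returns lowest-cost path."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, travel_time, risk in graph.get(node, []):
            edge_cost = (1 - risk_weight) * travel_time + risk_weight * risk
            heapq.heappush(frontier, (cost + edge_cost, neighbor, path + [neighbor]))
    return None

roads = {
    "FOB": [("checkpoint", 10, 2), ("overpass", 4, 9)],
    "checkpoint": [("market", 6, 1)],
    "overpass": [("market", 3, 8)],
    "market": [],
}
print(plan_route(roads, "FOB", "market", risk_weight=0.7))
```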

RouteRisk leverages our Risk Based Route Planning web service solution developed in earlier projects. Risk-based Route Planning is a Google Maps web service application that allows the user to plan safe routes in Baghdad, Iraq by avoiding known hotspots and predicted hotspots learned from patterns of past incidents. The web service application generates a risk surface from the incident reports using a Bayesian spatial similarity approach. Our Bayesian model learns the causal relationship between attack characteristics (such as attack type, the intended target, emplacement method, explosive device characteristics, etc.) and spatial attributes (distance to proximal features such as overpasses, government facilities, police checkpoints, etc.). For a given region, we use spatial attributes (distance to nearest overpass, major religion, within 300m of district border, neighborhood) as evidence in the model and perform inference to estimate the risk of an incident.
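As a toy illustration of the evidence-based inference step, the sketch below does a naive Bayes update over a binary incident variable given discrete spatial attributes for a single grid cell. The priors and likelihoods are made-up placeholders; the actual model is a richer Bayesian network learned from historical incident reports.

```python
# Hypothetical sketch: naive Bayes posterior over a binary "incident" variable
# given discrete spatial attributes for one grid cell.

PRIOR = {"incident": 0.1, "no_incident": 0.9}

# P(attribute present | class), per spatial attribute
LIKELIHOODS = {
    "near_overpass": {"incident": 0.6, "no_incident": 0.2},
    "near_checkpoint": {"incident": 0.4, "no_incident": 0.3},
    "within_300m_of_district_border": {"incident": 0.5, "no_incident": 0.25},
}

def incident_probability(evidence):
    """Posterior P(incident | observed spatial attributes); evidence maps attr -> bool."""
    scores = {}
    for cls, prior in PRIOR.items():
        score = prior
        for attr, present in evidence.items():
            p = LIKELIHOODS[attr][cls]
            score *= p if present else (1 - p)
        scores[cls] = score
    total = sum(scores.values())
    return scores["incident"] / total

print(incident_probability({"near_overpass": True,
                            "near_checkpoint": False,
                            "within_300m_of_district_border": True}))
```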

By selecting the “Route” tab on the main navigation, the user can easily create a new route plan. The map is launched and the user is instructed to tap points on the map to define waypoints for the route (starting, intermediate, and ending locations). Optionally, the user can also bookmark locations or search for locations by placename (e.g. “Camp Helmand” or “Paktika District”) or grid reference. By pressing and holding on a waypoint, the user can drag it or choose among several actions to perform, such as “move waypoint” or “define time window”. Once a pair of waypoints is defined or a new one is added, a route plan is automatically computed and shown using the current routing preferences and selected factors. The user can change the routing preferences by clicking a button that animates the corner of the map to curl up and reveal the routing preferences. The user can select preferences such as “fastest route”, “shortest distance”, or “safest route”.

We are currently researching the software architecture design alternatives for adding voice control capabilities to our RouteRisk app.

Commander's Learning Agent demo

Milcord demonstrates [[Commander's Learning Agent]]. The demonstration shows the capability of software agents to automatically capture the commander’s current mission, augment it with contextual knowledge, and assign priorities to resources supporting the commander’s mission.

Milcord at MobiSensors'07

Milcord presented a position paper titled "A Commercial Perspective: Collaborating on Application Prototypes as an Infrastructure Provider" at the NSF Workshop on Data Management for Mobile Sensor Networks (MobiSensors).

Sensor data management and fusion is a technical component in a number of our projects across a range of applications and technologies, including:

* Monitoring [[SPE|Earth Science]] Data – NASA
* [[GEMI|Intelligent Video Surveillance]] – Army
* Enemy [[Course of Action Forecasting|Course of Action]] Analysis – Army
* Quality of Service in Tactical Networks – Air Force
* [[Botnet Defense|Botnet Detection]] and Mitigation – DHS