IEEE Conference on Technologies for Homeland Security (HST ’12)


Recently I attended the twelfth annual IEEE Conference on Technologies for Homeland Security (HST ’12), held right here in our neck of the woods: Waltham, Massachusetts. The conference brings together innovators from leading universities, research laboratories, Homeland Security Centers of Excellence, small businesses, system integrators, and the end-user community to provide a forum for discussing ideas, concepts, and experimental results. I gave a poster presentation on our Semantic Technologies for Civil Information Management in Complex Emergencies work within the Attack and Disaster Preparation, Recovery, and Response area, and a paper presentation on our development of A Social Agent Dynamic Honeynet for Attack Modeling within the Cyber Security track. Both presentations generated lively debates and discussions on the challenges of applying technology solutions to these problem spaces.

In our social agent honeynet research, we presented initial findings from an effort to develop an agent-based dynamic honeynet that simulates user interactions with social networks for the purpose of developing attack models. You can check out our demo here. Our solution allows security professionals to create networks simulating user activity for companies and government entities by specifying a set of parameters. Our research pointed to the importance of giving our virtual agents a social dimension: the ability to interact with a variety of social networks. To that end, we developed influence models that learn patterns from actual users’ activity on social networks to improve the effectiveness of the social agents.
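To make the influence-model idea concrete, here is a minimal sketch of one way an agent could mimic observed activity. It assumes posting hour-of-day is the learned feature; the function names and the simple frequency model are illustrative, not our actual implementation, which is richer.

```python
import random
from collections import Counter

def learn_hourly_profile(post_hours):
    """Learn a simple hourly activity distribution (hour -> probability)
    from the hours-of-day at which real users were observed posting."""
    counts = Counter(post_hours)
    total = sum(counts.values())
    return {hour: counts[hour] / total for hour in range(24)}

def sample_post_hours(profile, n, rng=None):
    """Sample n posting hours for a social agent from the learned profile,
    so the synthetic activity mimics the observed diurnal pattern."""
    rng = rng or random.Random()
    hours = list(profile)
    weights = [profile[h] for h in hours]
    return rng.choices(hours, weights=weights, k=n)
```

An agent driven this way posts when real users in its cohort tend to post, which makes it harder to distinguish from genuine accounts than one posting on a fixed schedule.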

One question from the audience was why use agents to collect attack data when regular users, in the course of interacting with social networks, get attacked enough as it is? Our response was that a deception network enables us to feed false information to the adversary as needed, track adversarial movements to learn attack patterns and attributes, and use the information collected during the attempted infiltration to build more robust defenses and develop more targeted offensive operations. Additionally, deception networks force our adversaries to expend resources attacking our fake network. Another line of questioning asked whether we were wasting the time of the people who decided to follow our fake agents, since about 50% of our agents’ followers were real and 50% were found to be malicious. This generated a lively debate, and someone else in the audience suggested that identifying these people might be useful for preventative defense: maybe they are more vulnerable users who would be more likely to click on spam, and perhaps Twitter or others would want to know this. A further question asked how we know that the users following our agents are malicious. This is fairly straightforward, because those users attempted to pass us links associated with known bad actors. As a future effort we plan to automatically parse the tweets and check whether the embedded links are already on a blacklist, which would trigger alerts. We also maintain what we believe to be the world’s largest intelligence database on botnets, which we use to cross-reference malicious entities. You can check out that project here.
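The planned tweet-parsing step could look something like the sketch below: extract the hosts from embedded links and flag any that appear on a blacklist. The blacklist contents and function names here are hypothetical placeholders; in practice the list would be fed from a threat-intelligence database.

```python
import re

# Hypothetical blacklist; in practice this would come from a
# threat-intelligence database of domains tied to known bad actors.
BLACKLIST = {"evil.example.com", "malware.example.net"}

# Capture the host portion of any http(s) link embedded in a tweet.
URL_RE = re.compile(r"https?://([^/\s]+)")

def extract_domains(tweet_text):
    """Pull the host out of every embedded link in a tweet."""
    return [m.group(1).lower() for m in URL_RE.finditer(tweet_text)]

def check_tweet(tweet_text, blacklist=BLACKLIST):
    """Return the blacklisted domains found in a tweet; a non-empty
    result would trigger an alert."""
    return [d for d in extract_domains(tweet_text) if d in blacklist]
```

A matching domain both raises an alert and marks the follower who passed the link as a candidate malicious entity for cross-referencing.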

There were several ideas that came out of the collaboration at this conference related to our agents. One idea was to use our agents to collect and harvest social media artifacts for the purpose of understanding Arab Spring-like events. Additionally, our agents could potentially interact with users to explore the shaping of opinion, collaborating with users beyond just posting information to Twitter and following other users. We will definitely be exploring these avenues in the near future, so keep your eyes peeled for developments in this space.

One of the most interesting presentations I attended was from Laurin Buchanan of Secure Decisions, who was involved in the CAMUS project (Mapping Cyber Assets to Missions and Users). This project was very relevant to our Commander’s Learning Agent (CLEARN) and Cyber Incident Mission Impact Assessment (CIMIA) work, an existing capability developed as part of an AFRL SBIR Phase II Enhancement that automatically learns the commander’s mission, brings in contextual knowledge, and assigns priorities to resources supporting the commander’s mission in Air Operations planning and execution support. CLEARN/CIMIA monitors the workflow of operations personnel using the Joint Operation Planning and Execution System (JOPES), the Air Mobility Command (AMC) Global Decision Support System (GDSS), the Consolidated Air Mobility Planning System (CAMPS), and the Global Air Transportation Execution System (GATES) to learn the resources necessary for each mission and recommend workarounds when one or more of those resources become unavailable.
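The core of the mission-to-asset mapping can be sketched as a small dependency structure: missions depend on typed resources, and an outage triggers a lookup for substitutes of the same type. The mission names, resource names, and substitution table below are invented for illustration; the real systems learn these mappings from operator workflow rather than hard-coding them.

```python
# Hypothetical mission -> [(resource, resource_type)] dependencies,
# of the kind CLEARN would learn by observing operator workflow.
MISSION_RESOURCES = {
    "airlift-041": [("gdss-server-1", "scheduling"), ("gates-node-a", "cargo")],
    "tanker-007": [("gdss-server-1", "scheduling")],
}

# Hypothetical substitutes available for each resource type.
SUBSTITUTES = {"scheduling": ["gdss-server-2"], "cargo": ["gates-node-b"]}

def impacted_missions(resource):
    """List every mission that depends on the failed resource."""
    return [m for m, deps in MISSION_RESOURCES.items()
            if any(r == resource for r, _ in deps)]

def recommend_workarounds(resource):
    """For each impacted mission, suggest a substitute resource of the
    same type, or None when no substitute exists."""
    out = {}
    for mission in impacted_missions(resource):
        for r, rtype in MISSION_RESOURCES[mission]:
            if r == resource:
                subs = SUBSTITUTES.get(rtype, [])
                out[mission] = subs[0] if subs else None
    return out
```

Losing one shared server then immediately surfaces every affected mission along with a candidate workaround, which is the essence of the CIMIA-style impact assessment.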

Our semantic wiki work also generated interest during the poster session. One interesting and tangentially related presentation was SPAN (Smart Phone Ad Hoc Networks) by MITRE, which utilizes mobile ad hoc networking to provide a resilient backup framework for communication when all other infrastructure is unavailable. I thought it was pretty neat that this was also an open source project. This research was interesting given our work in using mobile devices for data collection in austere environments during operations and exercises in the PACOM AOR in our MARCIMS (Marine Corps Civil Information Management System) project. Pretty cool to see all of the developments in this area.

Notes from CSIIRW-09

We attended and presented at the Cyber Security and Information Intelligence Research Workshop, April 13-15, 2009 at Oak Ridge National Labs (ORNL) (www.ioc.ornl.gov/csiirw/). The audience numbered about 150 attendees, with academia and government representing the biggest segments, plus a few representatives from systems integrators and technology providers. In his keynote, Dr. Doug Maughan from DHS reviewed and assessed federal cyber initiatives from 2003 to the present. While noting that the amount of activity around cyber security is encouraging, Doug challenged the cyber security research community to be “bolder and riskier in their thinking”, to do a better job of capitalizing on the increased interest, and to come together on an agreement for a “National Cyber Security R&D Agenda”. In other featured presentations, Dr. Nabil Adam from DHS and Rutgers University introduced issues and programs at the intersection of Cyber and Physical Systems Security, highlighting SCADA and Smart Grid systems. In his “Are we on the Right Road” presentation, George Hull from Northrop Grumman confronted basic challenges: with 5.4 million unique malware samples discovered in 2007, and companies like Symantec now doing up to 300 updates per day, signature-based systems don’t and can’t work. And as systems become ever more complex, the complexity works against security and reliability. Hull suggested that cyber security is not about the endpoints or the network; rather, the real focus needs to be defending the information. Dr. Robert Stratton from Symantec presented findings from Symantec’s Internet Security Threat Report (April 2009). Of particular interest to Milcord was the finding that “Symantec observed an average of 75,158 active bot-infected computers per day in 2008, an increase of 31 percent from the previous period.”

The panel discussions surfaced some points for pondering, including the observation that, as venture capitalists seem to be moving away from cyber security as an investment area, the government needs to fill the void in R&D funding. Some questioned the effectiveness of government cyber R&D programs like NSF’s, going so far as to call them “welfare for scientists”: disconnected from real-world needs and unlikely to produce innovation that results in deployable systems.

Milcord presented findings from its DHS-sponsored botnet research on the Behavioral Analysis of Fast Flux Service Networks. Specifically, we discussed behavioral patterns of domains, name servers, and bots that our Fast Flux Monitor revealed about the short-term, long-term, organizational, and operational behavior of botnets that use fast flux service networks. www.csiir.ornl.gov/csiirw/09/CSIIRW09-Proceedings/Slides/Caglayan-Slides.pdf
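One short-term behavioral signal behind this kind of analysis is that fast flux domains rotate through many IPs with very short DNS TTLs, so repeated lookups of one domain accumulate distinct addresses quickly. The sketch below is a simplified illustration of that heuristic, not our monitor’s actual feature set or thresholds; the function names and cutoffs are assumptions.

```python
def flux_indicators(observations):
    """observations: list of (ip_set, ttl_seconds) pairs, one per DNS
    lookup of the same domain over time. Returns simple behavioral
    features a classifier could use."""
    all_ips = set()
    for ips, _ in observations:
        all_ips.update(ips)
    min_ttl = min(ttl for _, ttl in observations)
    return {"unique_ips": len(all_ips), "min_ttl": min_ttl}

def looks_fast_flux(observations, ip_threshold=10, ttl_threshold=300):
    """Crude rule of thumb: many distinct IPs combined with very low
    TTLs suggests a fast flux service network."""
    feats = flux_indicators(observations)
    return feats["unique_ips"] >= ip_threshold and feats["min_ttl"] <= ttl_threshold
```

A production system would combine signals like these with name-server behavior and ASN diversity rather than relying on any single cutoff.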

Reflections on CATCH

I attended the Cybersecurity Applications and Technology Conference for Homeland Security on March 3-4, 2009 in Washington, DC. I had to leave on Sunday to escape the snowstorm, but it was well worth the effort. The keynote speech, American Crisis in Innovation by Pascal Levensohn, was the most thought-provoking presentation. (See related BusinessWeek blog.) Pascal articulated the broken ecosystem of innovation in the USA, and argued forcefully for promoting effective innovation partnerships between government and university research organizations, corporations, and entrepreneurs. Pascal quoted several statistics from Judy Estrin's book Closing the Innovation Gap. Estrin argues, with data to back it up, that America has relied too much on incremental innovation in recent years at the expense of the open-ended scientific research that eventually leads to truly breakthrough innovation. How true! NRL funded the development of GPS in the 1970s, when no one could foresee the applications it spawned today. How many American organizations are investing today in the GPSs of the future? More importantly, how many decision makers are heeding Levensohn's alarm?

Another interesting session was the panel discussion on the second day. I was particularly impressed with the comments of DHS Cybersecurity Chief Rod Beckstrom, who called for the adoption of Web 2.0 platforms within the government and the development of a generalized model for sensorizing the Internet. I was sad to read that Rod Beckstrom resigned today. It's a great loss for DHS.

Our presentation on Real-time Detection of Fast Flux Service Networks was well received. It generated lots of questions, and considerable interest in our Fast Flux Monitor demo at the expo. Tina Williams of Unisys asked one of the more interesting questions: From the tens of thousands of IPs in your DB, which user segments (ISP, edu, enterprise...) have this problem? Is the solution policy or technology? There is no question that ISPs and universities in the USA are the most seriously afflicted by the fast flux problem. The enterprise has a botnet problem with its mobile workforce. The government has started doing a better job of protecting its machines from being recruited into zombies. The solution is both technology and policy: you can't be aware of the problem without the technology, but you still need to train your personnel for effective remedies.

One final note: congratulations to Dr. Doug Maughan, who runs cybersecurity R&D at DHS using a collaborative model. We at Milcord have participated in this program for the last three years, and open collaboration has improved our botnet defense solution through the suggestions of our colleagues in the program. Collaborative research programs in information technology are rare within the government; I wish more Program Managers adopted such a philosophy.

Milcord presents FastFlux Botnet Intelligence service at CATCH Conference

WALTHAM, MA – Milcord LLC presented findings from, and announced the launch of, a Cyber Security Intelligence Web service that detects and monitors Fast Flux botnets at the CATCH (Cybersecurity Applications and Technology Conference for Homeland Security) Conference in Washington, D.C. The Web service was developed under a Phase II STTR (Small Business Technology Transfer) project funded by DHS Cyber S&T. Milcord also received support from Sandia National Labs. The FastFlux Monitor service is a tool for cyber defenders in government and enterprises that detects and tracks the behavior of key components (domain names, IP addresses, domain name servers, ISPs) in fast flux botnets. The service is available for evaluation and subscription.

About Milcord: Since 2003 Milcord has been delivering knowledge management technologies and solutions for a range of applications including cyber defense, human and social modeling, geospatial intelligence, and information management. Milcord’s federal customers include the Air Force Research Labs, Office of Naval Research, Army Research Labs, Army Geospatial Center, Office of the Secretary of Defense, Department of Energy, and NASA. For more information see www.milcord.com.