Friday, June 14, 2013

Protecting the Enterprise from Cyber Espionage

As many have noted, current security products are struggling to protect the enterprise from Cyber Espionage attacks and the loss of intellectual property. Cyber Criminals have attacked more than 90% of companies and are siphoning intellectual property (IP) back to nation states or into organized crime. Traditional approaches to security, such as the distribution of signature files to detect malware (malicious or malevolent software), are less than 5% effective because malware morphs so quickly that it is very difficult to keep the distributed signature files up to date. Other security vendors ask you to place "security agents" on all your enterprise endpoints such as PCs, workstations, mobile devices, and servers. With the growing number of devices, especially mobile devices driven by enterprise trends such as "Bring Your Own Device" (BYOD), it is very difficult to manage these agents and keep them up to date. As we enter the age of Machine-to-Machine (M2M), where cloud-connected automobiles and cities full of sensors and Internet-connected cameras emerge, providing agents to protect the billions of Internet things will be impossible. In addition, the new BYOD mobile trend now brings cyber attacks from the inside out, rather than through the perimeters that traditional firewalls used to secure.

Thanks to innovation in Cyber Security, a new breed of Big Data streaming-analytics companies will enter the market with anomaly-based products and services. The new software will be able to listen to abstracted "flows" of network traffic at speeds beyond 100Gb and then machine-learn the "normal behavior" of enterprise devices, applications, and the packets they generate. It will take "Cloud to fight Cloud," meaning you need a cloud architecture for Cyber Security to scale to the massive Big Data found in cloud architectures. Flows will be the new abstraction for Software Defined Networks (SDN) found in next-generation enterprise cloud architectures. Abstracted flow-based cyber security will be the only viable approach for tracking "persistent threats" (breaches that unfold over long periods such as months) and for securing the emerging hybrid cloud architectures built on OpenFlow-based technology.

Once you have a baseline footprint of normal enterprise behavior (e.g., communication behavior between devices), you can sift through the mountain of Big Data packet information to find the needle in the haystack: the analytics software will detect the presence of threat actors because their activity does not always resemble "normal employee behavior." Innovative and scalable advanced analytical techniques will borrow methods used at the Centers for Disease Control to detect disease outbreaks approaching a city, a practice known as syndromic surveillance. But that is not enough; other analytical techniques, such as those used to estimate crop yields from satellite images, can also come into play for detecting anomalies (changes over time in your network). Once an anomaly is detected, advanced ontology engines can be deployed to start building a timeline for an Advanced Persistent Threat (APT).
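To make the baselining idea concrete, here is a minimal sketch of the kind of statistical anomaly detection described above. It is not any vendor's actual algorithm; it simply learns a per-device mean and standard deviation of flow volume and flags large deviations. The device name, window size, and threshold are all hypothetical.

```python
import statistics
from collections import defaultdict, deque

# Rolling per-device baseline of bytes-per-minute, learned from observed flows.
WINDOW = 1440          # hypothetical: one day of per-minute samples
Z_THRESHOLD = 4.0      # hypothetical: flag deviations beyond 4 sigma

baselines = defaultdict(lambda: deque(maxlen=WINDOW))

def observe_flow(device, bytes_per_minute):
    """Update the device baseline and return an anomaly flag."""
    history = baselines[device]
    anomaly = False
    if len(history) >= 30:  # need enough samples before judging
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1.0  # avoid divide-by-zero
        z = (bytes_per_minute - mean) / stdev
        anomaly = abs(z) > Z_THRESHOLD
    history.append(bytes_per_minute)
    return anomaly

# Example: a workstation that suddenly exfiltrates a large volume of data.
for minute in range(100):
    observe_flow("workstation-17", 50_000)        # normal chatter
print(observe_flow("workstation-17", 5_000_000))  # True: flagged as anomalous
```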

Ontology has been used in information science by security companies such as Semantic Research to tie together relationships of data across a multitude of data sources and create an "inference" about how you are being attacked, how the anomalies tie to security incidents, and how those incidents tie to the APT phases of an attack. In addition, these techniques can serve to discover who the perpetrators are and to identify the intellectual property they are after. The end result is that you have sifted through petabytes of data, turned that data into "information" as it relates to suspicious activity, and then turned the information into a rich set of "actionable knowledge" your enterprise can use to protect core assets and IP. For industrial solutions, this same Cyber Security innovation and approach can also be applied to industrial control systems, or SCADA. This includes oil fields, water treatment plants, nuclear facilities, or even planes in flight!
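As a rough illustration of the inference step, consider a toy relationship graph that links observed anomalies to incidents, and incidents to APT phases. This is only a sketch of the general idea, not any vendor's engine; the entities, relation names, and phase labels are invented for the example.

```python
# Toy relationship graph: (subject, relation, object) triples.
triples = [
    ("anomaly:odd-dns-volume",  "observed-on",   "host:workstation-17"),
    ("anomaly:odd-dns-volume",  "indicates",     "incident:c2-beaconing"),
    ("incident:c2-beaconing",   "maps-to-phase", "apt:command-and-control"),
    ("anomaly:large-db-export", "observed-on",   "host:db-server-3"),
    ("anomaly:large-db-export", "indicates",     "incident:data-staging"),
    ("incident:data-staging",   "maps-to-phase", "apt:exfiltration"),
]

def infer_phases(anomaly):
    """Follow indicates -> maps-to-phase edges to infer APT phases."""
    incidents = [o for s, r, o in triples if s == anomaly and r == "indicates"]
    return [o for s, r, o in triples
            if s in incidents and r == "maps-to-phase"]

# A detected anomaly is tied back to a phase of the attack timeline.
print(infer_phases("anomaly:odd-dns-volume"))  # ['apt:command-and-control']
```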

CyberFlow Analytics is a new startup based in San Diego that will realize the vision of a new breed of effective Cyber Intelligence-as-a-Service innovation. With a focus on a variety of Big Data analytical streaming engines combined with ontology-mapping back-end processing, this new SaaS offering will enable the enterprise or service provider to rapidly plug in partner hardware probes and be up and running quickly. No security experts or IT experts will be required for installation and configuration. The system will use machine learning techniques to understand the normal activity and behavior of the enterprise and rapidly accelerate the detection and tracking of Advanced Persistent Threats in your business.

By using a multi-tenant CyberIntelligence-as-a-Service (CyberIaaS) cloud, CyberFlow Analytics will become the next information center of Cyber Security, intelligently informing you of cyber attack trending activity across the industry and across the tenants of the cloud SaaS service. For example, when armies of botnets attack, you want to know whether this is a widespread industry attack or just an attack on your organization. Botnets are patient and subtle, but can wreak widespread havoc. News headlines speak to their trophies: Hackers Take Down the Most Wired Country in Europe; DDOS Attacks Crush Twitter, Hobble Facebook; How a basic attack crippled Yahoo; DDoS attack strikes UltraDNS, affects Amazon, Wal-Mart. With a cloud-based security solution there is strength in numbers: if one company detects and solves an attack, all other tenants of the SaaS-based service will benefit. In fact, ontology engines and Big Data analytics in the CyberIaaS cloud can provide an even richer set of actionable knowledge based on the intelligence gathered across the collective group of tenants of a security service.

From Clean Pipes to Clean Clouds, Policy-based Security in the Hybrid Cloud


A sea change of transformation is emerging in the security industry to address the evolving requirements of Hybrid Cloud Computing. Security MUST be pervasive throughout the cloud stack, including the cloud Platform-as-a-Service (PaaS) layer. As we move to a distributed hybrid cloud model, a new security paradigm is needed to effectively fight and protect our systems from Cyber Espionage. Dr. Jim Metzler, a distinguished research fellow from Ashton, Metzler & Associates, defines the Hybrid Cloud as follows and then goes on to describe the associated security threats.
Like so much of the terminology of cloud computing, there is no uniformly agreed-to definition of the phrase hybrid cloud computing. According to Wikipedia, "Hybrid cloud is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together, offering the benefits of multiple deployment models. Briefly it can also be defined as multiple cloud systems which are connected in a way that allows programs and data to be moved easily from one deployment system to another." Based on this definition, one form of a hybrid cloud is an n-tier application in which the web tier is implemented within one or more public clouds while the application and database tiers are implemented within a private cloud.
A component of the concerns that IT organizations have about security and confidentiality stems from the overall increase in the sophistication of hackers. For example, until relatively recently the majority of security attacks were caused by individual hackers, such as Kevin Mitnick, who served five years in prison in the late 1990s for computer and communications-related hacking crimes. The goal of this class of hacker is usually to gain notoriety, and they often relied on low-technology techniques such as dumpster diving.

However, over the last few years a new class of hacker has emerged, one with the ability to rent a botnet or to develop its own R&D lab. This new class includes crime families and hacktivists such as Anonymous. In addition, some national governments now look to arm themselves with Cyber Warfare units and achieve their political aims by virtual rather than physical means.
With Cloud Computing moving toward "Big Data" hybrid cloud topologies, the security problem intensifies and becomes much more complex to solve. To overcome such complexity and maintain a secure cloud, enterprises must find new cost-effective ways to ensure that their global networks are safe from cyber threats. Cloud security needs to be equally scalable, distributed, and autonomic. In 2013, e-commerce and financial services companies will be hit by increasingly sophisticated attackers and attacks. It is estimated that over 95% of enterprises have been affected by a security breach; targeted firms MUST arm themselves to avoid costly damage (Gartner).

“With the speed and complexity of the threat landscape constantly evolving and the prevalence of combined threats, organizations need to start moving away from being retrospective and reactive to being proactive and preventative” (Information Security Forum, 2012) 

The first step of the new security paradigm is to automate the protection and handling of your cloud applications and data. Policy hooks should be placed at the network layer, the systems layer, and the services application layer to ensure a pervasive approach to "Policy-based Security". I think of it as an automated and distributed rules system (policy engine) for security orchestration. Basically, we leverage the power of distributed computing across hybrid clouds to enable a dynamic overlay system (a safety umbrella) that protects your services and applications (and their data) in the cloud. Automated policy-based security orchestration must maintain business continuity of your cloud SaaS application even through a Cyber Espionage security breach, delivering high availability, reliability, self-healing resiliency, elastic global scalability, and security.
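To ground the idea, here is a minimal sketch of the distributed rules system described above: policies are declarative predicate/action pairs evaluated against events raised by hooks at each layer. All names, events, and the remediation action here are hypothetical illustrations, not a real product's API.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Event:
    """An event raised by a policy hook at some layer of the stack."""
    layer: str            # "network" | "system" | "application"
    kind: str             # e.g. "egress", "login", "flow-anomaly"
    attrs: dict = field(default_factory=dict)

@dataclass
class Policy:
    name: str
    applies: Callable[[Event], bool]   # predicate over events
    action: Callable[[Event], None]    # remediation / orchestration step

def quarantine(event):
    print(f"quarantining node {event.attrs['node']} and spawning a clean replica")

policies = [
    Policy(
        name="isolate-infected-node",
        applies=lambda e: e.kind == "flow-anomaly" and e.attrs.get("severity") == "high",
        action=quarantine,
    ),
]

def dispatch(event):
    """Evaluate every policy against an incoming event (runs near the hook)."""
    for policy in policies:
        if policy.applies(event):
            policy.action(event)

dispatch(Event("network", "flow-anomaly", {"node": "vm-42", "severity": "high"}))
```

Because the rules are plain data plus callables, each cloud location can run its own dispatcher close to where events occur, which is the distributed quality argued for in best practice 3 below.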

Here are some basic best practices a Policy-Based Security Orchestration System should support:
  
1. If the hybrid cloud gets attacked, break off the attacked or infected cloud and scale up a new replicate cloud somewhere safe. In a sense, if your hand is infected, just cut it off and grow a new one! With federated multi-cloud capabilities, including integrated elastic scale and fault tolerance, you can kill off potentially affected virtual machines and scale up new clean ones in their place. In fact, keep the infected cloud running on the side and scale up fake honeypot nodes with special analytic modules to make security forensics more effective (a sketch of this quarantine-and-respawn idea follows this list).

2. Classified data must be categorized upon collection, and the appropriate policy protection must follow it EVERYWHERE in the hybrid cloud. I am not sure how people pushing centralized security systems will pull this one off efficiently. Don't leave data lying around in the wrong places. Don't let certain data get into the public cloud or cross country borders (some data just needs to stay in a private cloud data center with the right physical security in place). All these "do's and don'ts" are implemented as rules within the policy-based security system.

3. The policy engine must be hierarchical, multi-tenant, distributed, scalable, and high performance. The hybrid cloud is distributed by nature, so you need the right policy in the right cloud location at the right time, executing without disruptive overhead or latency in your cloud SaaS application. Having a central security system and backhauling everything to one place for off-line analysis is risky. A central dashboard is a good approach, but the system itself must be distributed and apply policy to real-time system operations as close as possible to where the transaction, issue, analytics, or data operation is occurring. The policy engine and security system itself must be virtualized and have cloud elastic scale to remain cost efficient yet high performance under peak loads.
  • A side note: Obama and other government officials are pushing for the enterprise to share their data so we can build a collectively stronger line of cyber defense against Cyber Threat Actors (Cyber Criminals). In a policy-based security system you can implement a service egress policy on a gateway that mandates data must be anonymized, filtered, and normalized before it is transported to a partner system.
  • Policy management must be simple (not a complex security-guru task) and easy to update and change without the need for developers or recompiling rules. You should not have to take down the security system or reboot it to implement updated security policy rules.
  • A policy-based system should include the capability to implement "policy-wrapped data" to address the privacy issues surrounding classified data in hybrid cloud applications.
  • You also need a policy system with another layer of policy that checks that the first-level security policy executed successfully. If your security policy dies in the middle of execution, you should have a redundant copy that will execute successfully. This is back to the concept of business continuity with a reliable, resilient, highly available security system.
4. Monitor a limited set of key security metrics to understand whether your cloud is healthy and clean; don't go crazy and monitor everything. This is back to a distributed approach that leverages domain-monitoring "services" that can publish events to a distributed event management system. A side note here: the distributed event management system needs a way to maintain "consistency" for event logs so that data is not lost under faulty conditions such as Denial of Service attacks or heavy load. The security policy system must be flexible, making it easy to wire new monitoring APIs into the policy rules as new cyber threat vectors emerge.

5. It takes a Cloud to fight Cloud Cyber Threats. A new sea change of transformation will be required in cloud security products to address the distributed hybrid cloud market. The marketing guys are back at it again, "Cloud Washing" their old security products and telling you they are cloud ready. This will become even worse with "Cyber Washing" because of the emphasis Obama has placed on Cyber Espionage. For those of you who don't know, "cloud washing" (also spelled cloudwashing) is the purposeful and sometimes deceptive attempt by a vendor to rebrand an old product or service by associating the buzzword "cloud" with it. I now see companies taking 10+ year old software and labeling it with "Cyber" and "Cloud" to make it appear as if it was designed for Cyber Threat Analytics for the Cloud (even though the Cloud was not around 10 years ago). Cloud washing, and now cyber washing, just confuses the market and could build a dangerous false confidence in these security systems for future hybrid clouds.
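Here is the quarantine-and-respawn sketch promised in item 1. It is a hedged illustration of the orchestration flow, not a real orchestrator: `CloudAPI` stands in for whatever provisioning interface a federated PaaS would expose, and the node names are hypothetical.

```python
# Hypothetical provisioning API a federated PaaS might expose.
class CloudAPI:
    def isolate(self, node):
        print(f"isolating {node} from the service mesh")
    def clone_clean(self, node):
        print(f"scaling up clean replica of {node}")
        return node + "-replica"
    def to_honeypot(self, node):
        print(f"converting {node} into an instrumented honeypot")
    def reroute(self, old, new):
        print(f"rerouting traffic from {old} to {new}")

def quarantine_and_respawn(cloud_api, infected_node):
    """Item 1 as code: cut off the infected 'hand' and grow a new one,
    keeping the infected node alive as a honeypot for forensics."""
    cloud_api.isolate(infected_node)                # break off the infected cloud
    replica = cloud_api.clone_clean(infected_node)  # grow a clean replacement
    cloud_api.reroute(infected_node, replica)       # business continuity preserved
    cloud_api.to_honeypot(infected_node)            # keep it running for forensics

quarantine_and_respawn(CloudAPI(), "app-node-7")
```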

As a side note, I wanted to add that some legacy systems are still struggling to be effective even against the non-cloud security problems they want to solve. In 2012, Imperva, along with the Technion Israel Institute of Technology, conducted a study of more than 80 malware samples to assess the effectiveness of popular antivirus software. Their published results found that:
  • The initial detection rate of a newly created virus is less than 5%. Although vendors try to update their detection mechanisms, the initial detection rate of new viruses is nearly zero. The majority of antivirus products on the market can’t keep up with the rate of virus propagation on the Internet. 
  • For certain antivirus vendors, it may take up to four weeks to detect a new virus from the time of the initial scan. However, it has been found that if cyber threats can be detected within two hours, then 60% of threats can be mitigated successfully (heard this in a GigaOM security webinar). Guess we need another evolution of security technology to achieve this goal.
Policy-based security is just one of the new paradigms emerging in this new generation of cloud-based security products that address the need to secure Cloud Computing from Cyber Espionage. The enterprise will need a number of tools like this in its arsenal to fight Cyber Crime. Hybrid clouds bring new complexity, but new orchestration systems will ensure business continuity is maintained, even in the presence of faults from security breaches. Policy-based orchestration also addresses some of the privacy issues around the proper handling of classified data.

OpenFlow enables Federated PaaS to become the Next Control Plane for the InterCloud


The Platform-as-a-Service (PaaS) layer is becoming the most strategic and innovative part of the cloud computing stack. Large data centers used by cloud infrastructure providers such as Amazon are becoming more numerous, and cloud capacity is being built up in every large city of the world. Cloud infrastructure is becoming less centralized and more distributed on a regional basis. This new distributed cloud model applies to private clouds, public clouds, or a hybrid of the two that includes cloud bursting and brokering capabilities. The PaaS layer provides the glue, or federation, for the cloud as application components are distributed across different cloud infrastructures. A messaging framework known as a service bus enables the application components to communicate with each other. In the new distributed cloud model, the WAN or Internet is a critical piece of cloud infrastructure that has previously been "assumed" to be over-provisioned and always available, reliable, and secure. Just as the Internet has transformed the world as a global network of networks, the "InterCloud" is now evolving as a "world of many clouds": a federation of many clouds that will be transformational for the next generation of distributed SaaS applications for cloud services. Technologies such as OpenFlow and Software Defined Networking (SDN) hold the promise of enabling a new control plane for the Wide Area Network (WAN). The Federated PaaS will become the next-generation Operational Support System (OSS), orchestrating a distributed mesh of federated cloud nodes for cloud scale and high availability. The automated policy system of the PaaS will respond to events as an OSS and then make changes to the flows of cloud applications across the Internet to ensure an exceptional cloud user experience.

The notion of "federation" is an evolution of grid and mesh computing. A grid architecture is a computational network infrastructure based on the cooperative use of different computing resources connected by the Internet. Mesh networks have evolved alongside grid computing to help connect distributed nodes and enable automatic reconfiguration when faults occur, connections break, or nodes disappear. Cloud infrastructure has enabled applications to operate on the lowest-cost servers and to scale additional compute power up or down when needed. Cloud application developers will still have more specialized requirements for some of their application components, which may require specialized infrastructure for CPU-intensive operations or for greater performance to reduce latency in user response times. With cloud providers building data centers in all the major cities across the world, cloud computing is becoming less centralized and more regionally available. Cloud infrastructure in the InterCloud model will be defined as any place you can find compute, storage, and a network: in a central data center, in a regional data center, in future routers and switches in the telecom network, and in mobile devices such as your cell phone or PC. In the future this will even include cloud-connected automobiles.

To connect the world of the InterCloud, a "Federated PaaS Model" will be required. This is one of the three models that CloudAve contributor Krishnan Subramanian discusses as a trend in the enterprise PaaS space. He distinguishes three models of service delivery: the Heroku Model, the Amazon Model, and the Federated PaaS Model. New Federated PaaS systems will emerge that can place distributed cloud applications into a federated mesh architecture across many different clouds using an automated policy system. Automated policies will determine how the distributed cloud scales, how live-live copies of app components are replicated across multiple cloud locations for high availability, and how multiple layers of policy check messages between app components for security and compliance. A federated cloud will understand the location of users through the GPS on their phones and use that location as input to a load balancing algorithm; like an amoeba, it will reshape the geographic distribution of the cloud to respond to the need for more resources or better performance. The Federated PaaS becomes the foundation for the next generation of SaaS mobile cloud services. The intelligence baked into the automated policy system of the Federated PaaS can move application components and their complementary storage fabric closer to the user for lower latency, better response times, and improved customer experience. This is not only cached content, as found in the Akamai model, but could include rendering algorithms or analytics. The PaaS layer can also respond to events and enable dynamic changes to the cloud to protect the cloud user experience. This could include the ability to scale up (provision) additional compute or storage resources in response to load. In the future the PaaS will serve as an Operational Support System (OSS) to make adjustments to the "flows" of cloud services across the Internet.
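As a toy illustration of the GPS-driven load balancing just described, the sketch below steers a user to the federated cloud node nearest to them by great-circle distance. The region coordinates and node names are hypothetical, and a real policy system would also weigh load, cost, and node health rather than distance alone.

```python
import math

# Hypothetical federated cloud nodes and their (lat, lon) locations.
NODES = {
    "us-west": (37.77, -122.42),   # San Francisco
    "us-east": (40.71,  -74.01),   # New York
    "eu-west": (51.51,   -0.13),   # London
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_node(user_location):
    """Route the user to the closest cloud node (distance-only policy)."""
    return min(NODES, key=lambda n: haversine_km(user_location, NODES[n]))

# A user in San Diego is steered to the us-west node.
print(nearest_node((32.72, -117.16)))  # us-west
```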
The last missing component in this evolution of the InterCloud is the network resource component of the Internet. In the data center, server capacity used to be over-provisioned for peak loads; cloud computing solved that problem with cloud scaling. Today the WAN connections and Internet network pipes are still being over-provisioned to handle peak traffic loads, as service providers over-provision their network capacity for unpredictable spikes in traffic. This is the next problem for cloud computing to solve. The Federated PaaS will become part of the next generation of Operational Support Systems (OSS), not only federating application nodes across the Internet but also serving as a control plane for the end-to-end network connections (flows) between federated cloud nodes and out to the mobile end users of cloud services. In other words, the Platform-as-a-Service (PaaS) layer of cloud computing will understand the application requirements for cloud services and provide additional control over the wide area network (WAN) connections between federated data centers, to branch offices, and to connected mobile users. The Federated PaaS system will be the control plane for the InterCloud.

In the OpenFlow model, the Federated PaaS will become what is known as a "controller" for critical WAN network points. These control points will typically be network entry points known as ingress or egress points. They can be at the edge of the data center where new federated nodes are created in the InterCloud, or at the other end of the connection at the edge of the last mile of user connectivity: behind the mobile cell tower base stations, at aggregation points for fixed high-speed broadband, or where enterprise branch office connections enter the network. In Software Defined Networks that use the OpenFlow protocol, the controller interacts with an OpenFlow-enabled switch or router to identify packets that are associated with a "flow" (a connection) and perform operations on those packets. An OpenFlow operation may change the destination of the flow (the IP address of the destination app server) or rewrite the ToS bits to give the flow higher priority in the processing queues of edge routers. OpenFlow can also be used to configure an L3 tunnel or GRE tunnel and then direct packets into the tunnel. The automated policy system of a Federated PaaS will scale out and replicate application nodes across the InterCloud. When the PaaS provisions a host in a cloud provider as a new federated node, it will understand the functional requirements of the application component in the cloud node (storage, analytics, processing, ingest) and the connection (WAN) requirements. The PaaS will create a new cloud node, add it to the federated cloud, and then use OpenFlow to configure the connection (or flow) properties for that node. This could include building a secure tunnel for the cloud services to flow through. The PaaS as an Operational Support System (OSS) can also monitor the cloud and ensure that those connections are operating within the thresholds required for an exceptional user experience. If the WAN connection is not meeting the needs of the cloud application, the PaaS will be able to use OpenFlow to modify a flow at a critical point in the network, either by changing its path or by increasing the cloud flow's packet priority. Another option is that the Federated PaaS may determine the cloud node is not in an optimal location, clone a copy in a different cloud somewhere else, begin using that app component in the federated cloud mesh, and then kill (scale down) the first node that is not performing well. The automated policy system of the PaaS will be a critical Operational Support System (OSS) and a foundational layer of the cloud stack, enabling the cloud to reconfigure and relocate itself to ensure a secure, reliable, available, and responsive user experience for cloud services.
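To make the flow-operation idea concrete, here is a minimal sketch using the open-source Ryu controller framework, one of the OpenFlow controllers available today. When a switch connects, it installs a rule that re-marks traffic toward a hypothetical app server with a high-priority DSCP value; the address, DSCP value, and rule priority are illustrative, not a prescription.

```python
# A minimal Ryu app: when a switch connects, install a flow rule that
# re-marks traffic toward a (hypothetical) app server with DSCP 46 so
# edge routers queue it at higher priority.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class FlowPriorityController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Match IPv4 packets headed to the hypothetical app server.
        match = parser.OFPMatch(eth_type=0x0800, ipv4_dst="10.0.0.5")

        # Rewrite the DSCP bits, then forward via normal L2/L3 processing.
        actions = [parser.OFPActionSetField(ip_dscp=46),
                   parser.OFPActionOutput(ofp.OFPP_NORMAL)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]

        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```

Run it with ryu-manager pointed at this module. A similar OFPFlowMod whose set-field action rewrites ipv4_dst would implement the other operation mentioned above: redirecting a flow to a different destination app server.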

The next generation of cloud Software-as-a-Service (SaaS) applications will operate over a world of many clouds. Cloud SaaS applications will become more distributed, as in the Service-Oriented Architecture (SOA) model, to take advantage of the world of many clouds (the InterCloud). The Federated PaaS layer will sit underneath the distributed SaaS application in the cloud stack to ensure cloud scale, fault tolerance, and high availability, and to manage secure and reliable network connections (cloud flow management). The Federated PaaS layer will become the control plane for software defined networks, leveraging the OpenFlow protocol as an enabling technology for the next-generation network. The automated policy system of the PaaS will orchestrate the federation of distributed cloud nodes, including the management of cloud flows across the network. The PaaS as an OSS will monitor and respond to events such as threshold-crossing alarms, making adjustments to cloud flows across the network or even relocating cloud nodes to locations with better connections, to protect the end-user experience of future cloud SaaS applications.