Sunday, December 18, 2011

Cyber-Attackers Successfully Exploiting Java Flaw in Outdated Software

Published: November 30

http://j.mp/tBuUwv

SUMMARY: Here's more proof that people aren't regularly patching software: Most of the attacks in the first half of 2011 exploited Java bugs that Oracle had patched over a year ago.



Cyber-attackers continue to target vulnerabilities in Java, even the ones that Oracle has already patched, because end-user systems aren't being properly updated, Microsoft warned.

"Between one-third and one-half" of all attacks detected and blocked by Microsoft's security software from the beginning of July 2010 to the end of June 2011 were Java-based, Tim Rains, a director of Microsoft's Trustworthy Computing group, wrote Nov. 28 on the Microsoft Security blog. Microsoft's anti-malware technologies blocked more than 27.5 million Java exploits over a 12-month period, many of which had been patched at least a year ago, Rains said.

Microsoft researchers have noted in previous Security Intelligence Reports that attacks targeting Java exploits have been increasing, and they surpassed Adobe-related attacks in volume last year. The latest volume of the Microsoft Security Intelligence Report, volume 11, found that the most commonly observed type of exploits in the first half of 2011 targeted Oracle's Java Runtime Environment (JRE), Java Virtual Machine (JVM) and Java SE in the Java Development Kit (JDK).

"Attackers have been aggressively targeting vulnerabilities in Java because it is so ubiquitous," Rains said, noting that Oracle claims over 3 billion devices run Java.

The most commonly blocked attack in the first half of 2011 exploited a JRE bug discovered and patched in March 2010. The exploits first appeared during the fourth quarter of 2010, at least six months after the patch was released, and increased "tenfold" in the first quarter of 2011, according to Rains. The second most commonly blocked exploit relied on a JVM flaw that allows an unsigned Java applet to gain elevated privileges outside the Java sandbox; it exists in JVM 5 up to update 22 and in JVM 6 up to update 10, and was patched in December 2008. Others on the list included a JVM bug patched by Sun Microsystems in November 2009 and a different JRE flaw patched by Oracle in March 2010.

"Once attackers develop or buy the capability to exploit a vulnerability, they continue to use the exploit for years, presumably because they continue to get a positive return on investment," Rains said, noting that this tactic is not unique to Java flaws, but in "all prevalent software."

System administrators and users should keep Java up to date by applying patches as they are released, Rains said. Some environments may also have systems running several different versions of Java.

Some security experts recommend not installing Java by default and limiting the installation to only those systems that actually require it. "Most people aren't using Java these days, and it reduces the attack surface for exploits delivered over the Internet," said Chester Wisniewski, a senior security adviser at Sophos. Less software plugged into the browser means fewer chances for an attack to succeed, he said.

Security analyst and writer Brian Krebs recently uncovered an exploit for an already patched Java flaw being bundled with a crimeware kit available for sale on criminal underground forums.

The new Java exploit is being distributed as a free add-on to existing owners of the BlackHole crimeware kit, or priced at $4,000 for new owners. A three-month license for the crimeware kit itself costs $700, and hosted servers running the malware toolkit are also available, according to the post on Krebs on Security.

Java exploits are "notoriously successful" when bundled with commercial exploit packs, according to Krebs. Cyber-attackers can use the BlackHole kit, which extensively uses Java flaws, to launch malicious Websites that can download malware on unsuspecting site visitors running an outdated version of Java, he said. Even though it is a relatively new malware toolkit, BlackHole has become one of the more popular exploit kits this year, according to security experts.

This particular vulnerability exists in the Java Runtime Environment component in older versions of Oracle Java, namely Oracle Java SE JDK and JRE 7 and Java 6 Update 27 and earlier. Users running the latest versions, Java 6 Update 29 or Java 7 Update 1, are not affected. Oracle patched this script engine flaw in mid-October, along with 19 other Java bugs.
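As a practical footnote (my own sketch, not from the article), an administrator could check an installed Java version against the fixed releases named above, Java 6 Update 29 and Java 7 Update 1. Version-string formats vary by vendor and release, so treat this as a rough illustration rather than a robust inventory tool:

import re
import subprocess

# First update of each major release that carries the fix (per the article).
FIXED = {6: 29, 7: 1}

def installed_java_version():
    # 'java -version' historically prints to stderr, e.g. java version "1.6.0_27"
    out = subprocess.run(["java", "-version"],
                        capture_output=True, text=True).stderr
    m = re.search(r'version "1\.(\d+)\.0_(\d+)"', out)
    if not m:
        raise RuntimeError("could not parse 'java -version' output")
    return int(m.group(1)), int(m.group(2))

major, update = installed_java_version()
if major in FIXED and update < FIXED[major]:
    print(f"Java {major}u{update} is vulnerable; update to {major}u{FIXED[major]} or later")
else:
    print(f"Java {major}u{update} is not affected by this advisory")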




Five Key Enterprise Development Trends

Published: December 5

http://j.mp/vUf9ZA

SUMMARY: eWEEK identifies major areas on which developers should concentrate.


As we head into 2012, enterprise developers will need to focus on some major themes, including the emergence of HTML5, "big data" and analytics, and Agile Application Lifecycle Management (ALM). They should also continue to concentrate on Web, mobile and cloud development, and take advantage of advances in languages and integrated development environments (IDEs).

HTML5 is going like gangbusters. Microsoft has adopted HTML5 for Windows 8, Internet Explorer 9 and upcoming versions of the browser and other products. And there are indications that Microsoft may shelve future development of Silverlight, a development framework for building Web and mobile applications, after Silverlight 5 or a subsequent point release.

The onset of HTML5 also drove Adobe to halt development of its Flash technology for mobile browsers.

"HTML5 is coming on strong as a standard, accelerated by the speed of change of hardware devices," said Al Hilwa, an analyst with IDC. "By 2013, we will reach a point where 90 percent of smartphones and tablets will sport HTML5-capable browsers."

However, Hilwa notes that it is important to remember that the need for a Flash browser plug-in continues on the desktop. "We don't expect 90 percent of desktop browsers to be capable of HTML5 until 2015," he said. "So the differentiation that Flash provides in high-end graphics and video protection continues, and Adobe will continue to invest in it."

Web-based development environments, such as the Eclipse Orion, Cloud9 IDE, eXo Cloud IDE and others, are becoming more and more popular. "Web-based tools will become more important as development moves into the cloud," said Mike Milinkovich, executive director of the Eclipse Foundation. "However, we should expect a new way of thinking about Web-based IDEs. Trying to fit something like Eclipse into a Web browser just won't scale. The nice thing about Orion is, it attempts to make the browser your IDE."

The big data and analytics craze will continue to grow due to the explosion of data coming from intelligent devices, social media and other sources. According to IDC, the market for intelligent systems will grow substantially in the next few years, from 800 million units today to more than 2.3 billion by 2015. Shipments of embedded devices already exceed those of cell phones and PCs, and IDC predicts the market for intelligent systems will soon represent a $520 billion industry.

"Data has become the new currency," said Kevin Dallas, Microsoft's Windows Embedded general manager. As proof of how hot big data has become, venture capital firm Accel Partners launched a $100 million big data fund at the recent Hadoop World 2011 conference.

Meanwhile, "One of the most important trends in 2012 will be the maturation of Java PaaS [platform as a service]," said Mik Kersten, CEO of Tasktop Technologies. "While the transition will be a long one, Oracle's Java Cloud culminates key announcements around PaaS offerings in 2011, and sends a signal that Java developers [should] start considering PaaS solutions as the deployment destination of new applications."

The Eclipse Foundation's Milinkovich said he believes the concept of Agile Application Lifecycle Management is becoming a reality. Developers are integrating new tool chains to support Agile development and a faster release process, he said.

"On the ALM side, a key trend to watch [in] 2012 is the open-source-powered tidal wave changing how developers work and collaborate," Tasktop's Kersten added.




Monday, November 14, 2011

Legal Issues with Cloud in the UK


Cloud computing did not exist when data protection regulations came in. John Roberts of Redstone explains how to keep within the law.

The UK Data Protection Act (DPA) is often regarded as the world's leading law on protecting personal data. But many UK companies now adopting cloud services are putting not only their data at risk, but also themselves, by breaching data protection laws. How do you comply with the DPA whilst maintaining a cloud presence?

When the UK government passed the DPA in 1998, it was heralded as the definitive way to guarantee personal data was protected. Over the following decade, refinements to the act ensured that personal data was not just secure, but more specifically, secure online. This worked well when data was held on-premise, within a company's own data centre, but the advent of cloud technology has changed all that.

What do we mean by cloud?

Just to be clear, in this context we're referring to 'cloud' as infrastructure as a service. Ask many cloud service providers (CSPs) where a specific piece of data is held, and it would take them a while to answer. In most instances the cloud does not recognise national boundaries: CSPs simply move data across their often globally dispersed infrastructure at will, in whatever way is most efficient for them. This means that the IT director no longer knows where his or her data is, and cannot comply with the DPA.

With data being streamed and stored across national territories, it also runs the risk of falling foul of other countries' legislation. When George W Bush signed the US Patriot Act into law in 2001 following 9/11, no one could have predicted the data protection conflict that would occur between the UK and US as a result. The two acts lie in direct opposition to each other: the UK DPA prohibits organisations from passing personal data on to another party, yet the US Patriot Act expressly permits the US government to access and examine any data – personal or otherwise – held by a US company.

Security has long been a real concern for IT directors considering cloud infrastructures, but previous anxieties have focused on data loss rather than location – a legal requirement enforceable under the DPA. Location of data has to become a priority, considering the words of Microsoft UK MD Gordon Frazer this summer, who admitted that the US Patriot Act took precedence over the DPA. Not only does this mean trouble for UK companies using cloud services where data is stored in the US, it also means that the data of US companies operating outside US borders is subject to this priority, affecting some of the world's largest CSPs – from Microsoft and Salesforce, to Google and Amazon.

The EU, UK and other nations are debating the issue. The EU has negotiated a safe harbour agreement with the US to protect data. However, since most CSPs are unable to assure customers where data is located, the bigger question has to be: just whose responsibility is data storage when operating in the cloud?

The Information Commissioner's Office (ICO) is responsible for enforcing the DPA, and its latest annual tracking survey found that one in four companies is still unaware of the need to comply with the DPA. While many companies may plead ignorance, we've found a more concerning trend. When it comes to the cloud, many data owners believe data protection responsibility lies with the CSP, or, more worryingly, are simply using the cloud as a way to abdicate responsibility for storing and protecting their data.

This company apathy to data protection is widespread. We know this from experience. Rarely are we asked by prospective customers to ensure that data held within our cloud service is stored in the UK – compliance is simply not considered an issue when buying cloud services.

With power comes responsibility

Many cloud services are problematic because they provide a generic, one-size-fits-all solution. Yet as cloud services have evolved alongside customer needs, more tailored solutions have appeared, including UK-specific, DPA- (and PCI-) compliant services. With these services, 'control', a concern cited by many IT directors when first considering cloud services, has been given back to the IT department. With that control, however, comes the responsibility for data protection.

The other failure occurs with the law itself. While the DPA provides stipulated requirements for the protection of data, it is enforced retrospectively, not proactively. That means companies are only prosecuted once a breach has occurred. The ICO has no power to audit private sector companies' compliance to ensure that a data breach doesn't occur in the first place. Having no audit control over the private sector makes it impossible to proactively regulate and enforce the DPA. It's generally accepted that the private sector generates the most data protection complaints. As a result, the information commissioner, Christopher Graham, recently called for compulsory audit powers for the private sector. Data audits need to become a requirement within the financial and legal audit processes if companies are to be held accountable for data protection.

We think that the solution may be simpler. What the industry (and companies operating in the cloud) needs to assist in compliance is a series of DPA standards. Comparable to ISO 9000, a simple checklist of standards would provide companies with a way to effectively measure themselves as part of any risk assessment or business continuity plan. We've seen how well such standards work for quality management, so now it's time to apply the same theory to the question of data protection.
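To make the idea concrete, here is a minimal sketch of how such a checklist could be encoded and scored as part of a risk assessment; the control names and questions below are my own illustrative assumptions, not an official DPA standard:

# Illustrative DPA-style checklist; items are assumptions for the example.
DPA_CHECKLIST = {
    "data_stored_in_uk": "Is personal data stored only in UK/EEA territories?",
    "location_guaranteed": "Does the CSP contract guarantee data location?",
    "encrypted_at_rest": "Is personal data encrypted at rest?",
    "breach_process": "Is there a documented breach-notification process?",
    "audit_trail": "Are access logs retained and reviewable?",
}

def assess(answers):
    """Print a simple pass/fail report for a risk assessment."""
    failed = [q for key, q in DPA_CHECKLIST.items() if not answers.get(key)]
    if failed:
        print("Potential DPA compliance gaps:")
        for q in failed:
            print(" -", q)
    else:
        print("All checklist items satisfied.")

assess({"data_stored_in_uk": True, "encrypted_at_rest": True})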
Source: eWeek


Code Search for Open Source Holes

Tools such as Google Code Search can provide hackers with a wealth of information hidden in open source code, writes Eric Doyle

The downside of open source is its very openness. Hackers are using Open Source Intelligence (OSINT) to find personal information and even passwords and usernames to plan their exploits.

Organisations like Anonymous and LulzSec have been using Google Code Search - a public beta in which Google let users search for open source code on the Internet - according to Stach & Liu, a penetration testing firm. In Code Search, they can unearth information to assist them in their exploits: for instance, passwords for cloud services that have been embedded in code, configuration data for virtual private networks, or simply vulnerabilities that leave the system open to other hacking ploys, such as SQL injection.

Google Hacking

The Google service is due to be switched off next year as part of the company's rationalisation of its research efforts with the closure of Google Labs, but that does not mean that exposed code on the Internet will be safer. There are several sites which provide similar services.

Google's BigTable is the repository of most things the company gleans from its searches, and searching it for nefarious purposes is known as Google Hacking.

A-Team, a white-hat hacking group which appears to have the sole purpose of exposing Anonymous and its various subgroups, wrote a highly critical, sneering condemnation of Google Hacking.

"LulzSec and Anonymous [are] believed to use Google Hacking as a primary means of identifying vulnerable targets," the group blogged in June this year. "Their releases[revelations] have nothing to do with their goals or their lulz [fun]. It's purely based on whatever they find with their 'google hacking' queries and then release it."

Mark Stockley, an independent Web consultant, wrote on the Naked Security blog, "While the findings provide a much-needed wake-up call to online businesses, admins and developers, they also offer a fascinating insight into the motivation of hacking collectives such as Anonymous and LulzSec...

"Rather than being motivated by politics or injustice, hacking groups may simply be targeting organisations because Google Code search has turned up a vulnerability too tempting to ignore, making them less political action groups, more malicious 21st century Wombles," he said.

The best protection is to ensure that nothing useful to a hacker is included in code. If that is unavoidable, the information should be stored separately and encrypted.
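As a rough illustration of that advice (my sketch, not from the article), even a simple scan of a source tree for common credential patterns will catch the most obvious leaks before code is published; real secret-scanning tools use far richer rule sets:

import re
from pathlib import Path

# Illustrative patterns for embedded secrets; real rule sets are much larger.
PATTERNS = [
    re.compile(r'(password|passwd|pwd)\s*=\s*["\'][^"\']+["\']', re.I),
    re.compile(r'(api[_-]?key|secret)\s*=\s*["\'][^"\']+["\']', re.I),
    re.compile(r'-----BEGIN (RSA |DSA )?PRIVATE KEY-----'),
]

def scan(root):
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), 1):
            if any(p.search(line) for p in PATTERNS):
                print(f"{path}:{lineno}: possible embedded secret")

scan(".")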

Colin Tankard, managing director of encryption and security specialist Digital Pathways, advised, "Obviously if the data is encrypted it protects that data wherever it goes as long as the key is never stored with the data. This adds extra control of who or what application is allowed access to the data. By applying encryption with access control organisations can define who or what is allowed access to data."
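Here is a minimal sketch of what Tankard describes, using Python's third-party cryptography package: the secret is encrypted, and the key lives apart from the data, so the data file alone reveals nothing. The file names and package choice are illustrative assumptions, not anything Digital Pathways prescribes:

from cryptography.fernet import Fernet  # pip install cryptography

# Generate the key once and store it in a separate, access-controlled
# location (a key service or HSM in practice; a local file only for demo).
key = Fernet.generate_key()
with open("app.key", "wb") as f:        # in production: never alongside the data
    f.write(key)

# Encrypt the secret before it goes anywhere near the code repository.
token = Fernet(key).encrypt(b"db_password=hunter2")
with open("config.enc", "wb") as f:
    f.write(token)

# Only a process that can read the key can recover the plaintext.
assert Fernet(key).decrypt(token) == b"db_password=hunter2"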

Source: eWeek 



Tuesday, October 25, 2011

eWeek Europe : Facebook Offers Developers HTML5 Resource Centre

Facebook has opened an HTML5 Resource Centre to help developers build and test HTML5 apps

To continue reading
http://www.eweekeurope.co.uk/news/facebook-offers-developers-html5-resource-centre-43310

Cloud To Drive Major IT Spend

Spending by public cloud service providers will grow at a sharp rate over the next few years, according to IDC
To continue reading: http://www.eweekeurope.co.uk/news/study-cloud-to-drive-major-it-spend-4334

XML Encryption flaw with no simple patch

Security researchers have cracked the major XML framework used to encrypt data in web applications. Two researchers from Germany's Ruhr-University demonstrated a practical attack against XML Encryption's cipher block chaining (CBC) mode at the ACM Conference on Computer and Communications Security in Chicago on 19 October. The technique affects messages encrypted with any of the algorithms supported by the XML Encryption standard, including the popular AES and DES.

'No simple patch'

"We were able to decrypt data by sending modified ciphertexts to the server - by gathering information from the received error messages," the researchers said in a statement.XML, or "eXtensible Markup Langugage", is used for storing and transporting data and is widely used for web applications such as business communications, e-commerce, financial services, healthcare, and government and military infrastructure. Standardised in 2002 by the W3 Consortium, an Internet standards group, XML Encryption is widely used by Apache, Red Hat, IBM and Microsoft in their XML frameworks. "There is no simple patch for this problem", said Juraj Somorovsky, one of the researchers, adding, "We therefore propose to change the standard as soon as possible." Researchers proposed replacing the CBC module in XML Encryption with a mechanism focused on both message confidentiality and message integrity. Adopting a new approach and changing the standard, however, would likely affect existing deployments and create backwards compatibility issues with older applications, the researchers said. A potential attack vector involves sending bogus messages to a targeted system and then using the information returned by the system to crack the encryption. "We show that an adversary can decrypt a ciphertext by performing only 14 requests per plaintext byte on average," they said. "This poses a serious and truly practical security threat on all currently used implementations of XML Encryption."

Workarounds

The German team notified all affected XML framework providers, including Amazon.com, IBM, Microsoft and Red Hat, via the W3C mailing list before releasing their paper, and engaged in "intensive discussions on workarounds" with some of the affected organisations.

Amazon.com acknowledged the issue on 20 October and said it had fixed the related vulnerabilities in the XML-based messaging protocol SOAP (Simple Object Access Protocol) in its Elastic Compute Cloud (EC2) infrastructure. The company also checked to ensure no customers had been targeted by potential attackers. "The research showed that errors in SOAP parsing may have resulted in specially crafted SOAP requests with duplicate message elements and/or missing cryptographic signatures being processed," the company wrote in the Amazon Web Services security bulletin. "If this were to occur, an attacker who had access to an unencrypted SOAP message could potentially take actions as another valid user and perform invalid EC2 actions," according to the advisory.

Amazon said it would generally be difficult for attackers to obtain a pre-signed SOAP request or a signed certificate, but admitted it was possible if the customer was sending SOAP requests over plain HTTP connections instead of the more secure HTTPS protocol. The researchers also disclosed cross-site scripting flaws that would have allowed attackers to obtain the certificate, according to Amazon.

This is not the first time the CBC mode in encryption protocols has been targeted. Two researchers last year developed a "padding oracle attack" to decrypt encrypted cookies for websites and hijack users' secure sessions. The technique affected the security of Microsoft's ASP.NET framework and forced an emergency patch from Microsoft to close the hole.

Source: eWeek




Thursday, October 6, 2011

The Obama administration says switching some federal computer networks to cloud computing will save taxpayers money, but members of Congress are worried about the security risks.

Cloud computing refers to computer networks that are hosted by outside vendors and are accessible over the Internet.

Until now, the federal government has kept all its networks within its own computer systems.

Last week, the Department of Homeland Security granted a five-year, $5 million contract to computer company CGI Federal Inc. to manage some of its public Web sites. They include DHS.gov, FEMA.gov and USCIS.gov.

Members of the House Homeland Security subcommittee on cybersecurity want to know whether computer hackers who have broken into other Web sites could hack the government Web sites hosted by private companies.

The subcommittee plans a hearing Thursday on the risks of cloud computing.

"In light of the administration's 'Cloud First Policy' and the announced transition by the Department of Homeland Security to cloud computing, my subcommittee will be examining how government information is being managed and secured in the cloud environment," said Daniel E. Lungren (R-CA), chairman of the cybersecurity subcommittee.

The Cloud First Policy refers to President Barack Obama's plan to switch government Web site management to private companies when it can be done at lower cost without security risks.

The Homeland Security Department contract last week was the first of many planned for federal agencies.

Computer networks that contain classified information or represent a public threat if they are hacked will be served only by the government's servers and systems under the Cloud First Policy.

"We also want to hear how the private sector is implementing this shared technology option, its cost savings and risk concerns," Lungren said.

Cloud computing offers its customers easier updates to Web sites, less maintenance and lower costs for equipment and personnel.

The controversies for the government include the potential for layoffs among its computer staff and whether private contractors can be trusted to properly manage the government networks.

Homeland Security Department spokesman Larry Orluskie said his agency's contract with CGI Federal "maintains requisite security for the government's needs and delivers best-in-class return on investment for the citizens of the United States."

CGI Federal said in a statement that its computer management service "contains all of the required enterprise-wide security" the government requires.

The House hearing Thursday could influence whether Obama's Cloud First Policy gets carried out.

Republicans, who hold a majority in the House, must eventually approve funding for the program.

Private companies confront the same issues as the government, but still are making a big push toward cloud computing.

Four out of five businesses plan to switch to cloud computing soon, according to a survey of more than 900 large companies announced this week by the business consulting firm KPMG.

Ten percent of the companies surveyed reported they already moved their core information networks from internal computers to cloud computing.

A grocery mentioned by KPMG in its study reported it could maintain its inventory better and increase sales by linking its suppliers through a cloud computing ordering network.

Cloud computing is "quickly shifting from a competitive advantage to an operational necessity," said Steve Hill, KPMG's vice chairman of strategic investments.

The cloud computing industry is expected to generate $177 billion in revenue by 2015, compared with $89.4 billion this year, according to industry forecasts.



Read more: http://www.allheadlinenews.com/articles/90061978?Congress%20to%20examine%20Obama%26%23146%3Bs%20%26%23147%3BCloud%20First%20Policy%26%23148%3B%20for%20computers#ixzz1a0bH5wTk



Thursday, September 22, 2011

Article: Cloud computing has security advantages

Article: Google Wins Chance to Prove Cloud Security in Contract Lawsuit

Article: Cloud IAM catching on in the enterprise

Article: MIS-Asia - China to invest US$154 billion in cloud computing

Article: Stakes high for cloud contractors

Salesforce.com, Amazon, Google, IBM, Microsoft and CGI Federal are all competing for a slice of the federal cloud computing market. Please see the following article:
Stakes high for cloud contractors
http://www.politico.com/news/stories/0911/63786.html


Monday, September 19, 2011

Replacing cyber reporting with continuous monitoring carries risks

An Obama administration decision to relax agency reporting rules for complying with cybersecurity mandates by instead requiring automated data feeds about threats could relegate risk management to a back-office function and leave senior executives out of the loop, some auditors say.

This year's instructions for adhering to the 2002 Federal Information Security Management Act, to the delight of some information technology managers, say that continuous monitoring will replace the current costly, time-consuming process of reauthorizing systems after upgrades or at least every three years.

"Continuous monitoring programs thus fulfill the three year security reauthorization requirement, so a separate reauthorization process is not necessary," states a response in the frequently asked questions section of the Sept. 14 Office of Management and Budget guidance. The Homeland Security Department, which supervises federal cybersecurity operations, authored the memo's instructions and the FAQ, including the question, "Is a security reauthorization still required every three years?"

Traditionally, reauthorizations have involved several steps of human analysis, where first, agency IT managers write a cybersecurity plan, then an outside security professional certifies or evaluates the controls in the plan and briefs the authorizing official -- a secretary or other senior executive -- on the findings. That senior official, by approving the plan, assumes responsibility for risks associated with the system.

With continuous monitoring, software and sensors check the system's most important safeguards in near real time, drawing on indicators such as antivirus scan reports and remote access logs.
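As a rough illustration (my own sketch, not from the OMB memo), a continuous monitoring feed boils down to automated checks like the following, run on a schedule and shipped to a dashboard. The paths and thresholds are assumptions for the example:

import json
import re
import time
from datetime import datetime, timezone
from pathlib import Path

AV_STATUS = Path("/var/log/av/last_scan")  # hypothetical AV scan marker file
AUTH_LOG = Path("/var/log/auth.log")       # remote-access log

def check():
    report = {"time": datetime.now(timezone.utc).isoformat()}
    # Indicator 1: has an antivirus scan completed in the last 24 hours?
    report["av_scan_fresh"] = (
        AV_STATUS.exists() and time.time() - AV_STATUS.stat().st_mtime < 86400
    )
    # Indicator 2: count failed remote logins as a coarse threat signal.
    failures = 0
    if AUTH_LOG.exists():
        failures = sum(
            1 for line in AUTH_LOG.read_text(errors="ignore").splitlines()
            if re.search(r"Failed password", line)
        )
    report["failed_logins"] = failures
    return report

# Each run emits one record for the data feed a dashboard would consume.
print(json.dumps(check()))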

By switching from reauthorizations to continuous monitoring, "there's a potential that accountability could be removed from the equation," said Rick Dakin, chief executive officer of Coalfire, an IT compliance firm that performs FISMA risk assessments. "I think it lets [agencies] substitute budgets in a tight budget climate, but I think it leaves the [department] secretary off the hook."

He said automated surveillance should augment a comprehensive security review but not supersede it. "This program is very beneficial and should continue," said Dakin, a past president of the Denver chapter of InfraGard, an FBI affiliate.

The nature of the monitoring is more technical, however, and does not focus as much on physical controls, staff training, process controls and other governance elements of a baseline cyber program, he said.

"Do not let our national cyber interest be relegated to a helpdesk function," Dakin said. At this period of growing risk, we need to get our senior executives more involved in cybersecurity and ongoing governance of those programs . . . not less."

But other IT security experts say the current procedure for reauthorizing systems demands excessive paperwork and meaningless examinations that prevent managers from acting on threats.

"It's sort of been proved that the analytical process is not nearly detailed enough to provide accuracy with regards to security," said John Gilligan, previously a chief information officer at the Air Force and Energy Department and now a private IT consultant.

The practice is more of a display that managers are going through the motions rather than a precise assessment of security posture, he added.

Gilligan, also a member of the Obama-Biden transition team that helped formulate the administration's IT policies, expects the new guidance will nudge agencies to quickly roll out continuous monitoring programs so that they do not have to endure reauthorization hassles. Most agencies have the technology to track indicators, but some have not established a means of tying together the machinery for a holistic view of security status departmentwide, he said.

Gilligan acknowledged, however, that he would be surprised if the administration does not also require an independent security team to assess the data output. "The tools are only so good," he said. "The human beings would still need to evaluate the results of tools. The guidance needs to also emphasize that it's also using the tools and providing actions based on what's happening."

The OMB memo states, "Agencies are expected to conduct ongoing authorizations of information systems through the implementation of continuous monitoring programs . . . In an effort to implement a more dynamic, risk-based security authorization process, agencies should follow the guidance in [National Institute of Standards and Technology] Special Publication 800-37, Revision 1."

That guidance, said NIST fellow Ron Ross, directs agency security and risk management professionals to analyze the incoming surveillance data in a way that senior leaders can understand.

"It doesn't mean that the authorization process is completely dead after the first time," he said on Friday. "The senior leaders are going to be involved more frequently. Ongoing authorization means ongoing acceptance of risk."

Continuous monitoring allows technicians and leaders to keep pace with the time and tempo of quickly evolving threats, Ross added. "It can make the authorization process a lot leaner and meaner," he said.

DHS officials on Friday said a well-planned continuous monitoring program will provide a window into the current state of systems and assets, enabling situational awareness within an IT enterprise. Automated data feeds will measure the effectiveness of security controls and help prioritize remedies better, they added. The information allows authorizing officials to make decisions based on live systems and networks, rather than merely on architectural diagrams.



Thursday, September 15, 2011

Key Findings from Damballa First Half 2011 Threat Report

The Damballa First Half 2011 Threat Report looks at Internet crime trends with a specific focus on criminal command-and-control (C&C) activity over the first six months of 2011.

Download the report

Key Findings Include:

Mobile/Android Threats Growing

  • The number of hijacked Android devices engaging in 'live' communications with criminal operators grew at a significant rate.
  • Having mobile malware contact the criminal operator and establish two-way Internet communication now makes the mobile market as susceptible to criminal breach activity as desktop devices.

Top 10 Most Abused Top Level Domains Represent 90% of All Live C&C Activity

  • Top Level Domains (TLD) .com, .info, .net, .org, and .biz are among the top ten most abused by criminals.
  • The TLD ".in" (India) ranked as the fifth most popular TLD for C&C.

SpyEye-Powered Botnets Jump to Number One

  • Only three of the top ten largest botnets for the first half of 2011 appeared in the "Damballa Top 10 Botnets for 2010 Threat Report."
  • OneStreetTroop, the Damballa reference to a botnet operation reliant on crimeware generated by the popular SpyEye do-it-yourself (DIY) construction set, climbed from tenth position in 2010 to first position for the first half of 2011.
  • Eight out of the top ten largest botnets utilize popular "off-the-shelf" DIY crimeware construction kits.

Monday, September 12, 2011

Top 5 Tools for Virtualization Security

In my last blog post, I listed a few security tools for the cloud. I left out virtualization security, and planned a follow-up post listing a few cool tools for securing virtualized environments. As readers of this blog will know, most cloud environments leverage virtualization for elasticity and dynamic scaling of services, although virtualization is not a precondition for the cloud. This post lists five top tools for virtualization security.


1: VMware (http://www.vmware.com) offers a free tool and two packaged commercial products for virtualization security.

  • The free tool is VMware's Compliance Checker, a fully functional product that runs detailed compliance checks (such as FISMA and PCI DSS) against the VMware vSphere Hardening Guidelines. You can print Compliance Checker reports and run compliance checks across multiple ESX and ESXi servers at once.
  • VMware also offers the vShield suite, bundled at $300 per VM. Here is a summary of the components:

o   VMware vShield App: Protects applications in the virtual datacenter against network-based threats, essentially it is a virtual firewall and can filter the network traffic between VMs.

o   VMware vShield App with Data Security: a new feature in vShield 5.0; it can discover sensitive data on VMs and isolate the VMs holding sensitive data (such as PII) into a separate security zone. A nice enhancement indeed for the trusted cloud, and a useful add-on for data loss prevention in the cloud.

o   VMware vShield Edge: Enhances protection for the virtual datacenter perimeter

o   VMware vShield Endpoint: Improves performance by offloading key antivirus and anti-malware functions to a security virtual machine, eliminating the antivirus agent footprint (AV Storm) in virtual machines

o   VMware vShield Manager: Security management framework included with all vShield products

o   VMware vShield Bundle: Includes all vShield products: vShield App with Data Security, vShield Edge, vShield Endpoint and vShield Manager; cost is $300 per VM

  • VMware vCenter Configuration Manager: provides automated compliance checks and continuous compliance with out-of-the-box templates and toolkits, and thus enhanced security. Cost is $800 per VM.

2: Catbird (http://www2.catbird.com/) offers vSecurity, vCompliance and vSecurity Cloud Edition, and was named one of ComputerWorld's "10 Virtualization Vendors to Watch" in 2010, among other awards.

o   Catbird vSecurity: vSecurity consists of two elements: a virtual appliance deployed inside each VMware or Xen host (NOT on each virtual machine), and a Catbird Control Center, typically deployed in the Security Operations Center (SOC). The Catbird appliance is the eyes and ears of the virtual network, delivering security protection from inside the virtual host. This appliance reports back to the Control Center, where the management and expert system reside. The Catbird Control Center provides a single enterprise-wide view of the security and compliance state of the virtual infrastructure, and is responsible for policy-based analytics and compliance workflow and reporting.

o   Catbird vCompliance: vCompliance monitors and audits controls required by the leading regulatory standards organizations and supports a wide array of common security frameworks. vCompliance includes default policies for SOX, HIPAA, DIACAP and PCI; each policy is built upon Catbird controls which map to the appropriate compliance framework.

o   vSecurity Cloud Edition: Cloud Edition integrates Catbird's comprehensive suite of services, including vulnerability monitoring, IPS/IDS, firewalling via TrustZones, Network Access Control (NAC), policy enforcement and many other critical features, all managed via a multi-tenant portal, and has the following features:

o   24x7 vulnerability management with a fully compliant scanner that is automatically correlated with other virtual machine attributes to provide an accurate assessment of known defects against a specific and customizable compliance framework.

o   NAC-based enforcement for continuous monitoring of the virtual machine population, real-time inventory management, and the most accurate real-time VM catalog and virtual machine sprawl prevention

o   A multi-tenant management portal that provides compliance intelligence aggregation, management and reporting across physical, virtual, private and public clouds from a single dashboard, while ensuring the privacy of customer or departmental data.

3: HyTrust (http://www.hytrust.com/): the HyTrust appliance provides access control, authentication and authorization, policy management, security configuration management and auditable log aggregation for virtualized environments. HyTrust is tightly integrated with VMware and can be managed through a vCenter tab.


4: CloudPassage (http://www.cloudpassage.com/):  CloudPassage's Halo platform is offered as a security Software-as-a-Service. The major components of the Halo platform include:

o   Halo Daemon: The Halo Daemon is a very lightweight (~2 MB), well-protected software component that runs as a service on each cloud server. It monitors important server security factors, e.g. IP addressing, installed software, running processes and open network ports (a minimal sketch of this kind of fact gathering appears at the end of this post). The Halo Daemon provides information to the Halo Grid as needed, and responds to commands from the Halo Grid to take actions such as updating iptables firewall rules.

o   Halo Grid: The Halo Grid is a powerful, elastic compute cloud that provides sophisticated analytics: it evaluates data collected by the Halo Daemons, decides which exposures and compliance concerns to report, and pushes updates to security parameters such as iptables policies. The Halo Grid does the "heavy lifting" on behalf of the Halo Daemons, ensuring that customers' server resources and performance are preserved.

o   Halo Portal: The Halo Portal is the single pane of glass used to manage all Halo product capabilities. Policy configuration, review of compliance status, evaluation of reported exposures and even generation of Halo Daemon installation scripts are all provided through the Halo Portal.

5: Trend Micro (http://us.trendmicro.com): Trend Micro's Deep Security 8 offers anti-malware protection, firewall capabilities, intrusion prevention, Web application protection, integrity monitoring and log inspection for virtualized environments.


It can be integrated with Trend Micro's SecureCloud 2, which provides encryption and data protection for cloud deployments. With this integration, Deep Security can check the security profile of a system accessing encrypted content on SecureCloud and deny access if the accessing system lacks security protections or has been infected by malware. Pricing for Deep Security 8 starts at $1,000 per server, with volume discounts available. Deep Security 8 is expected to ship by the end of 2011.
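As promised above, here is a minimal sketch (my illustration, not CloudPassage's code) of the kind of host fact gathering a lightweight agent like the Halo Daemon performs. It assumes a Linux host with the ss and ps utilities; a real agent would ship these facts to a central analytics service rather than print them:

import socket
import subprocess

def host_facts():
    facts = {"hostname": socket.gethostname()}
    # Listening TCP ports, via the 'ss' utility.
    ss = subprocess.run(["ss", "-lnt"], capture_output=True, text=True)
    facts["listening_ports"] = sorted({
        line.split()[3].rsplit(":", 1)[-1]
        for line in ss.stdout.splitlines()[1:] if line.strip()
    })
    # Running process names, via 'ps'.
    ps = subprocess.run(["ps", "-eo", "comm="], capture_output=True, text=True)
    facts["processes"] = sorted(set(ps.stdout.split()))
    return facts

print(host_facts())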

Friday, September 9, 2011

Sample Security tools for the Cloud Computing Environment

Below is a list of sample essential security tools for the cloud computing environment. I welcome any comments.

Identity and Access Management (IAM)
IAM is at the top of the list due to its crucial importance to any organization's IT assets: it is the lock on the front door of business data and assets. Poorly defined and implemented IAM can negatively impact productivity and the overall security of the organization. Centralized, enterprise-wide IAM with identity federation and extension to the cloud is the best industry practice. Good tools include:
  • Symplified's suite of IAM products
  • Ping Identity
  • the CA, Oracle, IBM and Microsoft IAM product suites, etc.
The most innovative products are from Symplified, not from big, established companies such as Oracle or IBM.

Security Information and Event Management (SIEM)
Due to the requirements of continuous monitoring, SIEM knowledge becomes important. Sample tools include:
  • ArcSight
  • Q1 Labs, etc.

Encryption
With cloud computing becoming mainstream, encryption knowledge and experience are increasingly relevant as more data moves to the cloud. Understanding FIPS 140-2 requirements and strong encryption such as AES and 3DES is necessary for data security in the cloud.

Anti-Virus, Network IDS/IPS and Other Security Monitoring Tools
Organizations will need to understand the basic deployment model, configuration and administration of these tools.
Sample anti-virus tools include:
  • McAfee
  • Symantec
  • Trend Micro
  • Webroot
  • Norton
  • AVG, etc.
Sample network IDS/IPS tools include:
  • Barracuda
  • Check Point
  • Cisco IPS
  • eEye
  • Juniper's IDP
  • McAfee's NSM
  • Radware's IDS
  • Sourcefire's ETM
  • IBM Proventia IPS
  • WatchGuard
  • TippingPoint
  • Corero, etc.

Enterprise Forensics Tools
Forensics tools are needed by cloud security professionals to aid in forensics investigations and the litigation process. Sample tools include:
  • EnCase Enterprise
  • ProDiscover
  • Forensic Toolkit (FTK)
  • Sleuth Kit
  • dtSearch
  • Paraben, etc.

Logging and Auditing Tools
Centralized logging and event correlation with analytic capability is essential for fraud and vulnerability detection and investigation. Sample tools include:
  • Sensage
  • Splunk, etc.

Data Leakage Prevention (DLP) Tools
Proactive tools for preventing data loss are becoming important in the cloud. Sample tools include:
  • Vontu
  • Orchestria
  • Verdasys, etc.

Vulnerability Management and Penetration Testing
A good vulnerability management tool would include capabilities for asset management, vulnerability assessment, configuration management, patch management, remediation, reporting and monitoring. In reality, any single tool provides only part of that functionality, so a cloud service provider will need a combination or integration of tools to get the best results. Sample tools include:
  • McAfee Foundstone Enterprise (www.mcafee.com)
  • StillSecure (www.stillsecure.com)
  • eEye Digital Security (www.eEye.com)
  • Symantec/BindView (www.bindview.com)
  • Attachmate/NetIQ (www.netiq.com), etc.

Infrastructure and/or Application Vulnerability Scanning
The following are sample tools/vendors; some can be installed on premises or used in the cloud:
  • Qualys
  • Cenzic
  • Fortify
  • Nessus, etc.

Application Security Assessment
Sample tools include:
  • Burp Suite
  • Paros
  • HP WebInspect
  • IBM Rational AppScan
  • Cenzic Hailstorm, etc.

Disaster Recovery (DR) Tools
Sample tools include:
  • VMware Site Recovery Manager
  • SunGard
  • Barracuda Backup Service
  • Double-Take Software, etc.