Tuesday, May 31, 2011

BMO Financial Group selects CGI cloud computing services to launch a comprehensive social media hub

The following press release can also be found at:
http://www.cgi.com/en/BMO-Financial-Group-selects-CGI-cloud-computing-services-launch-comprehensive-social-media-hub

CGI Group Inc. (TSX: GIB.A) (NYSE: GIB), a leading provider of
information technology and business process services, today announced
that BMO Financial Group has chosen CGI's cloud computing services to
help launch a comprehensive social media hub.

The site will feature a comprehensive set of multimedia tools such as
blogs, games and YouTube integration, a content management system for
quickly and easily updating content on the site, customer relationship
management tools, including e-mail registration and updates, and the
capacity to rapidly activate new services.

In addition, CGI is providing BMO Financial Group with cloud services
to support the launch of new mobile applications for its customers.

"Working with CGI, we developed the BMO SmartSteps for Parents, an
interactive, online hub to help parents educate children about money
management. The resource is free and available to all Canadians and
part of our commitment to Making money make sense®," said Bal
Sahjpaul, Head of eChannel, BMO Financial Group. "We're excited about
our move to the cloud and how it will help us deliver exciting new
services to our customers faster."

"As a leading cloud computing provider, CGI delivers enterprise cloud
solutions that address the performance, security and governance
challenges large organizations face," said Eva Maglis, President,
Global Infrastructure Services, CGI. "Our value is in integrating
cloud computing capabilities with more traditional services to provide
clients with robust solutions customized to meet their specific
business needs and enhance their business performance," added Doug
McCuaig, President, Canada, CGI.

The online community is available at www.bmo.com/smartparents.

CGI's end-to-end cloud computing offerings include cloud consulting
services that create a strategy based on client-specific priorities
and opportunities. CGI helps clients evaluate options including public
clouds, private clouds, hybrid clouds, and Infrastructure as a Service
(IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS)
offerings. Designed to meet the most demanding
commercial and government requirements, CGI's cloud computing
offerings drive innovation, flexibility and savings, while providing
for the security, availability and performance its clients need.

About CGI
Founded in 1976, CGI Group Inc. is one of the largest independent
information technology and business process services firms in the
world. CGI and its affiliated companies employ approximately 31,000
professionals. CGI provides end-to-end IT and business process
services to clients worldwide from offices and centres of excellence
in the United States, Canada, Europe and Asia Pacific. As at March 31,
2011, CGI's annualized revenue was approximately C$4.5 billion and its order
backlog was approximately C$12.6 billion. CGI shares are listed on the
TSX (GIB.A) and the NYSE (GIB) and are included in both the Dow Jones
Sustainability Index and the FTSE4Good Index. Website: www.cgi.com.

Sunday, May 29, 2011

No exit strategy for cloud?


Federal Agencies Lack Cloud Exit Strategy

Ninety percent of federal IT managers recently surveyed either say their agencies don't have or report being unaware of whether their agencies have a cloud computing exit strategy, according to a survey commissioned by Quest Software Public Sector.
Agencies need a cloud exit strategy if they want to move their data or change cloud providers, government and industry experts say. To avoid vendor lock-in, there should be mechanisms for data exchange that encourage portability across platforms, they add.
[...]
"I would agree that exit strategies don't exist," said Kevin Jackson, director of cloud services at IT consulting firm NJVC and a co-author of a book on applying cloud computing in the government.
"I would also add that entrance strategies are hard to come by as well, Jackson said, adding, most agencies are just trying to respond to the cloud-first policy," an Office of Management and Budget directive that requires federal agencies to move three applications to the cloud within the next 12 to 18 months.

Article: Mobile Payment JV ISIS Eager For Apple, Google, Sprint To Join

Next time you pay for coffee, you can just use your phone. 

Article: What They Know - Mobile - WSJ

The following article gives a detailed account of which apps track what data on both iPhone and Android phones.

What They Know - Mobile - WSJ
http://blogs.wsj.com/wtk-mobile/


Article: Who Cares Where I Am, Anyway? An Update on Mobile Phone Location Tracking


Who Cares Where I Am, Anyway? An Update on Mobile Phone Location Tracking
http://www.mobileactive.org/mobile-phone-location-tracking


Article: Mobile Security Risks: A Primer for Activists, Journalists and Rights Defenders


Mobile Security Risks: A Primer for Activists, Journalists and Rights Defenders
http://www.mobileactive.org/howtos/mobile-security-risks


Two big lies about Cloud Security


Bernard Golden, CEO of consulting firm HyperStratus, has written the following article, and it points out one of the big cloud security issues: the division of security responsibilities and liabilities between the Cloud Service Provider (CSP) and the Cloud Consumer (CC). Without a well-thought-out and well-scoped SLA document, a single security breach could cause lengthy litigation between the CSP and the CC.


Survey after survey notes that security is the biggest concern potential users have with respect to public cloud computing. Here, for example, is a survey from April 2010, indicating that 45 percent of respondents felt the risks of cloud computing outweigh its benefits. CA and the Ponemon Institute conducted a survey and found similar concerns. But they also found that deployment had occurred despite these worries. And similar surveys and results continue to be published, indicating the mistrust about security persists.

Most of the concerns voiced about cloud computing relate to the public variant, of course. IT practitioners throughout the world consistently raise the same issues about using a public cloud service provider (CSP). For example, this week I am in Taiwan and yesterday gave an address to the Taiwan Cloud SIG. Over 250 people attended, and, predictably enough, the first question addressed to me was, "Is public cloud computing secure enough, and shouldn't I use a private cloud to avoid any security concerns?" People everywhere, it seems, feel that public CSPs are not to be trusted.

However, framing the cloud security discussion as a "public cloud insecure, private cloud secure" formula is an overly simplistic characterization. Put simply, there are two big lies (or, more charitably, two fundamental misapprehensions) in this viewpoint, both rooted in the radical changes this new mode of computing forces on security products and practices.

Cloud Security Lie #1

The first big lie is that private cloud computing is, by definition, secure merely by way of the fact that it is deployed within the boundaries of a company's own data center. This misunderstanding arises from the fact that cloud computing contains two key differences from traditional computing: virtualization and dynamism.

The first difference is that cloud computing's technological foundation is based on the presence of a hypervisor, which has the effect of insulating computing (and the accompanying security threats) from one of the traditional tools of security: examining network traffic for inappropriate or malicious packets. Because virtual machines residing on the same server can communicate completely via traffic within the hypervisor, packets can be sent from one machine to another without ever hitting a physical network, which is where security appliances are typically installed to examine traffic.

Crucially, this means that if one virtual machine is compromised, it can send dangerous traffic to another without the typical organizational protective measures even being involved. In other words, one insecure application can communicate attacks to another without the organization's security measures ever having a chance to come into play. Just because an organization's apps reside inside a private cloud does not protect it against this security issue.

Of course, one might point out that this issue is present with vanilla virtualization, without any aspect of cloud computing being involved. That observation is correct. Cloud computing represents the marriage of virtualization with automation, and it's in this second element that another security shortcoming of private clouds emerges.

Cloud computing applications benefit from this automation to achieve agility and elasticity--the ability to respond to changing application conditions by moving virtual machines quickly and by spinning up additional virtual machines to manage changing load patterns. This means that new instances come online within just a few minutes without any manual interaction. This implies that any necessary software installation or configuration must also be automated so that when the new instance joins the existing application pool it can immediately be used as a resource.

It also implies that any required security software must, likewise, be automatically installed and configured without human interaction. Unfortunately, many organizations rely on security personnel or system administrators to manually install and configure necessary security components--often as a second step after the rest of the machine's software components are installed and configured.
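The mismatch the article describes can be sketched in a few lines. This is a hypothetical model, not a real cloud API: the names `Instance`, `SECURITY_STEPS`, and `provision` are illustrative stand-ins for the idea that security setup must run automatically inside the bootstrap path, so that an instance is never eligible for traffic without it.

```python
# Hypothetical sketch: gate new instances on automated security setup.
# All names here are illustrative, not part of any real cloud SDK.

SECURITY_STEPS = ["install_av_agent", "apply_firewall_rules", "enroll_in_logging"]

class Instance:
    def __init__(self, instance_id):
        self.instance_id = instance_id
        self.completed_steps = []

    def run_step(self, step):
        # In a real deployment this would invoke configuration-management
        # tooling during instance bootstrap, with no human in the loop.
        self.completed_steps.append(step)

def provision(instance):
    """Run every security step as part of automated instantiation."""
    for step in SECURITY_STEPS:
        instance.run_step(step)
    return instance

def ready_for_traffic(instance):
    # An instance joins the application pool only if all security steps
    # ran automatically -- no manual "second pass" by security personnel.
    return all(step in instance.completed_steps for step in SECURITY_STEPS)

node = provision(Instance("i-0001"))
print(ready_for_traffic(node))  # True

manual_node = Instance("i-0002")  # spun up without the automated steps
print(ready_for_traffic(manual_node))  # False
```

The design point is simply that the readiness check and the security steps live in the same automated path, so elasticity cannot outrun the security practice.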

In other words, many organizations have a mismatch between their security practices and the reality of what a cloud requires. Assuming that a private cloud is, ipso facto, secure, is incorrect. Until your security and infrastructure practices align along automated instantiation, you have a vulnerability.

Moreover, it's critical to get them aligned. Otherwise, you face the likelihood that your application automation will outstrip your security practices, which is not a good situation. For sure, one would not like to be in the position of trying to explain why the supposedly-secure private cloud ended up exposing a vulnerability because the automation characteristics of cloud computing had not been extended through all parts of the software infrastructure.

So, the first big lie about cloud computing is that private clouds are inherently secure. What is the second?

Cloud Security Lie #2

The second lie about cloud computing security relates to assumptions about public cloud security; specifically, the assumption that security in public cloud computing rests solely with the CSP. The reality is that security in a service provider world is a responsibility shared between the provider and the user, with the former responsible for security in the infrastructure up through the interface point between application and hosting environment, and the user responsible for security with respect to interfacing with the environment, and importantly, within the application itself.

Failing to configure the application properly with respect to the environment security interface or failing to take appropriate application-level security precautions exposes the user to issues for which no provider can possibly be expected to take responsibility.

Let me provide an example. One company we worked with had placed its core application in Amazon Web Services (AWS). Unfortunately, it had not implemented appropriate security practices with respect to how it used AWS security mechanisms, nor with simple application design issues.

Amazon provides what is, in effect, a virtual machine-level firewall (called a Security Group) which one configures to allow packets to access specific ports. The best practice with respect to Security Groups is to partition them, so that very fine-grained port access is available per virtual machine. This ensures that only traffic appropriate for that type of machine goes to an instance. For example, web server virtual machines are configured to allow traffic on port 80 into the instance, while database virtual machines are configured to disallow traffic on port 80 into the instance. This blocks attacks on database instances (containing crucial application data) from the outside using web traffic.
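The partitioning recommendation can be illustrated with a toy model. This is pure Python, not actual AWS Security Group API calls; the group names and port choices below are made up for illustration.

```python
# Toy model of partitioned, per-role Security Groups: each tier gets its
# own ingress rule set, so web traffic can never reach the database tier.
# Group names and ports are hypothetical.

SECURITY_GROUPS = {
    "web-sg": {80, 443},   # web servers accept HTTP/HTTPS from outside
    "db-sg":  {3306},      # database accepts only the database port
}

def allows(group_name, port):
    """Return True if the security group permits ingress on this port."""
    return port in SECURITY_GROUPS[group_name]

print(allows("web-sg", 80))   # True: web tier serves HTTP
print(allows("db-sg", 80))    # False: port 80 never reaches the database
```

Contrast this with the single catch-all group the article criticizes, which would be the equivalent of every role sharing one permissive rule set.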

To construct a secure application, one must use Security Groups properly. This organization had not. It used one Security Group for all traffic to all instances, which meant that every type of instance was exposed to any type of traffic destined for any instance. Clearly, a poor use of AWS security mechanisms.

Regarding the organization's application itself, it had implemented poor security practices. Instead of partitioning application code among different types of machines, it had loaded all application code into a single instance, which meant the same instance that received traffic for its corporate website also had code containing proprietary algorithms running on it as well.

The important fact about this situation: If this organization assumed that all security responsibility lay with the CSP (Amazon Web Services, in this case), it would be extremely negligent, because it had not taken important steps to address security issues for which no CSP could be responsible. This is what shared responsibility implies--both parties have to step up to the security aspects in their control, and failing to do so means the application is not going to be secure. Even if the CSP does everything correctly for portions of the cloud application within its control, if the application owner fails to implement its security responsibility correctly, the application is going to be insecure.

I have been in meetings with security personnel discussing the security of public CSPs who refused to consider their company's responsibility in these environments, insisting on redirecting every security topic back to concerns about the CSP's responsibility.

This struck me, frankly, as reckless, as it insinuated a refusal to seriously grapple with the necessary work of creating as secure a public CSP-based application as possible. It was as if the very attitude that all security responsibility lay with the CSP insulated the security person, and by extension, his company, from any liability for security failures in an application running in a CSP environment. It may not come as a surprise that the individual in question was a staunch advocate of private clouds, asserting their far superior inherent security.

The reality is that organizations are increasingly going to deploy applications in public CSP environments. It is vital that security groups step forward to ensure their organizations take every step possible to implement applications that are as secure as possible, and that means identifying the steps the organization itself needs to take in that regard.

Security is, so to speak, the third rail of cloud computing. It is constantly cited as an inherent benefit of private clouds and a fundamental shortcoming of public cloud computing. Actually, the truth is far more ambiguous than these positions imply. Asserting the putative security shortcomings of public cloud environments without seriously considering how to mitigate them seems irresponsible and evidence of a belief that assertion implies dismissal with no further need to investigate mitigation techniques.

A poorly managed and configured private cloud application can be quite vulnerable, and a properly managed and configured public cloud application can achieve very good security. Characterizing the situation as black and white is simplistic and does a disservice to the discussion.

Far more productive in both environments is to query what actions must be taken to achieve as secure an application as possible, within the constraints of time, budget, and risk tolerance. Security is never a question of black or white, but rather a question of how light a shade of gray is possible, given the particulars of a specific environment and application. Failing to acknowledge that does a disservice to the topic and to how best to ensure an organization's infrastructure is as efficient and cost-effective as possible.



Thursday, May 26, 2011

New Whitehouse cyber security proposal can give DHS new power to regulate private industry

"The president's plan gives the Department of Homeland Security
unfettered authority to regulate private industry," Bob Goodlatte, a
Virginia Republican and chairman of a House Judiciary Committee panel
on the Internet, said today at a hearing on cybersecurity. "Do the
American people really want their regulatory agencies turned into
quasi-fiefdoms?"
The administration's proposal released May 12 calls for Homeland
Security to work with industry to find vulnerabilities in critical
infrastructure such as electrical grids and financial networks. The
department would define what companies would qualify as "critical
infrastructure" and therefore be subject to more oversight.

"The regulatory process is a slow one, whereas the escalating cyber
threats our country faces are extremely dynamic problems," Goodlatte
said. "Cybersecurity threats and online technologies change quickly --
so quickly that any regulations for cybersecurity could be outdated
before they are finalized."
Congress needs to create incentives for the private sector to do more
to protect itself from cyber attacks, Goodlatte said. He's currently
writing legislation to address his concerns.
U.S. lawmakers introduced about 50 cybersecurity measures in the last
session of Congress. Those measures include at least eight bills that
seek to boost security at energy and utility companies.
The administration's proposal would jump-start efforts in Congress to
update U.S. laws in response to the increased threat of cyber attacks
capable of crippling business and government operations.
The urgency of advancing a cybersecurity bill has been heightened by
recent assaults, including last month's attack on networks operated by
Sony Corp and the data breach at RSA.

The Senate's Sergeant at Arms reported last year that computer
systems of Congress and executive branch agencies are probed or
attacked 1.8 billion times per month, costing about $8 billion
annually.

As computing becomes more mobile and social, and most applications
migrate to the cloud, new bills introduced by Congress with this
reality in mind will certainly help to enhance our nation's security.
IMHO, "critical infrastructure" should include the travel business,
the press, widely used social websites such as Facebook, Twitter, and
LinkedIn, and all big IT companies and cloud providers such as Google,
Microsoft, and Amazon, in addition to utilities, the financial sector,
telecommunications, etc.
I welcome any comments.

Wednesday, May 25, 2011

Google Silently Patches Android Authentication Flaw - Security - News & Reviews - eWeek.com - eWeek Mobile

As smartphones based on the Android and iOS operating systems become more widely used, security holes like this authentication flaw will become a big target. Automatically pushing fixes to end users will be needed to avoid zero-day attacks.

http://mobile.eweek.com/c/a/Security/Google-Silently-Patches-Android-Authentication-Flaw-837349/

Mobile Device Data Losses Pose Rising Security Risk: Survey - Security - News & Reviews - eWeek.com - eWeek Mobile

For mobile platforms, it has become very important to assess the data loss prevention (DLP) program before wide deployment of mobile enterprise apps.


http://mobile.eweek.com/c/a/Security/Mobile-Device-Data-Losses-Pose-Rising-Security-Risk-Survey-515021/

10 Biggest Data Breaches of 2011 So Far - Security - News & Reviews - eWeek.com - eWeek Mobile

The following article from eWeek discusses the data breaches at RSA, Sony, and other big companies.

http://mobile.eweek.com/c/a/Security/10-Biggest-Data-Breaches-of-2011-So-Far-175567/

Skype Ends Support For Open Source Digium Asterisk VOIP PBX - VOIP and Telephony - News & Reviews - eWeek.com - eWeek Mobile

Not sure whether this has anything to do with Microsoft's pending acquisition of Skype.

IMHO: it is not good for open source community.


http://mobile.eweek.com/c/a/VOIP-and-Telephony/Skype-Ends-Support-For-Open-Source-Digium-Asterisk-VOIP-PBX-254184/

Friday, May 6, 2011

Security remains the biggest hurdle for agencies moving operations to the cloud, federal IT officials say

By JOSEPH MARKS 05/04/2011

The greatest hurdle to moving vital government data and programs into
the cloud is federal executives' confidence in outside security
systems, a panel of federal information technology leaders said
Wednesday.

One big component of producing that confidence level is getting the
Federal Risk and Authorization Management Program, or FedRAMP, up and
running, they said.

Agency technology executives spoke at a conference sponsored by
TechAmerica, an industry group.

FedRAMP is aimed at creating a standardized governmentwide review of
private sector information technology so individual companies'
offerings won't have to be reviewed and approved by dozens of
different departments and agencies.

The program was unveiled in November, but immediately ran into trouble
when technology companies complained the one-size-fits-all
requirements didn't jibe with the diversity of programs in the federal
government.

The Obama administration has said it will consider relaxing the
requirements and expects to get the program up and running this
summer.

"If you want to get it right, FedRAMP matters," said Mark Day, chief
technology officer at the Housing and Urban Development Department.
"We're looking to [the General Services Administration, which oversees
FedRAMP] to give us a lot of help in this area."

Even with the FedRAMP process in place, the panelists said, there's a
natural aversion to handing over sensitive data to an outside agency
whose security you can never guarantee as well as your own.

"The same things we're doing for ourselves, that's what we'd want as
we look at opportunities to move more than our public facing
[services, such as Web pages]," State Department Chief Information
Officer Susan Swart said. "We're just taking a very conservative look.
Until we feel comfortable that we can get that level of security,
[perhaps] by adding on to what FedRAMP provides, we won't be moving."

The panelists agreed, saying they'd prefer to move most services to a
federal-only cloud for security reasons, but even that requires a
change of culture.

"There's a comfort zone issue we have to address," Day said. "Private
and public might better be viewed as ours and shared. If you want to
talk about the culture issue, no matter where you sit, if I share with
someone else who's bigger than me, there's a discomfort there, because
I've lost some control."

The Obama administration's 25-point plan to reform federal IT
management, published in December 2010, calls on departments and
agencies to make the cloud their first storage option for new programs
and to identify several programs currently stored in data centers for
transition to the cloud.

Federal CIO Vivek Kundra has said transitioning large amounts of
federal data to the cloud could save the government millions of
dollars. Computing clouds are essentially large networks of servers.
Users can take as much space from them as they need and pay only for
what they use, which may vary from month to month.
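That pay-per-use model is easy to illustrate with toy numbers; the storage rate and monthly usage figures below are hypothetical.

```python
# Toy illustration of pay-per-use cloud billing: charges track actual
# monthly consumption instead of fixed, pre-purchased capacity.
# The rate and the usage figures are made up.

RATE_PER_GB_MONTH = 0.10  # hypothetical storage price in USD per GB-month

def monthly_bill(gb_used):
    """Charge only for the storage actually consumed that month."""
    return round(gb_used * RATE_PER_GB_MONTH, 2)

usage = {"Jan": 500, "Feb": 1200, "Mar": 300}  # GB varies month to month
bills = {month: monthly_bill(gb) for month, gb in usage.items()}
print(bills)  # {'Jan': 50.0, 'Feb': 120.0, 'Mar': 30.0}
```

The February spike costs more and the March lull costs less, with no capacity planning by the user, which is the savings argument Kundra makes.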

The conference panelists also praised TechStat, a series of
face-to-face question-and-answer sessions Kundra launched in early
2010, during which IT project and program managers must either justify
cost overruns and missed deadlines or their projects will be canceled
or redesigned.

As a result of the rigors imposed by the TechStat process, HUD's IT
leaders have pared down the number of major projects they attempt at
one time from more than 30 to around seven, Day said.

Most agencies also have launched internal versions of the TechStat
process and the rigors of knowing they'll have to justify their
programs to senior management has imposed more discipline on project
managers, the panelists said.

One audience member, who said she was with the contractor MSB
Associates, asked whether the government would make TechStat
transcripts and other information available to industry so the private
sector could learn from government mistakes.

Agriculture Department CIO Chris Hill said some of that information is
available through the federal Chief Information Officers Council's
best practices information page. But that page, so far, includes only
a handful of project summaries, typically fewer than 10 pages in
length.
