Continuing Security Threats and Security Protection Procrastination
Wednesday, November 30 2011 | 09:42 AM
EVP - Data Center Solutions,
JONES LANG LASALLE, INC.
In my last post, I mentioned a presentation given at the 7×24 Exchange International Fall Conference by Kevin Kealy, Security Architect at AT&T, regarding the lack of security measures protecting control systems and the growing popularity of control systems as a target for hacking. Two of the topics Kevin touched on during his presentation were the Stuxnet virus and the Son of Stuxnet, DUQU.
Interestingly enough, in preparing for this posting, I did a little research to see if anything new had been discovered about the DUQU virus and found on the Symantec site (http://www.symantec.com/connect/w32-duqu_status-updates_installer-zero-day-exploit) a notification that CrySyS, the group that initially discovered the original DUQU binaries, has since located an installer for the DUQU threat. Until now, no one had been able to recover the installer for the threat and, therefore, there was no understanding of how it was infecting systems. We now know that the installer file is a Microsoft Word document (.doc) that “exploits a previously unknown kernel vulnerability that allows code execution.” Symantec and Microsoft are working toward issuing a patch and an advisory.
What’s really scary here is that, similar to the Stuxnet virus, this virus was created to specifically target its intended recipient, and its shell code ensured that it would only be installed during an eight-day window in August 2011. The virus has a shelf life of only 36 days, after which it removes itself, making it almost undetectable. The installer that was identified is the only one found so far, but other infection methods may have been used. Fortunately, most security software vendors have already detected and are blocking the main DUQU files, which partially prevents an attack. However, once DUQU is able to penetrate an organization through the zero-day exploit, the attackers can command it to spread to other computers, even to machines not connected to the Internet, by using a file-sharing C&C protocol that relays through another compromised computer that can connect to the Internet.
As of November 3, 2011, six organizations in eight different countries were confirmed to have been infected.
This leads me to my next point… There’s no time like the present to secure your IT environment. While I was reading an article this morning titled, “Top 10 Dumb Computer Security Notions and Myths”, written by Fahmida Y. Rashid (http://www.eweek.com/c/a/Security/Top-10-Dumb-Computer-Security-Notions-and-Myths-740587/?kc=EWKNLEDP11282011A), it occurred to me that too many organizations have become complacent about security issues. The article highlights a keynote speech given by Charles Pfleeger (Pfleeger Consulting Group) at a meeting jointly held by Kaspersky Lab and NYU Polytechnic University in New York City.
In light of these viruses and intrusions, whether your company utilizes the cloud, virtualized environments, or conventional assets, security is imperative. Mr. Pfleeger outlines the following ten ideas and myths that should be heeded:
#1 – We’ll do security later. Security should never be an afterthought. It should be designed in from the beginning;
#2 – We’ll do privacy later. Compliance issues should outweigh speed to market and privacy issues strike at the heart of compliance;
#3 – Encryption is enough. Encryption is certainly important, as practically every data breach has involved unencrypted or under-encrypted data, but architecture is equally important to ensure that the network is secure;
#4 – One tool to defend them all. A one-size-fits-all approach doesn’t work. Security solutions are very specialized and should be customized to each different environment and application;
#5 – Security must be perfect. As with everything else, balance is important. In this case, even if the solution isn’t perfect, a solution must be deployed, so the discussion becomes one of the cost-benefit equation between the level of protection and the cost of the solution;
#6 – Security is easy… DIY Security. Unless you really know what you’re doing, leave the design and implementation of the tool to a professional that has experience;
#7 – Find and Patch is sufficient. Really? Of course it’s important to continually test your systems, but this isn’t a replacement for security by design. As Mr. Pfleeger states, “True security is making sure the common issues are not in the application in the first place and addressing subtle, more complex problems that are discovered down the road.”
#8 – We aren’t a target. Everyone is a target. If you store any kind of sensitive or proprietary data or financial information, or have control systems operating your business, you are definitely a target;
#9 – No one knows about it. Some people mistakenly assume that the software their enterprise is running is obscure and, therefore, not subject to attack. This is not true. Many attacks are easily prevented but are often overlooked by developers, including the most common attack vectors: cross-site scripting and SQL injection;
#10 – We just need to train the users. While security breaches do occur when a user clicks on an infected document, training alone doesn’t address the more sophisticated intrusions that we’re now seeing proliferate.
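The SQL injection vector mentioned in myth #9 is a good example of an attack that is easily prevented by design. As an illustrative sketch (the table, column names and payload here are invented for illustration), parameterized queries treat user input as a literal value rather than as part of the query:

```python
import sqlite3

# Illustrative only: an in-memory database with a hypothetical "users" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# UNSAFE: string concatenation lets the payload rewrite the query
# so that it matches every row in the table.
unsafe = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'"
).fetchall()

# SAFE: a parameterized query binds the input as a value, so the
# payload is just an oddly named user that matches nothing.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)
print(safe)
```

The fix costs nothing at development time, which is exactly the point of myths #1 and #7: build the common protections in from the start rather than patching them in later.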
The above focuses on digital threats to the IT environment, but there are other physical security threats that exist, even in a secure data center facility. Those include:
- Power & Cooling problems or interruptions;
- Human error or malice;
- Fire, or other casualty events;
- Water leaks; or
- Air quality.
These threats are constantly being monitored, but they can still cause problems and interruptions if the underlying design was not well-conceived. Additionally, one serious threat for which certain data centers may not have designed in adequate monitoring is poor humidity control. All of the threats mentioned can and should be monitored. Over the past year, we’ve seen some unique methods for addressing human error and malice that also address inventory control. One such solution uses fixed cameras on top of each rack whose feeds go to the NOC (Network Operations Center), where they are recorded and constantly monitored. When work is being done on a particular rack, the camera captures all of the activity.
Furthermore, all of the data collected from monitoring sources can be aggregated at multiple points distributed throughout the data center. This eliminates the risks associated with single points of failure, which should be avoided at all costs.
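At its core, the environmental monitoring described above comes down to comparing sensor readings against acceptable ranges and raising alarms on excursions. A minimal sketch, with sensor names and limits invented for illustration (the temperature band loosely follows the ASHRAE recommended envelope):

```python
# Hypothetical acceptable ranges per sensor: (low, high).
THRESHOLDS = {
    "temperature_c": (18.0, 27.0),
    "relative_humidity_pct": (40.0, 60.0),
}

def check_readings(readings):
    """Return (sensor, value) pairs that fall outside their allowed range."""
    alarms = []
    for sensor, value in readings.items():
        low, high = THRESHOLDS[sensor]
        if not (low <= value <= high):
            alarms.append((sensor, value))
    return alarms

# A reading with humidity below range trips a single alarm.
print(check_readings({"temperature_c": 24.0, "relative_humidity_pct": 35.0}))
```

In practice each distributed aggregation point would run checks like this locally and forward alarms to the NOC, so no single collector becomes a point of failure.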
Addressing digital and physical security threats should be a current and prime directive for every enterprise. Finding the right tools and solutions needs to be included in strategic planning and implemented on an ongoing basis.
IMN Data Center Conference 11/9 – 11/10/2011
Wednesday, November 30 2011 | 08:34 AM
EVP - Data Center Solutions,
JONES LANG LASALLE, INC.
Yesterday and today, I attended the second annual conference put on by the Information Management Network related to Data Centers, Real Estate and Investing. The event was quite well attended and drew a very interesting cross-section of professionals from the investment community, colocation providers, enterprise users, real estate developers, wholesale data center operators and others.
The opening remarks were given by Jeffrey Moerdler, Esq of Mintz Levin (www.mintz.com), who provided perspective on where the data center industry has been, the evolution of technology and what is driving the continued changing landscape in the data center environment.
Chris Crosby, formerly of Digital Realty and now leading his new firm, Compass Data Centers (www.compassdatacenters.com), presented on the various data center models in an era of convergence, how the lines between the models are blurring, how profitability and capital expense vary from model-to-model and trending that is leading many to believe that over the next 24 months there will continue to be a significant reduction of traditional IT being hosted in-house leading to more outsourcing and migration to cloud-applications.
The Presidents’ Panel, moderated by Marty Friedman of DH Capital (www.dhcapital.com), included Todd Aaron of Sentinel Data Centers (www.sentineldatacenters.com), Avner Papouchado of Server Farm Realty, Inc. (www.serverfarmrealty.com), Pete Marin of T5-Mission Critical Facilities (www.t5-mcf.com), Jim Trout of Vantage Data Centers (www.vantagedatacenters.com) and Tony Wanger of i/o (www.io.com). Each of the CEOs had interesting comments from different perspectives. Mr. Wanger noted that the data center industry as a whole has failed to create a standardized base of analytics related to supply and demand. Mr. Aaron suggested that supply and demand is regionally based. Mr. Trout spoke about the advantages of climate and power in Eastern Washington. Mr. Marin highlighted that most enterprise users, though desirous of higher Tier rated data center space, reject the pricing associated with it and don’t really want to pay for 2N builds. Mr. Papouchado’s insight was that the more sophisticated the end-user, the clearer the definition of the project and the easier it is to actually close a deal. A broad discussion ensued related to geography and the benefits of one location over another, but I have to ask, isn’t identifying a data center location really a result of building a business case? Doesn’t (or shouldn’t) the business case trump everything else?
The panel titled “What are the Coolest Parts of Cooling you need to know?” was moderated by Anton Self of Bastion Host (www.bastionhost.com) and included Gary Cudmore of Deerns America (http://www.deerns.com/contact/united_states_of_america/?cid=71), Fletcher Kittredge of GWI (http://www.gwi.net/), Bruce Myatt of M+W Group (www.mwgroup.net), Tarif Abboushi of NTT America (www.nttamerica.com) and Jim Kennedy of RagingWire Data Centers (www.ragingwire.com). The first (and most controversial) question posed by Mr. Self asked who thought that within our lifetimes we would witness the end of mechanical cooling in data centers. Everyone on the panel except Jim Kennedy agreed. Mr. Kennedy countered that in some parts of the world, using air economization, or free air cooling, just isn’t possible. I would agree: as long as latency, redundancy or other requirements demand multiple data centers in geographically diverse locations, and as long as current server technology either doesn’t tolerate extremely high environmental temperatures or still generates excessive heat, additional cooling will be needed.
One of the most interesting panel presentations focused on Colocation and Service Level Agreements. The panel was moderated by Shawn Mills of Greenhouse Data (http://www.greenhousedata.com/), a Wyoming-based provider powered by a wind farm, and included Barry Novick of Blackrock, Inc. (www.blackrock.com), Jeffrey Moerdler (previously mentioned above) and Everett Thompson of Wired Real Estate Group (www.wiredre.com). The positions and perspectives varied greatly, with Mr. Moerdler taking a very middle ground. Mr. Novick’s opening comment was that his top two requirements in a data center are keeping the lights on and cooling the data center. Mr. Thompson’s rash comment that he relies on attorneys to negotiate the SLA and that they’re worthless anyway brought some well-directed comments from Mr. Moerdler that somewhat softened the glib remark. As expected from a thoughtful expert, Mr. Moerdler offered a list of extremely important issues and distinctions to be addressed in thoroughly negotiating a colocation service level agreement. Those included security, access control, responsiveness to unscheduled access requests, delivery of additional power circuits, uptime, maintenance, testing and repair procedures and schedules, self-help, web access to the NOC for maintenance repairs, notifications/certifications, compliance, outages, termination rights and more. Listening to this man is always enlightening. The bottom line, as agreed by the entire panel, is that the SLA is not there only to extract credits and compensation from the operator or service provider but rather, as Mr. Moerdler so eloquently stated, “to act as a guideline for building operations and as a disincentive to the operator to avoid and prevent outages and bad practices.” I couldn’t agree more.