Wednesday, April 29, 2009

Swine flu vulnerability

Yes, it is contagious and affecting technology too…mostly spam, but also domain registration, for starters.
McAfee is reporting that 1 in 50 spam emails contains junk messages from compromised hosts, prompting recipients to visit sites for cures, updates, etc. If not to create havoc, then to make money, right? There are also reports that variant forms of the “swine flu” name being registered as domain names are on the rise, along with a sell-off of the same.

While your inbox is busy enough, I bet your company’s Emergency Management and BCP teams have woken up as well, flooding your inbox (for everyone to reply to and forward ;). The best recommendation is probably to stay the course: stick to your plans and adapt accordingly, based on confirmed understanding and in a manageable way.

There are many official sites, so here are just a few advisory links and a health map
http://www.fdic.gov/news/news/financial/2008/fil08006.html
http://www.fdic.gov/news/news/financial/2006/fil06025.pdf

Tuesday, April 28, 2009

Network IPS/IDS selection criteria (part I)

Choosing the right technology solution can be a daunting task, particularly when meeting the wish list of requirements does not always equal the cheapest buy. So when it comes to an Intrusion Detection/Prevention System, let’s look at some consideration points to aid in the investment decision.

To continue business as usual while providing that needed layer of defense, performance degradation can often be the drawback. Understanding and testing the inspection throughput is essential to ensure sufficient processing of any given network segment, as well as the speed at which the IPS/IDS is able to analyze and react to the growing signature list and volume of vulnerability exposures. Perhaps the most critical and fundamental component is the quality of the signatures: how customizable they are and the reputational quality of the input (including zero-day inclusion). It holds true in nearly all cases that the output is only as good as the input; with an IPS/IDS, that means many more false positives as the downside. Understanding your “normal” traffic, along with the best fit in terms of statistical and behavioral considerations, can save both time and money.
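The false-positive point above can be sketched in a toy example: an overly broad signature fires on benign traffic, while a narrower, context-aware one stays quiet. The signatures and payloads here are made up for illustration and bear no relation to any real IDS ruleset.

```python
# Toy illustration of signature quality vs. false positives.
# Real IDS engines use far richer matching (protocol decoding, state,
# regex, reputation); plain substrings are used here only to make the point.

NAIVE_SIGS = {"select"}                      # too broad: matches ordinary text
TUNED_SIGS = {"union select", "' or 1=1"}    # narrower, context-aware patterns

def alerts(payload, signatures):
    """Return the set of signatures that match this payload (case-insensitive)."""
    p = payload.lower()
    return {sig for sig in signatures if sig in p}

benign = "Please select a seat for your flight"
attack = "id=1' OR 1=1 --"

# The broad signature flags benign traffic (a false positive)...
assert alerts(benign, NAIVE_SIGS) == {"select"}
# ...while the tuned set stays quiet on it and still catches the attack.
assert alerts(benign, TUNED_SIGS) == set()
assert alerts(attack, TUNED_SIGS) == {"' or 1=1"}
```

Knowing your “normal” traffic is exactly what lets you tune the second set instead of living with the first.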

Putting it all together, the best approach is integration and correlation with other security tools and logs to deliver an appropriate level of confidence in the results. Managing all these components sometimes opens the discussion of outsourcing. Managed IPS/IDS services can augment an organization’s head count as well as its skill set…

Finally, it’s always beneficial to assess the financial viability of the vendor and their strategic appetite. Some IPS/IDS vendors are offering supplemental support for NAC enforcement, DLP integration, and rate limiting by prioritizing traffic via pre-defined security criteria and protocol/service types.

If you drink the Gartner Kool-Aid, then the choice is TippingPoint, McAfee, Sourcefire, and Juniper Networks, followed closely by Cisco and IBM. And also ponder the SANS review.

Monday, April 27, 2009

Cloud Computing security

Systems were once considered secure, yet no matter how secure you think you are one day, you get less secure each day (with new vulnerabilities, etc.), and face it: growth is directly proportional to threat. The more data you store, the more systems/hardware you use to store that data, thus increasing the number of potential attack vectors, points of failure/exposure, and inherent risk. So whether you are considering SaaS or PaaS (software/platform as a service), you may be putting all your eggs in a single data center basket; yet it’s in virtual space where everyone is storing their data… and cyber criminals are seeing a pot of gold.


I often wonder how cloud providers decouple data and prevent commingling of data, connections, and exchanges at a price that is still more cost-beneficial than traditional and proven methodologies. And if it is feasible to do so, then at whose cost?
Ever tried auditing a (managed) service provider at will, being allowed to inspect all security controls, including privileged user access related to the storage and processing of your data from ingress to egress points (i.e., the entire “cloud”)? Once you actually figure this out and conclude reasonable assurance in leaving your data with someone else (and the control effectiveness), who is really responsible when something happens in the eyes of the law? Hence: jurisdiction, regulatory compliance and due diligence, and liability (flexible enough to meet your business and technical needs day-in and day-out).

Finally, who really benefits from cloud service offerings? You leave your data there, and they are able to massage the data and analyze trends/behaviors; even if direct revenue is not generated (yet), think about the competitive advantage that can result!

So is it hype or strategic advantage? What is the long-term viability for the current cloud services offering?

Gartner and quadrants, did you ask? http://www.gartner.com/DisplayDocument?id=685308

Sunday, April 26, 2009

Input validation

...is the key to application security. Think about it: if coders/developers would validate all input (good, bad, and the non-normal), then those would-be hackers and crackers would simply move on to easier prey, or your defense-in-depth countermeasures would preemptively alert you to the issue. That means nearly eliminating ~75% of hacks targeted at web applications! It’s that simple. Get developers to write self-defending code, with input validation being key, thereby integrating the SDLC with security in mind; and then, of course, the layered security of IPS, (application) firewalls, and multi-tiered architecture.

Stopping web applications from accepting malformed data negates the most prevalent attack vectors behind security breaches today. By constraining, rejecting, and sanitizing input, business applications would accept only known good inputs and deny unknown or unforeseen values (malicious or not). Including client-side validation as an additional measure of control, on top of server-side checks, results in more secure applications. A significant reduction in exploits can be expected, including the well-known XSS, SQL injection, buffer overflow, DoS, XML injection, and directory traversal.
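The constrain-and-reject approach can be sketched in a few lines: define what known-good looks like per field and refuse anything that doesn’t match. The field names and patterns below are hypothetical examples, not from any particular framework.

```python
import re

# Whitelist ("known good") input validation: each field has a strict
# pattern, and anything that does not fully match is rejected outright
# rather than cleaned up. Rules here are illustrative only.

RULES = {
    "username": re.compile(r"[A-Za-z0-9_]{3,20}"),
    "zip_code": re.compile(r"\d{5}"),
}

def validate(field, value):
    """Return True only if the value fully matches the field's known-good pattern."""
    rule = RULES.get(field)
    return bool(rule and rule.fullmatch(value))

assert validate("username", "alice_99")
assert not validate("username", "alice'; DROP TABLE users;--")  # rejected, not sanitized
assert not validate("zip_code", "12345<script>")                # trailing junk fails fullmatch
```

Note the rejection of the SQL-injection-looking value falls out of the whitelist itself; there is no blacklist of attack strings to maintain.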

For the latest in application security news, turn to a number of sites, including http://www.xiom.com and http://www.owasp.org

Monday, April 20, 2009

DNS recursion, a thing of the past (not exactly)

Not all servers can be the root server, so DNS recursive servers become responsible for resolving queries and managing the IP addresses of the Internet (your own infrastructure or the world wide web’s). And therein lies the issue with vulnerabilities in recursive or caching DNS server code, called cache poisoning: a fake answer is thought to be authoritative, and thus the stored cache is poisoned. As an example, exploiting susceptible recursive or authoritative DNS servers can lure users/visitors to a fraudulent site instead of the intended destination. Similarly, DoS is feasible (where clients flood requests to a single IP), as is resource hijacking resulting in degraded performance, or unnecessary load on the root servers.


Of course this wouldn’t happen if you just set up your DNS servers as non-recursive, right?
You could always disable UDP/53 at the router level to rid yourself of recursion entirely, but where’s the flexibility or support for large providers, for example? And at the server level, where zone transfers and BIND are implemented… perhaps both: DNS servers at the infrastructure level with ACLs for specific DNS servers, and modifying static and DHCP assignments to refer to the right hosts. Otherwise consider Unicast RPF or BCP38, or just know your traffic to no end…
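When checking whether one of your own servers still answers recursively, the tell is the RA (recursion available) bit in the DNS response header. A minimal sketch of parsing that bit from raw header bytes, with hand-crafted sample headers standing in for real network responses:

```python
import struct

# The DNS header (RFC 1035) is 12 bytes; bytes 2-3 hold the flags word:
# QR | Opcode | AA | TC | RD | RA | Z | RCODE. RA is bit 7 (0x0080).

def recursion_available(dns_response: bytes) -> bool:
    """True if the RA (recursion available) bit is set in a DNS response header."""
    if len(dns_response) < 12:
        raise ValueError("a DNS header is at least 12 bytes")
    (flags,) = struct.unpack("!H", dns_response[2:4])
    return bool(flags & 0x0080)

# Hand-crafted headers (id, flags, QD/AN/NS/AR counts):
# 0x8180 = QR + RD + RA set; 0x8100 = QR + RD only, no recursion offered.
open_resolver = struct.pack("!HHHHHH", 0x1234, 0x8180, 1, 1, 0, 0)
closed_resolver = struct.pack("!HHHHHH", 0x1234, 0x8100, 1, 1, 0, 0)

assert recursion_available(open_resolver)
assert not recursion_available(closed_resolver)
```

In practice you would capture the first 12 bytes of an actual UDP/53 response to your server and feed them to the same check.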

To aid in verifying your own DNS recursive query setup, http://recursive.iana.org/

Thursday, April 16, 2009

Security Breach, 2008 Verizon report released

Retail (31%), Financial Services (30%), Food and Beverage (14%), and then Manufacturing (6%) represent the industries with the most breaches. The headline numbers: 285 million records compromised across 90 confirmed breaches; 74% from external sources (median 37,000 records/breach), yet the highest median is still caused by insiders, made up of end users and IT admins equally (median 100,000 records/breach), with partners at 27,000 records/breach; 91% of breaches linked to organized criminal groups; significant errors contributed to 67% of the breaches, while 64% resulted from hacking; third parties discovered 69% of the breaches; 81% of victims were not PCI compliant; and 20% of the cases involved more than one breach. In addition, 13% of breached organizations were involved in mergers and acquisitions, and breaches by source IP come from Eastern Europe at 22%, East Asia at 18%, and North America at 15%. And what I found equally interesting: the attack pathways of Remote Access and Management versus Web Application account for 22% and 21% of breaches, respectively, yet 27% vs. 79% of records breached.
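A quick bit of arithmetic shows why the report quotes medians rather than averages: 285 million records over 90 breaches gives a mean in the millions, while the per-source medians sit in the tens of thousands, meaning a few mega-breaches dominate the totals. Only the 285M/90 figures below are from the report; the sample breach sizes are hypothetical numbers chosen to illustrate the skew.

```python
import statistics

# Report totals: 285M records across 90 confirmed breaches.
total_records = 285_000_000
confirmed_breaches = 90
mean_per_breach = total_records / confirmed_breaches
assert round(mean_per_breach) == 3_166_667   # ~3.2M mean vs. ~37K median quoted

# A skewed toy sample (hypothetical) shows the same mean/median gap:
sample = [5_000, 20_000, 37_000, 60_000, 150_000_000]
assert statistics.median(sample) == 37_000
assert statistics.mean(sample) > 1_000_000
```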

How about those stats from Verizon, based on (some 500) cases they have been involved with… so how much would the stats/distribution change based on all breaches reported and tracked by others?

With those stats in mind, simply don’t retain the data, right? When that ideal scenario isn’t possible, retain only what is absolutely required, then protect the keys to the kingdom to no end. Secure credentials, validate all input within an SDLC process (against XSS and SQL injection, for example), and eliminate errors in coding as well as in ACLs. And remember: while hackers/crackers are getting more sophisticated, attack difficulty still remains relatively low (so they just know when to hit ya); but when attacked, the more sophisticated/complex attacks result in more damage.

And, for a read on some of the notable legal cases, turn to this link:
http://www.lawyersandsettlements.com/search.html?keywords=security+breach

Thursday, April 9, 2009

PCI, a small piece on the downstream folks

It never ceases to amaze me how open to interpretation the DSS (Data Security Standard) requirements can really be (in real-world scenarios). As a former player in the related realm and QSA spectrum, I continuously come across “it depends” when implementing controls that satisfy specific requirements. Today’s opinion is just on downstream service providers. As part of requirement #12, you should be holding your service providers to the DSS requirements and, as such, to the security of the cardholder data. However, an interpretation can be made that holding them responsible can simply (though nothing is really that simple) mean an agreement or contract (which they have to sign upon onboarding or upon renewal) claiming such adherence.

Now the key is adhering to the DSS requirements, which means you only need to validate that the service provider is on track for compliance (not necessarily certification just yet). Thus, if you are the downstream provider, then perhaps time can be leveraged as you head toward the end zone of compliance while not yet certified. Of course, you can argue the two are synonymous and a good organization would just conform to all the required controls, because our budgets are endless, right?

Oh, by the way, check out the Clarifications section, which lists more refined statements for implementation requirements.

Know of a better reference? Let us all know. https://www.pcisecuritystandards.org/ and
https://www.pcisecuritystandards.org/pdfs/OS_PCI_Lifecycle.pdf