Friday, May 29, 2009

PCI 6.6 clarification - requirement past due

While the PCI DSS requirements have been set for some time, it’s the clarification documentation and written interpretation that matter…and I’ve said it before, your QSA is key to supporting your interpretation/implementation. After all, it’s their credibility on the line and they are the ones who answer to Visa, PCI, etc.

So, here’s the 6.6 clarification just in case you haven’t figured out the code review and application firewall topic (though the due date was June 30, 2008):

Taken straight from PCI standards:
Option 1 is Application Code Review--which can be done via one of four options:

1. Manual review of application source code
2. Proper use of automated application source code analyzer (scanning) tools
3. Manual web application security vulnerability assessment
4. Proper use of automated web application security vulnerability assessment (scanning) tools

In other words, either a manual (qualified staff or independent firm) or tool-based inspection/injection of non-standard inputs, followed by analysis of the results; the key being measures to protect against common types of malicious input. I stress this because interpretation here is subject to vary as well.

Option 2: Application Firewall--and this is in addition to all other requirements including firewall, logging, etc.

It should go without saying that all other requirements are still requirements—including the compounding requirements that relate:
6.3 – SDLC and secure software development process
11.3.2 – application-layer penetration testing to ensure applications don’t have vulnerabilities
6.5 – common vulnerability testing, which primarily maps to OWASP, including: input validation, least privilege, default deny, data sanitization, security policy design, quality assurance techniques, and once again, defense in depth.
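To make the 6.5 input-validation point concrete, here is a minimal sketch of two of the measures above: default-deny whitelist validation and parameterized queries. The table, field name, and ID format are hypothetical examples, not from any PCI reference implementation.

```python
import re
import sqlite3

# Hypothetical whitelist for a customer ID field: digits only, 6-10 characters.
CUSTOMER_ID_RE = re.compile(r"^\d{6,10}$")

def lookup_customer(conn, customer_id: str):
    """Validate input against a whitelist, then use a parameterized query.

    Rejecting anything outside the expected format (default deny) and never
    concatenating user input into SQL are two of the OWASP-style measures
    requirement 6.5 calls out.
    """
    if not CUSTOMER_ID_RE.match(customer_id):
        raise ValueError("invalid customer id")  # default deny
    cur = conn.execute("SELECT name FROM customers WHERE id = ?", (customer_id,))
    return cur.fetchone()

# Demo with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id TEXT, name TEXT)")
conn.execute("INSERT INTO customers VALUES ('123456', 'Alice')")
print(lookup_customer(conn, "123456"))  # ('Alice',)
try:
    lookup_customer(conn, "123456' OR '1'='1")  # classic injection attempt
except ValueError as e:
    print("rejected:", e)
```

Notice that even if the whitelist were bypassed, the parameterized query treats the input as data, not SQL: defense in depth again.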

Tuesday, May 26, 2009

ready – set – go (HIPAA countdown)

With funding approaching 800 BILLION DOLLARS, the American Recovery and Reinvestment Act (ARRA) is imposing compliance timelines...so basically you have till February 17, 2010 before HIPAA (and Centers for Medicare & Medicaid Services) audits:

  • February 2009 – ARRA Civil monetary penalties are placed
  • April 2009 – HHS (Health and Human Services) guidance on Securing EHR (Electronic Health Record) published
  • August 2009 – HHS and Federal Trade Commission (FTC) interim security breach notification to be released
  • December 2009 – HHS to release adoption of initial prioritized set of standards
  • January 2010 – Deadline for complying with accounting disclosure rules
  • February 2010 – HHS will begin auditing (of HIPAA entities) including requirements for implementation of Business Associate Agreements; as well as enforcement of rights of electronic access of records
Like California’s breach legislation SB1386, this will be federal law governing PHI data breach notification, tagged with a criminal offense pursuant to provisions of the Social Security Act.
To patients, this means we are entitled to be notified of PHI disclosure and have the option of prohibiting disclosure of PHI to insurers and other Business Associates if treatment is paid out-of-pocket. To covered entities and business associates, rendering PHI indecipherable or unusable results in an optional safe harbor from the new data security breach notification requirement. Unreadable PHI primarily translates to encryption and/or destruction, whether “at rest” or “in motion”; destruction pertains to shredding either paper or electronic media; and both reference the National Institute of Standards and Technology (NIST) for clarification.

That said, if your data is “PHI secured”, then it’s actually exempt from breach notification requirements (not to mention HHS cases involving fewer than 500 people affected). Things that make you go...hummm

Some parallels exist with other existing security and breach requirements, and even implementation of the requirements draws similarities with PCI compliance and its risk-based approach (see the Security Rule). Nothing new there, but a line is starting to be drawn in the sand much like PCI...as is the extended grace period for compliance.

...a continuation of my prior post on HIPAA and HITECH; as well as some light reading on the American Recovery and Reinvestment Act of 2009; better yet, more later on Medicare Section 111. Moreover, compliance effort can be reduced by integrating and standardizing the number of claims and payment systems, or by simply (but not always practically) reducing the sourced data.

Wednesday, May 20, 2009

Agile Security – CheckPoint Road Show

The maturity of an organization will often exemplify the pursuit of efficiency. But does that necessarily translate to being better, and what does that say about one’s innovation?


At a recent CheckPoint event the presentation revolved around building security agility into today’s IT and business technology. As Gen-X becomes a thing of the past and Gen-Y now influences business as much as our personal lives, both technologists and business owners must adapt to the always-connected society of Gen-Ys. Think about it…today’s high schoolers (and perhaps younger) are texting as a primary means of dialog, building resumes and conducting business on Facebook, not to mention wielding the power of information sharing at the press of a button. Oh, yeah. That’s exactly what office professionals are doing today…from Blackberry email to iPhones/Twitter and some form of business-approved P2P, FTP, or slide sharing. These media allow us to collaborate at a global level, which is revolutionizing the now future. So, how do we continue to become leaders, movers of innovation and strategic by nature?

Joel Snyder rendered 4 characteristics of agile security in his keynote presentation—a refreshing topic in comparison to the usual technology events. You and your organization must be agile to succeed; hence, the ability to be flexible and adaptive and not just good at what you are doing.
• Design defense in depth
• Yield to standards based approach (with multiple/independent vendors)
• Build perimeterless network to gain greater control
• Cease from being event-driven
First, the implementation of layers of security allows you to be more flexible and ultimately more relaxed as you are assured the various components are protecting your coveted information—from the anti-virus at your desktop, IPS, WAF (Web Application Firewall); NAC (Network Access Control) and the VPN or encryption.
Next, adopting a standardized approach or solution set becomes critical when the future landscape changes and altering components, not the whole solution, is required. In parallel, the selection of vendor or product can be influenced by the “preferred” and/or incumbent, and therefore the effort (or lack thereof) put forth might just suffice. While the perfect solution is not always what’s needed nor always selected, the point is to be vendor agnostic and open to varying solutions, which ultimately should be interchangeable and integration-receptive for the evolving strategy.
Also, the days of having a centralized point for all delivery or activity, whether the old mainframe or the more recent single network hub-and-spoke scenario, are over. Growth commands the ability to make alterations with limited impact on all else related. Therefore, having multiple delivery points, or extending the perimeter into more manageable components, will not only allow you to be more flexible but will enable you to better adapt and be more nimble to change.
Finally, we all strive to be proactive, and not waiting for an event before soliciting a reaction is the key. Advances in technology will occur at a rate perhaps faster than you can keep up with. But a strategic approach and forethought are critical to anticipating the future, or at least to having the ability to adapt. For example, who would have thought SSL-VPN would be the choice for today’s business; yet the technology was around (just not as efficient and easy to install as IPSec-VPN).

A rejuvenating perspective on justifying the information security agenda, yet the business challenges remain in finding the “right” solution that meets today’s demands with the needs and consideration of tomorrow …

Monday, May 18, 2009

Apple patch Tuesday

No typo, it’s Mac OS X 10.5.7’s turn to apply patches, not just performance enhancements but security patches correcting 47 security issues (from PHP and Safari to Adobe and Flash Player updates). Updating various components and applications will require you to restart your Mac...and this is on top of the 90+ vulnerabilities related to code and 3rd-party applications (Security Update 2008-02 with Tiger/Leopard) already released a couple of months ago.


Like other critical vulnerabilities, some are attributed to potential arbitrary code execution (and in some instances malware infection) related to Apache, ClamAV, Flash Player, and Adobe [Security Update 2008-003]…an infinite loop on enumeration requests in AFP Server, a heap buffer overflow in CoreText, memory corruption via movie/codec files, and unprivileged local access to the 'Download' folder.

So while Apple has issued a press release advocating the patching and a forthright position on security and public awareness, they offer no further comment on the prolonged release cycle...
Security Update 2009-003
Security Update 2009-002
Security Update 2009-001
On a concurrent note, flaws are starting to surface in Adobe Acrobat (the second this year), along with other vulnerabilities including the latest zero day related to JavaScript, allowing arbitrary code execution or remote denial of service (DoS) through annotation and an OpenAction entry via JavaScript code.

Friday, May 15, 2009

Interpretive DSS implementation

It’s no surprise the ultimate decision in getting that stamp of approval, AKA the Report on Compliance (ROC), is in the hands of your QSA (Qualified Security Assessor). And while they are your primary voice or point of contact into Visa, you can hold direct conversations with the PCI committee/representatives (specifically to discuss DSS compliance…yet they’ll just assert the QSA and his/her organization are liable for acceptance and interpretation leading to a ROC). While Visa and PCI can set guidelines, requirements, etc., your QSA is the gatekeeper to getting your company’s name on the approved list.

So why, then, is the interpretation of the requirements left up to the QSAs—given the sheer number of “security boutiques” with QSA opinions and practices that vary (i.e. how conservative is yours)? Moreover, why are the PCI requirements left to one’s “qualified” interpretation? Is it any different from SOx as it pertains to financially significant systems/controls; or how different is it from SAS70-based scope for assurance? Though there is one difference in these compliance reports, in that I don’t recall any BIG4 or other CPA firms still conducting ROC attestation (i.e. just the side business of PCI compliance, not certification).

Of course, everyone should be PCI compliant for credit card numbers [I don’t need any more mysterious green fees popping up], but what about all the other PII data contained within your 4 walls or consultants’ cubes, as well as the business partners you share it with…
So, let’s visit some variations in the implementation of these requirements. Oh, by the way, (in getting an edge) try getting your competitor’s scope for their ROC…good luck.

Requirement 3.3 by definition permits displaying at most the first 6 and last 4 digits of the PAN (primary account number), masking the portion in between. Then real world and reality chime in. As a function of your “business” and the requirements of your customers, displaying the first half of the PAN may actually be necessary for absolute comparison in cases of disputes, for example. Provide this compelling argument to your QSA and yes, they’ll agree the existing practice (provided it’s consistent and justifiable) will be accepted. Your alternative is a complete overhaul of your existing application and processing models (not to mention the technical reconfiguration and programming that’s involved). Is this meeting the underlying objective PCI set forth? You tell me.
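As a sketch of what 3.3-style masking looks like in code (the function name and format checks are my own, not from any PCI reference implementation):

```python
def mask_pan(pan: str, first: int = 6, last: int = 4) -> str:
    """Mask a PAN for display, keeping at most the first 6 and last 4 digits
    and replacing everything in between with '*'."""
    digits = pan.replace(" ", "").replace("-", "")
    if not digits.isdigit() or len(digits) < first + last:
        raise ValueError("unexpected PAN format")
    return digits[:first] + "*" * (len(digits) - first - last) + digits[-last:]

print(mask_pan("4111111111111111"))  # 411111******1111
```

The business-driven variations discussed above amount to changing the `first`/`last` parameters, which is exactly the kind of deviation you would need to justify to your QSA.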

When you need time to stand still for a little while, particularly if you are not the Merchant or Service Provider being held directly accountable by Visa or the acquiring bank, push back on being fined by the upstream providers by leveraging existing contracts/agreements. Have the acquiring bank or provider share the burden of compliance/implementation cost…so in turn they can quickly satisfy the vendor requirement section. Basically, as a downstream vendor of card numbers, you slide under the Visa radar, so Merchants/Providers, in fulfilling their own certification, have to place reliance on existing contracts/SOWs/agreements (signed perhaps prior to the heightened PCI awareness days). Sure, you may have to pay “credits”, which is most often less expensive (in the short term) than complete remediation. Just keep in mind the risky business and repercussions of an actual security breach, downstream/upstream, or sideways use of PAN (should it occur prior to compliance).

Do share your interpretation of specific requirements and the spin on its implementation.

Tuesday, May 12, 2009

SecureChicago - Forensics

ISC(2)’s most recent SecureChicago topic was forensics. Though the day-long session was focused on bag-and-tag, physically securing/preserving the data and the surrounding environment, participating vendors stretched their product/service offerings to match the same. While some of the more renowned vendors such as EnCase were not present, others did cover related forensics topics including application security. Yes, application ties to data breach seem to be synonymous these days and are represented in any security forum.


A clear theme in the morning discussion was the delineation between investigation work (i.e. Incident Response) and forensics. Clearly two similar topics, yet very different/disparate practices when done appropriately, and the distinction between the two should be understood in the real world (particularly in the legal sense). Standard of proof, evidentiary concerns, and admissibility come to mind in a forensic lifecycle of acquiring, authenticating, and analyzing. Computer forensics has only been around for about 25 years, compared to the original forensics of author attribution. But the principle is the same: Locard’s Exchange Principle states that the contact of two things will result in an exchange and therefore a trace of evidence.

And in the eyes of the legal system, “chain of custody”, not checklist, is also terminology worth noting. Chain of custody is the documentation of what, where, who, and how evidence is processed in a repeatable manner (by which it can be redone with effectively the same results), preserving integrity. A checklist, on the other hand, can be more detrimental to a case, since each investigation varies and purposefully skipping a step (regardless of applicability) may pose questions around integrity and ultimately lead toward reasonable doubt.
The use of technology can be the silver bullet in a case; however, the lack of understanding of it, and more importantly the inability to properly present the evidence, has also proven to be its gotcha. Judges and even a jury of peers are most likely not going to have a technical background or be an ISC(2) member; therefore, presentation is another key factor (being conscious of technical jargon and industry acronyms).

I digress. To wrap up, when doing cyber forensics, consider the following: know your company policy (don’t operate in a vacuum); documentation is everything; ensure a repeatable and verifiable examination process; don’t exceed your knowledge (e.g. there are about 85 different operating systems, and who’s an expert at more than one); and understand the purpose/scope (criminal, civil, regulatory, or administrative investigations). Also, on the technical note, consider having more than one tool; use write blockers when conducting (image) analysis; keep more than one copy, verified using MD5 or SHA256 hashes; and mine the Windows registry and other data sources including SIDs, swap space, slack file/RAM space, ADS (alternate data streams), printer EMFs, and index.dat. Finally, from a global perspective, remember that laws vary, in addition to extradition laws, if they even exist.
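The "more than one copy, verified by hash" advice is easy to sketch. This illustrative helper (not from any forensic tool) computes multiple digests of an evidence image in one pass, so each working copy can later be shown bit-for-bit identical to the acquisition:

```python
import hashlib

def hash_evidence(path, algorithms=("md5", "sha256"), chunk_size=1 << 20):
    """Compute several digests of an evidence image in a single read.

    Recording both MD5 and SHA-256 at acquisition time supports the
    'repeatable and verifiable' property: re-running this on any copy
    must reproduce exactly the same hex digests.
    """
    hashers = {alg: hashlib.new(alg) for alg in algorithms}
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):  # read in 1 MiB chunks
            for h in hashers.values():
                h.update(chunk)
    return {alg: h.hexdigest() for alg, h in hashers.items()}
```

Hashing with two independent algorithms also hedges against a collision attack on any single one, which matters given MD5's known weaknesses.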

In the end, “work for the truth” no matter which side you find yourself on, forensics or anti-forensics. To help out with legal jargon and other things, here are some useful links:
Federal Rules of Evidence
Searching and Seizing Computers and Obtaining Electronic Evidence in Criminal Investigations
Federal Wiretap Act
Computer Records and the Federal Rules of Evidence
Electronic Communications Privacy Act of 1986
Forensic Boot Disk
Netwitness

Monday, May 11, 2009

HIPAA eyes and teeth with latest Virginia PMP breach

Eyes are now set on hackers who broke into a Virginia Prescription Monitoring Program (PMP) web site, deleting over 8 million patient records and over 35 million prescription records. And to clearly convey the hackers’ intent, a ransom note was posted on the web site last week and the site was locked with a password. The going price is $10 million for the return of the healthcare records; and apparently the website is still inaccessible (though taken offline soon after April 30th when discovered). If you recall, just over a year ago Express Scripts had a similar healthcare data extortion attack. Also recall CVS Caremark Corp’s $2.25 million settlement of a federal investigation for not properly disposing of patient information, then having to implement appropriate security for all locations and have external auditors/assessors evaluate compliance for 3 years. BTW, a persistent trend in situations of non-compliance or wrongdoing and the resulting effects and penalties.


Makes you think whether the stimulus bill provided enough safeguards for your digital healthcare records. The new rule mandates notification of people impacted by a breach (though only in cases of over 500 affected), strict enforcement and penalties, and authorization of State Attorneys General to bring civil action against perpetrators. Further, the recently enacted Health Information Technology for Economic and Clinical Health Act (HITECH) under the American Recovery and Reinvestment Act (ARRA) is to hold business associates responsible (where applicable) for complying with HIPAA. As such, business associates would be subject to civil and criminal penalties (not solely covered entities) for noncompliance…from security officer appointment to policy, training, and risk assessment implementation. Then again, without formal guidance issued within the HIPAA privacy and security sections, enforceability, let alone effectiveness, is a mere compliance talking point.

On a side note, physicians will now be required to comply with both the HIPAA privacy and security rules without additional stimulus aid…and that pertains to all associates/partners assisting physicians with protected healthcare information. Wonder how these expenses will be passed down…to patients, perhaps?

Regardless, while HITECH will make monetary penalties mandatory, demonstration/proof of willful neglect of compliance duties will be necessary.

So where are all the stimulus dollars going related to this topic? In the neighborhood of $31 billion over the next 5 years will go to healthcare infrastructure, largely flowing from Medicare and Medicaid incentives to both physicians and hospitals for safeguarding electronic health records (EHR). So as you read articles indicating HIPAA’s new teeth on security, keep in mind its bite might not be as tight.

Tune into wikileaks and the National Institutes of Health on ARRA

Friday, May 8, 2009

Commoditized PEN testing services

The vendor selection process always seems to intrigue me, particularly regarding the maturation of penetration testing. Ultimately decisions are based on $$$ and perhaps the underlying business driver or Executive owner behind the curtain. But the process, time and effort spent is still...spent. In the commoditized sector of penetration testing, what are the differentiators among the top vendors pitching their well-rooted staff, each with 10+ years of penetration testing experience, renowned authors on the very subject, and a list of guppies that have used them for whatever reasons in the past?


Think about it...as I have been on both sides of the fence: what is really produced, whether the method used is black-box, white-box, or fuchsia-box? And consider the break point when conducting the PEN test is a competitive advantage, and when it becomes a disadvantage or poses more risk to the organization.

Let’s see--the first step in any PEN test is reconnaissance via network scanning and sometimes social engineering to identify hosts/targets. Once the topology is laid out, the services are probed for published vulnerabilities or known exposures, and enumeration commences. Then comes trial-and-error, banging away at each possible attack vector, exploiting the host with nothing more than free tools (some I’ve mentioned already or listed on my favorite sites: for scanning--angryIP, nmap; cracking--brutus, SQLdict, pwdump; sniffing--wireshark and netstumbler; and utilities--netcat, ldp.exe, vncviewer; plus the more commercialized WebInspect and AppSec, but also nessus and metasploit); otherwise just buy an expensive tool covering all OSI layers, with customization options and fancy reporting.
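The reconnaissance step above boils down to something like this simplified TCP connect() scan, a toy version of what nmap automates (the target address in the comment is a placeholder from the documentation range):

```python
import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    """Report which TCP ports accept a full connect() on the given host.

    This is the noisiest, simplest scan type; real tools add SYN scans,
    service fingerprinting, and timing controls. Only run this against
    hosts you are authorized to test.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example against a hypothetical target:
# tcp_connect_scan("192.0.2.10", [22, 80, 443])
```

A full connect() completes the TCP handshake, so it shows up plainly in target logs, which is exactly why the judgment call on attack methods (next paragraph) belongs up front.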
The judgment on and expectations of the attack methods are best decided up front to minimize impact and keep a business-centric approach. Additional crafty techniques (and one can argue the value decisioning) may also include spoofing, key-loggers, zone transfers, and of course XSS and SQL injection. While being able to snoop around is gratifying for some perpetrators, as is launching DDOS attacks, etc., the best payoff is domain administrative access. From there they’re owned without knowing it (at least for a period of time).
From the vendor side, they’ll also want to look at your InfoSec policies and security strategy to align their findings, but mostly to begin the work of selling you more services based on the information they gathered, including risk management analysis and as many compliance services as you’ll sign up for. Oh, here's an insider tip: whether it be your own staff or a hired vendor that conducts PEN testing, ensure/validate that they have removed all traces of their efforts, including the domain admin account they used to own the system.

In the end you get a long list of hosts, some that had vulnerabilities, and hopefully a smaller list of hosts actually compromised (pending how far you’ve told the vendor to go). But that’s where the rubber hits the road, as they say. Depending on the amount of time you’re allowed (or willing to pay for) the “testing”, and the risk you pose to your environment during the process, the value really comes down to how good the PEN tester is….not to mention how much you’d like to learn about the environment. Meaning, once the keys to the kingdom are acquired, how much do you really need to know about other attack vectors [thinking exploratory and discoverable evidence, not to mention cost perspective]? You get what you pay for, but face it: what company absolutely needs (or actually wants) to know about every security deficiency? Hence, isn’t your in-house techie good enough, or would your cheapest PEN vendor proposal suffice (outside of actual regulatory requirements for 3rd-party testing)? As a point to conclude, or perhaps the start of a separate discussion, there’s zero-day testing, which is basically/mostly web-based anyway, right? Notice I really didn't discuss Ethical Hacking, which others would say is the same...

Wednesday, May 6, 2009

Virtualization, aggressive adaptation

Virtualization is here, and like it or not, your organization is implementing it (regardless of your security practice model or current maturity level). Perhaps it’s just cool technology; but then again, consider the business advantages related to server consolidation, disaster recovery or high availability, cost containment, and perhaps efficiency too. A (centralized) server-set that acts like any other, configured for on-the-fly fault tolerance while possibly reducing network traffic switching; and finally, some studies/implementations report nearly 600% ROI in just a year (sounds a bit high, but anything’s possible).


So leverage your existing information security policies, include a couple other key control considerations and let it ride.

First, lock down the O/S like any critical system, from physical environment to patching and IDS/anti-malware, to start protecting the physical and virtual assets. Disable any unused services (e.g. rely on SSH instead of the native VI console), provide extra safeguards around system files including .vmx files, and ensure appropriate logging of access, etc. And disable unnecessary server functions, prevent unauthorized devices from being connected, and remove those already connected. Beginning with these safeguards will limit security exposures, including “hyperjacking”, the ability for attackers to compromise the entire virtualized environment.
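The "extra safeguard around .vmx files" point can be audited with a few lines of script. This sketch (directory layout and the group/world-writable policy are my own illustrative assumptions, not a VMware-published check) walks a datastore path and flags loosely permissioned .vmx files:

```python
import os
import stat

def check_vmx_permissions(root):
    """Walk a directory tree and return .vmx files that are group- or
    world-writable -- a simple way to audit configuration-file safeguards.
    """
    findings = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.endswith(".vmx"):
                path = os.path.join(dirpath, name)
                mode = os.stat(path).st_mode
                # flag anything writable beyond the owner
                if mode & (stat.S_IWGRP | stat.S_IWOTH):
                    findings.append(path)
    return findings
```

Run periodically (or from your change-management tooling), the output becomes an exception report rather than a one-time hardening step.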

Access privilege levels should always be reviewed in any implementation, limiting Guest and other accounts or operating system functionality to achieve a least-privilege model. Appropriate file permissions and proper integration of LDAP or NIS directory services are essential in narrowing the attack surface. If all this sounds familiar…it should. The fundamental core of these practices relates to the information security protection of any mission-critical company computing resource.

The next layer of defense is the networking-side component. The system firewall should be enabled to limit TCP/UDP ports and services, and network segmentation/isolation should be configured (hopefully more than simply Layer 2 VLANs). MAC address spoofing, the option to accept requests destined for a MAC address other than the effective one, should be disabled. Additionally, ensure the forged transmits setting is configured such that the source MAC address is compared against the effective one. And if a SAN (Storage Area Network) is used, ensure LUN masking or zoning is exercised, reducing zone visibility and selectively presenting only necessary storage information. Other considerations include: not using nonpersistent disks so offline access/vulnerability is not feasible; disabling virtual disk modification to prevent elevated-privilege exposures; and not allowing promiscuous mode on network interfaces so that packets cannot be read across the virtual network.

Lastly, when it comes to logging, implement what you would normally require of critical assets, including activity tracking (particularly for web access to VMware), connection/authentication records, console and error messages, agent availability and interrupts; and secure storage of log files (e.g. in ESXi 3.5 the files hostd.log, messages, and vpxa.log). When combined with file integrity monitoring (e.g. Tripwire), you’re starting to build onto your SIM/SEM platform.

For additional security controls, enable certificate-based encryption when legitimacy of the root certificate authority and client certs is required. And weigh the risk against your compliance strategy.

So I ask you, who owns virtual security in your organization? And, given your existing technology integration and existing change management effectiveness, how do you best monitor your ESX environment and how is it audited?

Monday, May 4, 2009

IPS/IDS (part 2 of 2)

...a vendor touting an award-winning network IDS solution with a flair for forensics. Admittedly, the combo (if the sales pitch held steadfast) would fit nicely in a security professional’s tool belt: limiting the number of vendor products, its promised integration with SIM (Security Information Management) solutions; and did you say DLP (Data Leakage Prevention) too?


Well, an IDS solution it was not. Sure, it did signature-based detection (via span ports) and that’s pretty much it. The appliance does not champion any IDS packet anomaly, behavioral, or Artificial Intelligence (neural) recognition. A pure match on signatures, plus a couple custom scripts written by you for additional alerting, and you'd be good to go. For correlation, exporting, and anything else…you need to sign up for the mothership offering, which includes a proprietary database collection engine allowing you to capture every packet (payload and all) in your network, provided you place a sensor in every segment you want to monitor. Like any other sniffer trace, you can view the capture in binary or hex format (depending on how you’d like to fall asleep). The solution does offer a GUI for management and configuration; but given a large environment, you do the math…up to a cool hundred total sensors with terabytes of data (in days) indexed in a database. And since it's proprietary, you need to keep that storage or archive online somewhere to make use of it (since the cataloging and indexes reside within the proprietary database), so pricing didn’t come up, probably for a good reason.
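For contrast, "pure match on signature" really is as simple as it sounds. This toy sketch (the signature set is hypothetical; real rule languages like Snort's add ports, direction, and content modifiers) shows why matching alone, with no anomaly or behavioral analysis, is such a thin claim to the IDS label:

```python
import re

# Hypothetical signature set: name -> byte pattern over the payload.
SIGNATURES = {
    "sql-injection": re.compile(rb"union\s+select", re.IGNORECASE),
    "dir-traversal": re.compile(rb"\.\./\.\./"),
}

def match_signatures(payload: bytes):
    """Return the names of signatures that fire on a packet payload --
    pattern matching only, nothing behavioral."""
    return [name for name, pat in SIGNATURES.items() if pat.search(payload)]

print(match_signatures(b"GET /?id=1 UNION SELECT password FROM users"))
# ['sql-injection']
```

Anything not in the signature set, including a trivially obfuscated variant, sails through, which is exactly the gap anomaly and behavioral engines are meant to cover.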

But you’ll be armed with volumes of data to assemble and extrapolate information (usually post-incident) to your heart’s content--so that’s the forensics side; but you will need to rely on your own FTEs or forensics staff [standing idle] to perform the analysis.
Now, with this feature-set, imagine the topic of electronic data discovery and privacy/compliance with this type/volume of information. Consider the auditability, preservation, and category of documentary evidence; let alone admissibility and validity.

This product screams BUY ME! An acquisition by a bigger fish spells $$$ for the company’s owners/investors, and integration with a product that can leverage the captured data would be phenomenal (for something other than just an IDS offering).

Friday, May 1, 2009

Routing…the forgotten security trusted model

Higher-layer security has been the buzz for some time, and how can you contend with numbers depicting web application attacks as up to 75% of security issues? But what about the new fad everyone’s been jumping on board with: MPLS, carrier Ethernet variations, or your carrier cloud implementation. The root of it all is packaged as MPLS, but it’s really the BGP protocol underneath. I would almost make the analogy of MPLS/BGP to HTTP versus HTTPS/SSL…to begin the debate.

Sure, providers are aware of the possibilities and continue to enhance the protocol with, for example, MPLS FI (Forwarding Infrastructure), but it was found to be exploitable with crafty coding as well.

And, like all other vulnerabilities, it can be discussed further at forums such as Black Hat Europe and ERNW.

The one good thing about all this, though, is the hacker needs to get into your network first (go figure) before being able to modify the bits and bytes that can turn your routing tables into mesh/mess.

Simply said, it works like this: BGP is based on established trust, and while MD5 can buy you a level of key security (but come on, it’s only MD5, so a supercomputer is not needed to crack it), packets leave with a forwarding label and egress the provider edge router with VPN destination identities; thus, intercept, use command line tools (mpls_tun), and bang away. You can change label information and reroute packets to authentication servers and malicious DNS, etc. And did you say transparency models…then next up let’s talk Layer 2 exploits.
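To see how little structure an attacker has to manipulate, here is the MPLS label stack entry per the standard encoding: 32 bits holding a 20-bit label, 3 EXP bits, a bottom-of-stack (S) bit, and an 8-bit TTL. A decoder is a few shifts and masks, and rewriting is just the inverse:

```python
import struct

def parse_label_stack_entry(entry: bytes):
    """Decode one 32-bit MPLS label stack entry into its four fields.

    Being able to read -- and rewrite -- these fields in transit is exactly
    why tampering with labels inside the provider network is so effective.
    """
    (word,) = struct.unpack("!I", entry)  # network byte order, 4 bytes
    return {
        "label": word >> 12,        # top 20 bits
        "exp": (word >> 9) & 0x7,   # 3 experimental/QoS bits
        "s": (word >> 8) & 0x1,     # bottom-of-stack flag
        "ttl": word & 0xFF,         # time to live
    }

# Label 100, EXP 0, bottom of stack, TTL 64 encodes as 0x00064140.
print(parse_label_stack_entry(bytes.fromhex("00064140")))
# {'label': 100, 'exp': 0, 's': 1, 'ttl': 64}
```

Note there is no authentication field anywhere in the entry; integrity rests entirely on the trust model of the routers handling it.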

On a related note, carrier-based offerings of VPN, either EVPN or BGP VPN, are traditionally not encrypted, though your traffic is tunnelled through the provider's network routers. Whereas an implementation of IVPN or IP-VPN is not only tunnelled but encrypted, often through an appliance or firewall (unlike a traditional Layer 3 router with EVPN).

RFC3031 MPLS | RFC 4364 BGP/MPLS IP VPN | RFC 2547 BGP/MPLS VPN