Monday, November 9, 2009

Firewall review exercise, a security tradeoff

Many compliance efforts include a firewall review, and doing it effectively puts cost and risk at odds. Reviewing each rule entry by verifying and justifying the “actual” business purpose/requirement can be a mind-numbing experience, since most IT and/or security teams don’t own, don’t validate/test (during SDLC phases), and aren’t responsible for port/service usage …


As a result, you end up allocating an FTE (full-time employee) to chase down the culprit, or at least to build an understanding of the ports and services open on any given interface/segment. Then you’ll probably need to perform some level of remediation, or at least negotiate what to allow and how (i.e. in an acceptable DMZ or tiered-architecture model).

So the alternative becomes an outsourced services provider, and the cost will be a chunk, though they are often no more apt to answer why ports/services are allowed to begin with (and we’re not talking just port 23 or FTP scenarios). Perhaps the happy medium is to utilize tools that give you certain advantages…

The list is long, but the three products that serve this discussion are Athena, Tufin, and AlgoSec…in order of capability/features and increasing price. Athena will buy you the raw analysis and output needed to make actionable decisions on rule-sets that pose risks, along with correction recommendations. A step above is the proliferation of Tufin appliances, which give you more granular analysis, reporting, and customization. That solution also starts down a correlational approach/model…leading to AlgoSec, with a full package for optimized management of firewalls as well as other network devices. AlgoSec rounds out the field with device and event management capability plus a twist of notification and proactive management.
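To make the “raw analysis” idea concrete, here is a minimal Python sketch of the kind of rule-set review these tools automate: flag overly permissive or legacy-service rules for manual justification. The rule format, field names, and risk checks are illustrative assumptions for this post, not any vendor’s actual schema.

```python
# Minimal sketch: flag risky firewall rules for manual review.
# Rule format and risk heuristics are illustrative assumptions only.
RISKY_PORTS = {23: "telnet", 21: "ftp", 445: "smb"}

rules = [
    {"id": 10, "src": "any", "dst": "10.1.2.0/24", "port": "any", "action": "allow"},
    {"id": 20, "src": "10.9.0.0/16", "dst": "dmz-web", "port": 443, "action": "allow"},
    {"id": 30, "src": "any", "dst": "10.1.5.7", "port": 23, "action": "allow"},
]

def review(rule):
    findings = []
    if rule["action"] == "allow" and rule["src"] == "any" and rule["port"] == "any":
        findings.append("any/any allow -- needs documented business justification")
    if rule["port"] in RISKY_PORTS:
        findings.append(f"legacy service ({RISKY_PORTS[rule['port']]}) exposed")
    return findings

for rule in rules:
    for finding in review(rule):
        print(f"rule {rule['id']}: {finding}")
```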

Tuesday, October 6, 2009

New wireless security recipe

Anti-wireless signal:

1/2 aluminum-iron oxide chopped into very small particles
1/2 paint brand/color of your liking
Mix, then paint your walls and ceilings; apply it outdoors as well to seal your entire house

It's that easy...the metal works on the same radio frequencies (and upwards of 100GHz) as your Wi-Fi, so signals can't pass through the pigment. A passive approach to wireless security, or at least an avenue for leakage protection--being considered/used in the UK and Asia. The rub is the actual cost of this type of paint versus real leakage protection, as well as actually implementing Wi-Fi's built-in security functionality.

Wednesday, September 2, 2009

Security tools are cool.

Yes, there isn’t a problem that you can’t solve with a tool…at least from a pre-sales point of view. But you always hear about the decision dilemma and analysis that goes into purchasing a tool—and yes, you’ll need to consider IT/business strategy, vendor longevity, supportability and manageability, and of course the integration factor. Enough said; let’s just look at cool solutions as they stand today (because tomorrow, technology will have already changed).


Content and URL filtering: Go with Websense as the overall leader in this space and add a little Blue Coat for enforcement and you can’t go wrong (or just pure Blue Coat as the competing best overall solution). Then round out the top of this space with an IronPort solution.
Cloud, did you say…go with Zscaler; they just seem to be everywhere.

Firewall: Stick with Cisco overall, but the better trend-setters are Juniper as well as Check Point R70—for creativity/vision (over Cisco).
And for a related subset of tools, check out AlgoSec and then Tufin for management/audit.

DLP: [industry beloved term-of-the-year] you’ll need to check out Websense again with PortAuthority and joust with Symantec’s Vontu. But perhaps also a little HP if they can make Fortify work (or just buy them, right)

WAF: [another watercooler conversation] web and XML firewalls; for this go with Imperva or Breach, but no strong push here…any thoughts?

IDS/IPS: [never say die] Snort equals Sourcefire, so go with what works; but if you believe the hype then go with TippingPoint and have it manage itself…hummmm

SIM/SEM/SIEM: [not forgotten] ArcSight, because you have the $$$ to do so; followed by RSA, because everyone has an RSA component; otherwise look into LogLogic (proven) and Splunk (cheap/ease of use)

And yes, the infamous NAC solution I covered in a prior post

Tuesday, August 11, 2009

Endpoint protection - which to select

While MSFT and CSCO duke it out in many sectors, what about the other giants, McAfee and Symantec?

Pick your flavor, or realistically figure out what your real business requirements are, and you can find a vendor that is most suited to your needs, or at least to how much you’re willing to pay. But when it comes to control at the hosts themselves, the one-time anti-virus contenders have much more to offer. Through scientific research (which really means experience in the industry), the challenge appears to be with reporting, and this is more prevalent with Symantec’s SEPM vs. McAfee’s ePO product line. While third-party products can supplement and offset SEP 11, ePolicy Orchestrator serves up better reporting and flexibility as well as better supportability of OS X and Linux hosts. Sorry, you’ll have to wait till Q1 2010 for SEPM.

The larger the deployment, the more it seems that additional reporting and tracking is necessary, to the point that separate SQL databases are being installed to track and report on specific client criteria, agentless hosts, and specific/immediate malware notifications.
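As a hedged sketch of that side-car reporting idea, the snippet below builds a tiny compliance report from a stand-alone database. The schema, table name, and seven-day threshold are assumptions made up for illustration; they do not reflect ePO’s or SEPM’s actual data model.

```python
import sqlite3

# Minimal sketch of a side-car reporting database for endpoint compliance.
# Schema and threshold are illustrative assumptions, not any vendor's model.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE hosts (
    hostname TEXT, has_agent INTEGER, dat_age_days INTEGER, last_seen TEXT)""")
db.executemany("INSERT INTO hosts VALUES (?, ?, ?, ?)", [
    ("pc-001", 1, 2,  "2009-08-10"),
    ("pc-002", 0, 99, "2009-07-30"),
    ("lab-07", 1, 14, "2009-08-09"),
])

# Report hosts that are agentless or running definitions older than a week.
rows = db.execute("""
    SELECT hostname, has_agent, dat_age_days FROM hosts
    WHERE has_agent = 0 OR dat_age_days > 7
    ORDER BY dat_age_days DESC""").fetchall()

for hostname, has_agent, dat_age in rows:
    reason = "no agent" if not has_agent else f"definitions {dat_age} days old"
    print(f"{hostname}: non-compliant ({reason})")
```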

You pick the flavor, but at the time of this writing, advantage McAfee. So does that mean advantage Cisco too, then?
And, to make it more interesting, factor in MX Logic and ESET.

Thursday, July 30, 2009

NAC landscape: tech pros and cons snapshot

Many organizations have failed at implementing a full NAC solution because of its oversimplification by the market space. As a result, poor deployments have been experienced; but that shouldn’t stop you. Recognize your organization’s true needs and implement in a phased adoption model where the technology is optimized and not treated as a silver bullet.


NAC (Network Access Control) is by its very nature (or at least initially) a restrictive control at the network layer, based on the identity or credentials of the to-be-identified host. Some key points to consider when investigating NAC solutions:
Scope of coverage, scalability, degree of protection and control, interoperability, and $$$ (licensing and resources).
Standards that fit best—Microsoft NAP, Cisco NAC, Trusted Network Connect (TNC), or the IETF NEA Working Group (NEA).
Model approach—client-based, network-based (from 802.1x to DHCP and in-line), or hybrid.
-Client-based NAC important considerations: best visibility of logs, yet more difficult to roll out and manage; a thin client is best but still requires a client, so rollout diversity can be an issue (yet control is closest to the user)
-NAP-based provides strong pre-admission and embraces standards, though endpoint client policy may be lacking in comparison—with limited O/S support; and overall, third-party integration is a must
-802.1x NAC is the most vendor-agnostic, although prepare for infrastructure upgrades; link-level authorization but limited endpoint posture assessment/comprehensiveness of identity
-In-line network-based can be a VLAN/VACL rollout road-block, but offers a strong pre-admission check and can be application-aware
-In-line NAC has no agent component but is inline-scanning intensive; a VLAN nightmare but transparent to users, with threat control
-Hybrid—well, if you haven’t noticed some of the overlap…then I’m not sure how to make the water even murkier (see the posture-check sketch after this list)
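To make the pre-admission idea concrete, here is a minimal, hedged Python sketch of the kind of posture check a client- or NAP-style NAC might run before granting access. The attribute names, thresholds, and VLAN assignments are assumptions for illustration only, not any vendor’s policy language.

```python
# Minimal sketch of a NAC pre-admission posture check.
# Attributes, policy thresholds, and VLAN names are illustrative assumptions.
def assess_posture(host):
    failures = []
    if not host.get("av_running"):
        failures.append("anti-virus not running")
    if host.get("patch_age_days", 999) > 30:
        failures.append("patches older than 30 days")
    if not host.get("firewall_enabled"):
        failures.append("host firewall disabled")
    return failures

def admission_decision(host):
    failures = assess_posture(host)
    if not failures:
        return "production-vlan", failures
    # Non-compliant hosts land in a quarantine/remediation segment.
    return "quarantine-vlan", failures

host = {"av_running": True, "patch_age_days": 45, "firewall_enabled": True}
vlan, reasons = admission_decision(host)
print(f"assign to {vlan}; findings: {reasons or 'none'}")
```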

With all the combinations blending together, the future is bright. At least we should expect vendors to integrate solutions, particularly since wireless networks (and mobile devices) are becoming the norm…so implement NAC in some fashion and throw in two-factor authentication to make things more interesting/secure. When in doubt, just turn to a DLP solution, right?
Outside of the dominant MS and Cisco, ever heard of Bradford Networks?

Thursday, July 9, 2009

Security Management of end-points tool

End-point security solutions come in many flavors, and every vendor has its spin. But how about one that you can drop in relatively cheaply (at least as cheap as I’ve seen lately) and get cool reports on the health [anti-virus, firewall, patch] of your Windows PCs/laptops? Of course they support Mac and UNIX flavors, but there just wasn’t enough time…they should have extended the meeting into (a free) lunch ;)


So, here’s the sales pitch, and you tell me if it gets any cooler. Scan all your hosts (totals reaching the 50K neighborhood) within minutes (provided a light scan is done vs. full throttle) and get instantaneous results/graphs based on exceptions, or a host list of non-compliance. The claim is low-level scans at the API level, so quick and dirty, yet everything from registry settings to software/hardware inventory is acquired from a client-less solution. The scan scheduling can be configured to your heart’s content and appears to work off of either a pre-populated IP pool, input from DNS, or a ping sweep. What happens if you go stealth, including disabling ICMP replies…hummm?
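For illustration only (not the vendor’s actual mechanism), here’s a minimal Python sketch of the ping-sweep style of host discovery mentioned above. The ping flags assume a Unix-like ping, and hosts that drop ICMP—the “go stealth” case—simply won’t show up, which is exactly the limitation being hinted at.

```python
import subprocess
import ipaddress

# Minimal sketch of ping-sweep host discovery. Flags assume a Unix-like ping;
# the network range is illustrative. Hosts blocking ICMP will not appear.
def ping(host: str) -> bool:
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def sweep(cidr: str):
    live = []
    for addr in ipaddress.ip_network(cidr).hosts():
        if ping(str(addr)):
            live.append(str(addr))
    return live

if __name__ == "__main__":
    print(sweep("192.168.1.0/29"))
```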
Included in the package is even a remediation module, which allows you to enforce, for example, registry settings that your GPO would otherwise do a so-so job of enforcing—though it seems customizable enough that Windows users can manually change the setting back (just enough to create havoc until the scan/enforcement cycle runs through again). Thus, user-defined configuration assurance—with blacklisting for those disruptive/unapproved (corporate) software packages and collaboration tools like instant messaging, LimeWire, Kazaa...
A solution customizable to report on compliance with your company policies, O/S standards, and regulatory standards. Point-and-click, as one said…for the most part. Oh, and it has an energy management component that will save you $$$. All this for a price of approx. $20 per host…shamWOW

Tuesday, June 23, 2009

INFOSEC Program - My academia overview presentation

Information Security Program high-level slides
A repeatable process and customizable/adaptable for any organization.








Friday, June 19, 2009

Enterprise Risk Conference - campIT

The conference included presentations and panel discussions, as well as topics volunteered from the audience—a framework for a repeatable risk assessment, how to involve the business and gain executive-level understanding, endpoint security for wireless, budget cuts and how to distribute load, a governance model for best practices, collaboration on security risks, and data ownership/classification.


Some key takeaways included:
  • The malware and threat landscape has seen exponential growth, particularly over the last couple of years; thus risk becomes proportionally larger, requiring more resources and time/effort just to reactively combat it
  • Five steps to managing IT risk: risk awareness, business impact quantification, solution design, IT and business alignment, and then build & management of the solution
  • The basis for information security is policy documentation and implementation that leverages the organization’s culture, identifies stakeholders, leverages regulatory and compliance standards, and also partners with internal and external auditors
  • Metrics that matter rest on automated, repeatable, (re)producible measurements, along with owner identification. With that, smarter decisions can be made, regulatory readiness can be conveyed, effectiveness of risk management can be measured, and deviations and weaknesses become visible. The consensus was that free tools (as a starting point), such as Splunk, can be your jumping-off point
  • Cloud computing does have advantages for certain situations, such as anti-virus solutions that leverage the broader community’s vulnerability identification and response; but a SIM/SIEM solution would not be preferred since you’d be pushing very large (log) data volumes into the cloud; and cost reduction may not always be the case when factoring in security control of, and visibility into, your information and/or traffic
  • Security gaps made known to auditors can advance any agenda; and partnership with the business, particularly financial departments, will propel the risk management framework and practice [getting them to really understand risk in dollars & productivity means funding]
You’ve just been enlightened...at least to a small degree, so future posts will consist of a deeper dive on some of these key topics. But contemplate this security analogy [as mentioned in the presentation]: how good would your security be if a member did not adhere to policies?
I caught on to an analogy that would be perfect for a security awareness article. A group of individuals is on a boat, and one person decides to drill a hole under his own seat...thereby allowing water to start seeping into the boat. While this individual may be working within his own confines, the end result is clear--the boat sinks with everyone on board. Of course, the real world is just that simple (to recognize, or for that matter identify/remediate), right? So awareness, business integration, and appropriate sponsorship are the important trifecta of any successful security practice and implementation (as in the well-known people, process, and technology).

To conclude, the vendors/sponsors provided trinkets as usual, and there was noteworthy conversation with colleagues who run the same circles (yet varying sectors). But I must note one item that I found contradictory to my own experiences (at least in today's environment): the BITS Shared Assessments movement and its current industry acceptance. While the overall premise is sound, I would disagree that an organization having completed the exercise could merely present the relevant sections in response to an external audit and have that suffice. Has the industry really accepted a single version of a questionnaire? What about customized business and technical parameters and controls that need to be asked/answered? While I would assert that providing a SAS 70 Type II saves you some effort in having to answer all components, I doubt the industry as a whole is ready for a single, all-encompassing questionnaire. I can't wait for that time (but how would risk assessors make that extra cash).

Tuesday, June 16, 2009

Mobile phone privacy

Coming June 18th, Connectivity will be offering a directory service listing 15 million UK mobile phones (nearly 1/3 of the population). Apparently these numbers were freely obtained during the normal course of business, or through surveys in which folks provided their mobile numbers. For a $1.20 charge, you can subscribe to a voice-automated service that will connect you to these mobile devices by supplying a name and town, with an option to leave a voice mail if the call doesn't connect (and claims that the actual number will not be supplied).

Of course, a corresponding SMS web service will be offered as well—with call back for matches.
Oh, but there is an opt-out option, though it may take up to a month to take effect. Is anything safe from spam or unwanted calls anymore? Wonder how they will certify that teenagers/minors using mobile devices are protected?
A former consultant of Connectivity, now with Privacy International, has expressed concern about the way in which these numbers were collected and are now destined to be used.

The point being: don’t provide any more information than you absolutely have to, even if they say it’s “required”, and certainly don’t volunteer any data in surveys or otherwise.

This is sure to add concerns to the already threatened and often violated Data Protection Act...some numbers show about 18% of companies are not sure whether they have illegally released personal information to third parties and/or failed to hold information securely.

The 8 principles of DPA from http://www.ee.ic.ac.uk/dpa/principles.html:
When processing personal information the following 8 principles must be complied with and data must:
1. be obtained and processed fairly and lawfully and shall not be processed unless certain conditions are met.
2. be obtained for a specified and lawful purpose and shall not be processed in any manner incompatible with that purpose
3. be adequate, relevant and not excessive for those purposes
4. be accurate and kept up to date
5. not be kept for longer than is necessary for that purpose
6. be kept safe from unauthorised access, accidental loss or destruction
7. not be transferred to a country outside the European Economic Area, unless that country has equivalent levels of protection for personal data.

So, here’s a site that claims to handle the opt-outs: Telephone Preference Service (TPS)

Monday, June 8, 2009

T-mobile hacked again?

Apparently an anonymous hacker posted information related to T-Mobile servers on Saturday, claiming to have customer confidential information, financial records, and proprietary operating data…then set out to offer the information to the highest bidder. The wireless giant is once again in the news…in 2005 Nicholas Jacobsen was charged with unauthorized access to their network when a U.S. Secret Service agent uncovered that he had data on 16 million U.S. subscribers. He later pleaded guilty to a single felony charge of intentionally accessing a protected computer and recklessly causing damage (spanning over 2 years’ worth). Since then, in 2006 the SSNs of about 45,000 customers were lost, and in 2008 a disc holding 17 million customers’ data was lost.


Coincidentally, Deutsche Telekom, the T-Mobile parent, wants to dump the business based on its recent profit warning, citing a 21% drop in UK revenue and weakness in the U.S. But then again, they’re preparing for 4G with download capability of 7.2Mbps—so you can hack at light speed ;)
Related incidents? The hack just a hoax? Perhaps, but what brand damage has occurred already? What compliance or regulatory efforts have they not complied with; or, better yet, were they compliant and still breached? Think breached customers will get their notification any faster now, compensating for your personal data lost?
Another story sure to unfold.

Wednesday, June 3, 2009

PCI lawsuit

In the case Merrick Bank Corporation v. Savvis, Inc., the bank is taking on the QSA firm who certified the processor, CardSystems Solutions Inc.


It was only a matter of time...about 3 years ago, large CPA firms (including the practice I once led) dropped out of this boutique certification service offering. Call it a smart move, or just a keen sense of detailed analysis—considering how you really attest that all the systems (holding card numbers) are compliant, even at a point in time, when you could have hundreds of POS (point of sale) systems and backend servers and networks internetworked. We all have SAS 70s, for what they’re worth, but they’re an assurance of controls chosen by the company being audited and tested as prescribed. And yes, auditing is about sampling, and due diligence as defined in Sarbanes-Oxley; but PCI pledges certification and then posts the list of companies on a website.

The past meets the present: in 2005, 40 million credit cards of all brands were exposed by the payment-card processor as a result of a vulnerability in the processor’s card systems—resulting in, among many other things, a ballpark figure of $16 million incurred by Merrick Bank (an acquiring bank for about 125,000 merchants). So, four years later, Merrick [and I must add they have always been on top of the PCI requirements and due diligence, both from a best-practices and a contractual-obligation standpoint] has filed a lawsuit against Savvis for negligence regarding their audit of CardSystems, which was certified under Visa’s Cardholder Information Security Program (CISP), the predecessor to the DSS and ROC as we know them today.

Of course, proof means everything and good lawyers can be very convincing, but the certification process is sure to be analyzed along with the jurisdiction, intent, and actual Negligence and Negligent Misrepresentation (Counts 1 and 2, respectively) at the time of the incident.

I talked about downstream impact before so what does this actually mean to all of us?
  • Immediate extensive scrutiny and testing by QSAs prior to issuing ROCs
  • More man hours (and additional charges) for any PCI assessment and audit leading to certification
  • But the real news is that perhaps PCI will issue aggressive standards, not just clarifications still subject to QSA interpretation
Industry standards, certifications, and regulatory/governmental compliance are headed toward accountability, not just for the company but for the individuals asserting compliance. Like executives for financial reporting, and HIPAA now for business associates—but will auditors be assigned responsibility (when they are merely testing the controls)?

Definitely a story to follow from the U.S. District Court for the Eastern District of Missouri, which will continue to evolve the information security and legal synergy....and further support company spend on IT for the "sake" of PCI.

Monday, June 1, 2009

Web Application Firewall (layered security) optional requirement

spin off of my most recent post...

I think we can all conclude that (all things considered) code review is the best method—either or both static analysis at compile time and dynamic analysis at actual run time. But what if an immediate fix (without necessarily monkeying with the code) is required, and in consideration of long-term strategy... That’s when a WAF might just be most appropriate. The trade-off is when the WAF actually needs to be put in BLOCKING mode, once all known vulnerabilities and fixes have been applied, so that no business interruption would be noticed from the application.

Alright, I’m not dispelling the importance/benefit/necessity of code review at all, but let’s take a quick peek solely at WAF and dynamic profiling technology. The theory is a behavior-based approach—unlike a static/manual code-review fix—observed under the application’s normal cycles. While developers can write code to protect against vulnerabilities and present-day attack methods, it will not necessarily secure the application against tomorrow’s threats. Gee whiz, am I going off on an IDS/IPS and vulnerability/PEN testing tangent?
The premise is alike in that understanding the path/source leads to better code review, or better detection.

As it turns out, selecting a WAF is much like selecting an IPS vendor (for in-line as well as out-of-band monitoring), yet less like a firewall product. So, here’s what you need to know:
Throughput and latency [actually how much, not if] are at the top of the requirements; but deployment flexibility—including software- or appliance-based, and in virtual hosting environments—plays a particular importance. WAFs run in the following modes:
Passive – for just listening
Bridge – sending TCP resets when malicious activity is detected
Router – acts as a routing hop, though not really recommended as a routing function
Reverse-proxy – most common, operating at Layers 4-7
Embedded – within the application (typically for very small/non-complex deployment)

So, look into SSL accelerators and web-cache integration features or HTML compression, beyond sound-but-simple policy rules.
But the sweet spot of attack characteristics that any WAF should be able to tackle includes: input validation and invalid requests, injection flaws, buffer overflows, cross-site scripting (GET and POST), broken authentication, cookie poisoning, forceful browsing, parameter tampering, and of course SQL injection.
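As a minimal sketch of the signature-style request inspection a WAF layers on top of an application, the snippet below checks request parameters against a few crude patterns. The patterns are deliberately simplistic illustrations, not production rules, and real WAFs add normalization, profiling, and far richer rule sets.

```python
import re

# Minimal sketch of signature-style request inspection a WAF might apply.
# The patterns are deliberately simplistic illustrations, not production rules.
SIGNATURES = {
    "sql injection":        re.compile(r"('|\b)(or|union|select)\b.*(=|\bfrom\b)", re.I),
    "cross-site scripting": re.compile(r"<\s*script\b|javascript:", re.I),
    "path traversal":       re.compile(r"\.\./"),
}

def inspect(params: dict) -> list:
    findings = []
    for name, value in params.items():
        for attack, pattern in SIGNATURES.items():
            if pattern.search(value):
                findings.append(f"{attack} suspected in parameter '{name}'")
    return findings

# Example GET/POST parameters as parsed from a request.
request = {"user": "alice", "q": "1' OR '1'='1", "note": "<script>alert(1)</script>"}
for finding in inspect(request):
    print("BLOCK:", finding)
```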

This is particularly significant if the WAF is asked to inspect packets encrypted over HTTPS/SSL (hopefully in limited cases); SSL termination and decryption is an option often enabled. Remember, a good WAF architecture will also consider both REQUEST and REPLY vulnerabilities.

Conclusion: a WAF is complementary to all other application security practices/processes… PCI 6.6 actually positions the two as alternatives for a single solution when, in fact, layered defense would tell you otherwise. So, I would speculate that once the industry adopts one of the solutions [WAF or code review], it won’t be long until the other is also a requirement (not just best practice). Oh, and if you thought IDS/IPS implementation was a cakewalk, you’re in for a treat—but do push through.

Friday, May 29, 2009

PCI 6.6 clarification - requirement past due

While the PCI DSS requirements have been set for some time, it’s the clarification documentation and written interpretation that matter…and I’ve said it before: your QSA is key to supporting your interpretation/implementation. After all, it’s their credibility, and they are the ones who answer to Visa, PCI, etc.

So, here’s the 6.6 clarification just in case you haven’t figured out the code review and application firewall topic (though the due date was June 30, 2008):

Taken straight from PCI standards:
Option 1 is Application Code Review--which can be done via 1 of the 4 options:

1. Manual review of application source code
2. Proper use of automated application source code analyzer (scanning) tools
3. Manual web application security vulnerability assessment
4. Proper use of automated web application security vulnerability assessment (scanning) tools

In other words, either a manual (qualified staff or independent firm) or tool-based inspection/injection of non-standard inputs, followed by analysis of the results—but the key being measures to protect against common types of malicious input. I stress this because interpretation of this is subject to vary as well.
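For a feel of what the automated source-code route (option 2) looks for at its most basic, here is a hedged, grep-style sketch that flags risky sinks in Python source files. Real SAST tools perform data-flow analysis and support many languages; the patterns and usage below are illustrative assumptions only.

```python
import re
import sys

# Minimal sketch of an automated source code scan: a grep-style pass flagging
# risky sinks. Patterns are illustrative; real analyzers do data-flow analysis.
RISKY_SINKS = {
    r"\beval\s*\(":              "dynamic code evaluation",
    r"\bexec\s*\(":              "dynamic code execution",
    r"execute\([^)]*%[^)]*\)":   "string-formatted SQL (possible injection)",
    r"subprocess\..*shell=True": "shell command built from input",
}

def scan(path: str):
    with open(path, encoding="utf-8", errors="ignore") as handle:
        for lineno, line in enumerate(handle, start=1):
            for pattern, why in RISKY_SINKS.items():
                if re.search(pattern, line):
                    print(f"{path}:{lineno}: {why}: {line.strip()}")

if __name__ == "__main__":
    for filename in sys.argv[1:]:
        scan(filename)
```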

Option 2: Application Firewall--and this is in addition to all other requirements including firewall, logging, etc.

It should go without saying that all other requirements are still requirements—including the related, compounding requirements:
6.3 – SDLC and secure software development process
11.3.2 – application-layer penetration testing to ensure applications don’t have vulnerabilities
6.5 – common vulnerability testing, primarily per OWASP, including: input validation, least privilege, default deny, data sanitization, security policy design, quality assurance techniques, and once again, defense in depth.

Tuesday, May 26, 2009

ready – set – go (HIPAA countdown)

With funding approaching 800 BILLION DOLLARS, the American Reinvestment and Recovery Act (ARRA) is imposing compliance timelines...so basically you have till February 17, 2010 before HIPAA (and Centers for Medicare & Medicaid Services) audits:

  • February 2009 – ARRA Civil monetary penalties are placed
  • April 2009 – HHS (Health and Human Services) guidance on Securing EHR (Electronic Health Record) published
  • August 2009 – HHS and Federal Trade Commission (FTC) interim security breach notification to be released
  • December 2009 – HHS to release adoption of initial prioritized set of standards
  • January 2010 – Deadline for complying with accounting disclosure rules
  • February 2010 – HHS will begin auditing (of HIPAA entities) including requirements for implementation of Business Associate Agreements; as well as enforcement of rights of electronic access of records
Like California breach legislation SB 1386, this will be federal law regarding PHI data breach notification, tagged with a criminal offense pursuant to provisions related to the Social Security Act.
To patients, this means we are entitled to be notified of PHI disclosure and have the option of prohibiting disclosure of PHI to insurers and other business associates if treatment is paid out-of-pocket. To covered entities and business associates, this means rendering PHI indecipherable or unusable results in an optional safe harbor from the new data security breach notification requirement. Unreadable PHI primarily translates to encryption and/or destruction, whether “at rest” or “in motion”; destruction pertains to shredding either paper or electronic media; and both reference the National Institute of Standards and Technology (NIST) for clarification.
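As a minimal sketch of the “render it unreadable at rest” idea, the snippet below encrypts a PHI record with a symmetric key using the Python cryptography library’s Fernet recipe. Key handling is deliberately simplified for illustration; this is a concept demo, not NIST-mapped guidance, and the record contents are made up.

```python
from cryptography.fernet import Fernet

# Minimal sketch of rendering a PHI record unreadable at rest with symmetric
# encryption. Key management is deliberately simplified; illustration only.
key = Fernet.generate_key()          # in practice, held in a key management system
cipher = Fernet(key)

record = b"patient=Jane Doe; rx=examplemed 10mg; member_id=000000000"
token = cipher.encrypt(record)       # what lands on disk/backup media
print("stored:", token[:40], b"...")

# Only holders of the key can recover the plaintext.
print("recovered:", cipher.decrypt(token))
```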

That said, if your data is “PHI secured”, then it’s actually exempt from the breach notification requirements (not to mention HHS cases involving fewer than 500 people affected). Things that make you go...hummm

Some parallels were mentioned with other existing security and breach requirements, and even implementation of the requirements draws similarities with PCI compliance and a risk-based approach—see the Security Rule. Nothing new there, but a line is starting to be drawn in the sand much like PCI...as well as the extended grace period for compliance.

...a continuation of my prior post on HIPAA and HITECH; as well as some light reading on the American Recovery and Reinvestment Act of 2009; better yet, more on this later: Medicare Section 111. Moreover, reducing the compliance effort can result from integrating and standardizing the number of claims and payment systems, or simply (but not always practically) reducing the sourced data.

Wednesday, May 20, 2009

Agile Security – CheckPoint Road Show

The maturity of an organization will often exemplify the pursuit of efficiency. But does that necessarily translate to being better, or what does that say about one’s innovation?


At a recent Check Point event, the presentation revolved around building security agility into today’s IT and business technology. As Gen-X becomes a thing of the past and Gen-Y now influences business as much as our own personal lives, both technologists and business owners must adapt to the always-connected society of Gen-Ys. Think about it…today’s high schoolers (and perhaps younger) are texting as a primary means of dialog, building resumes and conducting business on Facebook, not to mention the power of information sharing at the press of a button. Oh, yeah—that’s exactly what office professionals are doing today…from Blackberry email to iPhones/Twitter and some form of business-approved P2P or FTP and slide sharing. These mediums allow us to collaborate at a global level, which is revolutionizing the now future. So, how do we continue to become leaders, movers of innovation, and strategic by nature?

Joel Snyder rendered 4 characteristics of agile security in his keynote presentation—which was a refreshing topic in comparison to the usual technology events. You and your organization must be agile to succeed: hence the ability to be flexible and adaptive, and not just good at what you are doing.
• Design defense in depth
• Yield to a standards-based approach (with multiple/independent vendors)
• Build a perimeterless network to gain greater control
• Cease from being event-driven
First, the implementation of layers of security allows you to be more flexible and ultimately more relaxed, as you are assured the various components are protecting your coveted information—from the anti-virus at your desktop, to IPS, WAF (Web Application Firewall), NAC (Network Access Control), and the VPN or encryption.
Next, adopting a standardized approach or solution set becomes critical when the future landscape changes and altering components, not the whole solution, is required. In parallel, the selection of vendor or product can be influenced by the “preferred” and/or incumbent, and therefore the effort (or lack thereof) put forth might just suffice. While the perfect solution is not always what’s needed nor always selected, the point is to be vendor-agnostic and open to varying solutions, which ultimately should be interchangeable and integration-receptive for the evolving strategy.
Also, having a centralized point for all delivery or activity—whether that be the old days of the mainframe or the more recent single-network hub-and-spoke scenario—is over. Growth commands the ability to make alterations with limited impact to everything else related. Therefore, having multiple delivery points, or extending the perimeter into more manageable components, will not only allow you to be more flexible but will enable you to better adapt and be more nimble to change.
Finally, we all strive to be proactive, and not waiting for an event before soliciting a reaction is the key. Advances in technology will occur at a rate perhaps faster than you may be able to keep up with. But having a strategic approach and forethought is critical to anticipating the future, or at least to the ability to adapt. For example, who would have thought SSL-VPN would be the choice for today’s business; yet the technology was around (just not as efficient and easy to install as IPsec-VPN).

A rejuvenating perspective on justifying the information security agenda, yet the business challenges remain in finding the “right” solution that meets today’s demands with the needs and consideration of tomorrow …

Monday, May 18, 2009

Apple patch Tuesday

No typo; it’s Mac OS X 10.5.7’s turn to apply patches—not just performance enhancements but security patches to correct 47 security issues (from PHP and Safari to Adobe Flash Player updates). Updating various components and applications will require you to restart your Mac...and this is on top of the 90+ vulnerabilities related to code and 3rd-party applications (Security Update 2008-02 with Tiger/Leopard) already released a couple of months ago.


Like other critical vulnerabilities, some are attributed to potential arbitrary code execution (and in some instances malware infection)—related to Apache, ClamAV, Flash Player, and Adobe [Security Update 2008-003]…an infinite-loop enumeration request with AFP Server, a heap buffer overflow with CoreText, memory corruption for movie/codec files, and unprivileged local access to the 'Download' folder.

So while Apple has issued a press release to advocate the patching and a forthright position on security and public awareness, they offer no further comments related to the prolonged release...
Security Update 2009-003
Security Update 2009-002
Security Update 2009-001
On a concurrent note, Adobe Acrobat is starting to see flaws (the second this year) and other vulnerabilities, including the latest zero-day related to JavaScript, allowing arbitrary code execution or remote denial of service (DoS) through an annotation and OpenAction entry via JavaScript code.

Friday, May 15, 2009

Interpretive DSS implementation

It’s no surprise that the ultimate decision in getting that stamp of approval, AKA the Report on Compliance (ROC), is in the hands of your QSA (Qualified Security Assessor). And while they are your primary voice or point of contact into Visa, you can hold direct conversations with the PCI committee/representatives (specifically to discuss DSS compliance…yet they’ll just assert the QSA and his/her organization are liable for the acceptance & interpretation leading to a ROC). While Visa and PCI can set guidelines and requirements, etc., your QSA is the gatekeeper toward getting your company’s name on the approved list.

So why is it, then, that the interpretation of the requirements is left up to the QSAs—given the sheer number of “security boutiques” with QSAs whose opinions and practices vary (i.e. how conservative is yours)? Moreover, why are the PCI requirements left to one’s “qualified” interpretation? Is it any different than SOX as it pertains to financially significant systems/controls; or how different is it from a SAS 70-based scope for assurance? Though there is one difference in these compliance reports, in that I don’t recall any Big 4 or other CPA firms still conducting ROC attestations (i.e. just the side business of PCI compliance, not certification).

Of course, everyone should be PCI compliant for credit card numbers [I don’t need any more mysterious green fees popping up], but what about all the other PII data contained within your four walls or consultants’ cubes, as well as the business partners you share it with…
So, let’s visit some variations in the implementation of these requirements. Oh, by the way, (in getting an edge) try getting your competitor’s scope for their ROC…good luck.

Requirement 3.3 by definition requires masking the PAN (primary account number) when displayed, such that at most the first 6 and last 4 digits are shown and the middle portion is masked. Then the real world and reality chime in. As a function of your “business” and your customers’ requirements, displaying the first half of the PAN is actually necessary for absolute comparison in cases of disputes, for example. Provide this compelling argument to your QSA and yes, they’ll agree the existing practice (provided it is consistent and justifiable) will be accepted. Your alternative is a complete overhaul of your existing application and processing models (not to mention the technical reconfiguration and programming that’s involved). Is this meeting the underlying objective PCI set forth; you tell me?
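As a minimal sketch of what 3.3-style masking looks like in code, the function below keeps at most the first six and last four digits and masks the rest. It is purely illustrative display formatting; the sample PAN is a well-known test number.

```python
# Minimal sketch of PCI DSS 3.3-style PAN masking: show at most the first six
# and last four digits, mask the rest. Purely illustrative formatting.
def mask_pan(pan: str) -> str:
    digits = [c for c in pan if c.isdigit()]
    masked = [
        d if i < 6 or i >= len(digits) - 4 else "*"
        for i, d in enumerate(digits)
    ]
    return "".join(masked)

print(mask_pan("4111111111111111"))   # 411111******1111
```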

When you need time to stand still for a little while—particularly if you are not the merchant or service provider that Visa or the acquiring bank is directly holding accountable—push back on being fined by the upstream providers by leveraging existing contracts/agreements. Have the acquiring bank or provider share the burden of compliance/implementation costs…so in turn they can quickly meet the vendor-requirement section. Basically, as a downstream vendor of card numbers, you slide under the Visa radar, so merchants/providers, in fulfilling their own certification, have to place reliance on the existing contract/SOW/agreement (signed perhaps prior to the heightened PCI awareness days). Sure, you may have to pay “credits”, which is most often less expensive (in the short term) than complete remediation. Just keep in mind the risky business and repercussions of an actual security breach and down-/up-stream or sideways use of the PAN (should it occur prior to compliance).

Do share your interpretation of specific requirements and the spin on its implementation.

Tuesday, May 12, 2009

SecureChicago - Forensics

ISC(2)’s SecureChicago’s most recent topic was forensics. Though the day-long session was focused on bag-and-tag—physically securing/preserving the data and the surrounding environment—participating vendors stretched their product/service offerings to match. Of course, some of the more renowned vendors such as EnCase were not present; others did cover related forensics topics, including application security. Yes, application ties to data breaches seem to be synonymous these days and are represented in any security forum.


A clear theme in the morning discussion was the delineation between investigation work (i.e. incident response) and forensics. They are clearly two similar topics, yet very different/disparate practices when done appropriately, and recognition of the two should be understood in the real world (particularly in the legal sense). Standard of proof, evidentiary concerns, and admissibility come to mind in a forensic lifecycle of acquiring, authenticating, and analyzing. Computer forensics has only been around for about 25 years, compared to the original forensics of author attribution. But the principle is the same: Locard’s Exchange Principle states that the contact of two things will result in an exchange, and therefore a trace of evidence.

And in the eyes of the legal system, “chain of custody”—not checklist—is also terminology worth noting. Chain of custody is the documentation of the what, where, who, and how of evidence processing, in a repeatable manner (such that it can be redone with effectively the same results), preserving integrity. A checklist, on the other hand, can be more detrimental to a case, since each investigation varies, and purposefully skipping a step (regardless of applicability) may raise questions about integrity and ultimately lead toward reasonable doubt.
The use of technology can be the silver bullet in a case; however, the lack of understanding of it, and more importantly the inability to properly present the evidence, has also proven to be its gotcha. Judges and even a jury of peers are most likely not going to have a technical background or be an ISC(2) member; therefore, presentation is another key factor (being conscious of technical jargon and industry acronyms).

I digress. To wrap up, when doing cyber forensics consider the following: know your company policy (don’t operate in a vacuum); documentation is everything; ensure a repeatable and verifiable examination process; don’t exceed your knowledge (e.g. there are about 85 different operating systems, and who’s an expert at more than one); and understand the purpose/scope (criminal, civil, regulatory, or administrative investigations). Also, on the technical side, consider having more than one tool, use write blockers when conducting (image) analysis, keep more than one copy verified with an MD5 or SHA-256 hash, and mine the Windows registry and other data sources including SIDs, swap space, slack file/RAM space, ADS (alternate data streams), printer EMFs, and index.dat. Finally, from a global perspective, remember that laws vary, in addition to extradition laws, if they even exist.
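As a small, hedged sketch of the hash-verification step mentioned above, the snippet below computes MD5 and SHA-256 digests of an evidence image and a working copy and compares them. The file paths are illustrative placeholders.

```python
import hashlib

# Minimal sketch of verifying a forensic image copy against the original using
# MD5 and SHA-256 digests. Paths are illustrative placeholders.
def digests(path: str, chunk_size: int = 1 << 20):
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            md5.update(chunk)
            sha256.update(chunk)
    return md5.hexdigest(), sha256.hexdigest()

original = digests("evidence/disk0.dd")
working_copy = digests("analysis/disk0-copy.dd")
print("match" if original == working_copy else "MISMATCH -- integrity not preserved")
```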

In the end, “work for the truth” no matter which side you find yourself on—forensics or anti-forensics. To help out with legal jargon and other things, here are some useful links:
Federal Rules of Evidence
Searching and Seizing Computers and Obtaining Electronic Evidence in Criminal Investigations
Federal Wiretap Act
Computer Records and the Federal Rules of Evidence
Electronic Communications Privacy Act of 1986
Forensic Boot Disk
Netwitness

Monday, May 11, 2009

HIPAA eyes and teeth with latest Virginia PMP breach

Eyes are now set on the hackers who broke into a Virginia Prescription Monitoring Program (PMP) web site, deleting over 8 million patient records and over 35 million prescription records. And to clearly convey the hackers’ intent, a ransom note was posted on the web site last week and the site was locked with a password. The going price is $10 million for the return of the healthcare records; and apparently the website is still inaccessible (but was taken offline soon after April 30th when the breach was discovered). If you recall, just over a year ago Express Scripts had a similar healthcare data extortion attack. Also recall CVS Caremark Corp’s $2.25 million settlement of a federal investigation for not properly disposing of patient information, and then having to implement appropriate security for all locations and to have external auditors/assessors evaluate compliance for 3 years. BTW, a persistent trend in situations of non-compliance or the resulting effects and penalties of wrongdoing.


Makes you think whether the stimulus bill provided enough safeguards for your digital healthcare records. The new rule states notification to people impacted by a breach (though only in cases of over 500), strict enforcement and penalties, and authorization of State Attorneys General to bring civil action against perpetrators. Further, the recently enacted Health Information Technology for Economic and Clinical Health Act (HITECH) under the American Reinvestment and Recovery Act (ARRA) holds business associates responsible (where applicable) for complying with HIPAA. As such, business associates would be subject to civil and criminal penalties (not solely covered entities) for noncompliance…from security officer appointment to policy, training, and risk assessment implementation. Then again, without the formal guidance issued within the HIPAA privacy and security sections, enforceability let alone effectiveness is a mere compliance talking point.

On a side note, physicians will now be required to comply with both HIPAA privacy and security rules without additional stimulus aid…and that would pertain to all associates/partners assisting physicians of protected healthcare information. Wonder how these expenses will be passed down…patients perhaps?

Regardless, while HITECH will make monetary penalties mandatory, demonstration/proof of willful neglect of compliance duties will be necessary.

So where are all the stimulus dollars going related to this topic? In the neighborhood of $31 billion over the next 5 years will go to healthcare infrastructure, which will largely flow from Medicare and Medicaid incentives to both physicians and hospitals for safeguarding electronic health records (EHR). So as you read articles indicating HIPAA’s new teeth on security, keep in mind its bite might not be as tight.

Tune into Wikileaks and the National Institutes of Health and ARRA

Friday, May 8, 2009

Commoditized PEN testing services

The vendor selection process always seems to intrigue me, particularly regarding the maturation of penetration testing. Ultimately, decisions are based on $$$ and perhaps an underlying business driver or executive owner behind the curtain. But the process, time, and effort spent are still...spent. In the commoditized sector of penetration testing, what are the differentiators—among the top vendors pitching their well-rooted staff, each with 10+ years of penetration testing experience, renowned authors on the very subject, and a list of guppies that have used them for whatever reasons in the past?


Think about it...as I have been on both sides of the fence: what is really produced, whether a black-box or white-box or fuchsia-box method is used? And consider the break point where conducting the PEN test is a competitive advantage, and where it becomes a disadvantage or poses more risk to the organization.

Let’s see--the first step in any PEN test is reconnaissance, via network scanning and sometimes social engineering, to identify hosts/targets. Once the topology is laid out, the services are probed for published vulnerabilities or known exposures, and enumeration commences. Then trial-and-error, banging away at each possible attack vector, exploiting the host with nothing more than free tools (some I’ve mentioned already or listed in my favorite sites: for scanning—Angry IP, nmap; cracking—Brutus, SQLdict, pwdump; sniffing—Wireshark and NetStumbler; utilities—netcat, ldp.exe, vncviewer; and the more commercialized WebInspect and AppSec, but also Nessus and Metasploit); otherwise, just buy an expensive tool to do all OSI layers, with customization options and fancy reporting.
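To illustrate just how commoditized that reconnaissance step is, here is a minimal TCP connect-scan sketch in Python. The target address and port list are illustrative, it only approximates what nmap does far better, and it should only ever be pointed at hosts you are authorized to test.

```python
import socket

# Minimal sketch of the reconnaissance step: a TCP connect scan of a few
# common service ports. Target and port list are illustrative; scan only
# hosts you are authorized to test.
COMMON_PORTS = {21: "ftp", 22: "ssh", 23: "telnet", 80: "http", 443: "https", 445: "smb"}

def connect_scan(host: str, ports=COMMON_PORTS, timeout=0.5):
    open_ports = []
    for port, service in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:
                open_ports.append((port, service))
    return open_ports

if __name__ == "__main__":
    for port, service in connect_scan("192.0.2.10"):
        print(f"open: {port}/tcp ({service})")
```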
The judgment call on attack methods and expectations is best made up front, to minimize impact and keep a business-centric approach. Additionally, crafty techniques (and one can argue the value decisioning) may also include spoofing, key-loggers, zone transfers, and of course XSS and SQL injection. While being able to snoop around is gratifying for some perpetrators, or perhaps launching DDoS attacks, etc., the best payoff is domain administrative access. From there, they’re owned without knowing it (at least for a period of time).
From the vendor side, they’ll also want to look at your InfoSec policies and security strategy to align their findings, but mostly to begin the work of selling you more services based on the information they gathered, including risk management analysis and as many compliance services as you’ll sign up for. Oh, here's an insider tip: whether it be your own staff or a hired vendor that conducts PEN testing, ensure/validate that they have removed all traces of their efforts, including the domain admin account they used to own the system.

In the end, you get a long list of hosts, some with vulnerabilities, and hopefully a smaller list of hosts actually compromised (pending how far you’ve told the vendor to go). But that’s where the rubber hits the road, as they say. Depending on the amount of time you’re allowed (or willing to pay for) the “testing”, and the risk you pose to your environment during the process, the result really comes down to how good the PEN tester is…and not to mention how much you’d like to learn about the environment. Meaning, once the keys to the kingdom are acquired, how much do you really need to know about other attack vectors [thinking exploratory and discoverable evidence, not to mention the cost perspective]? You get what you pay for, but face it: what company absolutely needs (or actually wants) to know about every security deficiency? Hence, isn’t your in-house techie good enough, or would your cheapest PEN vendor proposal suffice (outside of an actual regulatory requirement for 3rd-party testing)? As a point to conclude, or perhaps the start of a separate discussion, there is zero-day testing, which is basically/mostly web-based anyway, right? Notice I really didn't discuss ethical hacking, which others would say is the same...

Wednesday, May 6, 2009

Virtualization, aggressive adaptation

Virtualization is here, and like it or not, your organization is implementing it (regardless of your security practice model or current maturity level). Perhaps it’s just cool technology, but then again consider the business advantages related to server consolidation, disaster recovery or high availability, cost containment, and perhaps efficiency too. A (centralized) server set that acts like any other, configured for on-the-fly fault tolerance while possibly reducing network traffic switching; and finally, some studies/implementations report nearly 600% ROI in just a year (sounds a bit high, but anything’s possible).


So leverage your existing information security policies, include a couple of other key control considerations, and let it ride.

First, lock down the O/S like any critical system—from the physical environment to patching and IDS/anti-malware—to start protecting the physical and virtual assets. Disable any unused services (e.g. rely on SSH instead of the native VI console), provide extra safeguards around system files including .vmx files, and ensure appropriate logging of access, etc. Also disable unnecessary server functions, prevent unauthorized devices from being connected, and remove those already connected. Beginning with these safeguards will limit security exposures, including “hyperjacking”, or the ability for attackers to compromise the entire virtualized environment.
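As a hedged sketch of auditing those .vmx safeguards, the snippet below parses a .vmx-style "key = value" file and compares it against a small hardening baseline. The parameter names are drawn from commonly cited hardening guidance but should be treated as illustrative placeholders; substitute your own verified baseline before relying on it.

```python
# Minimal sketch of checking .vmx-style "key = value" files against a hardening
# baseline. Parameter names are illustrative placeholders; substitute a
# verified baseline for your environment.
BASELINE = {
    "isolation.tools.copy.disable": "TRUE",
    "isolation.tools.paste.disable": "TRUE",
    "isolation.device.connectable.disable": "TRUE",
}

def parse_vmx(path: str) -> dict:
    settings = {}
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            if "=" in line:
                key, value = line.split("=", 1)
                settings[key.strip()] = value.strip().strip('"')
    return settings

def audit(path: str):
    settings = parse_vmx(path)
    for key, expected in BASELINE.items():
        actual = settings.get(key, "<unset>")
        if actual.upper() != expected:
            print(f"{path}: {key} = {actual} (expected {expected})")

audit("guest01.vmx")   # path is illustrative
```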

Access privilege levels should always be reviewed in any implementation, limiting Guest and other accounts or operating system functionality to achieve a least-privilege model. Appropriate file permissions and proper integration of LDAP or NIS directory services are essential in narrowing the attack surface. If all this sounds familiar…it should. The fundamental core of these practices relates to the information security protection of any mission-critical company computing resource.

The next layer of defense is on the networking side. The system firewall should be enabled to limit TCP/UDP ports and services, and network segmentation/isolation should be configured (hopefully more than simply Layer 2 VLANs). MAC address spoofing—the option to accept requests destined for a MAC address other than the effective one—should be disabled. Additionally, ensure the forged-transmits setting is configured such that the source MAC address is compared against the effective one. And, if a SAN (Storage Area Network) is used, ensure LUN masking or zoning is exercised, reducing zone visibility and selectively presenting only the necessary storage information. Other considerations include: not using nonpersistent disks so offline access/vulnerability is not feasible, disabling virtual disk modification to prevent elevated-privilege exposures, and not allowing promiscuous mode on network interfaces so that packets are not read across the virtual network.

Lastly, when it comes to logging, implement what you would normally require of critical assets, including activity tracking (particularly for web access to VMware), connection/authentication records, console and error messages, agent availability and interrupts, and secure storage of log files (e.g. in ESXi 3.5, hostd.log, messages, and vpxa.log). When combined with file integrity monitoring (e.g. Tripwire), you’re starting to build onto your SIM/SEM platform.

For additional security controls, enable certificate-based encryption where legitimacy of the root certificate authority and client certificates is required. And weigh the risk against your compliance strategy.

So I ask you, who owns virtual security in your organization? And, given your existing technology integration and existing change management effectiveness, how do you best monitor your ESX environment and how is it audited?

Monday, May 4, 2009

IPS/IDS (part 2 of 2)

...a vendor touting an award-winning network IDS solution with a flair for forensics. Admittedly, the combo (if the sales pitch held steadfast) would fit nicely in a security professional’s tool belt, limiting the number of vendor products, with its promised integration with SIM (Security Information Management) solutions; and did you say DLP (Data Leakage Prevention) too?


Well, an IDS solution it was not. Sure, it did signature-based detection (in-line via SPAN ports) and that’s pretty much it. The appliance does not champion any IDS packet anomaly, behavioral, or artificial intelligence (neural) recognition. A pure match on signatures and a couple of custom scripts written by you for additional alerting, and you'd be good to go. For correlation, exporting, and anything else…you need to sign up for the mothership offering, which includes a proprietary database collection engine, allowing you to capture every packet (payload and all) in your network, provided you place a sensor in every segment you want to monitor. Like any other sniffer trace, you can view the capture in binary or hex format (depending on how you’d like to fall asleep). The solution does offer a GUI for management and configuration; but given a large environment, you do the math…up to a cool hundred total sensors with terabytes of data (in days) indexed in a database. However, since it’s proprietary, you need to keep that storage or archive online somewhere to make use of it (since the cataloging and indexes reside within the proprietary database), so pricing didn’t come up—probably for a good reason.
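For a sense of how thin “pure signature matching plus a custom alerting script” really is, here is a minimal sketch. The signatures and sample payload are illustrative; a real sensor works on raw packets and normalized protocol fields, not on strings like this.

```python
# Minimal sketch of pure signature-match detection plus custom-script alerting.
# Signatures and payloads are illustrative; a real sensor works on raw packets.
SIGNATURES = {
    b"/etc/passwd":  "possible directory traversal",
    b"union select": "possible SQL injection",
    b"\x90" * 16:    "NOP sled (possible shellcode)",
}

def match(payload: bytes):
    hits = []
    lowered = payload.lower()
    for pattern, description in SIGNATURES.items():
        if pattern in lowered:
            hits.append(description)
    return hits

packet_payload = b"GET /view?id=1 UNION SELECT card FROM accounts HTTP/1.1"
for alert in match(packet_payload):
    print("ALERT:", alert)
```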

But you’ll be armed with volumes of data to assemble and extrapolate information (usually post-incident) to your heart’s content—so that’s the forensics side; but you will need to rely on your own FTE or forensics staff [standing idle] to perform the analysis.
Now, with this feature set, imagine the topic of electronic data discovery and privacy/compliance with this type/volume of information. Consider the auditability, preservation, and categorization of documentary evidence, let alone admissibility and validity.

This product screams BUY ME! An acquisition by a bigger fish spells $$$ for the company’s owners/investors, and integration with a product that can leverage the captured data would be phenomenal (for something other than just an IDS offering).

Friday, May 1, 2009

Routing…the forgotten security trust model

Higher-layer security has been the buzz for some time, and how can you contend with numbers depicting web application attacks as up to 75% of security issues? But what about the new fad everyone’s been jumping on board with—MPLS, carrier Ethernet variations, or your carrier cloud implementation? The root of it all is packaged into MPLS, but really the BGP protocol base. I would almost make the analogy of MPLS/BGP to HTTP versus HTTPS/SSL…to begin the debate.

Sure, providers are aware of the possibilities and continue to enhance the protocol with, for example, MPLS FI (Forwarding Infrastructure), but it was found to be exploitable with crafty coding as well.

And, like all other vulnerabilities, it can be discussed further at forums such as Black Hat Europe and ERNW.

The one good thing about all this, though, is that the hacker needs to get into your network first (go figure) before being able to modify the bits and bytes that can turn your routing tables into mesh/mess.

Simply said, it works like this: BGP is based on established trust, and while MD5 can buy you a level of key security (but come on, it’s only MD5, so a supercomputer is not needed to crack it), packets leave with a forwarding label and egress the provider edge router with VPN destination identities; thus, intercept, use command-line tools (mpls_tun), and bang away. You can change label information and reroute packets to authentication servers and malicious DNS, etc. And did you say transparency models…then next up let’s talk Layer 2 exploits.
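To make the “MD5 buys you a level of key security” point concrete, here is a deliberately simplified illustration of the keyed-MD5 idea behind BGP session protection (RFC 2385 style). It is not the exact on-the-wire computation—the real digest covers the TCP pseudo-header, TCP header, and segment data plus the key—and the message and key are made up.

```python
import hashlib

# Simplified illustration of the keyed-MD5 idea behind BGP session protection
# (RFC 2385 style). NOT the exact on-the-wire computation: the real digest
# covers a TCP pseudo-header, TCP header, and segment data plus the key.
def md5_signature(segment: bytes, shared_key: bytes) -> bytes:
    return hashlib.md5(segment + shared_key).digest()

segment = b"BGP UPDATE: announce 203.0.113.0/24 next-hop 198.51.100.1"
key = b"peer-shared-secret"

sent_sig = md5_signature(segment, key)

# Receiver recomputes with its configured key; a mismatch drops the segment.
assert md5_signature(segment, key) == sent_sig
# Anyone without the key (or who tampers with the segment) fails the check,
# but MD5 plus a guessable key offers only modest protection.
print("signature verified:", sent_sig.hex())
```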

On a related note, carrier-based VPN offerings—either EVPN or BGP VPN—are traditionally not encrypted, though your traffic is tunnelled through the provider's network routers. Whereas an implementation of IVPN or IP-VPN is not only tunnelled but encrypted, often through an appliance or firewall (unlike a traditional Layer 3 router with EVPN).

RFC3031 MPLS | RFC 4364 BGP/MPLS IP VPN | RFC 2547 BGP/MPLS VPN

Wednesday, April 29, 2009

Swine flu vulnerability

Yes, it is contagious and affecting technology too…mostly spam, but also domain registrations, for starters.
McAfee is reporting that 1 in 50 spam emails contains junk messages from compromised hosts, prompting recipients to go to a site for a cure, updates, etc. If not to create havoc, then to make money, right? There are also reports that registrations of domain names based on variant forms of the “swine flu” name are on the rise, along with a sell-off of the same.

While your inbox is busy enough, I bet your company’s Emergency Management and BCP teams have woken up as well, flooding your inbox (for everyone to reply to and forward ;). The best recommendation is probably to stay the course, stick to your plans, and adapt accordingly based on confirmed understanding and in a manageable way.

There are many official sites, so here are just a few advisory links and a health map:
http://www.fdic.gov/news/news/financial/2008/fil08006.html
http://www.fdic.gov/news/news/financial/2006/fil06025.pdf

Tuesday, April 28, 2009

Network IPS/IDS selection criteria (part I)

Choosing the right technology solution can be a daunting task, particularly when meeting the wish list of requirements does not always equal the cheapest buy. So when it comes to Intrusion Detection/Prevention Systems, let’s look at some consideration points to aid in the investment decision.

To continue business as usual while providing that needed layer of defense, performance degradation can often be the drawback. Understanding and testing the inspection throughput is essential to ensure sufficient processing of any given network segment, as well as the speed at which the IPS/IDS is able to analyze and react to the compounding signature list and vulnerability exposure volume. Perhaps the most critical and fundamental component is the quality of signatures, how customizable they are, and the reputational quality of the input (including zero-day inclusion). It holds true in nearly all cases that the output is only as good as the input; and with IPS/IDS, that means many more false positives being the downside. Understanding your “normal” traffic, along with the best fit in terms of statistical and behavioral considerations, can save both time and money.
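A quick back-of-the-envelope illustration of why false positives dominate alert queues: even a detector with seemingly good rates drowns real attacks when malicious traffic is rare. The traffic volume, attack prevalence, and detector rates below are assumptions chosen only to show the base-rate effect, not measurements of any product.

```python
# Back-of-the-envelope illustration of the base-rate effect on IDS alerts.
# All numbers are assumptions for illustration, not product measurements.
events_per_day      = 1_000_000
attack_prevalence   = 0.0001     # 1 in 10,000 events is actually malicious
true_positive_rate  = 0.99       # detector catches 99% of real attacks
false_positive_rate = 0.01       # and mislabels 1% of benign events

attacks = events_per_day * attack_prevalence
benign  = events_per_day - attacks

true_alerts  = attacks * true_positive_rate
false_alerts = benign * false_positive_rate

precision = true_alerts / (true_alerts + false_alerts)
print(f"alerts/day: {true_alerts + false_alerts:,.0f}")
print(f"fraction of alerts that are real attacks: {precision:.1%}")
```

With these assumptions, roughly 10,000 alerts a day contain only about 100 real attacks—about 1%—which is exactly why signature quality and knowing your “normal” traffic matter so much.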

Putting it all together, the best approach is integration and correlation with other security tools and logs to deliver an appropriate level of confidence in the results. Managing all these components sometimes opens the discussion of outsourcing. Managed IPS/IDS services can augment an organization’s head count as well as skill set…

Finally, it’s always beneficial to assess the financial viability of the vendor and their strategic appetite. Some IPS/IDS vendors are offering supplemental support for NAC enforcement, DLP integration, and rate limiting by prioritizing traffic via pre-defined security criteria and protocol/service type.

If you drink the Gartner Kool-Aid, then the choices are TippingPoint, McAfee, Sourcefire, and Juniper Networks, followed closely by Cisco and IBM. And also ponder the SANS review.