Archive for the ‘Commentary’ Category
The password-based authentication model is plagued by weaknesses in theory and, as demonstrated by countless hacked accounts, in practice as well. The time for ubiquitous two-factor authentication and password managers is now.
Authentication in computing – the process by which the identity of users is verified – has long relied on passwords as the primary (and often the only) mechanism for account holders to identify themselves. Even the most casual computer users are familiar with the process: when you power on your device or visit certain websites, you enter credentials (i.e., a username and password) to access your files and use your account, and you assume that you are the only one who knows your password. In an ideal world, knowledge of a password would be restricted to the rightful account holder, so the entry of valid credentials could be taken to verify the identity of the user in question. Framed that way, this authentication model seems simple and reliable enough. In practice, however, thousands of individuals have had their money and identities stolen, their credit cards charged, their private files accessed, and their private emails read because password authentication failed them. We are going to examine why logging in to computers and websites with passwords is prone to compromise, how these weaknesses are exploited, and what can be done to lower the risks facing our user accounts.
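To illustrate why a second factor adds real protection, here is a minimal sketch of the time-based one-time password (TOTP) scheme defined in RFC 6238 – the algorithm behind most authenticator apps. The function name and parameters below are my own illustrative choices; the underlying computation follows the standard.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, time_step=30, digits=6, now=None):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of time steps since the Unix epoch.
    counter = int((time.time() if now is None else now) // time_step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset given by
    # the low nibble of the last digest byte, then mask the sign bit.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Even if a password leaks, an attacker who lacks the shared secret cannot produce the correct code for the current 30-second window – which is exactly the layer of defense that password-only logins are missing.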
The technical standards that govern how the Internet and modern computer networks operate are debated and approved by a number of organizations. These organizations exist to ensure the proper functioning and long-term viability of network transmission methods. IT professionals should be familiar with these organizations, how they operate, and what their specific roles and responsibilities are. After all, it is clearly within our professional purview to know intimately the standards that dictate how the Internet’s core technologies work. For example, detailed knowledge of IPv4 (and, very soon, IPv6) is a must for today’s system and network administrators. But who determines how the Internet Protocol operates? Who sets the standards for networking technologies? Read on to find out.
Stuxnet, Duqu, and Flame have gained notoriety as some of the most damaging and devious forms of malware. First appearing in 2010, 2011, and 2012 respectively, these three worms have caused fear in the information security industry and panic among the administrators of infected hosts. Before analyzing their workings and unique characteristics, here is a review of malware in general and a summary of some noteworthy examples of destructive viruses from years past.
If you, as an information security professional, are tasked with maintaining the cyber defenses of an information system (IS), it is a responsibility you cannot carry out in a haphazard manner. Given the complexity of modern computer networks, a standardized approach to IT security is necessary to ensure that every facet of the IS is protected to the utmost. As with network connectivity troubleshooting, it is simply better to follow a plan of defined steps than to attempt to achieve your goal in an unorganized way.
As you are aware, threats to the security posture of an IS come in many forms. Unpatched software, default software settings, unnecessary software installations, weak user account policies, porous physical access control, and the absence of effective emergency response plans can all be exploited by human attackers, malicious software (malware), or unfavorable (possibly disastrous) circumstances. All of these vulnerabilities (weaknesses which could be exploited by adversaries to compromise the security posture of an IS) are what you try to eliminate in the field of information security (also known as information assurance, or IA).
To help prevent occurrences of unauthorized IS access or data breach, a systematic methodology for identifying and remediating security weaknesses is required. Vulnerability management, when implemented in such a precise and thorough manner, becomes a vulnerability management program (VMP).
Benefits of a vulnerability management program
The main aim of any VMP is to ensure that current vulnerabilities within an IS are identified, evaluated, and resolved in a timely and cost-effective manner. This goal is achieved by successfully carrying out the following steps:
- Accurately identify vulnerabilities in the overall network infrastructure;
- Monitor and verify the remediation of the vulnerabilities;
- Examine the root causes of the vulnerabilities; and
- Modify standards, policies, and processes to fix those root causes to reduce the occurrence of future vulnerabilities.
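The steps above can be pictured as a simple tracking loop: record each finding, confirm its fix, and capture the root cause so that policy changes can follow. The sketch below models that loop in Python; all class and method names are hypothetical illustrations, not part of any standard VMP tooling.

```python
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Vulnerability:
    host: str
    description: str
    severity: Severity
    root_cause: str = ""      # filled in by step 3
    remediated: bool = False  # flipped by step 2

@dataclass
class VulnerabilityProgram:
    findings: list = field(default_factory=list)

    def identify(self, host, description, severity):
        """Step 1: record a newly discovered vulnerability."""
        vuln = Vulnerability(host, description, severity)
        self.findings.append(vuln)
        return vuln

    def verify_remediation(self, vuln):
        """Step 2: mark a finding fixed once the fix is verified."""
        vuln.remediated = True

    def record_root_cause(self, vuln, cause):
        """Step 3: capture the underlying cause, feeding step 4
        (updating standards, policies, and processes)."""
        vuln.root_cause = cause

    def open_findings(self):
        """Findings still awaiting remediation, for status reporting."""
        return [v for v in self.findings if not v.remediated]
```

In practice this bookkeeping lives in a ticketing system or dedicated scanner console, but the data flow – identify, verify, analyze, then adjust policy – is the same.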
A properly functioning VMP also brings about the following desirable results:
- Prevents the loss and/or unauthorized modification of sensitive data;
- Maintains client and partner confidence in the enterprise and upholds its reputation by preventing embarrassing incidents;
- Demonstrates compliance with legal regulations and industry best practices, and consequently enables the IS to better pass audits and certification & accreditation efforts.
As an effective VMP matures, it becomes increasingly efficient and streamlined while the quantity and severity of discovered issues decrease. In other words, the CIA security objectives are strengthened and the overall resiliency of the IT infrastructure is increased. “CIA” in the information security field stands for:
- Confidentiality – the prevention of unauthorized data access.
- Integrity – the maintenance of data in a trusted state.
- Availability – the assurance that authorized parties can access and use the IS when needed.
In February 2011, the loosely knit collective of hacktivists known as Anonymous successfully compromised the corporate network of HBGary Federal (HBG Fed), a company that provided information security services to the federal government of the United States. This attack brought down the HBG Fed website, compromised the Twitter and LinkedIn accounts of HBG Fed CEO Aaron Barr, and resulted in the public release of thousands of internal documents and emails.
Before proceeding, you may want to familiarize yourself with the history of the Anonymous hacker group.
Storm brewing – the prelude to the attack
The internal documents disseminated to the public by Anonymous reveal much about the nature of HBG Fed’s business operations before “the incident”. HBG Fed was engaged in several anti-hacker projects that were aimed at disrupting and discouraging Anonymous-style hacktivism. Based on HBG Fed’s own internal files, here is a breakdown of its efforts at fighting Anonymous, similarly motivated Internet activists, and individuals deemed to be antagonistic to its clients.
In June of 2010, the FBI publicized the arrests of ten individuals who had been working as covert agents for the Russian government. Although the nature of any sensitive information passed to their Russian handlers remains unclear to the public (as does their ability to even gain access to sensitive or classified government information), what is known are the communication methods that the spies used with their associates, as well as the mistakes they made that blew the cover of their operation.
Commentators have criticized the spies’ apparent carelessness and the lack of precautionary measures they took to stay off the FBI’s radar. ABC News ran a story quoting ex-KGB members who called the spy ring “laughable amateurs”. Ars Technica calls them “dumb”; Slate calls them “dopes”. However, the fact remains that U.S. authorities only discovered the spy network as a result of a tip-off from a Russian traitor. Had the traitor (named as Alexander Poteyev) not alerted the FBI to the spies’ activities, it’s likely that they would still be in operation today.
Nevertheless, what interests me from an ethical hacking standpoint is 1) how the spies operated and passed information and 2) what they did or neglected to do that blew their cover.