Sun 29 May, 2011
This blog’s hiatus will be coming to an end shortly, once arrangements have been made to recast it as ‘Brown Hat Security’.
Watch this space for more news.
Thu 21 Apr, 2011
Some folks may remember the HBGary debacle a short while ago, when HBGary Federal (a wholly-owned subsidiary of HBGary, specializing in government contracts) got themselves cracked by Anonymous after specifically calling them out. The parent company, HBGary, have published an open letter making certain claims, which Ars Technica has examined.
There’s little surprise in the letter–it’s mostly a reiteration of previous claims about the leaked emails having been ‘altered’ and about how HBGary Federal was a completely separate company with no actual connection to the parent organization other than ownership. Ars does a good job in dissecting these claims and pointing out which ones hold water and which ones don’t.
Publishing this letter in the first place was likely a bad idea, though. Anyone who has the least bit of knowledge about Anonymous–which the head of HBGary Federal claimed to have–knows that resurrecting attention to a controversy causes the phenomenon known as “lulz” to occur. For those unfamiliar with the term, it’s a sort of measurement of attention-worthiness of a particular topic or entity, based on the quality and quantity of reaction to be gained from any interaction with them. Leaking the HBGary Federal documents produced an extensive amount of this–it gained mainstream media attention and increased the visibility of Anonymous in the public eye. The Scientology protests were the same–shedding light on the known-bad Scientology organization’s policies and procedures with public protests (and their characteristic purple prose) caused extreme consternation amongst the organization and brought public attention to Anonymous.
Now, HBGary has, essentially, done the same thing that HBGary Federal did–call out Anonymous’ activities, claim to be invulnerable to their attentions, and bring public attention to Anonymous’ interactions with them. This is the sort of thing that tends to be deemed “lulzy” by Anonymous, and generally tends to bring certain actions.
Westboro Baptist attempted to take advantage of this phenomenon by releasing a fake press release that claimed to be on behalf of Anonymous, declaring a war against them, followed by a press release under their own aegis calling out Anonymous. The Anonymous collective rather quickly determined the illegitimacy of the first release (there may be no central organization, but there is a fairly distinct style, which Westboro did not emulate perfectly) and correctly surmised that the servers offered up were likely honeypots, intended to trap any Anons who attempted to DDoS or otherwise infiltrate them.
As it happened, when Westboro pushed the issue, they were rather promptly taken down–just as HBGary is likely to be. Westboro, like HBGary, made the key mistakes of assuming that Anonymous consists entirely of disaffected teenagers with a modicum of computer skills, and that it is a coherent organization with limited membership. These assumptions miss certain key points–Anonymous is, in essence, a nom de guerre that can be taken on by any person or entity, as evidenced by th3j3st3r’s participation in the actions against Westboro following his earlier attacks against Wikileaks; while not strictly an Anonymous action (as he claimed credit for it himself), the action against Westboro was compatible with Anonymous’ goals and views.
What HBGary fails to realize is that, by seeking to defend themselves against the ‘blog-o-sphere’, they’ve inadvertently invoked the Streisand effect and drawn specific attention to what they want to keep quiet. Whether this release has produced enough ‘lulz’–that is, attention to the incident as a cause worthy of working on–remains to be seen, but if they do manage to get away with it without significant infiltration and exposure of more embarrassing secrets, they should count themselves lucky.
Anonymous’ actions cannot be predicted specifically, but it’s fairly obvious that calling them out is, as a comedian recently opined, tantamount to inserting one’s genitals into a hornet’s nest–a bad idea, and likely to cause embarrassing, painful problems.
Wed 20 Apr, 2011
Via Slashdot, tirania.org points out some weaknesses in the terms of service for Dropbox, a popular filestore for those who want to operate ‘in the cloud’ and store and transfer files online.
One of the difficulties for those who want to make the case for performing business operations in the cloud is the possibility that non-authorized persons might be able to see the data stored and processed externally. The revisions in the security policy for Dropbox highlight some of these difficulties–to wit, Dropbox, despite its claims of using fairly strong encryption to protect user data, has advised that they will turn over any data stored on their system to the government on request.
Needless to say, if the data was stored securely enough to prevent the Dropbox employees from accessing it, they would not be able to do this.
It is likely that the change in policy is the result of pressure by government agencies–police, etc.–to turn over data related to some sort of official actions such as prosecution of a crime, but it is still concerning for any users of this or other services who wish to ensure that their business data (which may contain secret processes or formulas, for instance) doesn’t go walkabout and end up in some other place than where it ought to be.
There is a simple, though somewhat less easy to implement, solution to ensure that the data remains confidential regardless of external conditions–encrypting the files before transmitting them definitionally reduces the possibility of disclosure from external services. A careful sysadmin ought to be able to build such a provision into their backup script, though this makes use of cloud services for anything other than incremental backups rather more clumsy and less convenient than they might be otherwise.
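As a concrete illustration of the encrypt-before-upload principle, here’s a toy Python sketch. The cipher construction below (a SHA-256 counter-mode keystream) is purely illustrative–a real backup script should shell out to GPG or use a vetted cryptographic library–and all the key and data values are hypothetical:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a keystream by hashing the key with a running counter.
    Toy construction for illustration only; not production crypto."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """XOR the data against the keystream; applying it twice decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"a secret held only by the user"
plaintext = b"business data the cloud provider must never read"
ciphertext = xor_crypt(key, plaintext)

# The service only ever stores the ciphertext; without the key,
# a demand served on the provider yields nothing readable.
assert ciphertext != plaintext
assert xor_crypt(key, ciphertext) == plaintext
```

The point being demonstrated: once encryption happens on the client, the provider’s disclosure policy becomes irrelevant to confidentiality, at the cost of losing server-side features like deduplication and web previews.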
In order for cloud computing to be more widely adopted, though, security and privacy policies need to be made clear to the users of the services, including exactly who is able to access the data (nobody other than the user, preferably) under what conditions and for what purpose. Policies alone will not enforce this, mind; policies rely on the weakest link in any security chain–the users–to maintain any kind of security.
Tue 19 Apr, 2011
On their “Naked Security” blog at Sophos (which, by the bye, tends to be blocked by the more paranoid sorts of web filter due to the presence of a keyword in the title that’s often used for other purposes), the authors recommend that Facebook implement three relatively simple practices in order to vastly increase the amount of security available to the users.
From a security standpoint, the recommendations are eminently reasonable: when changing privacy policies, allow the users to opt-in to the new expanded ‘sharing’ roles, rather than having to go in and opt out. Vet app developers more carefully to ensure only legitimate and ethical developers are allowed access to user information. Force secure connections, rather than making a ‘best effort’.
These recommendations would go miles towards protecting the userbase of Facebook. By ensuring secured connections to protect against eavesdroppers, non-savvy users connecting from corporate or public networks that might be sniffed wouldn’t end up giving away all their information. Vetting app developers to ensure that they’re publishing legitimate applications would help to prevent the dozens of scam applications that serve only to spread malware and the like. Keeping users at their previous privacy settings until such time as they opt in to broader sharing agreements is a standard common-sense approach to security.
And Facebook will do none of this on their own.
Facebook is, first and foremost, a money-making enterprise. They have found a very effective and useful (for them) model to collect and publish users’ demographic information, and get people invested into ensuring that their information is as accurate and up-to-date as possible. They make their revenue from selling this information, so it’s in their best interests to ensure that as much of that information is available to the people who provide them with their money as possible.
Forcing secure connections will interfere with their customer base connecting to them on certain networks–corporate networks that have badly-configured proxies, for instance. This results in less availability of the site to the users, and less information shared–not good for their corporate model. Facebook wants users to access their page regardless of security settings, meaning that they will never insist on a secured connection if that will interfere with access to the page.
Vetting app developers, as well, is not likely to happen in any sort of serious manner. Facebook has no incentive to make the barriers to entry any higher than the level that filters out the most blatant and obvious scam artists; anything more would interrupt their revenue stream. If vetting happens, it will almost certainly be due to specific regulation insisting on it from an outside agency, and will be to either the absolute minimum level required or–if the marketing department wishes to use the opportunity to boost the brand–to a slightly higher level. It almost certainly will not be to any level that would filter out a significant number of scam artists.
Finally, Zuckerberg’s opinions on privacy aside, privacy settings will almost certainly remain ‘opt-out’ unless there is a significant rewrite of the codebase and a complete revision of the development process. As it stands, code is deployed on a segment of the live userbase for testing; changing this would require either the use of simulated traffic on a simulated development database or requesting users to opt-in to testing the new features specifically. Further, maintaining multiple privacy systems simultaneously would impact performance of the codebase (given that any request to a user’s information would have to determine which privacy setting system to go through) and would not automatically bring over ‘stale’ profiles; this further impacts revenues as those paying for the demographic information are not going to be interested in inconsistent results.
While Sophos’ recommendations are eminently reasonable (and could be implemented without too much actual cost–changing the privacy settings to be more atomic and making further revisions depending on database timestamps or the like would not impact performance that much; allowing users to opt-in to beta testing would likely be met with enthusiastic response if it was presented properly; and vetting developers properly, while it would impact revenues in the short term, could be written off as a marketing campaign) they’re not likely to be implemented anytime soon. Facebook’s current model works well for them (given the amount of money that they’ve been raking in) and they have no current incentive to change this, unless there is significant outcry and a tangible reduction in userbase–which, given the investment that many people have made in putting the bulk of their social life into the Facebook domain, is not very likely to happen at all.
Fri 15 Apr, 2011
PGP and other associated PKI systems rely on a so-called ‘web of trust’ model in order to verify that two parties previously unknown to each other should be able to trust each other’s cryptographic signatures as valid. If there are multiple continuous chains joining these two parties involving mutually trusted intermediaries, then each party knows that the other can be trusted to be who they say they are.
A potential problem is that, were someone’s credentials stolen, their signature could be forged on untrustworthy credentials, thus compromising the web. Further, if one or more people were tricked into signing keys that were for entities other than who they said they were, the web could be compromised even further.
PGP and similar PKI infrastructures have a built-in mechanism for accounting for these possibilities: certificate revocation. Revoking the trust in a key invalidates it to the rest of the web, allowing damage of this kind to be mitigated.
A potential improvement to this system of trust and revocation would be to greatly reduce the usual lifespan of an individual key. Semi-automatically rotating keys on, say, a weekly basis (and allowing for automatic replacement of known-trusted keys by their next iterations) would both verify active use of the key (since renewal requires user intervention, it confirms that the user holds the necessary credentials and is actively participating) and allow for user revocation of ‘stale’ or otherwise undesired credentials. During the refresh process, the user can choose whether to continue signing and endorsing others’ credentials or to stop; this helps the dynamics of the web by culling out untrusted persons more rapidly.
Further, providing the option for an anti-signature, to specifically call out a signature as untrusted to others, would significantly mitigate the effect of malicious persons such as spammers gaining access to a web of trust. Allowing for this anti-signature could also form the basis of a sliding trust scale, with trust and anti-trust counting against each other and allowing for unconnected persons to see that a particular account may or may not be trustworthy. Paths connecting persons would be deprecated by paths containing antisignatures; determining whether or not to trust someone with a significant number of antisignatures would be assisted by allowing short comments with the antisignatures similar to twitter messages (i.e. “this person is a spammer” or “this person kicks puppies”–if you had no objection to spamming or puppy-kicking, then you could choose to ignore certain antisignatures).
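The sliding scale described above can be sketched in a few lines of Python. Everything here is hypothetical–the names, the data structures, and the scoring rule (+1 per trusted endorser, -1 per trusted anti-signer) are illustrations of the proposal, not part of any real PGP implementation:

```python
# Who has signed whom, and who has anti-signed whom.
# All names and relationships are invented for illustration.
signatures = {
    "carol": {"alice", "bob"},    # alice and bob vouch for carol
    "mallory": {"alice"},         # alice was tricked into signing mallory
}
antisignatures = {
    "mallory": {"bob", "dave"},   # bob and dave flag mallory as untrustworthy
}

def net_trust(key: str, trusted: set) -> int:
    """Endorsements minus anti-signatures, counting only signers
    the evaluating user already trusts."""
    pos = len(signatures.get(key, set()) & trusted)
    neg = len(antisignatures.get(key, set()) & trusted)
    return pos - neg

my_trusted = {"alice", "bob", "dave"}
assert net_trust("carol", my_trusted) == 2     # two endorsements, no flags
assert net_trust("mallory", my_trusted) == -1  # one endorsement, two flags
```

A real system would also weight paths by length and attach the short explanatory comments mentioned above to each anti-signature, so that users could discount flags they disagree with.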
Distributing this credentialing system on a P2P basis would greatly assist the scaling, and if organizations bought into it, it would form a convenient adjunct to the SSL certification system, given that the SSL certificates could be signed into the web of trust as valid or invalid, thus significantly mitigating the effects of any erroneous or fraudulent certificates being issued by certifying authorities.
Further study is needed in order to determine how to automatically cull spam networks–applying network topology math here to determine which sub-webs are, essentially, self-signing would assist this greatly.
Thu 14 Apr, 2011
A traditional drink from India, Lassi has a lovely cooling taste excellent for hot days and hot foods. The one ingredient that may be difficult to find is rose water; try a middle-eastern or other ethnic market.
2 c plain yogurt
1/2 c cold water
6 or 8 ice cubes
3 tbs fine caster or powdered sugar
a small splash of rose water
(opt: 1/2 tsp cardamom)
Blend ingredients together until smooth. Those who frequent smoothie bars will recognize the texture desired immediately.
Optional variants include mango (requiring mango nectar and milk, and taking out the water) and the traditional salty version (with more spices and salt rather than the sugar).
Wed 13 Apr, 2011
SEO–”search engine optimization”–is a set of tools and practices used by both legitimate webmasters trying to jockey for position in the rankings of search engines and by spammers attempting to push up, temporarily or otherwise, fly-by-night domains selling dubious products. In some less-served areas, SEO-promoted domains may outnumber legitimate domains by many to one; combine this with redirections, link farms, and the like, and it becomes much more difficult to determine a legitimate source for any products, services, or information that you might happen to need.
Traditionally, the way in which this has been dealt with by blog and forum owners is to filter or delete spam posts. This particular blog is frequented by any number of spammers who attempt to skew ratings with contentless comments linked to suspicious domains hawking–apparently, as I’m not stupid enough to follow those links–anything from viagra to discount financial instruments to “discount” rolexes.
Another approach has become apparent, though, especially with the advent of certain attempts by search engines of note to restrict the efforts of these “blackhat SEO tactics”–an approach that can be labeled SEP, for Search Engine Pessimization.
‘Traditional’ optimization techniques seek to raise the ranking of a page through exploiting the algorithms used to generate the pagerank–generally assumed to involve links from external pages, keywords in links pointing to the page, and several other factors; Google is, naturally, reticent about the exact specifications of their algorithms, especially given the amount of money people gain from trying to guess at them.
Blackhat SEO–those tactics which have been developed to exploit these rankings–take advantage of these techniques by exaggerating them, trying to imitate legitimate traffic through spamming fora with spurious links, farming keyworded domains and cross-linking them in giant link farms, etc. Many of these techniques have been detected, and some have been well publicized by Google and other search engine companies as no longer effective because they are automatically detected and deprecated by the engine.
The attack venue here should be obvious: to pessimize an undesirable domain, promote it through the use of known-bad ‘optimization’ techniques.
These techniques can be determined by looking through complaints filed about ‘arbitrary’ reductions in pagerank and the like; by determining the cause of the reduction, a list of methods and tactics can be quickly developed that will serve to indicate to the automated algorithms that a site is attempting to ‘game the system’ and gain unjust ranking, triggering the part of the algorithm that punishes those sites.
An interesting supplementary tactic is that of googlebombing. Googlebombing is the practice of using certain SEO techniques to associate a given keyword–usually one with amusing connotations–with a particular page; one of the more publicized ones involved former president Bush and the word ‘failure’. Associating unsavory words or those involving illegal activities with a domain could, in this day and age, result in the domain’s seizure by the authorities–a win-win situation, as every domain seized for the wrong reasons weakens the case for this rather nonsensical practice, and it also neuters the spammers by disabling a venue by which they could potentially make money.
Maximum benefit, though, would likely result from linking in some way the domain to be pessimized with known-bad domains–those serving malware, those known to be scams, etc.–thus alerting the search engines’ spiders of a potentially harmful connection as soon as possible. Using extensions to report spam is also effective.
There is always a race between those innovating ways to make search and the like more relevant to the user and those seeking to exploit it for their own reasons; manipulating search engines to punish spammers is one way in which anyone who dislikes spam can fight back.
Tue 12 Apr, 2011
The spokesman for an outfit that puts out retro games, Good Old Games, has been quoted saying that DRM drives gamers to piracy. The argument being made here is that the massive hassle involved in installing and maintaining the ‘authorized’ conditions to play the game legitimately form too high a barrier and that, at least for single-player or other local system games, it’s easier to install a cracked version of the game than to try to maintain the ‘authorized’ version.
Mr. Kukawski also makes an interesting point in his statement–much of the DRM that game manufacturers (and the manufacturers of other media) insist on installing could well count as malicious software. Anyone recalling the Sony rootkit debacle–where Sony attempted to enforce DRM by crippling the functionality of the users’ systems without their consent–will easily see what happens when a company goes ‘too far’ in their quest to maintain control over content.
However, piracy is more or less inevitable, and by spending so much time and money ‘securing’ your media against theft, you’re all but guaranteeing that others will take an interest in breaking it.
Insisting on the installation of malicious software–software that spies on the system’s activities to see if “forbidden” programs are installed or running, that “phones home” to a remote server, that cripples the functionality of your system unless certain conditions are met–is a tactic that even the mafia would hesitate at enforcing.
DRM only serves to hurt legitimate users.
Pirates will pirate the media no matter what–if it’s sufficiently interesting to them, then it will be cracked and released on the internet, regardless of the technical obstacles put in the way. The only pirate inconvenienced by the DRM is the first one–and if these media companies paid any attention to the psychology of the typical warez-head, they’d realize that by putting technical obstructions in the way, they’re only serving to glorify the exploits of those pirates who make available the content in the first place.
One might be excused for thinking that the pirate ecosystem was being deliberately cultivated, in a sort of perverse mockery of the open-source software world–many eyes finding and correcting ‘bugs’. Ubisoft has rather blatantly stolen (in a perversely ironic way) the work of various pirates on a number of occasions, including as a method of circumventing onerous DRM put in place by an online marketplace.
Time and time again, DRM has served only to inconvenience the legitimate paying customers, who are painted as thieves and bandits by the media companies who take their money with one hand while denying them the functionality that they have paid for with the other. This is not being limited to software, either; Intel has announced that their new line of ‘Sandy Bridge’ chips will have DRM built into them, attempting to enforce these ‘digital rights’ at the hardware level for streaming media. This will only serve to hurt legitimate users; pirates will, in all likelihood, take about a week to circumvent these protections and the whole game will start again.
Content providers should take notice that this DRM nonsense does not work, and look at the example of studios who have made this realization. Cut out DRM licensing and R&D from the budget, and accept piracy as part of the cost of doing business–write off the ‘losses’ from the advertising budget, given that piracy can lead to more sales. Fighting the flow does nothing but waste energy; using the flow to your advantage helps you grow and thrive.
Mon 11 Apr, 2011
The Register has an excellent (if somewhat snarky, though that is the milieu of the publication) article on some of the recent breaks to SSL’s effectiveness as a platform for secure communications.
The basic problem behind this round of vulnerabilities is the reliability of the certifying organizations. After the recent break-in at Comodo that resulted in the issuance of several fraudulent certificates, more attention has been paid to this particular flaw in the way in which traffic is secured.
SSL works by means of PKI: public key infrastructure. This means that a large string of data, known as a ‘key’, is sent out into the public for use by anyone wishing to communicate with the server. The key has a certain mathematical relationship with another key stored on the server (known as the private key); when data is encrypted with one, it can be decrypted with the other if the proper mathematical steps are taken.
These keys are used to negotiate a secured connection to a server, which serves two purposes–first, it assures the person connecting to the server that they are connecting to the genuine article; the fact that the connection can be made successfully proves that the server has the requisite private information associated with the key. Secondly, it allows the entire connection to be encrypted so that people between the computer and the server will be unable to determine the content of the transaction.
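To make that key relationship concrete, here’s a toy RSA round-trip in Python using textbook-sized primes. This is strictly illustrative–real TLS keys are thousands of bits long and rely on padding schemes and audited implementations–but it shows the core property: what one key encrypts, only the other can decrypt.

```python
# Toy RSA with the classic textbook primes, purely to illustrate
# the public/private key relationship behind SSL.
p, q = 61, 53
n = p * q              # modulus, shared by both keys
phi = (p - 1) * (q - 1)
e = 17                 # public exponent (published to the world)
d = pow(e, -1, phi)    # private exponent (kept secret on the server)

def rsa_apply(m: int, exponent: int, modulus: int = n) -> int:
    """Apply one half of the key pair via modular exponentiation."""
    return pow(m, exponent, modulus)

message = 65
ciphertext = rsa_apply(message, e)   # anyone can encrypt with the public key
recovered = rsa_apply(ciphertext, d) # only the private key holder can undo it
assert recovered == message
assert ciphertext != message
```

The server proving it can complete the handshake is, in effect, proving it knows `d`; the eavesdropper who only sees `n`, `e`, and the ciphertext cannot recover the message without factoring `n`.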
This only works, however, if the key is trusted–if anyone has access to the private key other than the server in question, then there is no assurance that the connection is being made to the correct server, and no assurance that someone in the middle is unable to eavesdrop.
Certifying Authorities such as Comodo, VeriSign, and others are supposed to guarantee that they issue certificates only to the organizations that they identify. However, this is not always the case; especially in the case of their licensees–companies that issue certificates under license from the CA houses, and whose certificates are intended to be just as trusted–the requisite verification of the certificate request is not always accomplished properly, resulting in certificates being issued to the wrong parties.
One particular certificate that has been mentioned is one for ‘localhost’–that is, one’s own computer. Most articles on the subject have dismissed this as an essentially frivolous certificate and paid far more attention to the ‘exchange’ domain certificates that could be used for intercepting corporate email. It’s worth noting, though, that a live ‘localhost’ certificate combined with a maliciously-installed local proxy could intercept all of a user’s internet traffic, providing no overt sign of compromise (unless the user was very security-conscious and carefully checked the certificates of every site they connected to) while, for instance, their bank account details were being stolen.
Certain alternatives to the SSL system have been proposed, but as yet they all suffer from the difficulties of limited adoption. Until a significant movement arises–or at least some of the ‘big players’ get involved–eCommerce and the like will likely continue to be only partly secured.
(On a side note, a service that actively tracks and validates certificates issued by all authorities–like the automated one run by Google on page 3 of the article linked–could be made very profitable.)
Fri 8 Apr, 2011
While usually I’m more likely to recommend authors, I recently had some dealings with another market that I feel is worth endorsing.
Full Tang Cutlery is a smithy in New York that specializes in creating unique knives. I had been looking for a particular (somewhat unusual) sort of blade for some time, but had not seen any available that I thought were worth getting; when I realized that someone whom I knew from a message board I frequented was in the business of making custom blades, I thought I may well ask if it were possible (and economical) to have such a piece forged.
As it happened, it was very possible; while the piece that I was looking for was rather outside the usual range, it was well within their capabilities to manufacture.
I was kept very well informed of the progress of my piece, including when a flood caused some slight delay. It was finished well before I expected, though, and I quickly found myself in possession of a falcata with the modifications that I had requested to the ‘classic’ design.
The pictures hardly do it justice; it’s a beautiful piece of work–very solidly built, obviously very functional (though, as I don’t have any battles with Rome to attend, I’ll be using it more for insurance against rattlesnakes and the nastier sorts of chaparral) and balanced just as I had wanted it. The construction is as solid as you could ask for, and all the parts are integrated smoothly together.
The jimping–the notches that you can see in the photos–is both decorative and practical; it greatly enhances the grip on the hilt, and adds an interesting dimension to the design.
Altogether, it’s a very solid, well-tempered piece, and an absolute pleasure to hold. I heartily recommend Full Tang Cutlery’s work to anyone interested in good old fashioned quality workmanship.