The Market Ticker
Commentary on The Capital Markets - Category [Technology]

~30% off across the board on all the current models.  That puts the Z10 at just over $200 and the Z30 at $349.

The Passport is coming, obviously... which I am lusting after.

Good deals to be had if you want one.... all unlocked and carrier-agnostic.


I really wish people would pull their heads out of their asses on this, but they won't.

The massive data breach revealed this week could be even worse than initially feared, warns a cybersecurity expert.

Citing records discovered by security specialist Hold Security, The New York Times reported on Tuesday that a Russian crime ring has managed to gain access to more than a billion stolen Internet credentials. The stolen credentials include 1.2 billion password and username combinations and more than 500 million email addresses, according to Hold Security, which describes the breach as potentially the largest ever.

These thefts typically come from two places -- insecure connections that are "sniffed" and crap code that is broken into on the back end of someone's server.

The latter is distressingly common, as is storing such credentials in plain text instead of via a one-way hash, which cannot be reversed.

A one-way hash looks like this:


That's a real one, by the way, for a real (and it might even be privileged) account on Tickerforum.  

Good luck figuring out the password from it.

Far too many sites store such credentials in the clear instead.  Specifically, any site that can actually send you the password you used obviously has it stored somewhere in the clear (or can retrieve it).  A one-way hash cannot be reversed; thus, were you to figure out which account that hash belongs to, the best you could do is ask the system to send a password reset link -- and that link would go not to you, but to the account's owner.

The other problem being seen is "shim" code that hackers insert into a site's software to literally siphon off credentials before they hit the back-end software.  To do that you need to break into the host the site is run from.  This is frequently easier than you'd think; once you have that access, you can steal credentials as users submit them.

Security is a process, not a product.  What you have to understand is that whenever you use some site on the Internet you're not the only place that has a security risk.  The entity you trust also has one, and if their security sucks yours can be excellent and it doesn't matter since the data can be stolen from their end.

PS: No, the size of the organization does not necessarily correlate with whether they have a handle on things in this regard either......


This story pushes a bunch of buttons for me.

HOUSTON – A cyber-tip generated by Google and sent to the National Center for Missing and Exploited Children led to the arrest of a 41-year-old Houston man who is charged with possessing child pornography.

Police say Google detected explicit images of a young girl in an email that John Henry Skillern was sending to a friend; the company then alerted authorities.

A bit of background -- I've done work for the good guys when it comes to kiddie porn before, and if asked will do so again.  When I ran MCSNet we used to get a subpoena here and there for various people's records related to that crap, and it generated zero sympathy on my part for the targets of same.  This sort of crap deserves the harshest possible punishment; polite company is not the place to discuss what I believe is an "appropriate" punishment for offenders who are caught and duly convicted.

However, that doesn't change the concern I have with this sort of scanner working on an automated and unprompted basis.

In this particular case the accused has a history of committing this sort of crime; he is a registered sex offender with a conviction for assaulting an 8-year-old.  But -- there was no active warrant (or other disclosed item) that would generate a defensible reason for particular and targeted suspicion of his activity.  In other words, it appears Google examines everything that goes through it for this sort of purpose.

Is that proper?  It's hard to argue "no" in the instant case but the problem is that Google appears to not be limiting such a thing to that sort of instant case where pretty-much everyone (except the child predators, of course) would agree it's ok:

When you upload, or otherwise submit, store, send or receive content to or through our Services, you give Google (and those we work with) a worldwide license to use, host, store, reproduce, modify, create derivative works (such as those resulting from translations, adaptations or other changes we make so that your content works better with our Services), communicate, publish, publicly perform, publicly display and distribute such content.

That is a very broad release of rights.

I wonder how many businesses, for example, have thought about the implications of this?

This is not a release that just applies to selling "relevant advertising" or "notifying authorities of apparent illegal conduct." 

It's a broad-form, all-use, worldwide release.

Now Google does say this next:

The rights you grant in this license are for the limited purpose of operating, promoting, and improving our Services, and to develop new ones.

The problem is that they don't define what "our Services" encompasses, and indeed they state clearly that it includes not-yet-for-sale services.

What if one of those services (present or future) pertains to data matching of some sort for an insurance company, employers, or similar?  What if "our Services" in the future includes competitive intelligence on others in your industry?

If, while running MCSNet and in the ordinary course of maintenance and operations I came across a customer's stored data that happened to contain kiddie porn you can bet I would have reported it.  But there's a hell of a difference between reporting something that I find incidental to normal operations and designing a surveillance system to prospectively scan everything that goes through the network.

Nonetheless, even given my very public and longstanding view toward this particular criminal act and my efforts to put a stop to it where and when possible, I never did code up such an automated system.

Google, however, has, and they haven't publicly disclosed it (until now, by accident) either -- which leads me to the obvious question: What other automated scanning devices are in active use and how can you possibly know that they are all of the sort that virtually everyone (such as is the case for child pornography) would find to be non-offensive to their sensibilities?

I don't have the answer to that question but it's one that we ought to be thinking and talking about.  Further, if you or your company are using broad-form "cloud" services of any sort, or communicating with someone who is, you wind up subject to these policies and the potential for the provider(s) involved to redefine the services they offer to include something that could do you quite a bit of economic harm even if you've committed no crime.  

The bad news is that as these clauses are currently constructed you've consented to it.


This isn't good at all....

When creators of the state-sponsored Stuxnet worm used a USB stick to infect air-gapped computers inside Iran's heavily fortified Natanz nuclear facility, trust in the ubiquitous storage medium suffered a devastating blow. Now, white-hat hackers have devised a feat even more seminal—an exploit that transforms keyboards, Web cams, and other types of USB-connected devices into highly programmable attack platforms that can't be detected by today's defenses.

This just plain sucks.

What they've done here is figure out that (unfortunately) many of the common USB controller chips are reprogrammable in the field and there is no verification of what's loaded to them.  Apparently there is also enough storage (or, in the case of a pen drive, lots of storage!) to do some fairly evil things.

At the core of this problem is the fact that a USB device has an identifying "class" and vendor ID.  If the "class" is one the computer knows it will attach it, usually without prompting of any sort.  This is especially bad if the "class" presented is what is known as a "HID", or "Human Input Device" -- like a mouse or worse, a keyboard.

Yes, you can have more than one keyboard connected, and all are active at once.  And yes, this is as bad as you think it might be.

The worst part of it is that various virus and anti-spyware programs can't detect it because the code doesn't run on the host machine, it runs on the device.  All the computer sees is a "keyboard" -- but it's not really a keyboard, it's your USB pen drive that sends a key sequence down that invokes something (e.g. a browser to go to a specific bad place.)

This can be detected if you're paying attention, but most people aren't.  You can see which classes a particular device attached as, but few people will look, and current operating systems don't prompt -- with good cause.  How do you answer such a prompt when you're plugging in a keyboard that isn't yet allowed to attach?  Ah, there's a chicken-and-egg problem, eh?

In any event there ARE defenses against this, but they will require significant operating system patches and a new paradigm for handling USB device attachment -- which will help, but not prevent, these sorts of exploits.  As it sits right now, unfortunately, mainstream operating systems are wide open to this sort of abuse.

For example, if my keyboard is plugged into USB Port 2, and it has a Vendor ID of "X" and a device type of HID/Keyboard, then any other port -- or this port -- that sees a different vendor ID and/or ANY HID/Keyboard device would bring up a warning that a user input device, specifically a keyboard, was attempting to attach.  You could then say "Yes" or "No", and if the device that popped up that prompt was a webcam or USB data stick, go looking for your sledgehammer to get a bit of an upper-body workout taking care of the problem.
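The attach policy described above can be sketched in a few lines.  This is illustrative pseudocode in Python, not any real operating-system API; the class names, ports and vendor IDs are made up:

```python
# Remember which (port, vendor_id, device_class) combinations the user
# has explicitly approved, and prompt before any NEW input device -- a
# keyboard in particular -- is allowed to attach.

class UsbAttachPolicy:
    INPUT_CLASSES = {"hid/keyboard", "hid/mouse"}  # devices that can inject input

    def __init__(self):
        self.approved = set()  # (port, vendor_id, device_class) tuples

    def approve(self, port, vendor_id, device_class):
        """Record a user-confirmed device so it attaches silently later."""
        self.approved.add((port, vendor_id, device_class))

    def should_prompt(self, port, vendor_id, device_class):
        """True if this attach needs explicit user confirmation first."""
        if device_class not in self.INPUT_CLASSES:
            return False  # non-input devices attach as they do today
        return (port, vendor_id, device_class) not in self.approved
```

Under this scheme your real keyboard on Port 2, once approved, attaches silently -- but a "pen drive" on Port 5 that suddenly presents itself as hid/keyboard trips the warning.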

But as it sits right now the only way you'll catch it is if the vendor and device ID don't match a loaded set of drivers and thus the system has to go looking for them -- in which case you will get a warning.  Sadly, for the common abuses of this (keyboards and mice in particular) you almost certainly already have such a driver on the system, and thus you're unlikely to catch it.

Yeah, this is a problem.....  and a pretty nasty problem at that.


My view: If this is how Ford views security and the iPhone, short Ford to zero.

“We are going to get everyone on iPhones,” Tatchio said. “It meets the overall needs of the employees because it is able to serve both our business needs in a secure way and the needs we have in our personal lives with a single device.”

Given what is publicly known -- that any iOS device connected to another data-bearing device transfers its entire trust envelope to that second device -- an iOS device in a corporate environment becomes only as secure as a personal computer in said employee's home that is not under the control of the corporate IT department.

Read this again.

Now contemplate this -- said Ford employee, with a device that Ford, the company believes is "secure", connects said phone to their personal computer at home to transfer some music.  Said computer at home has a virus on it that it picked up when that person, on their own time and in the privacy of their own home, surfed to some porn site on the Internet.

That virus sends the trust records for the iPhone back to a hacker in China!

The device's security has now been permanently compromised; said hacker can now, any time the device is on a network where he also has a presence (say, a public WiFi point), access huge amounts of data off said device, including the contact lists, messages, pictures and similar items, along with (gulp!) OAuth tokens. The latter, by the way, are identical in effect to having someone's password for social media accounts; this allows the impersonation of that individual on those accounts.
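To see why a stolen OAuth token is as good as a password: the token is presented as a plain HTTP header, and the server cannot distinguish the real client from an attacker holding the same token.  A minimal sketch (the hostname and token below are invented for illustration):

```python
# An OAuth bearer token rides along as an ordinary request header.
# Whoever holds the token IS the account, as far as the API can tell.

def build_api_request(token: str) -> dict:
    """Headers any client -- legitimate or not -- sends with the token."""
    return {
        "Host": "api.social-example.com",   # hypothetical API endpoint
        "Authorization": "Bearer " + token,  # the entire proof of identity
    }

# The account owner and the thief produce byte-identical requests:
owner_request = build_api_request("ya29.EXAMPLE-STOLEN-TOKEN")
thief_request = build_api_request("ya29.EXAMPLE-STOLEN-TOKEN")
```

There is no second factor in the request itself; until the token is revoked or expires, the impersonation is invisible to the service.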

Secure my ass.

That Ford published such nonsense tells me exactly how Ford the company looks at data security issues at an enterprise level.  The company has publicly declared that fellating employee egos takes precedence over enterprise data security.

A company that takes this position deserves what befalls them as a consequence.


Legal Disclaimer

The content on this site is provided without any warranty, express or implied. All opinions expressed on this site are those of the author and may contain errors or omissions.


The author may have a position in any company or security mentioned herein. Actions you undertake as a consequence of any analysis, opinion or advertisement on this site are your sole responsibility.

Market charts, when present, used with permission of TD Ameritrade/ThinkOrSwim Inc. Neither TD Ameritrade nor ThinkOrSwim has reviewed, approved or disapproved any content herein.

The Market Ticker content may be reproduced or excerpted online for non-commercial purposes provided full attribution is given and the original article source is linked to. Please contact Karl Denninger for reprint permission in other media or for commercial use.

Submissions or tips on matters of economic or political interest may be sent "over the transom" to The Editor at any time. To be considered for publication your submission must include full and correct contact information and be related to an economic or political matter of the day. All submissions become the property of The Market Ticker.