Google's having another shot
at solving one of the biggest problems of the modern age -- the tiny
pain in the arse that is the Captcha human verification text check
system.
You know, the one that makes you try to identify
what a toddler has been scrawling on the walls with its own poo before
being allowed to carry on conducting your important internet business.
That thing.
The big new idea is to replace wonky letter
examination skills with a simple question: are you human? The clever
stuff is all done prior to that click, though, with Google suggesting a
mysterious combination of mouse speed, click accuracy, computer stats
and IP details are used to work out if you're a person or a
data-harvesting software routine before you click.
Only then, if you fail, is there a back-up test involving clicking on photographs of cats.
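Google hasn't published how that pre-click judgement actually works, but the general shape described above -- fold a few behavioural signals into a risk score, and only fall back to an image challenge when the score looks dodgy -- can be sketched in a few lines. Everything here (the signal names, weights and threshold) is invented for illustration, not Google's real algorithm:

```python
# Toy sketch of a behavioural risk score -- NOT Google's actual,
# undisclosed algorithm. All signal names, weights and thresholds
# below are made up purely to illustrate the idea.

def risk_score(mouse_speed_px_s, click_accuracy, requests_per_min):
    """Combine a few behavioural signals into a 0-1 'humanness' score."""
    score = 1.0
    # Implausibly fast, dead-straight mouse movement looks scripted.
    if mouse_speed_px_s > 5000:
        score -= 0.4
    # Pixel-perfect clicks on every single target are suspicious.
    if click_accuracy > 0.99:
        score -= 0.3
    # A very high request rate from one IP suggests automation.
    if requests_per_min > 60:
        score -= 0.3
    return max(score, 0.0)

def verify(signals, threshold=0.5):
    """Wave the visitor through if the score is high enough;
    otherwise fall back to the secondary challenge (the cat photos)."""
    if risk_score(**signals) >= threshold:
        return "verified"
    return "show_image_challenge"
```

A sluggish, slightly wobbly human sails through; a bot hammering the page with perfect clicks gets sent to the cats.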
The
problem is, if we really are all just software people living in a
simulated universe, won't we all be picked out as robots? Introducing
this could bring about the realisation that everything we believe in is a
lie.


Terminator Clickonthis
On Wired, the conversation quickly turned to the privacy
implications of the new Captcha system, with reader Symplectic pointing
out there's a lot more at stake than just knowing if we're made of meat
or silicon, saying: "Google is using your browsing history and your
mouse pattern to identify you as human. It's not the fact you're human
that privacy-conscious users won't like being reminded of -- it's the
fact that Google knows what websites you browse and which images you
hovered over without clicking."
Commenter Velocipedes
thinks there's not much more Google needs to know about us, quipping:
"If you're using any of Google's services, you're voluntarily providing
that information."
Grover Nilkvist thinks the robots should be making the
effort anyway, asking: "Why are WE always
the ones to have to make the effort? Why can't THEY just click the 'I'm a
robot' checkbox? Anyone who thinks we're going to get the same
preferential treatment when they're in charge is just delusional. Wake
up people."
Click farmers
Readers on The Verge
turned their attention to the cat photos, wondering whether the
vagueness of the images might be helping Google fine-tune its own photo
recognition and categorisation tools.
Commenter Outerwave
claims the cat-matching game is deliberately vague, posting: "The
message being somewhat ambiguous is part of the system. These captchas
also improve Google's search. What is considered 'similar' to the
original picture is probably different from person to person. But after
100 people take the test, Google (or whoever) would have an improved
idea of what 'similar' meant to most people by comparing what was
selected most of the time."
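Outerwave's point -- show the same candidate pictures to lots of people and keep whatever most of them picked -- is basically majority-vote label aggregation. A minimal sketch, with purely illustrative function and image names:

```python
# Minimal sketch of the crowd-labelling idea Outerwave describes:
# many users each select the images they judge 'similar' to the
# original, and the images chosen by enough of them become the
# consensus answer. Names here are invented for illustration.
from collections import Counter

def aggregate_similar(selections, min_agreement=0.5):
    """selections: one set of image IDs per user (the pictures that
    user judged similar). Returns the IDs selected by at least
    `min_agreement` of all users."""
    counts = Counter(img for chosen in selections for img in chosen)
    n = len(selections)
    return {img for img, c in counts.items() if c / n >= min_agreement}

# Three users take the test; cat1 and cat2 clear the 50% bar, dog1 doesn't.
votes = [{"cat1", "cat2"}, {"cat1", "dog1"}, {"cat1", "cat2"}]
consensus = aggregate_similar(votes)
```

After a hundred such tests, the aggregate is a free, human-graded answer key for what "similar" means.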
Which made some clipart of a
lightbulb appear over the head of reader Miku, who replied: "Ah! This is
not really about making Captcha better, this is about harvesting
Captcha to improve their image search results. That makes sense."
So
Google's not really interested in reworking the Captcha system; it's
simply come up with a front for harvesting our spare seconds to
process its photos. Google's turned the whole world into an enormous
complimentary data-processing farm.
Bot matrix
The Register
reader Irongut has already had his feelings hurt by rogue AI, claiming
to be a constant failure at existing Captcha technology, and therefore
life. He posts: "I usually find I have to ask for a different image at
least 3 times because I can't make sense of the first few. Even when I
get an image I think I can read I'm usually wrong. Generally a Captcha
will take me about 5 minutes to get an answer the site will accept,
assuming I can be arsed to keep trying."
Credit: TechRadar