rising_moon: (Default)
[personal profile] rising_moon
Recently I've read a few excellent fantasy novels which were written around believable, consistent, and reasonable systems of magic. Believable magic is one of the elements that will sell me on a writer. I've enjoyed The Abhorsen Trilogy, by Garth Nix, and, most recently, The Name of the Wind, by Patrick Rothfuss.

I've learned that Brandon Sanderson, who wrote this essay on systems of magic, is going to finish Robert Jordan's 12th and final novel of the Wheel of Time series. Depending on my Lady's response to his work, I might take up the first one. :)

Unrelatedly (maybe): can any of you recommend a good history (articles, blogs, anything) of technical approaches to affixing Identity? That is, assuring that individuals are who they say they are? I'm making a study of transaction psychology -- financial services inclined but not fixed -- and would love some background data on approaches to identity assurance. Thanks!

Date: 2008-12-05 02:07 am (UTC)
From: [identity profile] goldsquare.livejournal.com
The issue breaks into three parts: Identity, Authentication and Authorization.

Generally speaking, people tend to confuse or conflate Identity and Authentication, but that is not necessary. Consider LJ - you might grant some people certain rights to read your blog because of what they write, say or do - but never know their real name and identity. You Authorize them via a Friends list.

Meanwhile, when they log in, they Authenticate their credential to LJ (or, since LJ accepts other ID servers and their authentication, maybe to someone else).
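To make the split concrete, here is a toy sketch (Python; the names and data are mine and nothing like how LJ actually implements it) of authorization by friends-list versus authentication by credential:

```python
# Toy illustration of the Identity / Authentication / Authorization split.
# All names and structures are hypothetical, for illustration only.

import hashlib

# Authorization: a friends-list maps a handle to rights, with no need to
# know the person's real-world identity.
friends = {"goldsquare": {"read_locked_posts"}, "dilletante": {"read_locked_posts"}}

def authorized(handle: str, right: str) -> bool:
    """Does this handle hold the given right?"""
    return right in friends.get(handle, set())

# Authentication: proving you are the holder of the handle, e.g. by a shared
# secret checked against a stored hash (unsalted here only to keep it short).
stored = {"goldsquare": hashlib.sha256(b"correct horse").hexdigest()}

def authenticated(handle: str, password: str) -> bool:
    """Does the presented secret match the credential on file?"""
    return stored.get(handle) == hashlib.sha256(password.encode()).hexdigest()

# A request needs both: first authenticate the session, then check rights.
if authenticated("goldsquare", "correct horse") and authorized("goldsquare", "read_locked_posts"):
    print("show the locked post")
```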

I cannot recall where I first read about these issues, but I can do a little digging. For interesting browsing, you might look at some of the articles in Wired (and other places) by Bruce Schneier, CTO of Counterpane. You might also ask [livejournal.com profile] patsmor or look at the links in her blog. I have not done so, but since she is an expert in Internet Security and Privacy, I am sure she can give you references on the topic. (She is also a close friend of [livejournal.com profile] cvirtue as well as myself - and an SCA person of excellent repute and good cooking skills. Amongst many other terrific features.)

Date: 2008-12-05 04:23 pm (UTC)
From: [identity profile] rising-moon.livejournal.com
This is an excellent taxonomy of the topic, thank you. Authentication is the wing that I'm concerned about -- that is, Authentication of Your Identity, e.g., "what you know" and "what you are", and how certainty and ease-of-use impact your device interaction.

Yeh, I'm a big Schneier fan. :)

Thank you for the pointer to patsmor! I'll go poke at her info page and introduce myself.

Date: 2008-12-05 10:30 pm (UTC)
From: [identity profile] goldsquare.livejournal.com
To treat the topic briefly: one authenticates in one of two ways. Either one party proffers evidence of a shared secret, or each party proffers secrets to the other.

The secrets are broken down into "what I have, what I am, what I know". An example of each, in order: a token that generates large numbers over a period of time, a fingerprint, or a password. Highly secure systems use two or even three of those, and often use rotating systems of information, or variable challenges.
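That first example - the token that generates large numbers over a period of time - works roughly like this sketch, which is in the spirit of time-based one-time codes (TOTP) but simplified, with an invented secret:

```python
# A simplified time-based one-time code, in the spirit of TOTP (RFC 6238).
# The shared secret and step size here are illustrative, not a real deployment.
import hashlib, hmac, struct, time

SECRET = b"shared-secret-provisioned-into-the-token"
STEP = 30  # seconds per code

def one_time_code(secret: bytes, t: float) -> int:
    counter = int(t // STEP)
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return code % 1_000_000                              # six-digit code

# The token and the server both derive the same code from the same secret and
# the current time; the server accepts a small window of clock drift.
print(f"{one_time_code(SECRET, time.time()):06d}")
```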

When passing the secrets back and forth, every single step of the way must be secure, or in the end the security is worthless. That means not just careful transmission, but careful handling. For example, some old software used to accept a password and store it in the clear, in memory. Attackers who wanted to break into the system could search freed memory, or unauthorized memory or disks, for patterns that contained those passwords.
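The remedy for that particular blunder is to never keep the raw password at all - only a salted, deliberately slow hash of it. A minimal sketch (the salt size and iteration count are just illustrative choices):

```python
# Never store or compare raw passwords; keep only a salted, slow hash.
# Salt length and iteration count below are illustrative choices.
import hashlib, hmac, os

def make_record(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest   # store these; discard the password immediately

def check(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)   # constant-time compare

salt, digest = make_record("hunter2")
print(check("hunter2", salt, digest))   # True
print(check("hunter3", salt, digest))   # False
```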

Some of the more sophisticated systems use leased-access concepts - where access is temporary, and must be periodically renewed automatically. (Kerberos, developed at MIT, was one such system. The Jini software project, developed at Sun Microsystems, used leases for everything, including access.)
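A lease, in that sense, is simply access that expires unless it is renewed in time. In miniature (much simplified compared to real Kerberos tickets or Jini leases):

```python
# A toy lease: access is granted for a limited time and must be renewed,
# in the spirit of Kerberos tickets or Jini leases (much simplified).
import time

class Lease:
    def __init__(self, holder: str, duration: float):
        self.holder = holder
        self.duration = duration
        self.expires_at = time.time() + duration

    def valid(self) -> bool:
        return time.time() < self.expires_at

    def renew(self) -> None:
        if not self.valid():
            raise PermissionError("lease expired; re-authenticate from scratch")
        self.expires_at = time.time() + self.duration

lease = Lease("rising_moon", duration=8 * 60 * 60)   # an eight-hour ticket
if lease.valid():
    lease.renew()   # renewal only works while the lease is still live
```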

One can proxy authentication to another system - meaning that the two systems can authenticate each other in a complex way, and then the proxying system will trust the other to do the work.

There are two major threats to authentication, although there are countless more. One is the compromise of a secret; the other is an attacker playing "man in the middle" and somehow capturing all traffic. Means of losing secrets are legion.

I hope this lecture is helpful. If not, please chalk it up to good intentions. :-)

Date: 2008-12-08 08:23 pm (UTC)
From: [identity profile] rising-moon.livejournal.com
This is all familiar, but more detailed than I'd read before. Thank you! Most of my background on this topic was gleaned from working on an authentication application at my previous employer. I really don't know the history of systems security.

dilettante, below, proffers a fourth kind of secret that encompasses unique physical skills/motions/behaviors (like the WWII "fist"). It wasn't cost-effective, or certain enough, to use the "fist" to authenticate the apps I was working on, but our research did make me wonder. Some day maybe we'll add "what I do" to the list. :)
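If we ever do, I imagine the check would look vaguely like this toy sketch - compare a fresh sample of keying rhythm against an enrolled profile - where the timings, threshold, and distance measure are all invented for illustration:

```python
# A toy "what I do" check: compare the rhythm of someone's keying (their
# "fist") against an enrolled profile. Timings, threshold, and the distance
# measure are invented; real behavioural biometrics are far subtler.

ENROLLED = [0.21, 0.35, 0.18, 0.42, 0.27]   # seconds between keystrokes, on file

def fist_distance(sample: list[float], reference: list[float]) -> float:
    """Mean absolute difference between two timing profiles."""
    return sum(abs(s - r) for s, r in zip(sample, reference)) / len(reference)

def matches_fist(sample: list[float], threshold: float = 0.05) -> bool:
    return fist_distance(sample, ENROLLED) < threshold

print(matches_fist([0.22, 0.33, 0.19, 0.40, 0.28]))   # close rhythm -> True
print(matches_fist([0.50, 0.10, 0.45, 0.12, 0.60]))   # wrong rhythm -> False
```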

Even a unique, individuated Personal Turing Test wouldn't solve for the "man in the middle" scenario, though. Hm.

Date: 2008-12-09 03:01 am (UTC)
From: [identity profile] goldsquare.livejournal.com
I think those are simply combinations of "what you know" (juggling) and "what you are" (motions). Gait analysis is another possible example.

A variant of that might be the "anti-drunk driving" tools that you can install on cars now. In addition to the car key (what you have), one is presented with a random number that must be pressed into a keypad within a time frame. Fail, and the car will not start. Fail enough times swiftly, and the car locks down for a while.
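In miniature, with made-up numbers, the interlock amounts to something like this:

```python
# A toy version of the keypad interlock: a random challenge must be typed
# back within a time window; too many failures lock the system out for a while.
# All numbers here are made up for illustration.
import random, time

WINDOW = 15.0        # seconds allowed to type the challenge back
MAX_FAILS = 3        # strikes before lockout
LOCKOUT = 600.0      # lockout duration, seconds

failures = 0
locked_until = 0.0

def try_to_start(read_keypad) -> bool:
    global failures, locked_until
    if time.time() < locked_until:
        return False                       # still locked out
    challenge = random.randint(1000, 9999)
    asked_at = time.time()
    answer = read_keypad(challenge)        # display challenge, read the reply
    if answer == challenge and time.time() - asked_at <= WINDOW:
        failures = 0
        return True                        # engine may start
    failures += 1
    if failures >= MAX_FAILS:
        locked_until = time.time() + LOCKOUT
    return False

# e.g. try_to_start(lambda c: int(input(f"enter {c}: ")))
```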

There are defenses against man-in-the-middle attacks, as well as "replay" attacks.
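The classic defense against replay, for instance, is to fold a fresh server-chosen nonce into every exchange, so a captured response is worthless the second time. (Man-in-the-middle takes more than this - mutual authentication of the channel itself.) A sketch, with key management hand-waved:

```python
# A toy replay defense: the server issues a fresh nonce per attempt and the
# client must return an HMAC over that nonce, so a recorded answer from an
# earlier session proves nothing. Key distribution is hand-waved here.
import hashlib, hmac, os

SHARED_KEY = b"key both sides already hold"

def server_challenge() -> bytes:
    return os.urandom(16)                  # fresh every time, never reused

def client_response(key: bytes, nonce: bytes) -> bytes:
    return hmac.new(key, nonce, hashlib.sha256).digest()

def server_verify(key: bytes, nonce: bytes, response: bytes) -> bool:
    expected = hmac.new(key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = server_challenge()
resp = client_response(SHARED_KEY, nonce)
print(server_verify(SHARED_KEY, nonce, resp))               # True
print(server_verify(SHARED_KEY, server_challenge(), resp))  # replayed -> False
```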

I find this stuff amazingly geeky and fascinating.

PS The asymmetric difficulty of problems involving primes - easy to multiply them, hard to factor the product - is the core of what is now known as Public Key Encryption. It is a fascinating variant of "what you know". Do you know much about it?
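In miniature, with the standard textbook-sized numbers (real keys are vastly larger), the trick looks like this:

```python
# A toy RSA, just to show where the primes come in: multiplying p and q is
# easy, but recovering them from n (and hence deriving d) is the hard
# direction. Tiny textbook numbers; real keys are enormous.

p, q = 61, 53
n = p * q                    # 3233 -- public, safe to publish only because
phi = (p - 1) * (q - 1)      # factoring a huge n is (believed) infeasible
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent: modular inverse of e mod phi

def encrypt(m: int) -> int:
    return pow(m, e, n)      # anyone can do this with the public key (n, e)

def decrypt(c: int) -> int:
    return pow(c, d, n)      # only the holder of d can undo it

c = encrypt(65)
print(c, decrypt(c))         # 2790 65
```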

Date: 2008-12-09 02:41 pm (UTC)
From: [identity profile] dilletante.livejournal.com
yeah, i think "user fist" (great phrase, btw) can be considered biometrics-- like gait analysis, as you say. user skills, like juggling... hm. probably fall into the same category as making two people turn keys on opposite sides of the room at the same time, which probably falls into the same category as having a key with bits that are placed to turn a bolt without being blocked by wards, which is classically considered a "what you know" thing. but it keeps seeming like there's a difference, to me; maybe because for humans, procedural and declarative knowledge are distinct. hm.

Date: 2008-12-09 04:21 pm (UTC)
From: [identity profile] goldsquare.livejournal.com
I think your final point is key, because I am finding the three-fold ontology (what you have, know, or are) to be sufficient. You seem to be trying to create a fourth ontological distinction out of the intersection of "know AND are".

My rule of thumb is that if an idea can be expressed in terms of an existing ontology, expanding the ontology may be a refinement, but the expressive power of the enclosing ontology is already sufficient.

But I like set theory, and unions and intersections, and saying "ontology" a lot. :-)

Date: 2008-12-11 05:00 am (UTC)
From: [identity profile] dilletante.livejournal.com
hah! i figured out what's different about user-capability: faking the credential has a known cost.

here's an early example of user-capability security: penelope saying she'll marry whoever can string her missing husband odysseus's bow. she was able to know not only that none of the men likely to vie for her hand could string it, but that none of them could become strong enough to string it within a short time-frame (hopefully long enough for odysseus to return).

one could as well simply ask users to pay a fixed fee to be authenticated. in fact, i bet casinos do some version of this somewhere... and atm enclosures have locks that open if you produce any card with a mag stripe, thereby proving that you have a card with a mag stripe and so might be a customer.

cryptographers do make calculations based on the cost of breaking their systems by brute force. but that assumes there's no flaw in the algorithm. with user-capability authentication, there is no flaw in the algorithm: what you see is what you get. so calculations of how difficult it is to duplicate the authentication ought to be straightforward.
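for scale, that brute-force arithmetic looks roughly like this (the guessing rate is invented):

```python
# back-of-envelope brute-force cost: expected guesses to find a k-bit key,
# at an assumed (invented) guessing rate.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_break(key_bits: int, guesses_per_second: float) -> float:
    expected_guesses = 2 ** (key_bits - 1)     # on average, half the keyspace
    return expected_guesses / guesses_per_second / SECONDS_PER_YEAR

print(f"{years_to_break(64, 1e12):.1e} years")    # 64-bit key at 10^12 guesses/s
print(f"{years_to_break(128, 1e12):.1e} years")   # 128-bit key at the same rate
```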

Date: 2008-12-11 10:57 am (UTC)
From: [identity profile] goldsquare.livejournal.com
I like the way you are thinking, but I might still quibble anyway. :-)

For any such security requirement (have, know or are) there are always two ways to overcome it. One is to fake or obtain the credential; the other is to suborn the system. One of the touted strengths of biometrics is that the "cost" of faking the credential is very high - unlike a physical key or fob or something, and certainly higher than a simple password.

I really do continue to see the performance-based metric as being an intersection of Are and Know, not breaking any new ground. Penelope was, in effect, asking them to change "what they are", using a process that made it harder to suborn. Then again, it is just the same as swiping a fingerprint under the eye of a guard - you can't use a mock-up.

Date: 2008-12-11 04:45 pm (UTC)
From: [identity profile] dilletante.livejournal.com
yeah, so a classic downside of biometrics is that they can't be revoked or changed once someone has figured out how to duplicate them-- you're stuck with the fingerprints you have, pretty much. a second classic downside is that the actual information that constitutes the credential isn't necessarily secret-- it may even be available for anyone who wants it (anybody can see you walk and observe your gait, say; or lift a fingerprint from something you've touched). it's *assumed* to be difficult to fake, but in practice ways to fake or copy high-tech biometric credentials abound (watch the movie "gattaca" for a lot of examples, some of which occur in the modern day); and actors have been copying low-tech biometric credentials (gait, facial features, voice, mannerisms) for all of human history.

i'm going to ignore "suborn the system," because that's a danger with any system, as you say.

in short, the minimum cost of faking a biometric credential is harder to bound than that of a capability credential, i think. (i may have to think about this more. the classic police field tests for drunkenness ought to count as capability credentials, and they are known to be fakeable at some not-well-known cost. hm. but in general i think it holds.)

cost of revocation or change is complicated. it's not like changing your fingerprints, but you probably picked the particular capability credential you did because it met a bunch of constraints, and they might be hard to satisfy with a different credential. you might gloss it by considering it to be the same as switching to a different biometric-- like changing all your locks. very expensive. on the other hand, some might be easy to change: "shibboleth" was another classic capability credential, and if it were found to be too easy to fake, maybe it could have been replaced by some other word that was even harder. maybe.

anyway. i think capability credentials work particularly well in situations where what you care about is really just that the authenticated person has some quality that's inherently associated with the capability you're testing-- like requiring them to pay a fee in order to prove they have money. i think at some levels of analysis they can of course be considered "having a secret" (at that level of analysis, "what you are" and "what you know" are the same, also), but i think they differ from other forms of authentication at lower levels of analysis in ways that are interesting.

Date: 2008-12-11 04:55 pm (UTC)
From: [identity profile] dilletante.livejournal.com
shoot, i meant to say, also: the corresponding classic problem of capability credentials (now that i'm thinking about them, it occurs to me there are lots of historical examples!) is the possibility that the capability you're looking at isn't distributed in the population the way you think it is. maybe some stranger turns up from a far-off land who turns out to be just as strong as odysseus-- oops. i think fairy tales are full of this sort of event.

but they still work great for situations where what you actually care about is not specific identity, but that the person authenticated *have* a capability that's associated with the one you're testing, i think.
