Search Results

Search found 541 results on 22 pages for 'tokens'.

Page 10 of 22

  • Hidden form and SEO

    - by AntonAL
    I'm using hidden forms to collect some statistics. Will this incur any penalty from search engines? Update 1: I'm collecting statistics based on user interaction with my website. For example, POST requests are sent to the server when: the user stops a playing video, the user has watched a video to its end, etc. Using form_remote_for in Rails, I just render the form and keep it invisible. The reason for doing it this way is to utilize authenticity tokens and simply have less to code. Via JavaScript I only fill in some hidden fields and initiate the form submission.

    Read the article

  • Handling SMS/email convergence: how does a good business app do it?

    - by Tim Cooper
    I'm writing a school administration software package, but it strikes me that many developers will face this same issue: when communicating with users, should you use email or SMS or both? Should you treat them as fundamentally equivalent channels, such that any message can be sent over either medium (with long and short forms of the message template, obviously), or should different business functions be specifically tailored to each of the three options? This question got kicked off Stack Overflow for being overly general, so I'm hoping it's not too general for this site - the answers will no doubt be subjective, but "you don't need to write a whole book to answer the question". I'm particularly interested in people who have direct experience of having written comparable business applications. Sub-questions:
    - Do I treat SMS as "moderately secure" and email as less secure? (I'm thinking about booking tokens for parent/teacher nights, permission slips for excursions, absence explanation notes - so high security is not a requirement for us, although medium security is.)
    - Is it annoying for users to receive the same message on multiple channels?
    - Should we have a unified framework that reports on delivery, or lack thereof, of emails and SMSes?

    Read the article

  • How can I estimate the entropy of a password?

    - by Wug
    Having read various resources about password strength, I'm trying to create an algorithm that will provide a rough estimate of how much entropy a password has. I'm trying to create an algorithm that's as comprehensive as possible. At this point I only have pseudocode, but the algorithm covers the following:

    - password length
    - repeated characters
    - patterns (logical)
    - different character spaces (LC, UC, numeric, special, extended)
    - dictionary attacks

    It does NOT cover the following, and SHOULD cover it WELL (though not perfectly):

    - ordering (passwords can be strictly ordered by the output of this algorithm)
    - patterns (spatial)

    Can anyone provide some insight on what this algorithm might be weak to? Specifically, can anyone think of situations where feeding a password to the algorithm would OVERESTIMATE its strength? Underestimations are less of an issue.

    The algorithm:

      // the password to test
      password = ?
      length = length(password)

      // unique character counts from password (duplicates discarded)
      uqlca = number of unique lowercase alphabetic characters in password
      uquca = number of unique uppercase alphabetic characters
      uqd   = number of unique digits
      uqsp  = number of unique special characters (anything with a key on the keyboard)
      uqxc  = number of unique special special characters (alt codes, extended-ASCII stuff)

      // algorithm parameters, total sizes of alphabet spaces
      Nlca = total possible number of lowercase letters (26)
      Nuca = total uppercase letters (26)
      Nd   = total digits (10)
      Nsp  = total special characters (32 or something)
      Nxc  = total extended ASCII characters that don't fit into other categories (idk, 50?)

      // algorithm parameters, password strength growth rates as percentages (per character)
      flca = entropy growth factor for lowercase letters (.25 is probably a good value)
      fuca = EGF for uppercase letters (.4 is probably good)
      fd   = EGF for digits (.4 is probably good)
      fsp  = EGF for special chars (.5 is probably good)
      fxc  = EGF for extended ASCII chars (.75 is probably good)

      // repetition factors: few unique letters == low factor, many unique == high
      rflca = (1 - (1 - flca) ^ uqlca)
      rfuca = (1 - (1 - fuca) ^ uquca)
      rfd   = (1 - (1 - fd  ) ^ uqd  )
      rfsp  = (1 - (1 - fsp ) ^ uqsp )
      rfxc  = (1 - (1 - fxc ) ^ uqxc )

      // digit strengths
      strength = ( rflca * Nlca + rfuca * Nuca + rfd * Nd + rfsp * Nsp + rfxc * Nxc ) ^ length
      entropybits = log_base_2(strength)

    A few inputs and their desired and actual entropy_bits outputs:

      INPUT               DESIRED         ACTUAL
      aaa                 very pathetic   8.1
      aaaaaaaaa           pathetic        24.7
      abcdefghi           weak            31.2
      H0ley$Mol3y_        strong          72.2
      s^fU¬5ü;y34G<       wtf             88.9
      [a^36]*             pathetic        97.2
      [a^20]A[a^15]*      strong          146.8
      xkcd1**             medium          79.3
      xkcd2**             wtf             160.5

      *  these two passwords use a shortened notation, where [a^N] expands to N a's.
      ** xkcd1 = "Tr0ub4dor&3", xkcd2 = "correct horse battery staple"

    The algorithm does realize (correctly) that increasing the alphabet size (even by one digit) vastly strengthens long passwords, as shown by the difference in entropy_bits for the 6th and 7th passwords, which both consist of 36 a's, except that in the second one the 21st a is capitalized. However, it does not account for the fact that a password of 36 a's is not a good idea: it's easily broken with a weak password cracker (and anyone who watches you type it will see it), and the algorithm doesn't reflect that. It does, however, reflect the fact that xkcd1 is a weak password compared to xkcd2, despite having greater complexity density (is this even a thing?). How can I improve this algorithm?
    Addendum 1: Dictionary attacks and pattern-based attacks seem to be the big thing, so I'll take a stab at addressing those. I could perform a comprehensive search through the password for words from a word list and replace words with tokens unique to the words they represent. Word-tokens would then be treated as characters and have their own weight system, and would add their own weights to the password. I'd need a few new algorithm parameters (I'll call them lw, Nw ~= 2^11, fw ~= .5, and rfw) and I'd factor the weight into the password as I would any of the other weights.

    This word search could be specially modified to match both lowercase and uppercase letters as well as common character substitutions, like that of E with 3. If I didn't add extra weight to such matched words, the algorithm would underestimate their strength by a bit or two per word, which is OK. Otherwise, a general rule would be: for each non-perfect character match, give the word a bonus bit.

    I could then perform simple pattern checks, such as searches for runs of repeated characters and derivative tests (take the difference between each character), which would identify patterns such as 'aaaaa' and '12345', and replace each detected pattern with a pattern token, unique to the pattern and length. The algorithmic parameters (specifically, entropy per pattern) could be generated on the fly based on the pattern.

    At this point, I'd take the length of the password. Each word token and pattern token would count as one character; each token would replace the characters it symbolically represents. I made up some sort of pattern notation, which includes the pattern length l, the pattern order o, and the base element b. This information could be used to compute some arbitrary weight for each pattern. I'd do something better in actual code.

    Modified example:

      Password:          1234kitty$$$$$herpderp
      Tokenized:         1 2 3 4 k i t t y $ $ $ $ $ h e r p d e r p
      Words filtered:    1 2 3 4 @W5783 $ $ $ $ $ @W9001 @W9002
      Patterns filtered: @P[l=4,o=1,b='1'] @W5783 @P[l=5,o=0,b='$'] @W9001 @W9002

      Breakdown: 3 small, unique words and 2 patterns
      Entropy:   about 45 bits, as per the modified algorithm

      Password:          correcthorsebatterystaple
      Tokenized:         c o r r e c t h o r s e b a t t e r y s t a p l e
      Words filtered:    @W6783 @W7923 @W1535 @W2285

      Breakdown: 4 small, unique words and no patterns
      Entropy:   43 bits, as per the modified algorithm

    The exact semantics of how entropy is calculated from patterns is up for discussion. I was thinking something like:

      entropy(b) * l * (o + 1)   // o will be either zero or one

    The modified algorithm would find flaws with, and reduce the strength of, each password in the original table, with the exception of s^fU¬5ü;y34G<, which contains no words or patterns.
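    For concreteness, here is a minimal Python sketch of the base algorithm exactly as the pseudocode above describes it. The EGF values are the ones suggested in the question; the special/extended alphabet sizes and the character classification are rough assumptions of mine, so it will not reproduce the exact figures in the table (the question's real parameter choices aren't fully specified):

      import math
      import string

      # Suggested EGF values from the question; alphabet sizes for the special and
      # extended classes are rough assumptions.
      EGF   = {"lower": 0.25, "upper": 0.40, "digit": 0.40, "special": 0.50, "extended": 0.75}
      SIZES = {"lower": 26,   "upper": 26,   "digit": 10,   "special": 32,   "extended": 50}

      def classify(c):
          if c in string.ascii_lowercase:
              return "lower"
          if c in string.ascii_uppercase:
              return "upper"
          if c in string.digits:
              return "digit"
          if c in string.punctuation or c == " ":
              return "special"
          return "extended"

      def entropy_bits(password):
          """Rough estimate following the pseudocode: strength = (sum of
          repetition-weighted alphabet sizes) ** length, reported in bits."""
          uniques = {k: set() for k in SIZES}          # unique characters per class
          for c in password:
              uniques[classify(c)].add(c)
          # repetition factor per class: rf = 1 - (1 - f) ** unique_count
          per_char = sum((1 - (1 - EGF[k]) ** len(uniques[k])) * SIZES[k] for k in SIZES)
          return len(password) * math.log2(per_char) if password else 0.0

      for pw in ["aaa", "abcdefghi", "H0ley$Mol3y_", "correct horse battery staple"]:
          print(f"{pw!r}: {entropy_bits(pw):.1f} bits")

    The word-token and pattern-token extensions from the addendum would slot in before the counting step, replacing matched runs with single tokens that carry their own weights.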

    Read the article

  • Moving between sites using SAML

    - by System Down
    I'm tasked with developing an SSO system and was guided towards using the SAML spec. After some research I think I understand the interaction between a Service Provider and an ID Provider and how a user's identity is confirmed. But what happens when I redirect the user to another Service Provider? How do I ascertain the user's identity there? Do I send his SAML assertion tokens along with the redirect request? Or does the second Service Provider need to contact the ID Provider all over again?

    Read the article

  • How to implement proper identification and session management on JSON POST requests?

    - by IBr
    I have a simple messaging connection from a website to the server via JSON requests. There is a single endpoint that distributes requests according to identification data. I am using an asynchronous server and handle data as it comes in. Now I am thinking about extending the requests with some kind of session. What is the best way to define a session?
    - Set a cookie at registration and send the token with each request for as long as the session runs?
    - Should I implement a timeout for the token?
    - Are there alternative methods?
    - Can I cache tokens for same-origin requests?
    - What could I use on the client side (web browser)?
    - How about safety? What techniques should I use to throw away requests with malformed or overly large data without choking the server? Should I worry?
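    One common shape for the "cookie plus token with a timeout" idea is an HMAC-signed token with an embedded expiry, so the server can validate it without a per-request database lookup. The sketch below is illustrative only (Python, hypothetical names, server-side secret assumed):

      import hmac, hashlib, time, secrets

      SECRET = secrets.token_bytes(32)  # server-side key; would be loaded from config in practice

      def issue_token(user_id: str, ttl_seconds: int = 3600) -> str:
          """Create a token of the form user_id:expiry:signature."""
          expiry = str(int(time.time()) + ttl_seconds)
          payload = f"{user_id}:{expiry}".encode()
          sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
          return f"{user_id}:{expiry}:{sig}"

      def validate_token(token: str) -> bool:
          """Check signature and expiry; reject anything malformed."""
          try:
              user_id, expiry, sig = token.split(":")
          except ValueError:
              return False
          payload = f"{user_id}:{expiry}".encode()
          expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
          return hmac.compare_digest(sig, expected) and int(expiry) > time.time()

      tok = issue_token("user42")
      print(validate_token(tok))        # True until the TTL expires
      print(validate_token(tok + "x"))  # False: tampered signature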

    Read the article

  • Access Token Verification

    - by DecafCoder
    I have spent quite a few days reading up on OAuth and token-based security measures for REST APIs, and I am currently looking at implementing an OAuth-based authentication approach almost exactly like the one described in this post (OAuth alternative for a 2 party system). From what I understand, the token is to be verified upon each request to the resource server. This means the resource server would need to retrieve the token from a datastore to verify the client's token. Given this would have to happen on every request, I am concerned about the speed implications of hitting a datastore like MySQL or NoSQL on every request just to verify the token. Is this the standard way to verify tokens - storing them in an RDBMS or NoSQL database and retrieving them upon each request? Or is it a suitable solution to have them cached (bearing in mind that we are talking millions of users)?
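    As an illustration of the trade-off being asked about, here is a minimal, hypothetical Python sketch of a verifier that caches datastore lookups for a short TTL; a real deployment would more likely use a shared cache such as Redis or memcached and keep the TTL short so revocation still takes effect quickly:

      import time

      class TokenVerifier:
          """Verify access tokens, caching datastore lookups for a short TTL."""

          def __init__(self, datastore_lookup, ttl_seconds: int = 60):
              self._lookup = datastore_lookup      # e.g. a function hitting MySQL/NoSQL
              self._ttl = ttl_seconds
              self._cache = {}                     # token -> (user_id, cached_at)

          def verify(self, token: str):
              now = time.time()
              hit = self._cache.get(token)
              if hit and now - hit[1] < self._ttl:
                  return hit[0]                    # served from cache, no datastore hit
              user_id = self._lookup(token)        # one datastore round-trip
              if user_id is not None:
                  self._cache[token] = (user_id, now)
              return user_id

      # usage with a stand-in datastore
      fake_db = {"abc123": "user-1"}
      verifier = TokenVerifier(lambda tok: fake_db.get(tok), ttl_seconds=30)
      print(verifier.verify("abc123"))   # hits the "datastore"
      print(verifier.verify("abc123"))   # served from cache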

    Read the article

  • SWI-Prolog tokenize_atom/2 replacement?

    - by Shark
    What I need to do is break an atom into tokens. E.g., tokenize_string('Hello, World!', L). would unify L = ['Hello', ',', 'World', '!'], exactly as tokenize_atom/2 does. But when I try to use tokenize_atom/2 with non-Latin letters it fails. Is there any universal replacement, or how can I write one? Thanks in advance.

    Read the article

  • Authorizing a computer to access a web application

    - by HackedByChinese
    I have a web application, and am tasked with adding secure sign-on to bolster security, akin to what Google has added to Google accounts.

    Use Case: Essentially, when a user logs in, we want to detect whether the user has previously authorized this computer. If the computer has not been authorized, the user is sent a one-time password (via email, SMS, or phone call) that they must enter, and the user may choose to remember this computer. In the web application, we will track authorized devices, allowing users to see when/where they logged in from that device last, and to deauthorize any devices if they so choose. We require a solution that is very light touch (meaning no client-side software installation) and works with Safari, Chrome, Firefox, and IE 7+ (unfortunately). We will offer x509 security, which provides adequate security, but we still need a solution for customers that can't or won't use x509. My intention is to store authorization information using cookies (or, potentially, local storage, degrading to flash cookies and then normal cookies).

    At First Blush: Track two separate values (local data or cookies): a hash representing a secure sign-on (SSO) token, as well as a device token. Both values are driven (and recorded) by the web application and dictated to the client. The SSO token is dependent on the device as well as a sequence number. This effectively allows devices to be deauthorized (all SSO tokens become invalid) and mitigates replay (not effectively, though, which is why I'm asking this question) through the use of a sequence number, and uses a nonce.

    Problem: With this solution, it's possible for someone to just copy the SSO and device tokens and use them in another request. While the sequence number will help me detect such an abuse and thus deauthorize the device, the detection and response can only happen after the valid device and the malicious request have both attempted access, which is ample time for damage to be done. I feel like using HMAC would be better: track the device and the sequence, create a nonce and timestamp, hash everything with a private key, then send the hash plus those values as plain text. The server does the same (in addition to validating the device and sequence) and compares. That seems much easier and much more reliable... assuming we can securely negotiate, exchange, and store private keys.

    Question: So then, how can I securely negotiate a private key for an authorized device, and then securely store that key? Is it at least more feasible if I settle for storing the private key using local storage or flash cookies and just say it's "good enough"? Or is there something I can do to my original draft to mitigate the vulnerability I describe?
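    To make the HMAC idea concrete, here is a rough Python sketch of the request-signing scheme described above (per-device key, sequence, nonce, timestamp; the server recomputes the MAC and compares). It deliberately leaves open the hard part the question asks about - how that per-device key is negotiated and stored:

      import hmac, hashlib, os, time, json

      # Per-device secret agreed at authorization time; negotiating and storing it
      # securely is exactly the open question in the post.
      device_key = os.urandom(32)

      def sign_request(device_id: str, sequence: int) -> dict:
          """Client side: send device id, sequence, nonce, timestamp plus an HMAC over them."""
          fields = {
              "device_id": device_id,
              "sequence": sequence,
              "nonce": os.urandom(16).hex(),
              "timestamp": int(time.time()),
          }
          msg = json.dumps(fields, sort_keys=True).encode()
          fields["mac"] = hmac.new(device_key, msg, hashlib.sha256).hexdigest()
          return fields

      def verify_request(fields: dict, expected_sequence: int, max_skew: int = 300) -> bool:
          """Server side: recompute the HMAC and check the sequence and timestamp."""
          mac = fields.pop("mac", "")
          msg = json.dumps(fields, sort_keys=True).encode()
          expected = hmac.new(device_key, msg, hashlib.sha256).hexdigest()
          return (hmac.compare_digest(mac, expected)
                  and fields["sequence"] == expected_sequence
                  and abs(time.time() - fields["timestamp"]) <= max_skew)

      req = sign_request("device-1", sequence=7)
      print(verify_request(req, expected_sequence=7))   # True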

    Read the article

  • dos batch iterate through a delimited string

    - by bjax-bjax
    I have a delimited list of IPs I'd like to process individually. The list length is unknown ahead of time. How do I split and process each item in the list?

      @echo off
      FOR /f "tokens=* delims=," %%a IN ("127.0.0.1,192.168.0.1,10.100.0.1") DO call :sub %%a
      :sub
      echo In subroutine
      echo %1
      exit /b

    Outputs:

      In subroutine
      127.0.0.1
      In subroutine
      ECHO is off.

    Read the article

  • lexer/parser ambiguity

    - by John Leidegren
    How does a lexer solve this ambiguity? /*/*/ How is it that it doesn't just say: oh yeah, that's the beginning of a multi-line comment, followed by another multi-line comment? Wouldn't a greedy lexer just return the following tokens? /* /* / I'm in the midst of writing a shift-reduce parser for CSS, and yet this simple comment thing is in my way. You can read this question if you want some more background information.

    Read the article

  • Simple C# Tokenizer Using Regex

    - by Pete
    I'm looking to tokenize really simple strings, but I'm struggling to get the right regex. The strings might look like this:

      string1 = "{[Surname]}, some text... {[FirstName]}"
      string2 = "{Item}foo.{Item2}bar"

    I want to extract the tokens in the curly braces (so string1 yields "{[Surname]}" and "{[FirstName]}", and string2 yields "{Item}" and "{Item2}"). This question is quite good, but I can't get the regex right: Poor man's lexer for C#. Thanks for the help!
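    A pattern along the lines of \{\[?\w+\]?\} matches both the bracketed and unbracketed forms. Shown below in Python purely for illustration; the same pattern should drop straight into .NET's Regex.Matches:

      import re

      # a literal '{', an optional '[', one or more word characters,
      # an optional ']', then a literal '}'
      TOKEN = re.compile(r"\{\[?\w+\]?\}")

      print(TOKEN.findall("{[Surname]}, some text... {[FirstName]}"))
      # ['{[Surname]}', '{[FirstName]}']
      print(TOKEN.findall("{Item}foo.{Item2}bar"))
      # ['{Item}', '{Item2}']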

    Read the article

  • Split String in C#

    - by ritu
    I thought this would be trivial but I can't get it to work. Assume a line in a CSV file:

      "Barak Obama", 48, "President", "1600 Penn Ave, Washington DC"

      string[] tokens = line.Split(',');

    I expect this:

      "Barak Obama"
      48
      "President"
      "1600 Penn Ave, Washington DC"

    but the last token is 'Washington DC', not "1600 Penn Ave, Washington DC". Is there an easy way to get the split function to ignore the comma within quotes?
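    The usual fix is to use a CSV parser rather than a plain split, since a parser knows that a comma inside quotes is not a separator (in C# that might be TextFieldParser from Microsoft.VisualBasic.FileIO or a CSV library). A quick Python illustration of the behaviour:

      import csv
      import io

      line = '"Barak Obama", 48, "President", "1600 Penn Ave, Washington DC"'

      # csv.reader honours the quotes, so the comma inside the address is not a separator
      tokens = next(csv.reader(io.StringIO(line), skipinitialspace=True))
      print(tokens)
      # ['Barak Obama', '48', 'President', '1600 Penn Ave, Washington DC']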

    Read the article

  • How to automatically add user account *and* password with a Bash script

    - by ModernCarpentry
    I need to have the ability to create user accounts on my Linux box (Fedora 10) and automatically assign a password via a bash script (or otherwise, if need be). It's easy to create the user via Bash, e.g.:

      [whoever@server ]# /usr/sbin/useradd newuser

    But is it possible to assign a password in Bash, something functionally similar to this (but automated):

      [whoever@server ]# passwd newuser
      Changing password for user testpass.
      New UNIX password:
      Retype new UNIX password:
      passwd: all authentication tokens updated successfully.
      [whoever@server ]#

    Read the article

  • Hibernate criteria DB2 composite keys in IN clause

    - by nkr1pt
    Hibernate criteria, using the DB2 dialect, generates the following SQL with composite keys in the IN clause, but DB2 answers that the query is incorrect:

      select * from tableA where (x, y) IN ( ( 'x1', y1) )

    DB2 throws this:

      SQL0104N An unexpected token "," was found following ", y) in ( ('x1'". Expected tokens may include: "+". SQLSTATE=42601

    Read the article

  • How can I get the next page of friends using the Twitter API?

    - by vakas
    I am using the Twitterizer2 API, downloaded from http://code.google.com/p/twitterizer/downloads/list, but when I try to get the friends of a user I get 100 friends and can't get the next 100 through NextPage. How can I handle this?

      Twitterizer.TwitterUserCollection userFollowing = Tw.TwitterUser.GetFriends(tokens, TwitterUrl);
      Twitterizer.TwitterUserCollection page2 = userFollowing.NextPage;

    When I request the next page it returns the same 100 users.

    Read the article

  • store SID in a variable

    - by user361191
    Hi, I need a way to store the current user's SID in a variable. I tried a lot of variants of:

      setlocal enableextensions
      for /f "tokens=*" %%a in (
        '"wmic path win32_useraccount where name='%UserName%' get sid"'
      ) do (
        if not "%%a"=="" set myvar=%%a
      )
      echo/%%myvar%%=%myvar%
      pause
      endlocal

    None are working. wmic path win32_useraccount where name='%UserName%' get sid should be returning 3 lines; I need the second one stored in a variable. Can someone fix my script? Edit: by the way, I am using a .cmd file.

    Read the article

  • SQL Query to truncate table in IBM DB2

    - by Cshah
    Hi, can anyone give me the syntax to truncate a table in IBM DB2? I'm running the following command:

      truncate table tableName immediate;

    The error is:

      DB2 SQL Error: SQLCODE=-104, SQLSTATE=42601, SQLERRMC=table;truncate ;JOIN , DRIVER=3.50.152
      Message: An unexpected token "table" was found following "truncate ". Expected tokens may include: "JOIN ".. SQLCODE=-104, SQLSTATE=42601, DRIVER=3.50.152

    The syntax matches the one specified in the IBM reference docs: http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp?topic=/com.ibm.db29.doc.sqlref/db2z_sql_truncate.htm

    Read the article

  • dos batch assign returned values from a command into a variable (from powershell)

    - by nokheat
    I am referring to this question: ASSIGN win XP dos commandline output to variable (http://stackoverflow.com/questions/537404/assign-win-xp-dos-commandline-output-to-variable). I am trying to use it with a PowerShell code segment, so I typed:

      powershell date (get-date).AddDays(-1) -format yyyyMMdd

    and confirmed it returns something like 20100601. But then when I tried:

      for /f "tokens=*" %a in ('powershell date get-date -format yyyyMMdd') do set var=%a

    it failed to work as expected. How can I transfer the date to a variable?

    Read the article

  • lexers vs parsers

    - by Naveen
    Are lexers and parsers really that different in theory? It seems fashionable to hate regular expressions: Coding Horror, another blog post. However, popular lexing-based tools - pygments, geshi, or prettify - all use regular expressions. They seem to lex anything... When is lexing enough, and when do you need EBNF? Has anyone used the tokens produced by these lexers with bison or antlr parser generators?

    Read the article

  • Rails creating a new session every page view

    - by danhere
    Hi everyone, I'm following the Agile RoR book somewhat to apply it to a project for school. It was going well until I got to sessions. I continually get invalid authenticity token errors, and when I look at the sessions table in the database, a new session is created every time I refresh the page. Is that right, or is something messed up? Thanks.

    Read the article

  • Can I somehow know which replacement is taking place from within a callback of preg_replace_callback

    - by jayarjo
    I'm using preg_replace_callback to substitute particular tokens within a string. But apart from the actual token, I also need to know whether that token was the first, second, or third in the subject string. Is there any way to access that info? I found an argument $count in the preg_replace_callback definition (http://php.net/manual/en/function.preg-replace-callback.php), which counts replacements, but I'm not sure whether it is accessible from within the callback. Any example of its usage in the described context?
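    A common way to know which replacement is being made, whatever the language, is to keep a counter in the callback's enclosing scope (in PHP that would be a closure with use (&$i)). A small Python sketch of the pattern, with hypothetical names:

      import re

      def number_tokens(text):
          """Replace each {token}, passing its ordinal position to the callback."""
          counter = {"n": 0}                     # mutable state shared with the callback

          def repl(match):
              counter["n"] += 1                  # 1 for the first replacement, 2 for the second, ...
              return f"{match.group(1)}#{counter['n']}"

          return re.sub(r"\{(\w+)\}", repl, text)

      print(number_tokens("{foo} and {bar} and {baz}"))
      # foo#1 and bar#2 and baz#3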

    Read the article

  • How can I get the google username on Android?

    - by tommy chheng
    I've seen references to using the AccountManager, like http://stackoverflow.com/questions/2245545/accessing-google-account-id-username-via-android, but it seems like that's for grabbing the auth token? I just need access to the username - no passwords or any auth tokens. I'm using the Android 2.1 SDK.

    Read the article
