Search Results

Search found 541 results on 22 pages for 'tokens'.


  • Parse string to create a list of elements

    - by Nick
    I have a string like this: "\r color=\"red\" name=\"Jon\" \t\n depth=\"8.26\" " And I want to parse this string and create a std::list of this object: class data { std::string name; std::string value; }; Where for example: name = color value = red What is the fastest way? I can use boost. EDIT: This is what i've tried: vector<string> tokens; split(tokens, str, is_any_of(" \t\f\v\n\r")); if(tokens.size() > 1) { list<data> attr; for_each(tokens.begin(), tokens.end(), [&attr](const string& token) { if(token.empty() || !contains(token, "=")) return; vector<string> tokens; split(tokens, token, is_any_of("=")); erase_all(tokens[1], "\""); attr.push_back(data(tokens[0], tokens[1])); } ); } But it does not work if there are spaces inside " ": like color="red 1".
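
    A regex that matches whole name="value" pairs sidesteps the whitespace-splitting problem, because spaces inside the quotes are never treated as separators. A minimal sketch of that idea in Python (the question is C++, where std::regex or boost::regex with the same pattern would play the same role); the variable names are illustrative only:

        import re

        s = "\r color=\"red 1\" name=\"Jon\" \t\n depth=\"8.26\" "

        # each match is a (name, value) tuple; spaces inside the quotes survive
        pairs = re.findall(r'(\w+)\s*=\s*"([^"]*)"', s)
        print(pairs)  # [('color', 'red 1'), ('name', 'Jon'), ('depth', '8.26')]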

    Read the article

  • Split a string and join it back together in a different order?

    - by Xaisoft
    What is the most concise, yet readable way to split a string and join it back together in a different order? For example, I want to split the following string: 10-20-30-4000-50000 and I would do this via: string[] tokens = original.Split('-'); and now I want to put it back together in this order: 30-20-10-4000-50000 I know I can use Join to put it back together in its original form, but I don't want that. The only thing I can think of right now is: string modified = string.Format("{0}{1}{2}{3}{4}",tokens[2],tokens[1],tokens[0],tokens[3], tokens[4]); I realized that if I do: string modified = string.Format("{2}{1}{0}{3}{4}", tokens); it does not keep the dashes (which I do want), so to get them should I just do: string modified = string.Format("{2}-{1}-{0}-{3}-{4}", tokens);

    Read the article

  • How to efficiently map tokens to code in a script interpreter?

    - by lithander
    I'm writing an interpreter for a simple scripting language where each line is a complete, executable command. (Like the instructions in assembler) When parsing a line I have to map the requested command to actual code. My current solution looks like this: std::string op, param1, param2; //parse line, identify op, param1, param2 ... //call command specific code if(op == "MOV") ExecuteMov(AsNumber(param1)); else if(op == "ROT") ExecuteRot(AsNumber(param1)); else if(op == "SZE") ExecuteSze(AsNumber(param1)); else if(op == "POS") ExecutePos((AsNumber(param1), AsNumber(param2)); else if(op == "DIR") ExecuteDir((AsNumber(param1), AsNumber(param2)); else if(op == "SET") ExecuteSet(param1, AsNumber(param2)); else if(op == "EVL") ... The more commands are supported, the more string comparisons I'll have to do to identify and call the associated method. Can you point me to a more efficient implementation in the described scenario?
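
    The usual replacement for a chain of string comparisons is a lookup table that maps each opcode to its handler, turning dispatch into a single hash lookup. A rough sketch in Python (in the C++ above, a std::unordered_map from std::string to a std::function would serve the same purpose); the handlers and arities below are placeholders:

        def execute_mov(x): print("MOV", x)
        def execute_rot(x): print("ROT", x)
        def execute_pos(x, y): print("POS", x, y)

        # opcode -> (handler, number of numeric parameters)
        OPS = {
            "MOV": (execute_mov, 1),
            "ROT": (execute_rot, 1),
            "POS": (execute_pos, 2),
        }

        def run(op, *params):
            handler, arity = OPS[op]  # one hash lookup instead of N comparisons
            handler(*(float(p) for p in params[:arity]))

        run("POS", "3", "4")  # prints: POS 3.0 4.0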

    Read the article

  • Twitter Typeahead shows only 5 results

    - by user3685388
    I'm using the Twitter Typeahead version 0.10.2 autocomplete but I'm only receiving 5 results from my JSON result set. I can have 20 or more results but only 5 are shown. What am I doing wrong? var engine = new Bloodhound({ name: "blackboard-names", prefetch: { url: "../CFC/Login.cfc?method=Search&returnformat=json&term=%QUERY", ajax: { contentType: "json", cache: false } }, remote: { url: "../CFC/Login.cfc?method=Search&returnformat=json&term=%QUERY", ajax: { contentType: "json", cache: false }, }, datumTokenizer: Bloodhound.tokenizers.obj.whitespace('value'), queryTokenizer: Bloodhound.tokenizers.whitespace }); var promise = engine.initialize(); promise .done(function() { console.log("done"); }) .fail(function() { console.log("fail"); }); $("#Impersonate").typeahead({ minLength: 2, highlight: true}, { name: "blackboard-names", displayKey: 'value', source: engine.ttAdapter() }).bind("typeahead:selected", function(obj, datum, name) { console.log(obj, datum, name); alert(datum.id); }); Data: [ { "id": "1", "value": "Adams, Abigail", "tokens": [ "Adams", "A", "Ad", "Ada", "Abigail", "A", "Ab", "Abi" ] }, { "id": "2", "value": "Adams, Alan", "tokens": [ "Adams", "A", "Ad", "Ada", "Alan", "A", "Al", "Ala" ] }, { "id": "3", "value": "Adams, Alison", "tokens": [ "Adams", "A", "Ad", "Ada", "Alison", "A", "Al", "Ali" ] }, { "id": "4", "value": "Adams, Amber", "tokens": [ "Adams", "A", "Ad", "Ada", "Amber", "A", "Am", "Amb" ] }, { "id": "5", "value": "Adams, Amelia", "tokens": [ "Adams", "A", "Ad", "Ada", "Amelia", "A", "Am", "Ame" ] }, { "id": "6", "value": "Adams, Arik", "tokens": [ "Adams", "A", "Ad", "Ada", "Arik", "A", "Ar", "Ari" ] }, { "id": "7", "value": "Adams, Ashele", "tokens": [ "Adams", "A", "Ad", "Ada", "Ashele", "A", "As", "Ash" ] }, { "id": "8", "value": "Adams, Brady", "tokens": [ "Adams", "A", "Ad", "Ada", "Brady", "B", "Br", "Bra" ] }, { "id": "9", "value": "Adams, Brandon", "tokens": [ "Adams", "A", "Ad", "Ada", "Brandon", "B", "Br", "Bra" ] } ]

    Read the article

  • How do I get secure AuthSub session tokens in PHP?

    - by robertdd
    I am using the Google/YouTube APIs to develop a web application which needs access to a user's YouTube account. Normal unsecure requests work fine and I can upgrade one-time tokens to session tokens without any hassle. The problem comes when I try and upgrade a secure token to a session token; I get: ERROR - Token upgrade for CIzF3546351vmq_P____834654G failed : Token upgrade failed. Reason: Invalid AuthSub header. Error 401 I use this: function updateAuthSubToken($singleUseToken) { try { $client = new Zend_Gdata_HttpClient(); $client->setAuthSubPrivateKeyFile('/home/www/key.pem', null, true); $sessionToken = Zend_Gdata_AuthSub::AuthSubRevokeToken($sessionToken, $client); $client->setAuthSubToken($sessionToken); } catch (Zend_Gdata_App_Exception $e) { print 'ERROR - Token upgrade for ' . $singleUseToken . ' failed : ' . $e->getMessage(); return; } $_SESSION['sessionToken'] = $sessionToken; generateUrlInformation(); header('Location: ' . $_SESSION['homeUrl']); }

    Read the article

  • User HasOne ActiveToken, HasMany Tokens, how to set up in Rails?

    - by viatropos
    I have two simple models: class User < ActiveRecord::Base has_many :tokens # has_one doesn't work, because Token already stores # foreign id to user... # has_one :active_token, :class_name => "Token" # belongs_to doesn't work because Token belongs to # User already, and they both can't belong to each other # belongs_to :active_token, :class_name => "Token" end class Token < ActiveRecord::Base belongs_to :user end I want to say "User has_one :active_token, :class_name => 'Token'", but I can't because Token already belongs_to User. What I did instead was just manually add similar functionality to the user like so: class User < ActiveRecord::Base has_many :tokens attr_accessor :active_token after_create :save_active_token before_destroy :destroy_active_token # it belongs_to, but you can't have both belongs_to each other... def active_token return nil unless self.active_token_id @active_token ||= Token.find(self.active_token_id) end def active_token=(value) self.active_token_id = value.id @active_token = value end def save_active_token self.active_token.user = self self.active_token.save end def destroy_active_token self.active_token.destroy if self.active_token end end Is there a better way?

    Read the article

  • Recognizing terminals in a CFG production previously not defined as tokens.

    - by kmels
    I'm making a generator of LL(1) parsers; my input is a CoCo/R language specification. I've already got a Scanner generator for that input. Suppose I've got the following specification: COMPILER 1. CHARACTERS digit="0123456789". TOKENS number = digit{digit}. decnumber = digit{digit}"."digit{digit}. PRODUCTIONS Expression = Term{"+"Term|"-"Term}. Term = Factor{"*"Factor|"/"Factor}. Factor = ["-"](Number|"("Expression")"). Number = (number|decnumber). END 1. So, if the parser generated by this grammar receives the word "1+1", it'd be accepted, i.e. a parse tree would be found. My question is, the character "+" was never defined as a token, but it appears in the non-terminal "Expression". How should my generated Scanner recognize it? It would not recognize it as a token. Is this a valid input then? Should I add this terminal to TOKENS and then add an error routine to the Scanner so it can skip it? How do typical language specifications handle this?
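
    Generators in the Coco/R family commonly treat every quoted literal that appears on a right-hand side ("+", "-", "(", and so on) as an implicit, fixed-text token, so the scanner can recognize it even though it was never declared under TOKENS; how any particular tool does this is stated here as an assumption, not a fact about Coco/R. A small sketch of that collection step in Python, restating the example grammar as strings:

        import re

        productions = {
            "Expression": 'Term {"+" Term | "-" Term}.',
            "Term":       'Factor {"*" Factor | "/" Factor}.',
            "Factor":     '["-"] (Number | "(" Expression ")").',
            "Number":     '(number | decnumber).',
        }

        declared = {"number", "decnumber"}

        # every quoted literal on a right-hand side becomes an implicit token
        literals = set()
        for rhs in productions.values():
            literals.update(re.findall(r'"([^"]+)"', rhs))

        print(sorted(literals))             # ['(', ')', '*', '+', '-', '/']
        print(sorted(declared | literals))  # the full token set the scanner needs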

    Read the article

  • How to extract custom tokens from a SQL Server NVarChar/VarChar field by using RegEx?

    - by Kthurein
    Is there any way to extract the matched strings by using Regex in T-SQL (SQL Server 2005)? For example: Welcome [CT Name="UserName" /], We hope that you will enjoy our services and your subscription will be expired on [CT Name="ExpiredDate" /]. I would like to extract the custom tokens in tabular format as follows: [CT Name="UserName" /] [CT Name="ExpiredDate" /] Thanks for your suggestion!
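
    T-SQL itself has no regular-expression support beyond LIKE and PATINDEX, so on SQL Server 2005 the usual route is a CLR function that applies the pattern. The pattern is the easy part; it is sketched here in Python purely to show it, using the sample text from the question:

        import re

        text = ('Welcome [CT Name="UserName" /], We hope that you will enjoy our services '
                'and your subscription will be expired on [CT Name="ExpiredDate" /].')

        for token in re.findall(r'\[CT\s+Name="[^"]*"\s*/\]', text):
            print(token)
        # [CT Name="UserName" /]
        # [CT Name="ExpiredDate" /]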

    Read the article

  • Java phone number validation

    - by user69514
    Here is my problem: Create a constructor for a telephone number given a string in the form xxx-xxx-xxxx or xxx-xxxx for a local number. Throw an exception if the format is not valid. So I was thinking to validate it using a regular expression, but I don't know if I'm doing it correctly. Also what kind of exception would I have to throw? Do I need to create my own exception? public TelephoneNumber(String aString){ if(isPhoneNumberValid(aString)==true){ StringTokenizer tokens = new StringTokenizer("-"); if(tokens.countTokens()==3){ areaCode = Integer.parseInt(tokens.nextToken()); exchangeCode = Integer.parseInt(tokens.nextToken()); number = Integer.parseInt(tokens.nextToken()); } else if(tokens.countTokens()==2){ exchangeCode = Integer.parseInt(tokens.nextToken()); number = Integer.parseInt(tokens.nextToken()); } else{ //throw an excemption here } } } public static boolean isPhoneNumberValid(String phoneNumber){ boolean isValid = false; //Initialize reg ex for phone number. String expression = "(\\d{3})(\\[-])(\\d{4})$"; CharSequence inputStr = phoneNumber; Pattern pattern = Pattern.compile(expression); Matcher matcher = pattern.matcher(inputStr); if(matcher.matches()){ isValid = true; } return isValid; } Hi sorry, yes this is homework. For this assignments the only valid format are xxx-xxx-xxxx and xxx-xxxx, all other formats (xxx)xxx-xxxx or xxxxxxxxxx are invalid in this case. I would like to know if my regular expression is correct
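
    For the two accepted formats a single pattern with an optional leading group is enough, and throwing the standard IllegalArgumentException (rather than a custom class) is the conventional choice for a bad constructor argument. The pattern is sketched in Python for brevity; in Java it would be written as something like "^(\\d{3}-)?\\d{3}-\\d{4}$":

        import re

        PHONE_RE = re.compile(r'^(\d{3}-)?\d{3}-\d{4}$')

        def is_phone_number_valid(phone_number: str) -> bool:
            return PHONE_RE.match(phone_number) is not None

        for s in ("555-867-5309", "867-5309", "(555)867-5309", "5558675309"):
            print(s, is_phone_number_valid(s))
        # only the first two formats are accepted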

    Read the article

  • Declaring an array of character pointers (arg passing)

    - by Isaac Copper
    This is something that should be easy to answer, but is more difficult for me to find a particular right answer on Google or in K&R. I could totally be overlooking this, too, and if so please set me straight! The pertinent code is below: int main(){ char tokens[100][100]; char str = "This is my string"; tokenize(str, tokens); for(int i = 0; i < 100; i++){ printf("%s is a token\n", token[i]); } } void tokenize(char *str, char tokens[][]){ //do stuff with string and tokens, putting //chars into the token array like so: tokens[i][j] = <A CHAR> } So I realize that I can't have char tokens[][] in my tokenize function, but if I put in char **tokens instead, I get a compiler warning. Also, when I try to put a char into my char array with tokens[i][j] = <A CHAR>, I segfault. Where am I going wrong? (And in how many ways... and how can I fix it?) Thanks so much!

    Read the article

  • Using UUIDs for cheap equals() and hashCode()

    - by Tom McIntyre
    I have an immutable class, TokenList, which consists of a list of Token objects, which are also immutable: @Immutable public final class TokenList { private final List<Token> tokens; public TokenList(List<Token> tokens) { this.tokens = Collections.unmodifiableList(new ArrayList(tokens)); } public List<Token> getTokens() { return tokens; } } I do several operations on these TokenLists that take multiple TokenLists as inputs and return a single TokenList as the output. There can be arbitrarily many TokenLists going in, and each can have arbitrarily many Tokens. These operations are expensive, and there is a good chance that the same operation (ie the same inputs) will be performed multiple times, so I would like to cache the outputs. However, performance is critical, and I am worried about the expense of performing hashCode() and equals() on these objects that may contain arbitrarily many elements (as they are immutable then hashCode could be cached, but equals will still be expensive). This led me to wondering whether I could use a UUID to provide equals() and hashCode() simply and cheaply by making the following updates to TokenList: @Immutable public final class TokenList { private final List<Token> tokens; private final UUID uuid; public TokenList(List<Token> tokens) { this.tokens = Collections.unmodifiableList(new ArrayList(tokens)); this.uuid = UUID.randomUUID(); } public List<Token> getTokens() { return tokens; } public UUID getUuid() { return uuid; } } And something like this to act as a cache key: @Immutable public final class TopicListCacheKey { private final UUID[] uuids; public TopicListCacheKey(TopicList... topicLists) { uuids = new UUID[topicLists.length]; for (int i = 0; i < uuids.length; i++) { uuids[i] = topicLists[i].getUuid(); } } @Override public int hashCode() { return Arrays.hashCode(uuids); } @Override public boolean equals(Object other) { if (other == this) return true; if (other instanceof TopicListCacheKey) return Arrays.equals(uuids, ((TopicListCacheKey) other).uuids); return false; } } I figure that there are 2^128 different UUIDs and I will probably have at most around 1,000,000 TokenList objects active in the application at any time. Given this, and the fact that the UUIDs are used combinatorially in cache keys, it seems that the chances of this producing the wrong result are vanishingly small. Nevertheless, I feel uneasy about going ahead with it as it just feels 'dirty'. Are there any reasons I should not use this system? Will the performance costs of the SecureRandom used by UUID.randomUUID() outweigh the gains (especially since I expect multiple threads to be doing this at the same time)? Are collisions going to be more likely than I think? Basically, is there anything wrong with doing it this way?? Thanks.
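
    "Vanishingly small" can be made concrete with the birthday bound: for n randomly chosen 128-bit IDs the collision probability is roughly n^2 / 2^129. A quick check with the figure from the question:

        n = 1_000_000            # TokenLists alive at any one time
        p = n * n / 2.0 ** 129   # birthday-bound approximation
        print(p)                 # ~1.5e-27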

    Read the article

  • [PHP] md5(uniqid) makes sense for random unique tokens?

    - by Exception e
    I want to create a token generator that generates tokens that cannot be guessed by the user and that are still unique (to be used for password resets and confirmation codes). I often see this code; does it make sense? md5(uniqid(rand(), true)); According to a comment, uniqid($prefix, $moreEntropy = true) yields: first 8 hex chars = Unix time, last 5 hex chars = microseconds. I don't know how the $prefix parameter is handled. So if you don't set the $moreEntropy flag to true, it gives a predictable outcome. QUESTION: But if we use uniqid with $moreEntropy, what does hashing it with md5 buy us? Is it better than: md5(mt_rand())
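
    Hashing uniqid() mostly hides the timestamp structure rather than adding entropy, so the result is only as unpredictable as its inputs. For unguessable reset and confirmation codes the safer pattern is to read the OS CSPRNG directly; a sketch in Python (in recent PHP, bin2hex(random_bytes(16)) is the equivalent idea):

        import secrets

        # 16 bytes from the OS CSPRNG, rendered as 32 hex characters;
        # unguessable regardless of when it was generated
        reset_token = secrets.token_hex(16)
        print(reset_token)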

    Read the article

  • How to define a ternary operator in Scala which preserves leading tokens?

    - by Alex R
    I'm writing a code generator which produces Scala output. I need to emulate a ternary operator in such a way that the tokens leading up to '?' remain intact, e.g. convert the expression c ? p : q to c something. The simple if(c) p else q fails my criteria, as it requires putting if( before c. My first attempt (still using c/p/q as above) is: c match { case(true) => p; case _ => q } Another option I found was: class ternary(val g: Boolean => Any) { def |: (b:Boolean) = g(b) } implicit def autoTernary (g: Boolean => Any): ternary = new ternary(g) which allows me to write: c |: { b: Boolean => if(b) p else q } I like the overall look of the second option, but is there a way to make it less verbose? Thanks

    Read the article

  • OAuth for Google API example using Python / Django

    - by DrDee
    Hi, I am trying to get Oauth working with the Google API using Python. I have tried different oauth libraries such as oauth, oauth2 and djanog-oauth but I cannot get it to work (including the provided examples). For debugging Oauth I use Google's Oauth Playground and I have studied the API and the Oauth documentation With some libraries I am struggling with getting a right signature, with other libraries I am struggling with converting the request token to an authorized token. What would really help me if someone can show me a working example for the Google API using one of the above-mentioned libraries. EDIT: My initial question did not lead to any answers so I have added my code. There are two possible causes of this code not working: 1) Google does not authorize my request token, but not quite sure how to detect this 2) THe signature for the access token is invalid but then I would like to know which oauth parameters Google is expecting as I am able to generate a proper signature in the first phase. This is written using oauth2.py and for Django hence the HttpResponseRedirect. REQUEST_TOKEN_URL = 'https://www.google.com/accounts/OAuthGetRequestToken' AUTHORIZATION_URL = 'https://www.google.com/accounts/OAuthAuthorizeToken' ACCESS_TOKEN_URL = 'https://www.google.com/accounts/OAuthGetAccessToken' CALLBACK = 'http://localhost:8000/mappr/mappr/oauth/' #will become real server when deployed OAUTH_CONSUMER_KEY = 'anonymous' OAUTH_CONSUMER_SECRET = 'anonymous' signature_method = oauth.SignatureMethod_HMAC_SHA1() consumer = oauth.Consumer(key=OAUTH_CONSUMER_KEY, secret=OAUTH_CONSUMER_SECRET) client = oauth.Client(consumer) request_token = oauth.Token('','') #hackish way to be able to access the token in different functions, I know this is bad, but I just want it to get working in the first place :) def authorize(request): if request.GET == {}: tokens = OAuthGetRequestToken() return HttpResponseRedirect(AUTHORIZATION_URL + '?' + tokens) elif request.GET['oauth_verifier'] != '': oauth_token = request.GET['oauth_token'] oauth_verifier = request.GET['oauth_verifier'] OAuthAuthorizeToken(oauth_token) OAuthGetAccessToken(oauth_token, oauth_verifier) #I need to add a Django return object but I am still debugging other phases. def OAuthGetRequestToken(): print '*** OUTPUT OAuthGetRequestToken ***' params = { 'oauth_consumer_key': OAUTH_CONSUMER_KEY, 'oauth_nonce': oauth.generate_nonce(), 'oauth_signature_method': 'HMAC-SHA1', 'oauth_timestamp': int(time.time()), #The timestamp should be expressed in number of seconds after January 1, 1970 00:00:00 GMT. 'scope': 'https://www.google.com/analytics/feeds/', 'oauth_callback': CALLBACK, 'oauth_version': '1.0' } # Sign the request. 
req = oauth.Request(method="GET", url=REQUEST_TOKEN_URL, parameters=params) req.sign_request(signature_method, consumer, None) tokens =client.request(req.to_url())[1] params = ConvertURLParamstoDictionary(tokens) request_token.key = params['oauth_token'] request_token.secret = params['oauth_token_secret'] return tokens def OAuthAuthorizeToken(oauth_token): print '*** OUTPUT OAuthAuthorizeToken ***' params ={ 'oauth_token' :oauth_token, 'hd': 'default' } req = oauth.Request(method="GET", url=AUTHORIZATION_URL, parameters=params) req.sign_request(signature_method, consumer, request_token) response =client.request(req.to_url()) print response #for debugging purposes def OAuthGetAccessToken(oauth_token, oauth_verifier): print '*** OUTPUT OAuthGetAccessToken ***' params = { 'oauth_consumer_key': OAUTH_CONSUMER_KEY, 'oauth_token': oauth_token, 'oauth_verifier': oauth_verifier, 'oauth_token_secret': request_token.secret, 'oauth_signature_method': 'HMAC-SHA1', 'oauth_timestamp': int(time.time()), 'oauth_nonce': oauth.generate_nonce(), 'oauth_version': '1.0', } req = oauth.Request(method="GET", url=ACCESS_TOKEN_URL, parameters=params) req.sign_request(signature_method, consumer, request_token) response =client.request(req.to_url()) print response return req def ConvertURLParamstoDictionary(tokens): params = {} tokens = tokens.split('&') for token in tokens: token = token.split('=') params[token[0]] = token[1] return params

    Read the article

  • PyParsing: Is this correct use of setParseAction()?

    - by Rosarch
    I have strings like this: "MSE 2110, 3030, 4102" I would like to output: [("MSE", 2110), ("MSE", 3030), ("MSE", 4102)] This is my way of going about it, although I haven't quite gotten it yet: def makeCourseList(str, location, tokens): print "before: %s" % tokens for index, course_number in enumerate(tokens[1:]): tokens[index + 1] = (tokens[0][0], course_number) print "after: %s" % tokens course = Group(DEPT_CODE + COURSE_NUMBER) # .setResultsName("Course") course_data = (course + ZeroOrMore(Suppress(',') + COURSE_NUMBER)).setParseAction(makeCourseList) This outputs: >>> course.parseString("CS 2110") ([(['CS', 2110], {})], {}) >>> course_data.parseString("CS 2110, 4301, 2123, 1110") before: [['CS', 2110], 4301, 2123, 1110] after: [['CS', 2110], ('CS', 4301), ('CS', 2123), ('CS', 1110)] ([(['CS', 2110], {}), ('CS', 4301), ('CS', 2123), ('CS', 1110)], {}) Is this the right way to do it, or am I totally off? Also, the output isn't quite correct - I want course_data to emit a list of course symbols that are in the same format as each other. Right now, the first course is different from the others. (It has a {}, whereas the others don't.)
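
    One way to keep every course in the same shape is to let the parse action build all of the (dept, number) tuples itself and return them as the replacement tokens, instead of rewriting the ParseResults in place. A minimal sketch written against pyparsing's Word and delimitedList, not tested against the question's exact grammar:

        from pyparsing import Word, alphas, nums, delimitedList

        def make_course_list(s, loc, toks):
            dept = toks[0]  # e.g. 'MSE'
            return [(dept, int(num)) for num in toks[1:]]

        course_data = Word(alphas) + delimitedList(Word(nums))
        course_data.setParseAction(make_course_list)

        print(course_data.parseString("MSE 2110, 3030, 4102").asList())
        # [('MSE', 2110), ('MSE', 3030), ('MSE', 4102)]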

    Read the article

  • Finding position of each word in a sub-array of a multidimensional array

    - by Shreyas Satish
    I have an array: tokens = [["hello","world"],["hello","ruby"]] all_tokens = tokens.flatten.uniq # all_tokens=["hello","world","ruby"] Now I need to create two arrays corresponding to all_tokens, where the first array will contain the position of each word in each sub-array of tokens, i.e. Output: [[0,0],[1],[1]] # (w.r.t all_tokens) To make it clear, it reads: the index of "hello" is 0 and 0 in the 2 sub-arrays of tokens. And the second array contains the index of each word w.r.t. tokens, i.e. Output: [[0,1],[0],[1]] To make it clear, it reads: the index of "hello" is 0 and 1, i.e. "hello" is in index 0 and 1 of the tokens array. Cheers!
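
    Both outputs fall out of two scans over tokens: for each word in all_tokens, collect its index inside every sub-array that contains it, and separately collect the indices of those sub-arrays. The same idea sketched in Python (the question is Ruby, where map and each_with_index translate directly):

        tokens = [["hello", "world"], ["hello", "ruby"]]

        all_tokens = []                 # flatten + uniq, order preserved
        for sub in tokens:
            for word in sub:
                if word not in all_tokens:
                    all_tokens.append(word)

        word_positions  = [[sub.index(w) for sub in tokens if w in sub] for w in all_tokens]
        array_positions = [[i for i, sub in enumerate(tokens) if w in sub] for w in all_tokens]

        print(all_tokens)        # ['hello', 'world', 'ruby']
        print(word_positions)    # [[0, 0], [1], [1]]
        print(array_positions)   # [[0, 1], [0], [1]]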

    Read the article

  • How to search a text file for strings between two tokens in Ubuntu terminal and save the output?

    - by Blue
    How can I search a text file for this pattern in Ubuntu terminal and save the output as a text file? I'm looking for everything between the string "abc" and the string "cde" in a long list of data. For example: blah blah abc fkdljgn cde blah blah blah blah blah blah abc skdjfn cde blah In the example above I would be looking for an output such as this: fkdljgn skdjfn It is important that I can also save the data output as a text file. Can I use grep or agrep and if so, what is the format?
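
    The core of it is a pattern that captures whatever sits between the two markers; GNU grep (with -o and -P) or sed can apply the same pattern, and the shell's > redirect saves the result to a file. Prototyped here in Python so the regex itself is easy to see; the file names are placeholders:

        import re

        with open("data.txt") as f:
            text = f.read()

        # capture the run of text between 'abc' and 'cde'
        matches = re.findall(r'abc\s+(.*?)\s+cde', text)

        with open("out.txt", "w") as f:
            f.write("\n".join(matches) + "\n")

        print(matches)  # ['fkdljgn', 'skdjfn']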

    Read the article

  • How to prevent ‘Select *’: The elegant way

    - by Dave Ballantyne
    I’ve been doing a lot of work with the “Microsoft SQL Server 2012 Transact-SQL Language Service” recently, see my post here and article here for more details on its use and some uses. An obvious use is to interrogate sql scripts to enforce our coding standards.  In the SQL world a no-brainer is SELECT *,  all apologies must now be given to Jorge Segarra and his post “How To Prevent SELECT * The Evil Way” as this is a blatant rip-off IMO, the only true way to check for this particular evilness is to parse the SQL as if we were SQL Server itself.  The parser mentioned above is ,pretty much, the best tool for doing this.  So without further ado lets have a look at a powershell script that does exactly that : cls #Load the assembly [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Management.SqlParser") | Out-Null $ParseOptions = New-Object Microsoft.SqlServer.Management.SqlParser.Parser.ParseOptions $ParseOptions.BatchSeparator = 'GO' #Create the object $Parser = new-object Microsoft.SqlServer.Management.SqlParser.Parser.Scanner($ParseOptions) $SqlArr = Get-Content "C:\scripts\myscript.sql" $Sql = "" foreach($Line in $SqlArr){ $Sql+=$Line $Sql+="`r`n" } $Parser.SetSource($Sql,0) $Token=[Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_SET $IsEndOfBatch = $false $IsMatched = $false $IsExecAutoParamHelp = $false $Batch = "" $BatchStart =0 $Start=0 $End=0 $State=0 $SelectColumns=@(); $InSelect = $false $InWith = $false; while(($Token = $Parser.GetNext([ref]$State ,[ref]$Start, [ref]$End, [ref]$IsMatched, [ref]$IsExecAutoParamHelp ))-ne [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::EOF) { $Str = $Sql.Substring($Start,($End-$Start)+1) try{ ($TokenPrs =[Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]$Token) | Out-Null #Write-Host $TokenPrs if($TokenPrs -eq [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_SELECT){ $InSelect =$true $SelectColumns+="" } if($TokenPrs -eq [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_FROM){ $InSelect =$false #Write-Host $SelectColumns -BackgroundColor Red foreach($Col in $SelectColumns){ if($Col.EndsWith("*")){ Write-Host "select * is not allowed" exit } } $SelectColumns =@() } }catch{ #$Error $TokenPrs = $null } if($InSelect -and $TokenPrs -ne [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_SELECT){ if($Str -eq ","){ $SelectColumns+="" }else{ $SelectColumns[$SelectColumns.Length-1]+=$Str } } } OK, im not going to pretend that its the prettiest of powershell scripts,  but if our parsed script file “C:\Scripts\MyScript.SQL” contains SELECT * then “select * is not allowed” will be written to the host.  So, where can this go wrong ?  It cant ,or at least shouldn’t , go wrong, but it is lacking in functionality.  IMO, Select * should be allowed in CTEs, views and Inline table valued functions at least and as it stands they will be reported upon. Anyway, it is a start and is more reliable that other methods.

    Read the article

  • Simplify Your Code with LINQ

    - by dwahlin
    I’m a big fan of LINQ and use it wherever I can to minimize code and make applications easier to maintain overall. I was going through a code file today refactoring it based on suggestions provided by Resharper and came across the following method: private List<string> FilterTokens(List<string> tokens) { var cleanedTokens = new List<string>(); for (int i = 0; i < tokens.Count; i++) { string token = tokens[i]; if (token != null) { cleanedTokens.Add(token); } } return cleanedTokens; }   In looking through the code I didn’t see anything wrong but Resharper was suggesting that I convert it to a LINQ expression: In thinking about it more the suggestion made complete sense because I simply wanted to add all non-null token values into a List<string> anyway. After following through with the Resharper suggestion the code changed to the following. Much, much cleaner and yet another example of why LINQ (and Resharper) rules: private List<string> FilterTokens(IEnumerable<string> tokens) { return tokens.Where(token => token != null).ToList(); }

    Read the article

  • Friday Tips #3

    - by Chris Kawalek
    Even though yesterday was Thanksgiving here in the US, we still have a Friday tip for those of you around your computers today. In fact, we have two! The first one came in last week via our #AskOracleVirtualization Twitter hashtag. The tweet has disappeared into the ether now, but we remember the gist, so here it is: Question: Will there be an Oracle Virtual Desktop Client for Android? Answer by our desktop virtualization product development team: We are looking at Android as a supported platform for future releases. Question: How can I make a Sun Ray Client automatically connect to a virtual machine? Answer by Rick Butland, Principal Sales Consultant, Oracle Desktop Virtualization: Someone recently asked how they can assign VM’s to specific Sun Ray Desktop Units (“DTU’s”) without any user interfaction being required, without the “Desktop Selector” being displayed, or any User Directory.  That is, they wanted each Sun Ray to power on and immediately connect to a pre-assigned Solaris VM.   This can be achieved by using “tokens” for user assignment – that is, the tokens found on Smart Cards, DTU’s, or OVDC clients can be used in place of user credentials.  Note, however, that mixing “token-only” assignments and “User Directories” in the same VDI Center won’t work.   Much of this procedure is covered in the documentation, particularly here. But it can useful to have everything in one place, “cookbook-style”:  1. Create the “token-only” directory type: From the VDI administration interface, select:  “Settings”, “Company”, “New”, select the “None” radio button, and click “Next.” Enter a name for the new “Company”, and click “Next”, then “Finish.” 2. Create Desktop Providers, Pools, and VM’s as appropriate. 3. Access the Sun Ray administration interface at http://servername:1660 and login using “root” credentials, and access the token-id’s you wish to use for assignment.  If you’re using DTU tokens rather than Smart Card tokens, these can be found under the “Tokens” tab, and “Search-ing” using the “Currently Used Tokens” tab.  DTU’s can be identified by the prefix “psuedo.”   For example: 4. Copy/paste this token into the VDI administrative interface, by selecting “Users”, “New”, and pasting in the token ID, and click “OK” - for example: 5. Assign the token (DTU) to a desktop, that is, in the VDI Admin Gui, select “Pool”, “Desktop”, select the VM, and click "Assign" and select the token you want, for example: In addition to assigning tokens to desktops, you'll need to bypass the login screen.  To do this, you need to do two things:  1.  Disable VDI client authentication with:  /opt/SUNWvda/sbin/vda settings-setprops -p clientauthentication=Disabled 2. Disable the VDI login screen – to do this,  add a kiosk argument of "-n" to the Sun Ray kiosk arguments screen.   You set this on the Sun Ray administration page - "Advanced", "Kiosk Mode", "Edit", and add the “-n” option to the arguments screen, for example: 3.  Restart both the Sun Ray and VDI services: # /opt/SUNWut/sbin/utstart –c # /opt/SUNWvda/sbin/vda-service restart Remember, if you have a question for us, please post on Twitter with our hashtag (again, it's #AskOracleVirtualization), and we'll try to answer it if we can. See you next time!

    Read the article

  • How to replace tokens in the master page in ASP.NET MVC?

    - by AngryHacker
    I have a master page in my ASP.NET MVC project, which has code like this: <div id="menu"> <ul> <li><a href="#" class="current">home</a></li> <li><a href="#">add image</a></li> <li><a href="#">contact</a></li> </ul> </div> Depending on what page I am on, I'd like to move the class="current" attribute to a different <li>. What is the general pattern for doing this type of thing in ASP.NET MVC?

    Read the article

  • Can I embed video on external sites while still using tokens to protect the content?

    - by JKS
    On our own website, it's easy to protect against direct links to our video content by grabbing a token through AJAX and verifying the token through PHP before the file download is started. However I'm also researching how I could provide an embed feature, like YouTube or vimeo etc., without compromising this security feature. The problem is that the embed code I want to provide should look something like <object>...<embed>...</embed></object> -- but I don't know how to grab and append the token to the filename. I mean, I guess I could attach a script that did some gnarly JNOP business, but that's too dirty. I'm using JW Player for the actual video container. Huge thanks to anyone who can help...

    Read the article

  • I need to generate credit card surrogates (tokens) that are format preserving.

    - by jammer59
    For an eCommerce application I need to take a credit card and use the real card for passing through to a payment gateway but I need to store, and return to the transaction initiator, a surrogate that is format preserving. Specifically, this means: 1) The number of digits in the surrogate is the same as the real card number (PAN). 2) The issuer type part of the card -- the initial 1,2 or 4 digits remains the same in the surrogate as in the original PAN. 3) The final 4 digits of the surrogate remain the same (for customer service purposes.) 4) The surrogate passes the Luhn mod10 check for a syntactially valid credit card. I can readily handle requirements 1-3 but #4 has me completely stumped! The final implementation will be either t-sql or c#. Any ideas?
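
    Requirement 4 is the only awkward one: because the leading issuer digits and the last four are pinned, the Luhn adjustment has to land on one of the middle digits rather than the usual trailing check digit, and one middle digit can always be chosen so the mod-10 sum works out. A sketch of that fix-up in Python (the asker wants T-SQL or C#); it uses a plain PRNG and ignores uniqueness/collision handling, so it only illustrates the Luhn part:

        import random

        def luhn_ok(pan: str) -> bool:
            total = 0
            for i, ch in enumerate(reversed(pan)):
                d = int(ch)
                if i % 2 == 1:        # double every second digit from the right
                    d *= 2
                    if d > 9:
                        d -= 9
                total += d
            return total % 10 == 0

        def make_surrogate(pan: str, keep_prefix: int = 4) -> str:
            # keep_prefix is the issuer prefix length (1, 2 or 4 per the question)
            head, tail = pan[:keep_prefix], pan[-4:]
            middle = [random.randint(0, 9) for _ in range(len(pan) - keep_prefix - 4)]
            # walk one middle digit through 0-9; some value always satisfies Luhn,
            # since doubled and undoubled digits both cover every residue mod 10
            for d in range(10):
                middle[0] = d
                candidate = head + "".join(map(str, middle)) + tail
                if luhn_ok(candidate):
                    return candidate

        surrogate = make_surrogate("4111111111111111")
        print(surrogate, luhn_ok(surrogate))  # keeps prefix and last 4, passes mod 10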

    Read the article

  • Are push notification tokens unique across all apps for a single device?

    - by scootklein
    I will have multiple applications on the App Store and 1 Urban Airship account to send push notifications to all of these devices. What I want to know is whether each Apple device has the same "push token" across all applications. This is more of a database architecture thing, so that I don't duplicate a push token many times if one single device uses many of my apps. If each Apple device generates a unique push token for each application it has installed, my architecture needs to change.

    Read the article

  • Appengine BulkExport via Batch File

    - by Chris M
    I've created a batch file to run a bulk export on App Engine to a dated file: @echo off FOR /F "TOKENS=1* DELIMS= " %%A IN ('DATE/T') DO SET CDATE=%%B FOR /F "TOKENS=1,2 eol=/ DELIMS=/ " %%A IN ('DATE/T') DO SET mm=%%B FOR /F "TOKENS=1,2 DELIMS=/ eol=/" %%A IN ('echo %CDATE%') DO SET dd=%%B FOR /F "TOKENS=2,3 DELIMS=/ " %%A IN ('echo %CDATE%') DO SET yyyy=%%B SET date=%yyyy%%mm%%dd% FOR /f "tokens=1" %%u IN ('TIME /t') DO SET t=%%u IF "%t:~1,1%"==":" SET t=0%t% @REM set timestr=%d:~6,4%%d:~3,2%%d:~0,2%%t:~0,2%%t:~3,2% set time=%t:~0,2%%t:~3,2% @echo on "c:\Program Files\Google\google_appengine\appcfg.py" download_data --config_file=E:\FEEDSYSTEMS\TRACKER\TRACKER\tracker-export.py --filename=%date%data_archive.csv --batch_size=100 --kind="SearchRec" ./TRACKER I can't work out how to get it to authenticate with Google automatically; at the moment I get asked for the user/pass every time, which means I have to run it manually. Any ideas?

    Read the article
