Search Results

Search found 1725 results on 69 pages for 'token'.

Page 24/69 | < Previous Page | 20 21 22 23 24 25 26 27 28 29 30 31  | Next Page >

  • jQuery return multiple values in ajax call

    - by Scarface
    Hey guys, quick question: I have a jQuery post function that returns a response on success after the click of a div; however, I would like to return multiple variables on success. Do I have to use JSON, and if so, is it possible to integrate it into the $.ajax function after success? Thanks in advance for your time. $.ajax({ type: "POST", data: "action=favorite&username=" + username + "&topic_id=" + topic_id + "&token=" + token, url: "favorite.php", success: function(response) { } });
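
    A minimal sketch of the JSON approach (assuming favorite.php emits something like {"status":"ok","count":5} via json_encode): setting dataType to "json" makes jQuery parse the response, so each returned value becomes a property on the response object.

        $.ajax({
            type: "POST",
            url: "favorite.php",
            data: "action=favorite&username=" + username + "&topic_id=" + topic_id + "&token=" + token,
            dataType: "json", // tell jQuery to parse the response as JSON
            success: function(response) {
                // every value favorite.php put into the JSON object is available here
                alert(response.status + " / " + response.count);
            }
        });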

    Read the article

  • Is there any faster way to parse than by walking each byte?

    - by uray
    Is there any faster way to parse text than by walking each byte of it? I wonder if there is any special CPU (x86/x64) instruction for string operations, used by the string library, that could optimize the parsing routine: for example, an instruction for finding a token in a string that could be run by hardware instead of looping over each byte until the token is found.
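
    For what it's worth, such instructions do exist (SSE4.2's PCMPESTRI/PCMPISTRI scan 16 bytes at a time), and libc routines like memchr/strchr are typically vectorized with them or with SSE2/AVX2. A minimal C sketch that delegates the inner scan to memchr instead of a hand-rolled byte loop:

        #include <stddef.h>
        #include <string.h>

        /* Count delimiter-separated tokens, letting the (vectorized) memchr do the scanning. */
        size_t count_tokens(const char *buf, size_t len, char delim)
        {
            size_t count = 0;
            const char *p = buf;
            const char *end = buf + len;
            while (p < end) {
                const char *hit = memchr(p, delim, (size_t)(end - p));
                count++;
                if (hit == NULL)
                    break;
                p = hit + 1;
            }
            return count;
        }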

    Read the article

  • What is the best approach to creating a login system?

    - by Starx
    I always wonder whether the login systems I have created are vulnerable to attacks or not. Like many other programmers, I use sessions to hold a specific token that records the login status, and cookies to hold the username or sometimes a saved-login flag. What I am wondering is: is this the right way? Is there any approach better than this?
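
    There is no single canonical answer, but for reference, a minimal PHP sketch of the commonly recommended core (assumptions mine: a users table with password hashes, PHP's password_verify, and session_regenerate_id against session fixation):

        <?php
        session_start();

        function login(PDO $db, $username, $password) {
            $stmt = $db->prepare('SELECT id, password_hash FROM users WHERE username = ?');
            $stmt->execute(array($username));
            $user = $stmt->fetch(PDO::FETCH_ASSOC);
            if ($user === false || !password_verify($password, $user['password_hash'])) {
                return false; // unknown user or wrong password
            }
            session_regenerate_id(true); // a fresh session id on login defeats fixation
            $_SESSION['user_id'] = $user['id'];
            return true;
        }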

    Read the article

  • Twitter + Grackle, determining the logged in user

    - by JP
    This is crazy, but I'm stumped! Once my user has logged into Twitter via OAuth, how do I determine their username using Grackle? @twitter = Grackle::Client.new(:auth => { :type => :oauth, :consumer_key => consumer_key, :consumer_secret => consumer_secret, :token => @access_token.token, :token_secret => @access_token.secret }) username = @twitter.something_here?
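
    A hedged guess, since I can't verify Grackle's exact surface here: Grackle builds Twitter API paths from method chains (a trailing ? issues a GET), and the account/verify_credentials endpoint returns the authenticated user, so something along these lines should work:

        # assumption: Grackle maps account/verify_credentials via its method-chain API
        user = @twitter.account.verify_credentials?
        username = user.screen_name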

    Read the article

  • PHP Multiple form tokens

    - by Juddling
    I understand the idea of generating a form token, storing it in a session, and also putting it as a hidden input in my forms. But how could I make this work if I have pages with multiple forms? Is it still safe to use the same token for each form? And I still feel wary about bots and such on my website; can these form tokens really safely replace CAPTCHAs?
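
    One hedged sketch of per-form tokens: keep an array of tokens in the session keyed by form id, so several forms on one page each carry their own value (helper names are mine; PHP's random_bytes/hash_equals assumed):

        <?php
        session_start();

        // one token per form id, all stored in the session
        function form_token($formId) {
            if (!isset($_SESSION['tokens'][$formId])) {
                $_SESSION['tokens'][$formId] = bin2hex(random_bytes(16));
            }
            return $_SESSION['tokens'][$formId];
        }

        function check_token($formId, $token) {
            return isset($_SESSION['tokens'][$formId])
                && hash_equals($_SESSION['tokens'][$formId], $token);
        }

    Note that such tokens defend against CSRF, not against bots, so they are not a CAPTCHA replacement.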

    Read the article

  • Get scanner to include but ignore quoted text?

    - by user1086516
    Basically my problem is this: I need to parse text where , is the delimiter, but anything inside " " quotes should not be checked for a delimiter. Is this what the Scanner.skip method is for? I would check it myself, but I don't understand how to write a regex pattern in Java where the token is something in between two " ". I also want to include any quoted text in the proper token, which was delimited by the valid ,.
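
    One common workaround, as a hedged sketch: instead of Scanner, split on commas that sit outside quotes, using a lookahead that requires an even number of quote characters between the comma and the end of the line:

        import java.util.Arrays;

        public class QuoteAwareSplit {
            public static void main(String[] args) {
                String line = "a,\"b,c\",d";
                // split on commas followed by an even number of quote characters,
                // i.e. commas that are not inside a quoted section
                String[] tokens = line.split(",(?=(?:[^\"]*\"[^\"]*\")*[^\"]*$)");
                System.out.println(Arrays.toString(tokens)); // [a, "b,c", d]
            }
        }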

    Read the article

  • On Facebook, how do I authenticate an application using JavaScript?

    - by GilShalit
    I can only find samples using PHP or curl. I want to do something like https://graph.facebook.com/<app_id>/accounts/test-users? installed=true&permissions=read_stream and the response is: { "error": { "type": "OAuthException", "message": "An access token is required to request this resource." } } as well it should... so how do I get the access token in JavaScript (using the JavaScript SDK, obviously)? Thanks!
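
    A hedged sketch with the JavaScript SDK: after FB.init, FB.login prompts the user and hands back the token (newer SDK versions expose it as response.authResponse.accessToken; older ones used response.session.access_token):

        FB.login(function (response) {
            if (response.authResponse) {
                var accessToken = response.authResponse.accessToken;
                // the SDK appends the token automatically on subsequent calls
                FB.api('/me', function (me) {
                    console.log(me.name);
                });
            }
        }, { scope: 'read_stream' });

    One caution: the /accounts/test-users resource specifically wants an app access token, which embeds your app secret and therefore should not live in client-side JavaScript.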

    Read the article

  • Net::HTTP gives a timeout but a browser visit returns data

    - by steve
    I tried the following: Net::HTTP.get_print URI.parse(URI.encode('https://graph.facebook.com/me/likes?access_token=mytoken', '|')) (mytoken is my actual token in the code.) I get an EOFError: end of file reached error. If I visit the page with my browser, it loads a JSON page. Any idea what could be causing the error? It was working a few days ago, and I can't see any changes to the Facebook API.
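
    For what it's worth, this EOFError is the classic symptom of Net::HTTP speaking plain HTTP to an HTTPS port: the convenience form doesn't flip on SSL in older Rubies. A hedged sketch of the explicit form:

        require 'net/http'
        require 'net/https'
        require 'uri'

        uri = URI.parse(URI.encode('https://graph.facebook.com/me/likes?access_token=mytoken', '|'))
        http = Net::HTTP.new(uri.host, uri.port)
        http.use_ssl = true                           # not enabled automatically on older Rubies
        http.verify_mode = OpenSSL::SSL::VERIFY_NONE  # or point at a proper CA bundle instead
        response = http.get(uri.request_uri)
        puts response.body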

    Read the article

  • How to cancel an awaiting task in C#

    - by user1748906
    I am trying to cancel a task awaiting network IO using CancellationTokenSource, but I have to wait until the TcpClient connects: try { while (true) { token.Token.ThrowIfCancellationRequested(); Thread.Sleep(int.MaxValue); //simulating a TcpListener waiting for request } } Any ideas? Secondly, is it OK to start each client in a separate task?
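
    Thread.Sleep cannot observe the token, which is why the loop never wakes up; a cancellable wait can. A hedged sketch that mirrors the loop above (for the real listener, TcpListener.Stop called from another thread also unblocks a pending AcceptTcpClient, and yes, one task per accepted client is the usual pattern):

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class Program
        {
            static void Main()
            {
                var cts = new CancellationTokenSource();

                var worker = Task.Run(() =>
                {
                    while (true)
                    {
                        cts.Token.ThrowIfCancellationRequested();
                        // cancellable stand-in for the blocking wait:
                        // returns as soon as Cancel() is called
                        cts.Token.WaitHandle.WaitOne();
                    }
                }, cts.Token);

                cts.CancelAfter(TimeSpan.FromSeconds(1));
                try { worker.Wait(); }
                catch (AggregateException) { Console.WriteLine("cancelled"); }
            }
        }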

    Read the article

  • JavaScript jQuery ajax: submit default data

    - by onlineracoon
    How do I submit default data when using $.ajax? I have tried: $.ajaxPrefilter(function(options, originalOptions, jqXHR) { // can't access anything here... (tried jqXHR.data but undefined) }); $.ajaxSetup({ beforeSend: function(jqXHR, settings) { // can't access anything here... (tried jqXHR.data but undefined) } }); I need this so I can send my CSRF token each time, and, when logged in, submit the user ID each time. I prefer not to submit the token in headers. Thanks
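
    For the record, a hedged sketch: inside $.ajaxPrefilter the payload isn't on jqXHR at all, it's on options.data (already serialized to a string), so a token can be appended there (csrf_token here is an assumed variable holding your token):

        $.ajaxPrefilter(function (options, originalOptions, jqXHR) {
            if (options.type && options.type.toUpperCase() === 'POST') {
                options.data = (options.data ? options.data + '&' : '')
                    + 'csrf_token=' + encodeURIComponent(csrf_token);
            }
        });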

    Read the article

  • getJSON callback not firing

    - by Marty Trenouth
    I'm making the call using the following script, which is called on the click of an anchor tag: function GetToken(videoId) { debugger; var json = $.getJSON("/Vod/RequestAccessToken/"+videoId, function(result){ alert("token received: " + result.token); }); } In the server application I receive the call, so I know it is a valid URL, but the callback is not being invoked. If I step through the jQuery code (F11/F10), the callback is called?!
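
    One hedged debugging step: $.getJSON swallows errors (a response that isn't valid JSON simply never fires the success callback), so rewriting the call with $.ajax and an error handler usually reveals the real failure, often a JSON parse error:

        function GetToken(videoId) {
            $.ajax({
                url: "/Vod/RequestAccessToken/" + videoId,
                dataType: "json",
                success: function (result) {
                    alert("token received: " + result.token);
                },
                error: function (jqXHR, textStatus, errorThrown) {
                    // "parsererror" here means the server's JSON is malformed
                    alert("failed: " + textStatus + " / " + errorThrown);
                }
            });
        }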

    Read the article

  • Abstract class and an inheritor: is it possible to factorize .parent() here?

    - by fge
    Here are what I think are the relevant parts of the code of these two classes. First, TreePointer (original source here): public abstract class TreePointer<T extends TreeNode> implements Iterable<TokenResolver<T>> { //... /** * What this tree can see as a missing node (may be {@code null}) */ private final T missing; /** * The list of token resolvers */ protected final List<TokenResolver<T>> tokenResolvers; /** * Main protected constructor * * <p>This constructor makes an immutable copy of the list it receives as * an argument.</p> * * @param missing the representation of a missing node (may be null) * @param tokenResolvers the list of reference token resolvers */ protected TreePointer(final T missing, final List<TokenResolver<T>> tokenResolvers) { this.missing = missing; this.tokenResolvers = ImmutableList.copyOf(tokenResolvers); } /** * Alternate constructor * * <p>This is the same as calling {@link #TreePointer(TreeNode, List)} with * {@code null} as the missing node.</p> * * @param tokenResolvers the list of token resolvers */ protected TreePointer(final List<TokenResolver<T>> tokenResolvers) { this(null, tokenResolvers); } //... /** * Tell whether this pointer is empty * * @return true if the reference token list is empty */ public final boolean isEmpty() { return tokenResolvers.isEmpty(); } @Override public final Iterator<TokenResolver<T>> iterator() { return tokenResolvers.iterator(); } // .equals(), .hashCode(), .toString() follow } Then, JsonPointer, which contains this .parent() method which I'd like to factorize here (original source here: public final class JsonPointer extends TreePointer<JsonNode> { /** * The empty JSON Pointer */ private static final JsonPointer EMPTY = new JsonPointer(ImmutableList.<TokenResolver<JsonNode>>of()); /** * Return an empty JSON Pointer * * @return an empty, statically allocated JSON Pointer */ public static JsonPointer empty() { return EMPTY; } //... /** * Return the immediate parent of this JSON Pointer * * <p>The parent of the empty pointer is itself.</p> * * @return a new JSON Pointer representing the parent of the current one */ public JsonPointer parent() { final int size = tokenResolvers.size(); return size <= 1 ? EMPTY : new JsonPointer(tokenResolvers.subList(0, size - 1)); } // ... } As mentioned in the subject, the problem I have here is with JsonPointer's .parent() method. In fact, the logic behind this method applies to TreeNode all the same, and therefore to its future implementations. Except that I have to use a constructor, and of course such a constructor is implementation dependent :/ Is there a way to make that .parent() method available to each and every implementation of TreeNode or is it just a pipe dream?
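
    One way out, sketched under my own naming (the source does not define these members): declare an abstract factory method on TreePointer and write parent() once in the base class against it; each subclass then only supplies construction. The trade-off is that parent() is typed as TreePointer<T> unless subclasses also expose a covariant wrapper.

        // In TreePointer<T extends TreeNode> (a sketch; names are mine):
        protected abstract TreePointer<T> newPointer(
            final List<TokenResolver<T>> resolvers);

        public final TreePointer<T> parent()
        {
            final int size = tokenResolvers.size();
            return newPointer(size <= 1
                ? ImmutableList.<TokenResolver<T>>of()
                : tokenResolvers.subList(0, size - 1));
        }

        // In JsonPointer (a covariant return keeps the subtype):
        @Override
        protected JsonPointer newPointer(final List<TokenResolver<JsonNode>> resolvers)
        {
            return resolvers.isEmpty() ? EMPTY : new JsonPointer(resolvers);
        }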

    Read the article

  • Updating a SharePoint master page via a solution (WSP)

    - by Kelly Jones
    In my last blog post, I wrote how to deploy a SharePoint theme using Features and a solution package.  As promised in that post, here is how to update an already deployed master page. There are several ways to update a master page in SharePoint.  You could upload a new version to the master page gallery, or you could upload a new master page to the gallery, and then set the site to use this new page.  Manually uploading your master page to the master page gallery might be the best option, depending on your environment.  For my client, I did these steps in code, which is what they preferred. (Image courtesy of: http://www.joiningdots.net/blog/2007/08/sharepoint-and-quick-launch.html ) Before you decide which method you need to use, take a look at your existing pages.  Are they using the SharePoint dynamic token or the static token for the master page reference?  The wha, huh? SO, there are four ways to tell an .aspx page hosted in SharePoint which master page it should use: “~masterurl/default.master” – tells the page to use the default.master property of the site “~masterurl/custom.master” – tells the page to use the custom.master property of the site “~site/default.master” – tells the page to use the file named “default.master” in the site’s master page gallery “~sitecollection/default.master” – tells the page to use the file named “default.master” in the site collection’s master page gallery For more information about these tokens, take a look at this article on MSDN. Once you determine which token your existing pages are pointed to, then you know which file you need to update.  So, if the ~masterurl tokens are used, then you upload a new master page, either replacing the existing one or adding another one to the gallery.  If you’ve uploaded a new file with a new name, you’ll just need to set it as the master page either through the UI (MOSS only) or through code (MOSS or WSS Feature receiver code – or using SharePoint Designer). If the ~site or ~sitecollection tokens were used, then you’re limited to either replacing the existing master page, or editing all of your existing pages to point to another master page.  In most cases, it probably makes sense to just replace the master page. For my project, I’m working with WSS and the existing pages are set to the ~sitecollection token.  Based on this, I decided to just upload a new version of the existing master page (and not modify the dozens of existing pages). Also, since my client prefers Features and solutions, I created a master page Feature and a corresponding Feature Receiver.  For information on creating the elements and feature files, check out this post: http://sharepointmagazine.net/technical/development/deploying-the-master-page . This works fine, unless you are overwriting an existing master page, which was my case.  You’ll run into errors because the master page file needs to be checked out, replaced, and then checked in.  I wrote code in my Feature Activated event handler to accomplish these steps. 
Here are the steps necessary in code:

      1. Get the file name from the elements file of the Feature
      2. Check out the file from the master page gallery
      3. Upload the file to the master page gallery
      4. Check in the file to the master page gallery

    Here's the code in my Feature Receiver:

        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            try
            {
                SPElementDefinitionCollection col = properties.Definition.GetElementDefinitions(System.Globalization.CultureInfo.CurrentCulture);

                using (SPWeb curweb = GetCurWeb(properties))
                {
                    foreach (SPElementDefinition ele in col)
                    {
                        if (string.Compare(ele.ElementType, "Module", true) == 0)
                        {
                            // <Module Name="DefaultMasterPage" List="116" Url="_catalogs/masterpage" RootWebOnly="FALSE">
                            //   <File Url="myMaster.master" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE"
                            //         Path="MasterPages/myMaster.master" />
                            // </Module>
                            string Url = ele.XmlDefinition.Attributes["Url"].Value;
                            foreach (System.Xml.XmlNode file in ele.XmlDefinition.ChildNodes)
                            {
                                string Url2 = file.Attributes["Url"].Value;
                                string Path = file.Attributes["Path"].Value;
                                string fileType = file.Attributes["Type"].Value;

                                if (string.Compare(fileType, "GhostableInLibrary", true) == 0)
                                {
                                    //Check out file in document library
                                    SPFile existingFile = curweb.GetFile(Url + "/" + Url2);

                                    if (existingFile != null)
                                    {
                                        if (existingFile.CheckOutStatus != SPFile.SPCheckOutStatus.None)
                                        {
                                            throw new Exception("The master page file is already checked out. Please make sure the master page file is checked in, before activating this feature.");
                                        }
                                        else
                                        {
                                            existingFile.CheckOut();
                                            existingFile.Update();
                                        }
                                    }

                                    //Upload file to document library
                                    string filePath = System.IO.Path.Combine(properties.Definition.RootDirectory, Path);
                                    string fileName = System.IO.Path.GetFileName(filePath);
                                    char slash = Convert.ToChar("/");
                                    string[] folders = existingFile.ParentFolder.Url.Split(slash);

                                    if (folders.Length > 2)
                                    {
                                        Logger.logMessage("More than two folders were detected in the library path for the master page. Only two are supported.",
                                            Logger.LogEntryType.Information); //custom logging component
                                    }

                                    SPFolder myLibrary = curweb.Folders[folders[0]].SubFolders[folders[1]];

                                    FileStream fs = File.OpenRead(filePath);

                                    SPFile newFile = myLibrary.Files.Add(fileName, fs, true);

                                    myLibrary.Update();
                                    newFile.CheckIn("Updated by Feature", SPCheckinType.MajorCheckIn);
                                    newFile.Update();
                                }
                            }
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                string msg = "Error occurred during feature activation";
                Logger.logException(ex, msg, "");
            }
        }

        /// <summary>
        /// Using a Feature's properties, get a reference to the Current Web
        /// </summary>
        /// <param name="properties"></param>
        public SPWeb GetCurWeb(SPFeatureReceiverProperties properties)
        {
            SPWeb curweb;

            //Check if the parent of the web is a site or a web
            if (properties != null && properties.Feature.Parent.GetType().ToString() == "Microsoft.SharePoint.SPWeb")
            {
                //Get web from parent
                curweb = (SPWeb)properties.Feature.Parent;
            }
            else
            {
                //Get web from Site
                using (SPSite cursite = (SPSite)properties.Feature.Parent)
                {
                    curweb = (SPWeb)cursite.OpenWeb();
                }
            }

            return curweb;
        }

    This did the trick.
It allowed me to update my existing master page through an easily repeatable process (which is great when you are working with more than one environment and want to do things like TEST it!). I did run into what I would classify as a strange issue with one of my subsites, but that's the topic for another blog post.

    Read the article

  • Windows Azure Use Case: New Development

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Computing platforms evolve over time. Originally computers were directed by hardware wiring - that is, the “code” was the path of the wiring that directed an electrical signal from one component to another, or in some cases a physical switch controlled the path. From there software was developed, first in a very low machine language; then, when compilers were created, computer languages could more closely mimic written statements. These language statements can be compiled into the lower-level machine language still used by computers today. Microprocessors replaced logic circuits, sometimes with fewer instructions (Reduced Instruction Set Computing, RISC) and sometimes with more instructions (Complex Instruction Set Computing, CISC). The reason this history is important is that along each technology advancement, computer code has adapted. Writing software for a RISC architecture is significantly different from developing for a CISC architecture. And moving to a Distributed Architecture like Windows Azure also has specific implementation details that our code must follow. But why make a change? As I’ve described, we need to change our code to follow advances in technology. There’s no point in change for its own sake, but as a new paradigm offers benefits to our users, it’s important for us to leverage those benefits where it makes sense. That’s most often done in new development projects. It’s a far simpler task to take a new project and adapt it to Windows Azure than to try and retrofit older code designed in a previous computing environment. We can still use the same coding languages (.NET, Java, C++) to write code for Windows Azure, but we need to think about the architecture of that code on a new project so that it runs in the most efficient, cost-effective way in a Distributed Architecture. As we receive new requests from the organization for new projects, a distributed architecture paradigm belongs in the decision matrix for the platform target.

    Implementation: When you are designing new applications for Windows Azure (or any distributed architecture) there are many important details to consider. But at the risk of over-simplification, there are three main concepts to learn and architect within the new code:

    Stateless Programming - Stateless programming is a prime concept within distributed architectures. Rather than each server owning the complete processing cycle, the information from an operation that needs to be retained (the “state”) should be persisted to another location (like storage) common to all machines involved in the process. An interesting learning process for Stateless Programming (although not unique to this language type) is to learn Functional Programming.

    Server-Side Processing - Along with developing using a Stateless Design, the closer you can locate the code processing to the data, the less expensive and faster the code will run. When you control the network layer, this is less important, since you can send vast amounts of data between the server and client, allowing the client to perform processing. In a distributed architecture, you don’t always own the network, so its performance is unpredictable. Also, you may not be able to control the platform the user is on (such as a smartphone, PC or tablet), so it’s imperative to deliver only results and graphical elements where possible.

    Token-Based Authentication - Also called “Claims-Based Authorization”, this code practice means that instead of allowing a user to log on once and then running code in that context, a more granular level of security is used. A “token” or “claim”, often represented as a certificate, is sent along for a series of requests, or even a single one. In other words, every call to the code is authenticated against the token, rather than allowing a user free rein within the code call. While this is more work initially, it can bring a greater level of security, and it is far more resilient to disconnections.

    Resources: See the references of “Nondistributed Deployment” and “Distributed Deployment” at the top of this article for more information with graphics: http://msdn.microsoft.com/en-us/library/ee658120.aspx
    Stack Overflow has a good thread on functional programming: http://stackoverflow.com/questions/844536/advantages-of-stateless-programming
    Another good discussion on Stack Overflow on server-side processing is here: http://stackoverflow.com/questions/3064018/client-side-or-server-side-processing
    Claims-Based Authorization is described here: http://msdn.microsoft.com/en-us/magazine/ee335707.aspx

    Read the article

  • A more elegant way of embedding a SOAP security header in Silverlight 4

    - by Your DisplayName here!
    The current situation with Silverlight is that there is no support for the WCF federation binding. This means that all security-token-related interactions have to be done manually. Requesting the token from an STS is not really the bad part; sending it along with outgoing SOAP messages is what's a little annoying. So far you had to wrap all calls on the channel in an OperationContextScope wrapping an IContextChannel. This “programming model” was a little disruptive (in addition to all the async stuff that you are forced to do). It seems that starting with SL4 there is more support for traditional WCF extensibility points, especially IEndpointBehavior and IClientMessageInspector. I never read anywhere that these are new features in SL4, but I am pretty sure they did not exist in SL3. With the above-mentioned interfaces at my disposal, I thought I'd have another go at embedding a security header, and yeah, I managed to make the code much prettier (and much less bizarre). Here's the code for the behavior/inspector:

        public class IssuedTokenHeaderInspector : IClientMessageInspector
        {
            RequestSecurityTokenResponse _rstr;

            public IssuedTokenHeaderInspector(RequestSecurityTokenResponse rstr)
            {
                _rstr = rstr;
            }

            public void AfterReceiveReply(ref Message reply, object correlationState)
            { }

            public object BeforeSendRequest(ref Message request, IClientChannel channel)
            {
                request.Headers.Add(new IssuedTokenHeader(_rstr));
                return null;
            }
        }

        public class IssuedTokenHeaderBehavior : IEndpointBehavior
        {
            RequestSecurityTokenResponse _rstr;

            public IssuedTokenHeaderBehavior(RequestSecurityTokenResponse rstr)
            {
                if (rstr == null)
                {
                    throw new ArgumentNullException();
                }

                _rstr = rstr;
            }

            public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
            {
                clientRuntime.MessageInspectors.Add(new IssuedTokenHeaderInspector(_rstr));
            }

            // rest omitted
        }

    This allows setting up a proxy with an issued token header, and you no longer have to worry about embedding the header manually with every call:

        var client = GetWSTrustClient();

        var rst = new RequestSecurityToken(WSTrust13Constants.KeyTypes.Symmetric)
        {
            AppliesTo = new EndpointAddress("https://rp/")
        };

        client.IssueCompleted += (s, args) =>
        {
            _proxy = new StarterServiceContractClient();
            _proxy.Endpoint.Behaviors.Add(new IssuedTokenHeaderBehavior(args.Result));
        };

        client.IssueAsync(rst);

    Since SL4 also supports the IExtension<T> interface, you can also combine this with Nicholas Allen's AutoHeaderExtension.

    Read the article

  • Gmail IMAP OAuth for desktop clients

    - by Sabya
    Recently Google announced that they are supporting OAuth for Gmail IMAP/SMTP. I browsed through their multiple documentation pages, but I am still confused about whether they support OAuth for installed applications. 1. In this documentation they say: Note: Though the OAuth protocol supports the desktop/installed application use case, Google only supports OAuth for web applications. But they also have a document for OAuth for installed applications. 2. When I read the OAuth specification they point to, it says (in section 11.7): In many applications, the Consumer application will be under the control of potentially untrusted parties. For example, if the Consumer is a freely available desktop application, an attacker may be able to download a copy for analysis. In such cases, attackers will be able to recover the Consumer Secret used to authenticate the Consumer to the Service Provider. Also, I think the disclaimer in point 1 above is about the Google Data APIs, and surely IMAP/SMTP is not a part of them. I understand that for installed applications I can have a setup like this: Have a small web app at, say, example.com for my application. This web app talks to Google to get the access token. The installed application talks to example.com only to get the access token. The installed application then talks to Google with the access token. I am now confused. Is this the only way?

    Read the article

  • Need help regarding LALR(1) parsing

    - by AppleGrew
    I am trying to parse a context-free language called Context Free Art. I have created its parser in JavaScript using JSCC, a YACC-like JS LALR(1) parser generator. Take the following CFA (Context Free Art) code as an example. This code is valid CFA. startshape A rule A { CIRCLE { s 1} } Notice the A and s above. s is a command to scale the CIRCLE, but A is just the name of this rule. In the language's grammar I have set s as the token SCALE, and A comes under the token STRING (I have a regular expression to match strings, and it is at the bottom of all the tokens). This works fine, but it breaks in the case below. startshape s rule s { CIRCLE { s 1} } This too is perfectly valid code, but since my parser marks the s after rule as a SCALE token, it errors out saying that it was expecting STRING. Now my question is: is there any way to re-write the production rules of the parser to account for this? The relevant production rule is: rule: RULE STRING '{' buncha_replacements '}' [* rule(%2, 1) *] | RULE STRING RATIONAL '{' buncha_replacements '}' [* rule(%2, 1*%3) *] ; One simple solution I can think of is to create a copy of the above rule with STRING replaced by SCALE, but this is just one of many similar rules that would need such fixing. Furthermore, there are many other terminals that can get matched as STRING. So that means way too many rules!
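
    One standard LALR-friendly fix for this keyword-vs-identifier clash, sketched with JSCC-style actions (the [* ... *] blocks and %n references follow the question's convention; treat the exact action syntax as an assumption): introduce a nonterminal that accepts STRING plus each keyword that may also serve as a name, and use it wherever a name is expected:

        /* a name is either a plain STRING or a keyword that can double as one */
        rulename: STRING [* %% = %1; *]
                | SCALE  [* %% = %1; *]
                ;

        rule: RULE rulename '{' buncha_replacements '}' [* rule(%2, 1) *]
            | RULE rulename RATIONAL '{' buncha_replacements '}' [* rule(%2, 1*%3) *]
            ;

    The lexer stays unchanged; the parser accepts s as a name purely by position, and every other rule that needs a name reuses rulename instead of being duplicated.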

    Read the article

  • How to encrypt an NSString in Objective-C with DES in ECB mode?

    - by blauesocke
    Hi, I am trying to encrypt an NSString in Objective-C on the iPhone. At least I want to get a string like "TmsbDaNG64lI8wC6NLhXOGvfu2IjLGuEwc0CzoSHnrs=" when I encode "us=foo;pw=bar;pwAlg=false;" using the key "testtest". My problem for now is that CCCrypt always returns "4300 - Parameter error" and I have no idea why. This is my code (the result of 5 hours of Google and trial and error):

        NSString *token = @"us=foo;pw=bar;pwAlg=false;";
        NSString *key = @"testtest";

        const void *vplainText;
        size_t plainTextBufferSize;
        plainTextBufferSize = [token length];
        vplainText = (const void *) [token UTF8String];

        CCCryptorStatus ccStatus;
        uint8_t *bufferPtr = NULL;
        size_t bufferPtrSize = 0;
        size_t *movedBytes;

        bufferPtrSize = (plainTextBufferSize + kCCBlockSize3DES) & ~(kCCBlockSize3DES - 1);
        bufferPtr = malloc( bufferPtrSize * sizeof(uint8_t));
        memset((void *)bufferPtr, 0x0, bufferPtrSize);
        // memset((void *) iv, 0x0, (size_t) sizeof(iv));

        NSString *initVec = @"init Vec";
        const void *vkey = (const void *) [key UTF8String];
        const void *vinitVec = (const void *) [initVec UTF8String];

        ccStatus = CCCrypt(kCCEncrypt,
                           kCCAlgorithmDES,
                           kCCOptionECBMode,
                           vkey, //"123456789012345678901234", //key
                           kCCKeySizeDES,
                           NULL, // vinitVec, //"init Vec", //iv,
                           vplainText, //"Your Name", //plainText,
                           plainTextBufferSize,
                           (void *)bufferPtr,
                           bufferPtrSize,
                           movedBytes);

        NSString *result;
        NSData *myData = [NSData dataWithBytes:(const void *)bufferPtr length:(NSUInteger)movedBytes];
        result = [myData base64Encoding];
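
    One likely culprit, offered as a hedged guess: movedBytes is declared as an uninitialized size_t pointer, but CCCrypt's last parameter wants the address of a real size_t; passing garbage there commonly yields kCCParamError (-4300). Note also that kCCKeySizeDES is 8 bytes, which "testtest" happens to satisfy, and that without padding the input must be a multiple of the block size. A corrected fragment:

        size_t movedBytes = 0; // a real size_t, not a dangling pointer

        ccStatus = CCCrypt(kCCEncrypt,
                           kCCAlgorithmDES,
                           kCCOptionECBMode | kCCOptionPKCS7Padding, // pad unless input is a multiple of 8
                           vkey, kCCKeySizeDES,
                           NULL,            // IV is ignored in ECB mode
                           vplainText, plainTextBufferSize,
                           bufferPtr, bufferPtrSize,
                           &movedBytes);    // pass the address

        NSData *myData = [NSData dataWithBytes:bufferPtr length:movedBytes];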

    Read the article

  • WCF MessageHeaders in OperationContext.Current

    - by Nate Bross
    If I use code like this [just below] to add Message Headers to my OperationContext, will all future outgoing messages contain that data on any new ClientProxy defined from the same "run" of my application? The objective is to pass a parameter or two to each OperationContract without messing with the signature of the OperationContract, since the parameters being passed will be consistent for all requests for a given run of my client application.

        public void DoSomeStuff()
        {
            var proxy = new MyServiceClient();
            Guid myToken = Guid.NewGuid();
            MessageHeader<Guid> mhg = new MessageHeader<Guid>(myToken);
            MessageHeader untyped = mhg.GetUntypedHeader("token", "ns");
            OperationContext.Current.OutgoingMessageHeaders.Add(untyped);
            proxy.DoOperation(...);
        }

        public void DoSomeOTHERStuff()
        {
            var proxy = new MyServiceClient();
            Guid myToken = Guid.NewGuid();
            MessageHeader<Guid> mhg = new MessageHeader<Guid>(myToken);
            MessageHeader untyped = mhg.GetUntypedHeader("token", "ns");
            OperationContext.Current.OutgoingMessageHeaders.Add(untyped);
            proxy.DoOtherOperation(...);
        }

    In other words, is it safe to refactor the above code like this?

        bool isSetup = false;

        public void SetupMessageHeader()
        {
            if (isSetup) { return; }

            Guid myToken = Guid.NewGuid();
            MessageHeader<Guid> mhg = new MessageHeader<Guid>(myToken);
            MessageHeader untyped = mhg.GetUntypedHeader("token", "ns");
            OperationContext.Current.OutgoingMessageHeaders.Add(untyped);
            isSetup = true;
        }

        public void DoSomeStuff()
        {
            var proxy = new MyServiceClient();
            SetupMessageHeader();
            proxy.DoOperation(...);
        }

        public void DoSomeOTHERStuff()
        {
            var proxy = new MyServiceClient();
            SetupMessageHeader();
            proxy.DoOtherOperation(...);
        }

    Since I don't really understand what's happening there, I don't want to cargo-cult it and just change it and let it fly if it works; I'd like to hear your thoughts on whether it is OK or not.
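
    A cautionary note, offered as a hedged sketch rather than a definitive answer: OperationContext.Current is ambient, per-thread state, and outside an OperationContextScope it can be null or tied to a different channel than the proxy you are calling, so the refactored version may silently attach headers to the wrong context. The conventional pattern scopes the header to each call (CallWithToken is my name, not from the source):

        static void CallWithToken(MyServiceClient proxy, Action call)
        {
            using (new OperationContextScope(proxy.InnerChannel))
            {
                var header = new MessageHeader<Guid>(Guid.NewGuid())
                    .GetUntypedHeader("token", "ns");
                OperationContext.Current.OutgoingMessageHeaders.Add(header);
                call(); // e.g. () => proxy.DoOperation(...)
            }
        }

    If the header must ride on every message from every proxy, an IClientMessageInspector added through an endpoint behavior is the more robust route.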

    Read the article

  • Need some help with Abraham's twitteroauth class

    - by diEcho
    Hello all, I am learning how to use Twitter from the Twitter developer link. On the "Authenticating Requests with OAuth" page I was debugging my code against the given procedure. In the "Sending the user to authorization" section it is written that if you are using the callback flow, your oauth_callback should have received back your oauth_token (the same one that you sent, your "request token") and a field called the oauth_verifier. You'll need that for the next step. Here's the response I received: oauth_token=8ldIZyxQeVrFZXFOZH5tAwj6vzJYuLQpl0WUEYtWc&oauth_verifier=pDNg57prOHapMbhv25RNf75lVRd6JDsni1AJJIDYoTY My original code is:

        require_once('twitteroauth/twitteroauth.php');
        require_once('config.php');

        /* Build TwitterOAuth object with client credentials. */
        $connection = new TwitterOAuth(CONSUMER_KEY, CONSUMER_SECRET);

        /* Get temporary credentials. */
        $request_token = $connection->getRequestToken(OAUTH_CALLBACK);

        /* Save temporary credentials to session. */
        $_SESSION['oauth_token'] = $token = $request_token['oauth_token'];
        $_SESSION['oauth_token_secret'] = $request_token['oauth_token_secret'];

        /* If last connection failed don't display authorization link. */
        switch ($connection->http_code) {
          case 200:
            /* Build authorize URL and redirect user to Twitter. */
            echo "<br/>Authorize URL:".$url = $connection->getAuthorizeURL($token);
            //header('Location: ' . $url);
            break;
          default:
            /* Show notification if something went wrong. */
            echo 'Could not connect to Twitter. Refresh the page or try again later.';
        }

    and I am getting Authorize URL: https://twitter.com/oauth/authenticate?oauth_token=BHqbrTjsPcyvaAsfDwfU149aAcZjtw45nhLBeG1c I am not getting a URL containing oauth_verifier. Please tell me where I should see/debug that URL.

    Read the article

  • Uploading to TwitVid using x_verify_credentials_authorization

    - by deepa
    Hi, I am developing an iPhone application that uploads videos to TwitVid using the TwitVid-iPhone-OAuth3 library. I have selected this library since Twitter is shifting to the OAuth mechanism of authentication. 1) Does this new library allow uploads without user-name and password parameters? 2) I am using the following steps to upload videos (assuming that the new library allows uploads without user-name and password parameters), but I am getting the error 'Incorrect Signature'. I am not able to identify the mistake that I am making here; could you please help me solve this problem?

      1. Authenticate the app using OAuth (MGTwitterEngine etc.), which redirects the user to the web page asking for log-in and app authentication. If the user authenticates the app, an access token will be obtained as the final step.
      2. Upload the video to TwitVid using the following code snippet:

        OAuth *authorizer = [[OAuth alloc] initWithConsumerKey: @"my_consumer_key"
                                             andConsumerSecret: @"my_consumer_secret"];
        authorizer.oauth_token = // the key part of the access token obtained in step 2
        authorizer.oauth_token_secret = // the secret part of the access token obtained in step 2
        authorizer.oauth_token_authorized = YES; // since authenticated in steps 1 and 2

        NSString *authHeader = [authorizer oAuthHeaderForMethod: @"POST"
                                                         andUrl: @"https://api.twitter.com/1/account/verify_credentials.xml"
                                                      andParams: nil];

        twitVid = [[TwitVid alloc] init];
        [mTwitVid setDelegate: self];
        [mTwitVid authenticateWithXVerifyCredentialsAuthorization: authHeader
                                             xAuthServiceProvider: nil];

    Thanks and Regards, Deepa

    Read the article

  • SmtpClient.SendAsync - How to ensure my application doesn't finish before callback?

    - by James
    Hi, I need to send emails asynchronously from a console application. I need to do some DB updates in the callback, but my application exits before the callback code gets run! How can I stop this from happening in a nice manner, rather than simply guessing how long to wait before exiting? I would imagine the async calls get placed in some form of thread; is it possible to check if any are waiting to be called? Sample code:

        private static void SendCompletedCallback(object sender, AsyncCompletedEventArgs e)
        {
            // Get the unique identifier for this asynchronous operation.
            String token = (string) e.UserState;

            if (e.Cancelled)
            {
                Console.WriteLine("[{0}] Send canceled.", token);
            }
            if (e.Error != null)
            {
                Console.WriteLine("[{0}] {1}", token, e.Error.ToString());
            }
            else
            {
                // update DB
                Console.WriteLine("Message sent.");
            }
        }

        public static void Main(string[] args)
        {
            var users = Repository.GetUsers();
            SmtpClient client = new SmtpClient("Host");
            client.SendCompleted += new SendCompletedEventHandler(SendCompletedCallback);
            MailAddress from = new MailAddress("[email protected]", "System", Encoding.UTF8);

            foreach (var user in users)
            {
                MailAddress to = new MailAddress(user.Email);
                MailMessage message = new MailMessage(from, to);
                message.Body = "This is a test";
                message.BodyEncoding = System.Text.Encoding.UTF8;
                message.Subject = "test message 1" + someArrows;
                message.SubjectEncoding = System.Text.Encoding.UTF8;

                string userState = String.Format("Message for user id {0}", user.ID);
                client.SendAsync(message, userState);
                message.Dispose();
            }

            // need to wait here until I have received a callback for each message,
            // otherwise the application will exit
        }
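
    One hedged way to keep the process alive: count outstanding sends with a CountdownEvent and block Main until every callback has signalled. Note also that SmtpClient allows only one SendAsync at a time; a second call before the first completes throws InvalidOperationException, so the sends must effectively be serialized. A sketch against the sample above:

        // before the foreach loop:
        var countdown = new CountdownEvent(users.Count);
        client.SendCompleted += (s, e) => countdown.Signal(); // runs after SendCompletedCallback

        // ... the existing foreach that calls client.SendAsync(message, userState) ...

        // at the end of Main: blocks until every SendCompleted callback has run
        countdown.Wait();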

    Read the article

  • Django - Override admin site's login form

    - by TrojanCentaur
    I'm currently trying to override the default form used in Django 1.4 when logging in to the admin site (my site uses an additional 'token' field, required for users who opt in to Two-Factor Authentication and mandatory for site staff). Django's default form does not support what I need. Currently I've got a file in my templates/ directory called templates/admin/login.html, which seems to be correctly overriding the template used with the one I use throughout the rest of my site. The contents of the file are simply as below:

        # admin/login.html:
        {% extends "login.html" %}

    The actual login form is as below:

        # login.html:
        {% load url from future %}<!DOCTYPE html>
        <html>
        <head>
            <title>Please log in</title>
        </head>
        <body>
            <div id="loginform">
                <form method="post" action="{% url 'id.views.auth' %}">
                    {% csrf_token %}
                    <input type="hidden" name="next" value="{{ next }}" />
                    {{ form.username.label_tag }}<br/>
                    {{ form.username }}<br/>
                    {{ form.password.label_tag }}<br/>
                    {{ form.password }}<br/>
                    {{ form.token.label_tag }}<br/>
                    {{ form.token }}<br/>
                    <input type="submit" value="Log In" />
                </form>
            </div>
        </body>
        </html>

    My issue is that this form works perfectly fine when accessed through my normal login URLs, because I supply my own AuthenticationForm as the form to display; but on the Django admin login route, Django supplies its own form to this template, and thus only the username and password fields render. Is there any way I can make this work, or am I just better off 'hard coding' the HTML fields into the form?
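
    For what it's worth, AdminSite exposes hooks for exactly this, so hard coding may not be necessary: a hedged sketch (TokenAuthenticationForm is your custom form, assumed to subclass AuthenticationForm so the admin's username/password handling still works):

        # e.g. in urls.py, after admin.autodiscover()
        from django.contrib import admin
        from myapp.forms import TokenAuthenticationForm  # your three-field form

        admin.site.login_form = TokenAuthenticationForm
        admin.site.login_template = 'admin/login.html'   # optional custom template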

    Read the article
