Search Results

Search found 21287 results on 852 pages for 'www liveinternet ruusersilya bog'.


  • JavaScript regex URL matching

    - by Blondie
    I have this so far: chrome.tabs.getSelected(null, function(tab) { var title = tab.title; var btn = '<a href="' + tab.url + '" onclick="save(\'' + title + '\');"> ' + title + '</a>'; if(tab.url.match('/http:\/\/www.mydomain.com\/version.php/i')) { document.getElementById('link').innerHTML = '<p>' + btn + '</p>'; } }); Basically it should match any URL under http://www.mydomain.com/version.php?*, including variants such as version.php?ver=1. With my code above nothing is displayed at all; when I remove the if statement the link shows, but then it also shows on other pages, and it should appear only on the matched URL.
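
    A hedged sketch of the likely fix: match() is being handed a quoted string here, so the leading slash, the trailing /i and the unescaped dots are all treated as literal pattern text and nothing ever matches. A regex literal applied with test() behaves as described (the domain is the asker's placeholder):

        chrome.tabs.getSelected(null, function (tab) {
          var title = tab.title;
          var btn = '<a href="' + tab.url + '" onclick="save(\'' + title + '\');"> ' + title + '</a>';
          // Escaped dots; any query string such as ?ver=1 may follow the matched prefix.
          if (/^http:\/\/www\.mydomain\.com\/version\.php/i.test(tab.url)) {
            document.getElementById('link').innerHTML = '<p>' + btn + '</p>';
          }
        });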

    Read the article

  • ReportBuilder.application fails on my PC - but works on localhost

    - by JayTee
    We're running SQL 2005 on Win2K3 server and are using SSRS. Here's the situation: I can run Report Builder from localhost My coworker can run Report Builder on his Vista computer Another coworker can run Report Builder on his XP SP3 computer (IE7) I can NOT run Report Builder on my XP SP3 computer (IE7) I'm told that it could be anything from an errant registry entry to a group policy problem. Here is what I've tried: Put the site into "Trusted Sites" with "low" security re-install .NET create a new local user account and attempt to run it The results? Every single time, I get a dialog box: "Application cannot be started. Contact the application vendor" I click the details button and get this: PLATFORM VERSION INFO Windows : 5.1.2600.196608 (Win32NT) Common Language Runtime : 2.0.50727.3607 System.Deployment.dll : 2.0.50727.3053 (netfxsp.050727-3000) mscorwks.dll : 2.0.50727.3607 (GDR.050727-3600) dfdll.dll : 2.0.50727.3053 (netfxsp.050727-3000) dfshim.dll : 2.0.50727.3053 (netfxsp.050727-3000) SOURCES Deployment url : http://www.example.com/ReportServer/ReportBuilder/ReportBuilder.application Server : Microsoft-IIS/6.0 X-Powered-By : ASP.NET X-AspNet-Version: 2.0.50727 IDENTITIES Deployment Identity : ReportBuilder.application, Version=9.0.3042.0, Culture=neutral, PublicKeyToken=c3bce3770c238a49, processorArchitecture=msil APPLICATION SUMMARY * Online only application. * Trust url parameter is set. ERROR SUMMARY Below is a summary of the errors, details of these errors are listed later in the log. * Activation of http://www.example.com/ReportServer/ReportBuilder/ReportBuilder.application resulted in exception. Following failure messages were detected: + Value does not fall within the expected range. COMPONENT STORE TRANSACTION FAILURE SUMMARY No transaction error was detected. WARNINGS There were no warnings during this operation. OPERATION PROGRESS STATUS * [4/7/2010 2:53:57 PM] : Activation of http://www.example.com/ReportServer/ReportBuilder/ReportBuilder.application has started. * [4/7/2010 2:53:58 PM] : Processing of deployment manifest has successfully completed. ERROR DETAILS Following errors were detected during this operation. * [4/7/2010 2:53:58 PM] System.ArgumentException - Value does not fall within the expected range. 
- Source: System.Deployment - Stack trace: at System.Deployment.Application.NativeMethods.CorLaunchApplication(UInt32 hostType, String applicationFullName, Int32 manifestPathsCount, String[] manifestPaths, Int32 activationDataCount, String[] activationData, PROCESS_INFORMATION processInformation) at System.Deployment.Application.ComponentStore.ActivateApplication(DefinitionAppId appId, String activationParameter, Boolean useActivationParameter) at System.Deployment.Application.SubscriptionStore.ActivateApplication(DefinitionAppId appId, String activationParameter, Boolean useActivationParameter) at System.Deployment.Application.ApplicationActivator.Activate(DefinitionAppId appId, AssemblyManifest appManifest, String activationParameter, Boolean useActivationParameter) at System.Deployment.Application.ApplicationActivator.PerformDeploymentActivation(Uri activationUri, Boolean isShortcut, String textualSubId, String deploymentProviderUrlFromExtension, BrowserSettings browserSettings, String& errorPageUrl) at System.Deployment.Application.ApplicationActivator.ActivateDeploymentWorker(Object state) COMPONENT STORE TRANSACTION DETAILS * Transaction at [4/7/2010 2:53:58 PM] + System.Deployment.Internal.Isolation.StoreOperationSetDeploymentMetadata - Status: Set - HRESULT: 0x0 + System.Deployment.Internal.Isolation.StoreTransactionOperationType (27) - HRESULT: 0x0 I'm really at a loss. I'm certain there is something on my PC preventing the application from running - but I just don't know what. Google hasn't been much of a help because most problems are related to the server configuration (which I know is correct since it works on other PCs) Help me, Overflow Kenobi, you're my only hope..
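
    One machine-local cause worth ruling out (an assumption, not something shown in this report): a corrupted ClickOnce online-application cache on the failing PC can surface as exactly this "Value does not fall within the expected range" activation error. Clearing the cache is harmless and takes one command:

        rem Hedged sketch: flush the ClickOnce online app cache, then retry the URL.
        rundll32 %windir%\system32\dfshim.dll CleanOnlineAppCache
        rem Or, with the Windows SDK installed:
        mage -cc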

    Read the article

  • How can I get the new Facebook JavaScript SDK to work in IE8?

    - by archbishop
    I've boiled down my page to the simplest possible thing, and it still doesn't work in IE8. Here's the entire html page: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml" xml:lang="en" lang="en"> <head></head> <body> <div id="fb-root"></div> <fb:login-button></fb:login-button> <script src="http://connect.facebook.net/en_US/all.js"></script> <script> FB.init({appId: 'd663755ef4dd07c246e047ea97b44d6a', status: true, cookie: true, xfbml: true}); FB.Event.subscribe('auth.sessionChange', function(response) { alert(JSON.stringify(response)); }); FB.getLoginStatus(function (response) { alert(JSON.stringify(response)); }); </script> </body> </html> In firefox, safari, and chrome (on a mac), I get the behavior I expect: if I am not logged into Facebook, I get a dialog on page load with an empty session. When I click the Login button and log in, I get a second dialog with a session. If I am logged into Facebook, I get two dialogs with sessions: one from the getLoginStatus call, and another from the event. In IE8, I get no dialogs when I load the page. The getLoginStatus callback is not invoked. When I click the Login button, I get a dialog, but it has a very strange error in it: Invalid Argument The Facebook Connect cross-domain receiver URL (http://static.ak.fbcdn.net/connect/xd_proxy.php#?=&cb=f3e91da434653f2&origin=http%3A%2F%2Fsword.sqprod.com%2Ff210cba91f2a6d4&relation=opener&transport=flash&frame=f27aa957225164&result=xxRESULTTOKENxx) must have the application's Connect URL (http://mysiteurl.com/) as a prefix. You can configure the Connect URL in the Application Settings Editor. I've sanitized the Connect URL above, but it is correct. The dialog does have username/password fields. If I log in, the dialog box gets redirected to my Connect URL, but there's no fb cookie, so of course nothing works. What am I doing wrong here?

    Read the article

  • Using .htaccess to serve files from Amazon S3/CloudFront

    - by Adrian A.
    My ideal setup would be to take a current client's site and upload a .htaccess with a regex inside that matches the URI; if it finds a certain file extension, it keeps the same path but swaps in a different domain. e.g. Normal paths: http://www.domain.com/something/images/someimage.jpeg http://www.domain.com/assets/js/jquery.js The .htaccess would turn the above into: http://mycdn.other.com/something/images/someimage.jpeg http://mycdn.other.com/assets/js/jquery.js I have googled this for hours with no luck. Again, this is for actually making use of Amazon CloudFront. S3 is already mounted to the website for backups and file storage using s3fs, but that doesn't solve the issue, since it uses S3 directly rather than CloudFront.
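
    A hedged sketch of what that .htaccess could look like (mycdn.other.com stands in for the real CloudFront hostname). Note that mod_rewrite can only send the browser a redirect to another host; serving the asset transparently from the other domain would need mod_proxy instead:

        RewriteEngine On
        # Send common static asset extensions to the CDN, keeping the original path.
        RewriteCond %{REQUEST_URI} \.(jpe?g|png|gif|css|js)$ [NC]
        RewriteRule ^(.*)$ http://mycdn.other.com/$1 [R=301,L]

    In practice many setups skip the rewrite entirely and change the asset URLs in the templates to point at the CloudFront hostname, since a per-request redirect adds a round trip for every image and script.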

    Read the article

  • Creating meaningful add URLs in CakePHP

    - by Loftx
    Hi there, For my site I have a number of Orders, each of which contains a number of Quotes. A quote is always tied to an individual order, so in the quotes controller I add a quote with reference to its order: function add($orderId) { // function body here } And the calling URL looks a bit like http://www.example.com/quotes/add/1 It occurred to me the URLs would make more sense if they looked more like http://www.example.com/orders/1/quotes/add since the quote is being added to order 1. Is this something it's possible to achieve in CakePHP? Cheers, Tom
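
    Custom routes cover this; a hedged sketch for app/config/routes.php (CakePHP 1.x-era Router API, with the :order_id parameter name chosen here for illustration):

        // Maps /orders/1/quotes/add onto QuotesController::add(1).
        Router::connect(
            '/orders/:order_id/quotes/add',
            array('controller' => 'quotes', 'action' => 'add'),
            array('pass' => array('order_id'), 'order_id' => '[0-9]+')
        );

    Links then need to be generated with the same keys, e.g. $html->link('Add quote', array('controller' => 'quotes', 'action' => 'add', 'order_id' => $order['Order']['id'])), so reverse routing picks the pretty form.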

    Read the article

  • Overwrite the SOAP envelope namespace in Python suds

    - by chrissygormley
    Hello, I have a camera and I am trying to connect to it via suds. I have tried sending raw XML and found that the only thing stopping the suds-generated XML from working is an incorrect SOAP envelope namespace. The envelope namespace suds emits is: xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" and I want to rewrite it to: xmlns:SOAP-ENV="http://www.w3.org/2003/05/soap-envelope" To add a namespace in Python I tried this code: message = Element('Element_name').addPrefix(p='SOAP-ENC', u='www.w3.org/ENC') But adding SOAP-ENV this way has no effect, because the envelope namespace is hardcoded into the suds bindings. Is there a way to overwrite this in suds? Thanks for any help.
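
    A hedged sketch using suds' plugin hook, which can rewrite the outgoing message just before it goes on the wire (the WSDL URL is a placeholder, and on some suds forks context.envelope is bytes rather than str):

        from suds.client import Client
        from suds.plugin import MessagePlugin

        class Soap12Envelope(MessagePlugin):
            def sending(self, context):
                # Swap the SOAP 1.1 envelope namespace for the SOAP 1.2 one.
                context.envelope = context.envelope.replace(
                    'http://schemas.xmlsoap.org/soap/envelope/',
                    'http://www.w3.org/2003/05/soap-envelope')

        client = Client('http://camera.example.com/service?wsdl',
                        plugins=[Soap12Envelope()])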

    Read the article

  • SSH Tunneling from Windows to Linux/Ubuntu

    - by Mike
    My question is basically for my girlfriend. She works at a mall and doesn't do much, so she likes to get on MySpace and Facebook, and check her email on Yahoo. But her laptop connects to a wireless network that doesn't allow those sites, so I did some research, got PuTTY, and connected to the Linux box I have here at home, and it worked somewhat. My problem is that it only shows the web pages I have created on this box; it won't go outside the Linux host. I set it up in PuTTY the way my research suggested: local port 1000 forwarded to hostname:80. After setting up the tunnel and connecting, it worked for all the pages on my box, but when she puts in www.myspace.com it redirects to the index.php in my /var/www and won't go outside that, as I said. Any help would be much obliged.
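
    A hedged sketch of the usual fix: a plain local forward to port 80 can only ever reach the home box's own web server, which is why every site lands on that index.php. A dynamic (SOCKS) tunnel lets the browser push any destination through the SSH connection instead. In PuTTY that is the "Dynamic" radio button under Connection > SSH > Tunnels with a source port such as 1080; the OpenSSH equivalent is:

        # Open a SOCKS proxy on local port 1080 that exits at the home Linux box.
        ssh -D 1080 user@home-box.example.com
        # Then set the laptop's browser to use a SOCKS proxy at localhost:1080.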

    Read the article

  • Dumping mod_perlified variables--what's the local namespace?

    - by Kev
    I have a mod_perl script: use strict; use warnings FATAL => 'all'; use 5.010001; my $face = 'ugly'; use Data::Dump qq(pp); die pp($ModPerl::ROOT::ModPerl::Registry::C_3a_www_test_2epl::face); It dies undef at C:/www/test.pl line 8. I was expecting "ugly" at C:/www/test.pl line 8. If instead I die pp(%ModPerl::ROOT::ModPerl::Registry::C_3a_www_test_2epl::); ...after restarting the service to clear any cached variables, face is not even listed. I could have sworn this code was working the last time I used it...I wrote a whole die hook around this way of naming local variables so that I could get at certain local variables to dump debug information. What's the local namespace?
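
    A hedged explanation with a sketch: ModPerl::Registry compiles the script into that generated package, but variables declared with my() are lexicals and never appear in any package symbol table, which is why the fully qualified name is undef and why face is missing from the stash listing. Only package globals show up there:

        # Declare the variable as a package global instead of a lexical.
        our $face = 'ugly';
        use Data::Dump qw(pp);
        die pp($ModPerl::ROOT::ModPerl::Registry::C_3a_www_test_2epl::face);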

    Read the article

  • Browser connects to WCF service but not my WCF client. What can be the reason?

    - by Slauma
    On a production server (Windows Server 2003 SP2) I can connect to a remote WCF service with Internet Explorer 8: When I browse to the URL http://www.domain.com/Service.svc (where my service listens) I get the expected info page of the service displayed. Connection settings in Internet Explorer only specify "auto detect", proxy settings are disabled. If I start a console application (built with WCF in .NET 4.0) on the same server which also tries to connect to the same WCF service it fails telling me that no endpoint was available listening on http://www.domain.com/Service.svc. Configuration of the WCF client: <configuration> <system.serviceModel> <bindings> <wsHttpBinding> <binding name="WSHttpBinding_IMyService" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="None"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" /> <message clientCredentialType="Windows" negotiateServiceCredential="true"/> </security> </binding> </wsHttpBinding> </bindings> <client> <endpoint address="http://www.domain.com/Service.scv" binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IMyService" contract="Service.IMyService" name="WSHttpBinding_IMyService" /> </client> </system.serviceModel> <configuration> With these settings I can communicate successfully with the remote service from my development machine. Looking around for other options I found that I can specify to use the Internet Explorer proxy settings with: <system.net> <defaultProxy> <proxy usesystemdefault="true" /> </defaultProxy> </system.net> It didn't work and I am not sure if I understood this setting really correctly. (My hope was that the WCF client will adopt the "autodetect" setting of Internet Explorer and then connect the same way to the service like the installed IE.) I also had toggled the useDefaultWebProxy setting in the binding configuration between true and false with no success. Now I am asking for help what I can do? Which settings might be wrong or missing? What could I test and how can I get more detailed error messages to better identify the problem? Thank you in advance!
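
    A hedged configuration sketch to try (the proxy address is a placeholder): IE's "auto detect" setting is not always what defaultProxy ends up using, so telling the client either to auto-detect itself or to use an explicit proxy address quickly shows whether the proxy is the missing piece. Enabling System.ServiceModel tracing in the same config file would also give a far more detailed failure reason than the generic "no endpoint listening" message.

        <system.net>
          <defaultProxy enabled="true" useDefaultCredentials="true">
            <!-- Mirror IE's auto-detect behaviour... -->
            <proxy autoDetect="true" bypassonlocal="true" />
            <!-- ...or, if the proxy address is known, replace the line above with:
                 <proxy proxyaddress="http://yourproxy.example.com:8080" bypassonlocal="true" /> -->
          </defaultProxy>
        </system.net>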

    Read the article

  • Sharing cookies/session from WebView to HttpClient doesn't work

    - by Toni Kanoni
    I know this question has been asked a hundred times, and I've read and tried for 2 hours now, but I can't find my error :-( I am trying to create a simple webbrowser and therefore have a webview, where I login on a site and get access to a picture area. With help of a DefaultHttpClient, I want to make it possible to download pictures in the secured area. Therefore I am trying to share the cookies from the webview and pass them on to the HttpClient, so that it is authenticated and able to download. But whatever I try and do, I always get a 403 response back... Basically the steps are the following: 1) Enter URL, webview loads website 2) Enter login details in a form 3) Navigate to picture and long hold for context menu 4) Retrieve the image URL and pass it on to AsynTask for downloading Here's the code of the AsyncTask with the Cookie stuff: protected String doInBackground(String... params) { //params[0] is the URL of the image try { CookieManager cookieManager = CookieManager.getInstance(); String c = cookieManager.getCookie(new URL(params[0]).getHost()); BasicCookieStore cookieStore = new BasicCookieStore(); BasicHttpContext localContext = new BasicHttpContext(); localContext.setAttribute(ClientContext.COOKIE_STORE, cookieStore); String[] cookieParts = null; String cookies[] = null; cookies = c.split(";"); for(int i=0;i<cookies.length;i++) { cookieParts = cookies[i].split("="); BasicClientCookie sessionCookie = new BasicClientCookie(cookieParts[0].trim(), cookieParts[1].trim()); sessionCookie.setDomain(new URL(params[0]).getHost()); cookieStore.addCookie(sessionCookie); } DefaultHttpClient httpClient = new DefaultHttpClient(); httpClient.setCookieStore(cookieStore); HttpGet pageGet = new HttpGet(new URL(params[0]).toURI()); HttpResponse response = httpClient.execute(pageGet, localContext); if(response.getStatusLine().getStatusCode() == HttpStatus.SC_OK) -- NEVER Happens, always get 403 .) One of the problems is that the webview saves some cookies for the host *www.*example.com, but the image-URL to download (params[0]) is *static.*example.com. The line cookieManager.getCookie(new URL(params[0]).getHost()); returns null, because there is no cookie for static.example.com, but only for www.example.com. .) When I manually say cookieManager.getCookie("www.example.com"); I get some cookies back, which I add to the HttpClient cookie store: There are 5 cookies added - testcookie = 0 - PHPSESSID = 320947238someGibberishSessionId - email = [email protected] - pass = 32423te32someEncodedPassGibberish - user = 345542 So although these cookies, a session ID and other stuff, get added to the HttpClient, it never get's through to download an image. Im totally lost... though I guess that it either has something to do with the cookies domains, or that Im still missing other cookies. But from where the heck should I know which cookies exist in the webview, when I have to specify a specific URL to get a cookie back?? :-( Any advice?
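
    A hedged sketch of one way around the host mismatch (hostnames are the asker's placeholders): a login cookie set by www.example.com normally carries a ".example.com" domain, so copying the WebView cookies into HttpClient with that parent domain makes them valid for static.example.com as well. A helper like this could replace the cookie-copying part of doInBackground:

        import android.webkit.CookieManager;
        import org.apache.http.client.CookieStore;
        import org.apache.http.impl.client.BasicCookieStore;
        import org.apache.http.impl.cookie.BasicClientCookie;

        // Builds an HttpClient cookie store from whatever the WebView saved for the
        // login host, widening the domain so sibling hosts receive the cookies too.
        static CookieStore cookiesForParentDomain(String loginUrl, String parentDomain) {
            BasicCookieStore store = new BasicCookieStore();
            String raw = CookieManager.getInstance().getCookie(loginUrl);
            if (raw == null) return store;
            for (String pair : raw.split(";")) {
                String[] kv = pair.split("=", 2);
                BasicClientCookie cookie = new BasicClientCookie(
                        kv[0].trim(), kv.length > 1 ? kv[1].trim() : "");
                cookie.setDomain(parentDomain);   // e.g. ".example.com", not the image host
                cookie.setPath("/");
                store.addCookie(cookie);
            }
            return store;
        }

    Usage would then be httpClient.setCookieStore(cookiesForParentDomain("http://www.example.com", ".example.com")) before executing the HttpGet; if the 403 persists, the image host may also be checking headers such as Referer or User-Agent, which is a separate issue.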

    Read the article

  • SpamAssassin on Windows

    - by ebeworld
    I just installed spam assassin and run for its sample ham mail as spamassassin sample-nonspam.txt, but it ended up marking it as a spam. What configuration am i missing to change? Result of the check is: From: Keith Dawson To: [email protected] Subject: **SPAM** TBTF ping for 2001-04-20: Reviving Date: Fri, 20 Apr 2001 16:59:58 -0400 Message-Id: X-Spam-Flag: YES X-Spam-Checker-Version: SpamAssassin 3.2.3 (2007-08-08) on ebeworld-PC X-Spam-Level: **** X-Spam-Status: Yes, score=10.5 required=6.3 tests=DCC_CHECK,DIGEST_MULTIPLE, DNS_FROM_OPENWHOIS,RAZOR2_CF_RANGE_51_100,RAZOR2_CF_RANGE_E4_51_100, RAZOR2_CHECK shortcircuit=no autolearn=no version=3.2.3 MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----------=_4BF17E8E.BF8E0000" This is a multi-part message in MIME format. ------------=_4BF17E8E.BF8E0000 Content-Type: text/plain; charset=iso-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit This mail is probably spam. The original message has been attached intact in RFC 822 format. Content preview: -----BEGIN PGP SIGNED MESSAGE----- TBTF ping for 2001-04-20: Reviving T a s t y B i t s f r o m t h e T e c h n o l o g y F r o n t [...] Content analysis details: (10.5 points, 6.3 required) 2.4 DNS_FROM_OPENWHOIS RBL: Envelope sender listed in bl.open-whois.org. 1.5 RAZOR2_CF_RANGE_E4_51_100 Razor2 gives engine 4 confidence level above 50% [cf: 58] 2.5 RAZOR2_CHECK Listed in Razor2 (http://razor.sf.net/) 0.5 RAZOR2_CF_RANGE_51_100 Razor2 gives confidence level above 50% [cf: 58] 3.6 DCC_CHECK Listed in DCC (http://rhyolite.com/anti-spam/dcc/) 0.0 DIGEST_MULTIPLE Message hits more than one network digest check ------------=_4BF17E8E.BF8E0000 Content-Type: message/rfc822; x-spam-type=original Content-Description: original message before SpamAssassin Content-Disposition: inline Content-Transfer-Encoding: 8bit Return-Path: Delivered-To: [email protected] Received: from europe.std.com (europe.std.com [199.172.62.20]) by mail.netnoteinc.com (Postfix) with ESMTP id 392E1114061 for ; Fri, 20 Apr 2001 21:34:46 +0000 (Eire) Received: (from daemon@localhost) by europe.std.com (8.9.3/8.9.3) id RAA09630 for tbtf-outgoing; Fri, 20 Apr 2001 17:31:18 -0400 (EDT) Received: from sgi04-e.std.com (sgi04-e.std.com [199.172.62.134]) by europe.std.com (8.9.3/8.9.3) with ESMTP id RAA08749 for ; Fri, 20 Apr 2001 17:24:31 -0400 (EDT) Received: from world.std.com (world-f.std.com [199.172.62.5]) by sgi04-e.std.com (8.9.3/8.9.3) with ESMTP id RAA8278330 for ; Fri, 20 Apr 2001 17:24:31 -0400 (EDT) Received: (from dawson@localhost) by world.std.com (8.9.3/8.9.3) id RAA26781 for [email protected]; Fri, 20 Apr 2001 17:24:31 -0400 (EDT) Received: from sgi04-e.std.com (sgi04-e.std.com [199.172.62.134]) by europe.std.com (8.9.3/8.9.3) with ESMTP id RAA07541 for ; Fri, 20 Apr 2001 17:12:06 -0400 (EDT) Received: from world.std.com (world-f.std.com [199.172.62.5]) by sgi04-e.std.com (8.9.3/8.9.3) with ESMTP id RAA8416421 for ; Fri, 20 Apr 2001 17:12:06 -0400 (EDT) Received: from [208.192.102.193] (ppp0c199.std.com [208.192.102.199]) by world.std.com (8.9.3/8.9.3) with ESMTP id RAA14226 for ; Fri, 20 Apr 2001 17:12:04 -0400 (EDT) Mime-Version: 1.0 Message-Id: Date: Fri, 20 Apr 2001 16:59:58 -0400 To: [email protected] From: Keith Dawson Subject: TBTF ping for 2001-04-20: Reviving Content-Type: text/plain; charset="us-ascii" Sender: [email protected] Precedence: list Reply-To: [email protected] -----BEGIN PGP SIGNED MESSAGE----- TBTF ping for 2001-04-20: Reviving T a s t y B i t s f r o m t h e T e c h n 
o l o g y F r o n t Timely news of the bellwethers in computer and communications technology that will affect electronic commerce -- since 1994 Your Host: Keith Dawson ISSN: 1524-9948 This issue: < http://tbtf.com/archive/2001-04-20.html > To comment on this issue, please use this forum at Quick Topic: < http://www.quicktopic.com/tbtf/H/kQGJR2TXL6H > ________________________________________________________________________ Q u o t e O f T h e M o m e n t Even organizations that promise "privacy for their customers" rarely if ever promise "continued privacy for their former customers..." Once you cancel your account with any business, their promises of keeping the information about their customers private no longer apply... you're not a customer any longer. This is in the large category of business behaviors that individuals would consider immoral and deceptive -- and businesses know are not illegal. -- "_ankh," writing on the XNStalk mailing list ________________________________________________________________________ ..TBTF's long hiatus is drawing to a close Hail subscribers to the TBTF mailing list. Some 2,000 [1] of you have signed up since the last issue [2] was mailed on 2000-07-20. This brief note is the first of several I will send to this list to excise the dead addresses prior to resuming regular publication. While you time the contractions of the newsletter's rebirth, I in- vite you to read the TBTF Log [3] and sign up for its separate free subscription. Send "subscribe" (no quotes) with any subject to [email protected] . I mail out collected Log items on Sun- days. If you need to stay more immediately on top of breaking stories, pick up the TBTF Log's syndication file [4] or read an aggregator that does. Examples are Slashdot's Cheesy Portal [5], Userland [6], and Sitescooper [7]. If your news obsession runs even deeper and you own an SMS-capable cell phone or PDA, sign up on TBTF's WebWire- lessNow portal [8]. A free call will bring you the latest TBTF Log headline, Jargon Scout [9] find, or Siliconium [10]. Two new columnists have bloomed on TBTF since last summer: Ted By- field's roving_reporter [11] and Gary Stock's UnBlinking [12]. Late- ly Byfield has been writing in unmatched depth about ICANN, but the roving_reporter nym's roots are in commentary at the intersection of technology and culture. Stock's UnBlinking latches onto topical sub- jects and pursues them to the ends of the Net. These writers' voices are compelling and utterly distinctive. [1] http://tbtf.com/growth.html [2] http://tbtf.com/archive/2000-07-20.html [3] http://tbtf.com/blog/ [4] http://tbtf.com/tbtf.rdf [5] http://www.slashdot.org/cheesyportal.shtml [6] http://my.userland.com/ [7] http://www.sitescooper.org/ [8] http://tbtf.com/pull-wwn/ [9] http://tbtf.com/jargon-scout.html [10] http://tbtf.com/siliconia.html [11] http://tbtf.com/roving_reporter/ [12] http://tbtf.com/unblinking/ ________________________________________________________________________ S o u r c e s For a complete list of TBTF's email and Web sources, see http://tbtf.com/sources.html . ________________________________________ B e n e f a c t o r s TBTF is free. If you get value from this publication, please visit the TBTF Benefactors page < http://tbtf.com/the-benefactors.html > and consider contributing to its upkeep. ________________________________________________________________________ TBTF home and archive at http://tbtf.com/ . To unsubscribe send the message "unsubscribe" to [email protected]. 
TBTF is Copy- right 1994-2000 by Keith Dawson, <[email protected]>. Commercial use prohibited. For non-commercial purposes please forward, post, and link as you see fit. _______________________________________________ Keith Dawson [email protected] Layer of ash separates morning and evening milk. -----BEGIN PGP SIGNATURE----- Version: PGPfreeware 6.5.2 for non-commercial use http://www.pgp.com iQCVAwUBOuCi3WAMawgf2iXRAQHeAQQA3YSePSQ0XzdHZUVskFDkTfpE9XS4fHQs WaT6a8qLZK9PdNcoz3zggM/Jnjdx6CJqNzxPEtxk9B2DoGll/C/60HWNPN+VujDu Xav65S0P+Px4knaQcCIeCamQJ7uGcsw+CqMpNbxWYaTYmjAfkbKH1EuLC2VRwdmD wQmwrDp70v8= =8hLB -----END PGP SIGNATURE----- ------------=_4BF17E8E.BF8E0000--
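
    A hedged reading of that report: nearly all of the 10.5 points come from network digest and RBL tests (DCC, Razor2, the open-whois RBL), which routinely fire on this widely circulated sample message, so the install itself may be fine. Re-running with local rules only, or muting those tests, shows whether anything else is actually wrong (directive names assume a stock 3.2.x local.cf):

        # Check the sample with network tests switched off:
        spamassassin -L -t < sample-nonspam.txt

        # Or mute the network digest/RBL tests in local.cf:
        score DCC_CHECK 0
        score RAZOR2_CHECK 0
        score DNS_FROM_OPENWHOIS 0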

    Read the article

  • Way to Remove Invite Limit on FBML Multi-Friend Selector

    - by David
    Hi there, I tried to look through various resources before posting here, but was having a surprisingly difficult time finding an answer to my question. Sorry in advance if I overlooked it. I'm currently trying to add the FBML Multi-Friend Selector to my Facebook page. It has a limit on the number of friends you can invite at a time ("Add up to 20 of your friends by clicking on their pictures below"). From what I've looked through it sounds like 20 is the max number of friends a user can invite, but then looking at Mint's page, they have a 22 max invite (http://www.facebook.com/mint?ref=ts) I thought it might be based on number of page fans, as Mint has 56,000, but that doesn't seem to be the case as this page only has 256 fans and have a max of 26 friend invites (http://www.facebook.com/tivix?v=app_106437999388442). Therefore, I don't really understand how this system works. Is there a way for me to increase to 26? Unlimited? Thanks for your help!

    Read the article

  • Using WatiN to automate file upload on a website

    - by MrDuDuDu
    I need to upload a file to a website, but I have a problem: I can't select the file automatically in code. The browser always shows me the choose-file dialog. What is wrong in my code? IE ie = new IE("https://www.xxxx.com/WFrmlogin.aspx"); FileUploadDialogHandler uploadHandler = new FileUploadDialogHandler(@"D:\065-6405_URGENT.xls"); ie.WaitForComplete(); ie.TextField(Find.ById("txtUser")).TypeText("login"); ie.TextField(Find.ById("txtPassWord")).TypeText("***"); ie.Button(Find.ById("btnok")).Click(); ie.WaitForComplete(); ie.GoTo("https://www.orientspareparts.com/inq/WFrmUpOption.aspx"); ie.WaitForComplete(); ie.DialogWatcher.Clear(); ie.AddDialogHandler(uploadHandler); // This line shows the choose-file dialog ie.FileUpload(Find.ById("FilUpload")).ClickNoWait(); ie.Button(Find.ById("butUpload")).Click(); ie.WaitForComplete();
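
    A hedged sketch of the usual WatiN approach: FileUpload.Set() fills in the native choose-file dialog itself, so the manual FileUploadDialogHandler and the ClickNoWait call should not be needed (URLs, element IDs and the file path are the asker's own):

        // Let WatiN drive the choose-file dialog directly.
        ie.FileUpload(Find.ById("FilUpload")).Set(@"D:\065-6405_URGENT.xls");
        ie.Button(Find.ById("butUpload")).Click();
        ie.WaitForComplete();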

    Read the article

  • Perl, LibXML and Schemas

    - by Xetius
    I have an example Perl script which I am trying to load and validate a file against a schema, them interrogate various nodes. #!/usr/bin/env perl use strict; use warnings; use XML::LibXML; my $filename = 'source.xml'; my $xml_schema = XML::LibXML::Schema->new(location=>'library.xsd'); my $parser = XML::LibXML->new (); my $doc = $parser->parse_file ($filename); eval { $xml_schema->validate ($doc); }; if ($@) { print "File failed validation: $@" if $@; } eval { print "Here\n"; foreach my $book ($doc->findnodes('/library/book')) { my $title = $book->findnodes('./title'); print $title->to_literal(), "\n"; } }; if ($@) { print "Problem parsing data : $@\n"; } Unfortunately, although it is validating the XML file fine, it is not finding any $book items and therefore not printing out anything. If I remove the schema from the XML file and the validation from the PL file then it works fine. I am using the default namespace. If I change it to not use the default namespace (xmlns:lib="http://libs.domain.com" and prefix all items in the XML file with lib and change the XPath expressions to include the namespace prefix (/lib:library/lib:book) then it again works file. Why? and what am I missing? XML: <?xml version="1.0" encoding="utf-8"?> <library xmlns="http://lib.domain.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://lib.domain.com .\library.xsd"> <book> <title>Perl Best Practices</title> <author>Damian Conway</author> <isbn>0596001738</isbn> <pages>542</pages> <image src="http://www.oreilly.com/catalog/covers/perlbp.s.gif" width="145" height="190"/> </book> <book> <title>Perl Cookbook, Second Edition</title> <author>Tom Christiansen</author> <author>Nathan Torkington</author> <isbn>0596003137</isbn> <pages>964</pages> <image src="http://www.oreilly.com/catalog/covers/perlckbk2.s.gif" width="145" height="190"/> </book> <book> <title>Guitar for Dummies</title> <author>Mark Phillips</author> <author>John Chappell</author> <isbn>076455106X</isbn> <pages>392</pages> <image src="http://media.wiley.com/product_data/coverImage/6X/07645510/076455106X.jpg" width="100" height="125"/> </book> </library> XSD: <?xml version="1.0" encoding="utf-8"?> <xs:schema xmlns="http://lib.domain.com" xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified" targetNamespace="http://lib.domain.com"> <xs:attributeGroup name="imagegroup"> <xs:attribute name="src" type="xs:string"/> <xs:attribute name="width" type="xs:integer"/> <xs:attribute name="height" type="xs:integer"/> </xs:attributeGroup> <xs:element name="library"> <xs:complexType> <xs:sequence> <xs:element maxOccurs="unbounded" name="book"> <xs:complexType> <xs:sequence> <xs:element name="title" type="xs:string"/> <xs:element maxOccurs="unbounded" name="author" type="xs:string"/> <xs:element name="isbn" type="xs:string"/> <xs:element name="pages" type="xs:integer"/> <xs:element name="image"> <xs:complexType> <xs:attributeGroup ref="imagegroup"/> </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> </xs:element> </xs:schema>
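
    A hedged sketch of the usual answer: once a document declares a default namespace, XPath still needs a registered prefix for that namespace even though the XML itself uses none, which is exactly why the lib:-prefixed variant works. XML::LibXML::XPathContext provides that without touching the document:

        use XML::LibXML::XPathContext;

        my $xpc = XML::LibXML::XPathContext->new($doc);
        $xpc->registerNs('lib', 'http://lib.domain.com');

        for my $book ($xpc->findnodes('/lib:library/lib:book')) {
            print $xpc->findvalue('./lib:title', $book), "\n";
        }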

    Read the article

  • Rewrite a dynamic URL to a new dynamic URL

    - by Jmino14
    I am new to the RewriteEngine and have not been able to find an answer to the following issue. I run an ecommerce site with an ever-changing catalog of product SKUs, and our URLs are dynamic. The question is: how do I redirect a URL with one dynamic query value to the same URL with a different value? For instance, I want: http://www.mydomain.com/product.jhtm?id=12345 to now go to: www.mydomain.com/product.jhtm?id=78910 How can I do this through the .htaccess? Thanks in advance.
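
    A hedged sketch using the ids from the example: RewriteRule's pattern never sees the query string, so the old id has to be matched with a RewriteCond, and because the substitution carries its own query string it replaces the original one:

        RewriteEngine On
        RewriteCond %{QUERY_STRING} ^id=12345$
        RewriteRule ^product\.jhtm$ http://www.mydomain.com/product.jhtm?id=78910 [R=301,L]

    With a large, changing catalog, a RewriteMap (defined in the server config, since .htaccess cannot declare one) that looks old ids up in a text file or database is usually easier to maintain than one rule per SKU.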

    Read the article

  • Rails + Nginx + Unicorn multiple apps

    - by Mikhail Nikalyukin
    I get the server where is currently installed two apps and i need to add another one, here is my configs. nginx.conf user www-data www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Disable unknown domains ## server { listen 80 default; server_name _; return 444; } ## # Virtual Host Configs ## include /home/ruby/apps/*/shared/config/nginx.conf; } unicorn.rb deploy_to = "/home/ruby/apps/staging.domain.com" rails_root = "#{deploy_to}/current" pid_file = "#{deploy_to}/shared/pids/unicorn.pid" socket_file= "#{deploy_to}/shared/sockets/.sock" log_file = "#{rails_root}/log/unicorn.log" err_log = "#{rails_root}/log/unicorn_error.log" old_pid = pid_file + '.oldbin' timeout 30 worker_processes 10 # ????? ???? ? ??????????? ?? ????????, ???????? ??????? ? ??????? ???? ???? listen socket_file, :backlog => 1024 pid pid_file stderr_path err_log stdout_path log_file preload_app true GC.copy_on_write_friendly = true if GC.respond_to?(:copy_on_write_friendly=) before_exec do |server| ENV["BUNDLE_GEMFILE"] = "#{rails_root}/Gemfile" end before_fork do |server, worker| defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect! if File.exists?(old_pid) && server.pid != old_pid begin Process.kill("QUIT", File.read(old_pid).to_i) rescue Errno::ENOENT, Errno::ESRCH end end end after_fork do |server, worker| defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection end Also im added capistrano to the project deploy.rb # encoding: utf-8 require 'capistrano/ext/multistage' require 'rvm/capistrano' require 'bundler/capistrano' set :stages, %w(staging production) set :default_stage, "staging" default_run_options[:pty] = true ssh_options[:paranoid] = false ssh_options[:forward_agent] = true set :scm, "git" set :user, "ruby" set :runner, "ruby" set :use_sudo, false set :deploy_via, :remote_cache set :rvm_ruby_string, '1.9.2' # Create uploads directory and link task :configure, :roles => :app do run "cp #{shared_path}/config/database.yml #{release_path}/config/database.yml" # run "ln -s #{shared_path}/db/sphinx #{release_path}/db/sphinx" # run "ln -s #{shared_path}/config/unicorn.rb #{release_path}/config/unicorn.rb" end namespace :deploy do task :restart do run "if [ -f #{unicorn_pid} ] && [ -e /proc/$(cat #{unicorn_pid}) ]; then kill -s USR2 `cat #{unicorn_pid}`; else cd #{deploy_to}/current && bundle exec unicorn_rails -c #{unicorn_conf} -E #{rails_env} -D; fi" end task :start do run "cd #{deploy_to}/current && bundle exec unicorn_rails -c #{unicorn_conf} -E #{rails_env} -D" end task :stop do run "if [ -f #{unicorn_pid} ] && [ -e /proc/$(cat #{unicorn_pid}) ]; then kill -QUIT `cat #{unicorn_pid}`; fi" end end before 'deploy:finalize_update', 'configure' after "deploy:update", "deploy:migrate", "deploy:cleanup" require './config/boot' nginx.conf in app shared path upstream staging_whotracker { server unix:/home/ruby/apps/staging.whotracker.com/shared/sockets/.sock; } server { listen 209.105.242.45; server_name beta.whotracker.com; rewrite ^/(.*) http://www.beta.whotracker.com/$1 permanent; } server { listen 209.105.242.45; server_name www.beta.hotracker.com; root /home/ruby/apps/staging.whotracker.com/current/public; location ~ ^/sitemaps/ { root 
/home/ruby/apps/staging.whotracker.com/current/system; if (!-f $request_filename) { break; } if (-f $request_filename) { expires -1; break; } } # cache static files :P location ~ ^/(images|javascripts|stylesheets)/ { root /home/ruby/apps/staging.whotracker.com/current/public; if ($query_string ~* "^[0-9a-zA-Z]{40}$") { expires max; break; } if (!-f $request_filename) { break; } } if ( -f /home/ruby/apps/staging.whotracker.com/shared/offline ) { return 503; } location /blog { index index.php index.html index.htm; try_files $uri $uri/ /blog/index.php?q=$uri; } location ~ \.php$ { try_files $uri =404; include /etc/nginx/fastcgi_params; fastcgi_pass unix:/var/run/php-fastcgi/php-fastcgi.socket; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } location / { proxy_set_header HTTP_REFERER $http_referer; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_max_temp_file_size 0; # If the file exists as a static file serve it directly without # running all the other rewite tests on it if (-f $request_filename) { break; } if (!-f $request_filename) { proxy_pass http://staging_whotracker; break; } } error_page 502 =503 @maintenance; error_page 500 504 /500.html; error_page 503 @maintenance; location @maintenance { rewrite ^(.*)$ /503.html break; } } unicorn.log executing ["/home/ruby/apps/staging.whotracker.com/shared/bundle/ruby/1.9.1/bin/unicorn_rails", "-c", "/home/ruby/apps/staging.whotracker.com/current/config/unicorn.rb", "-E", "staging", "-D", {5=>#<Kgio::UNIXServer:/home/ruby/apps/staging.whotracker.com/shared/sockets/.sock>}] (in /home/ruby/apps/staging.whotracker.com/releases/20120517114413) I, [2012-05-17T06:43:48.111717 #14636] INFO -- : inherited addr=/home/ruby/apps/staging.whotracker.com/shared/sockets/.sock fd=5 I, [2012-05-17T06:43:48.111938 #14636] INFO -- : Refreshing Gem list worker=0 ready ... master process ready ... reaped #<Process::Status: pid 2590 exit 0> worker=6 ... master complete Deploy goes successfully, but when i try to access beta.whotracker.com or ip-address i get SERVER NOT FOUND error, while others app works great. Nothing shows up in error logs. Can you please point me where is my fault?
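
    Two hedged observations about the config as posted: first, the browser's "server not found" error usually means DNS, so it is worth confirming that beta.whotracker.com and www.beta.whotracker.com actually resolve to 209.105.242.45. Second, the main server block's server_name reads www.beta.hotracker.com (the "w" of whotracker is missing), so even with DNS in place requests for the real host fall through to the catch-all server whose return 444 drops the connection. If that is a typo, the block should start like:

        server {
            listen 209.105.242.45;
            server_name www.beta.whotracker.com;   # whotracker, not hotracker
            root /home/ruby/apps/staging.whotracker.com/current/public;
            # ...the existing location/proxy blocks stay as they are...
        }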

    Read the article

  • Ajax: distinguish old posts and new posts.

    - by xRobot
    Hi all, I have created a very simple blog at www.example.com with only one page. When I visit www.example.com I see all the posts stored in the database (MySQL). Now I want an Ajax request to check the database every 60 seconds for new posts; if there are any, they should be inserted at the top, above the old posts. My question is: how can I retrieve only the new posts through Ajax (and so distinguish old posts from new ones)?
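
    A hedged sketch of the common pattern (jQuery is assumed for brevity, and the endpoint name and JSON shape are placeholders): remember the id of the newest post already on the page and ask the server only for rows above it, e.g. SELECT id, body FROM posts WHERE id > ? ORDER BY id on the PHP side.

        var lastId = 0;  // highest post id currently shown on the page
        setInterval(function () {
          $.getJSON('/new_posts.php', { since_id: lastId }, function (posts) {
            $.each(posts, function (i, post) {
              $('#posts').prepend('<div class="post">' + post.body + '</div>');
              if (post.id > lastId) { lastId = post.id; }
            });
          });
        }, 60000);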

    Read the article

  • SQL Server version error: database is version 655 but this server supports version 612 and earlier

    - by xpugur
    Hi, I get an error like "dbFileName cannot be opened because it is version 655. This server supports version 612 and earlier." What should I do? A friend of mine built the project, and I think he built it with SQL Server 2008 while I have SQL Server 2005; is that the reason I get this error, and can I fix it? If I install a newer version of SQL Server, will that solve the problem? SQL Server 2008 R2 Express is available at www.microsoft.com/express/Database/default.aspx#Installation_Options; could that be the solution? Thank you. By the way, I found a link to an update: http://www.microsoft.com/downloads/details.aspx?FamilyID=E1109AEF-1AA2-408D-AA0F-9DF094F993BF&displaylang=en Is this a solution to my problem?

    Read the article

  • RegExp JavaScript URL matching

    - by Blondie
    I have this so far: chrome.tabs.getSelected(null, function(tab) { var title = tab.title; var btn = '<a href="' + tab.url + '" onclick="save(\'' + title + '\');"> ' + title + '</a>'; if(RegExp('/http:\/\/www.mydomain.com\/version.php/i') == true) { document.getElementById('link').innerHTML = '<p>' + btn + '</p>'; } }); Basically it should match any URL under http://www.mydomain.com/version.php?*, including variants such as version.php?ver=1. With my code above nothing is displayed at all; when I remove the if statement the link shows, but then it also shows on other pages, and it should appear only on the matched URL.
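
    A hedged sketch of the likely fix: RegExp('...') builds a regex object, which is never == true, and the quoted delimiters mean the slashes and /i become literal pattern text anyway. Applying a proper regex to the URL with test() does what the description asks for:

        // Escape the dots and test tab.url; any ?ver=... query string may follow.
        var versionPage = /^http:\/\/www\.mydomain\.com\/version\.php/i;
        if (versionPage.test(tab.url)) {
          document.getElementById('link').innerHTML = '<p>' + btn + '</p>';
        }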

    Read the article

  • Groovy multithreading

    - by srinath
    Hi, I'm a newbie to Groovy/Grails. How do I add threading to this code? I have 2500 URLs and checking each one takes hours, so I decided to make it multi-threaded. Here is my sample code: def urls = [ "http://www.wordpress.com", "http://67.192.103.225/QRA.Public/" , "http://www.subaru.com", "http://baldwinfilter.com/products/start.html" ] def up = urls.collect { ur -> try { def url = new URL(ur) def connection = url.openConnection() if (connection.responseCode == 200) { return true } else { return false } } catch (Exception e) { return false } } For this code I need to implement multi-threading; could anyone please suggest the code? Thanks in advance, sri.
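
    A hedged sketch using GPars (bundled with recent Groovy distributions, otherwise add the gpars dependency), reusing the urls list from the question: collectParallel runs the same closure across a thread pool, and short timeouts stop one dead host from tying up a worker for minutes.

        import groovyx.gpars.GParsPool

        def up
        GParsPool.withPool(20) {                  // 20 worker threads
            up = urls.collectParallel { ur ->
                try {
                    def connection = new URL(ur).openConnection()
                    connection.connectTimeout = 5000
                    connection.readTimeout = 5000
                    connection.responseCode == 200
                } catch (Exception e) {
                    false
                }
            }
        }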

    Read the article

  • App Engine apps vs. Googlebot web crawler

    - by sandeep koduri
    I built an App Engine web app, cricket.hover.in, with about 15k URLs linked inside it, but even a long time after launch none of its pages are indexed by Google. Any link placed on my root site hover.in gets indexed within minutes, and I put a link to the app on the root site's home page long ago, to no effect. Can anyone analyse whether there is an issue with cricket.hover.in, or whether bots have trouble with Google App Engine? I tested the URL with the Labs fetch tool in Google Webmaster Tools and the response is fine, the HTML is clean. But when I test the same site (cricket.hover.in) with the following tools, they report failures: www.dnsqueries.com/en/googlebot_simulator.php www.smart-it-consulting.com/internet/google/googlebot-spoofer/ Yet if I test some of my PHP or WordPress links with the same tools, the results are fine. Please help me with this.

    Read the article

  • jQuery ajax doesn't seem to be reading HTML data in Chromium

    - by Mahesh
    I have an HTML (App) file that reads another HTML (data) file via jQuery.ajax(). It then finds specific tags in the data HTML file and uses text within the tags to display sort-of tool tips. Here's the App HTML file: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" lang="en-US" xml:lang="en-US"> <head> <title>Test</title> <style type="text/css"> <!--/* <![CDATA[ */ body { font-family : sans-serif; font-size : medium; margin-bottom : 5em; } a, a:hover, a:visited { text-decoration : none; color : #2222aa; } a:hover { background-color : #eeeeee; } #stat_preview { position : absolute; background : #ccc; border : thin solid #aaa; padding : 3px; font-family : monospace; height : 2.5em; } /* ]]> */--> </style> <script type="text/javascript" src="http://code.jquery.com/jquery-1.4.2.min.js"></script> <script type="text/javascript"> //<![CDATA[ $(document).ready(function() { $("#stat_preview").hide(); $(".cfg_lnk").mouseover(function () { lnk = $(this); $.ajax({ url: lnk.attr("href"), success: function (data) { console.log (data); $("#stat_preview").html("A heading<br>") .append($(".tool_tip_text", $(data)).slice(0,3).text()) .css('left', (lnk.offset().left + lnk.width() + 30)) .css('top', (lnk.offset().top + (lnk.height()/2))) .show(); } }); }).mouseout (function () { $("#stat_preview").hide(); }); }); //]]> </script> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> </head> <body> <h1>Test</h1> <ul> <li><a class="cfg_lnk" href="data.html">Sample data</a></li> </ul> <div id="stat_preview"></div> </body> </html> And here is the data HTML <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" lang="en-US" xml:lang="en-US"> <head> <title>Test</title> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> </head> <body> <h1>Test</h1> <table> <tr> <td class="tool_tip_text"> Some random value 1</td> <td class="tool_tip_text"> Some random value 2</td> <td class="tool_tip_text"> Some random value 3</td> <td class="tool_tip_text"> Some random value 4</td> <td class="tool_tip_text"> Some random value 5</td> </tr> <tr> <td class="tool_top_text"> Some random value 11</td> <td class="tool_top_text"> Some random value 21</td> <td class="tool_top_text"> Some random value 31</td> <td class="tool_top_text"> Some random value 41</td> <td class="tool_top_text"> Some random value 51</td> </tr> </table> </body> </html> This is working as intended in Firefox, but not in Chrome (Chromium 5.0.356.0). The console.log (data) displays empty string in Chromium's JavaScript console. Firebug in Firefox, however, displays the entire data HTML. Am I missing something? Any pointers?
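
    A hedged guess at the cause: if these two files are opened straight from disk, Chromium treats each file:// document as its own origin and the XHR comes back empty, which matches the empty string logged in the JavaScript console while Firefox happily returns the data. Serving the files over HTTP, or starting the browser with local file access allowed, is the quickest way to confirm it:

        # Serve the directory over HTTP instead of opening the files via file://
        python -m SimpleHTTPServer 8000
        # ...or relax the file:// restriction for a test run:
        chromium --allow-file-access-from-files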

    Read the article

  • ASP.NET 3.5 Routing?

    - by Maushu
    I was looking for a way to route http://www.example.com/WebService.asmx to http://www.example.com/service/ using only the ASP.NET 3.5 Routing framework without needing to configure the IIS server. Until now I have done what most tutorials told me, added a reference to the routing assembly, configured stuff in the web.config, added this to the Global.asax: protected void Application_Start(object sender, EventArgs e) { RouteCollection routes = RouteTable.Routes; routes.Add( "WebService", new Route("service/{*Action}", new WebServiceRouteHandler()) ); } ...created this class: public class WebServiceRouteHandler : IRouteHandler { public IHttpHandler GetHttpHandler(RequestContext requestContext) { // What now? } } ...and the problem is right there, I don't know what to do. The tutorials and guides I've read use routing for pages, not webservices. Is this even possible? Ps: The route handler is working, I can visit /service/ and it throws the NotImplementedException I left in the GetHttpHandler method.
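
    A hedged sketch of one way to finish GetHttpHandler (the factory type lives in System.Web.Services.Protocols, and the .asmx path is the one from the question): delegate to the built-in WebServiceHandlerFactory so the routed request is processed as if WebService.asmx had been requested directly.

        public class WebServiceRouteHandler : IRouteHandler
        {
            public IHttpHandler GetHttpHandler(RequestContext requestContext)
            {
                HttpContext context = HttpContext.Current;
                // Hand the routed request to the normal .asmx processing pipeline.
                return new WebServiceHandlerFactory().GetHandler(
                    context,
                    context.Request.HttpMethod,
                    "~/WebService.asmx",
                    context.Server.MapPath("~/WebService.asmx"));
            }
        }

    Whether the generated help page and ?WSDL links keep working under the routed URL is a separate question; some setups only route the SOAP endpoint itself and leave the .asmx URL reachable for metadata.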

    Read the article
