Search Results

Search found 19375 results on 775 pages for 'codeigniter url'.


  • How to generate Visa checkout token? [on hold]

    - by Muhammad Junaid
    I am in the process of creating a Visa Checkout plugin but am stuck generating the token. Here are the token requirements. Format: alphanumeric; maximum 100 characters, in the form x:UNIX_UTC_Timestamp:SHA256_hash, where UNIX_UTC_Timestamp is a UNIX epoch timestamp and SHA256_hash is an SHA256 hash of the following unseparated items: your shared secret; the timestamp from the transaction (exactly the same as UNIX_UTC_Timestamp); the resource path (API name); and this HTTPS request's query string. Note: the query string includes one or more parameters in name-value pair format, whose names are separated from values by equal signs (=); an empty value may be omitted, but the name and equal sign must be present. The initial question mark (?) is not included. Note: all parameters must be present, in lexicographic sort order (UTF-8, uppercase hex characters), separated from each other by an ampersand (&). Note: the query string must be URL encoded (excepting the following characters, per RFC 3986: hyp…). You can search Google for "visa checkout developer updating 1 px image"
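
    A minimal sketch of that recipe in Python; the shared secret, resource path, and query string below are placeholder values for illustration, not real Visa Checkout credentials:

      import hashlib
      import time

      shared_secret = "YOUR_SHARED_SECRET"        # placeholder, not a real secret
      resource_path = "payment/info/standard"     # placeholder API name
      query_string = "apikey=1234&currency=USD"   # sorted, URL-encoded, no leading '?'

      timestamp = str(int(time.time()))           # UNIX UTC epoch timestamp
      digest = hashlib.sha256(
          (shared_secret + timestamp + resource_path + query_string).encode("utf-8")
      ).hexdigest()
      token = "x:%s:%s" % (timestamp, digest)     # x:UNIX_UTC_Timestamp:SHA256_hash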

    Read the article

  • 301 redirects - can we not delete old pages?

    - by KBS
    First time here :) We have a page on the site which ranks well for an SEO term (top 5) but contains old information. We have added a new page, but Google doesn't rank it that well. Information on these pages is time sensitive. Old: example.com/2013-related-information.html New: example.com/2014-related-information.html The obvious solution is to delete the old page and 301 redirect it to the new page. Now, can we still keep the old page by giving it a new URL? Step 1: example.com/2013-related-information.html is redirected to example.com/2014-related-information.html Step 2: the old (2013) content is recreated at a new address such as example.com/new-2013-related-information.html What we are trying to do is send the user to the fresh page while still not losing the copy of record, in case someone wants to go and dig up the old page. Would appreciate help!! Cheers
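
    A sketch of those two steps in .htaccess terms, assuming Apache's mod_alias is available (URLs taken from the question):

      # Step 1: permanently redirect the old address to the fresh page
      Redirect 301 /2013-related-information.html http://example.com/2014-related-information.html
      # Step 2: re-publish the old content at a new address,
      # e.g. /new-2013-related-information.html, with no redirect on it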

    Read the article

  • HTML5 - check if font has loaded

    - by espais
    At present I load the font for my game with @font-face. For instance: @font-face { font-family: 'Orbitron'; src: url('res/orbitron-medium.ttf'); } and then reference it throughout my JS implementation like so: ctx.font = "12pt Orbitron"; where ctx is my 2d context from the canvas. However, I notice a certain lag while the font is downloaded to the user. Is there a way I can use a default font until it has loaded? Edit - I'll expand the question, because I hadn't taken the first comment into account: what would the proper way be to handle the case where a user has disabled custom fonts?
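
    One way to sketch this with the CSS Font Loading API (document.fonts), in browsers that support it; older browsers would need a feature test, and the fallback branch also covers users with custom fonts disabled:

      ctx.font = '12pt sans-serif';                 // draw with a default font first
      document.fonts.load('12pt Orbitron').then(function () {
        ctx.font = '12pt Orbitron';                 // switch once the font is ready
      }).catch(function () {
        // keep the fallback font if the custom font never loads
      });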

    Read the article

  • Grub can't find device on boot resulting in Grub Rescue

    - by user1160163
    So I have 2 hard drives, a 320GB HDD and a 20GB SSD. Before, I had Windows 7 on the HDD and Ubuntu on the SSD, but I wanted to get rid of Windows, reinstall a clean Ubuntu on the SSD, and use the HDD for storage. So I deleted everything from the HDD, set up the SSD with 18GB ext4 and 2GB swap, and installed Ubuntu on the 18GB ext4 partition. Now when I boot up I get "Error: No such device" and a Grub Rescue prompt. I have a live USB and I ran Boot Repair following these instructions - grub rescue after install of Ubuntu 12.04 (dual boot) - it says it was successful, though I still have the same problem. This is the URL given by Boot Repair - http://paste.ubuntu.com/1257988/ Thanks for any help given.
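
    For reference, a typical sequence at that prompt; the (hd1,msdos1) device name below is an assumption, so use ls first to find the partition that actually holds /boot/grub:

      grub rescue> ls                         # list available drives and partitions
      grub rescue> set root=(hd1,msdos1)      # partition containing /boot/grub
      grub rescue> set prefix=(hd1,msdos1)/boot/grub
      grub rescue> insmod normal
      grub rescue> normal                     # boot normally, then reinstall grub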

    Read the article

  • Website .htaccess file for Wordpress sub folder

    - by ubique
    I developed a Flash website for a client and added the following .htaccess file in the root directory, and the non-www to www redirect works perfectly. RewriteEngine On RewriteCond %{HTTP_HOST} ^website\.com [NC] RewriteRule ^(.*)$ http://www.website.com/$1 [L,R=301] I was also asked to add a WordPress blog, so I put it in a new directory (as opposed to a subdomain), making the URL www.website.com/blog Does Google now see the main site and blog as two different websites? Do I need to link them together using another .htaccess file in the WordPress root so Google automatically crawls the whole domain? Any help appreciated....

    Read the article

  • Help redirecting an IP address

    - by Alice
    Google has indexed the IP address of my site rather than the domain, so now I'm trying to set up a 301 redirect that will send the IP address and all subsequent pages to the domain. I currently have something like this in my .htaccess file (however, I don't think it's working correctly): RewriteCond %{HTTP_HOST} ^12.34.567.890 RewriteRule (.*) (domain address)/$1 [R=301,L] I've used various redirect checker tools and keep getting the message: "... not redirecting to any URL or the redirect is NOT SEARCH ENGINE FRIENDLY" Am I doing something wrong, or is there something else I should be trying? Thanks! Alice
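
    A hedged version of that rule with the dots escaped and the rewrite engine switched on explicitly; the IP is the question's placeholder, and www.example.com stands in for the real domain:

      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^12\.34\.567\.890$
      RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]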

    Read the article

  • WPF Alphabet (Available for download)

    - by mbcrump
    WPF Alphabet is an application that I created to help my child learn the alphabet. It displays each letter and pronounces it using speech synthesis. It was developed using WPF and C# in about 3 hours (so it's kinda rough). I went ahead and uploaded it to CodePlex for those in a similar situation or just wanting to see a particular WPF feature. I would also recommend Scott Hanselman's BabySmash!. Specific WPF features: DispatcherTimer (WPF) SpeechSynthesizer (WPF) URL Navigate (WPF), not Page XAML examples: DockPanel Border TextBlock HyperLink Buttons and events Download full source and binaries here.
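
    As a taste of the speech side, a minimal sketch of the .NET SpeechSynthesizer (System.Speech) the app relies on; the letter is hard-coded here purely for illustration:

      using System.Speech.Synthesis;

      var synth = new SpeechSynthesizer();
      synth.SetOutputToDefaultAudioDevice();
      synth.Speak("A");   // pronounce the displayed letter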

    Read the article

  • Best strategy for supporting multiple-server communication from an iPhone/Android app?

    - by tipycalFlow
    I'm making an app that will be used in multiple hospitals in the US. As per HIPAA compliance requirements, every hospital will have its own server that complies with the requirements for ensuring patient data security, etc. Now the task is that the app should communicate with a particular server based on the login info. An additional requirement is that new hospitals (servers) are likely to be added along the way, even after the app is available on the market. So basically, according to some login credentials, the app should communicate with the server of the hospital assigned to that person. One pretty crude way is to set up our own server which links the hospitals with the login info and accordingly provides a base URL for data exchange. Is there a more efficient way to handle this?
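
    A sketch of that "own directory server" idea in Python; the endpoint and JSON field names are made up for illustration, not a real API:

      import json
      import urllib.parse
      import urllib.request

      def base_url_for(username: str) -> str:
          """Ask a central directory service which hospital server handles this user."""
          url = ("https://directory.example.com/lookup?user="
                 + urllib.parse.quote(username))
          with urllib.request.urlopen(url) as resp:
              return json.load(resp)["base_url"]

      # Every later API call is then made against base_url_for(user), so adding
      # a new hospital only needs a new directory entry, not an app update.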

    Read the article

  • The first non-Latin domain names are working, with URLs in Arabic characters

    Update of 07.05.2010 by Katleen The first non-Latin domain names are working, with URLs in Arabic characters. A few hours ago, the first three non-Latin domain names were placed in the root zone of the DNS. They are now in service, and working perfectly. Here is an example of what you will see in your browser's URL field if you visit one of these sites: (image: http://blog.icann.org/wp-content/uploads/2010/05/idn-example-450px.png) These three new domains are السعودية. ("Al-Saudiah"), امارات. ("Emarat") and ...

    Read the article

  • Need to setup and access web disk for a hosting account

    - by mtk
    I am on Linux (Ubuntu 12.04) and have purchased hosting space. In cPanel, I selected Nautilus for accessing the web disk and was given this note: Note: In order for the Web Disk to work, you will need to allow port 2078 (SSL) or 2077 (non-SSL) on your computer's firewall. I am unable to connect: on entering the given URL in the Nautilus address bar, it says 'Connection closed', so I believe the requirement quoted above is not correctly configured. Please let me know how to configure this. How do I allow access on the given ports?
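
    On Ubuntu the stock firewall is ufw; a sketch of allowing those ports outbound (an assumption, since by default ufw does not block outgoing traffic, so the blockage may be elsewhere):

      sudo ufw allow out 2078/tcp   # Web Disk over SSL
      sudo ufw allow out 2077/tcp   # Web Disk without SSL
      sudo ufw status verbose       # confirm the rules and the default policies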

    Read the article

  • Editing a command-line argument to create a new variable

    - by user1883614
    I have a bash script called test.sh that uses a command-line argument: lynx -dump "$1" > "$name.txt" But I need name to be created from specific parts of the argument. An example: http://www.pcmag.com/article2/0,2817,2412941,00.asp http://www.pcmag.com/article2/0,2817,2412919,00.asp These are two separate articles, but the difference can only be seen in those last characters. How do I create a variable from the URL holding those characters, so that when I run test.sh in a terminal: ./test.sh http://www.pcmag.com/article2/0,2817,2412941,00.asp a text file is saved as 0,2817,2412941,00?
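
    One sketch of deriving the name inside test.sh: take the last path segment of the URL and drop the .asp extension, which leaves exactly the differing characters:

      #!/bin/bash
      name=$(basename "$1" .asp)    # e.g. 0,2817,2412941,00
      lynx -dump "$1" > "$name.txt"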

    Read the article

  • "find" command and piping its output through another program

    - by Charbel
    This is not an Ubuntu-specific question; it applies to all Unix/Linux systems. How can I run a command like this: find . -maxdepth 1 -type d -print -exec svn info "{}" | grep URL \; The command above doesn't do what I want; I can't seem to pipe the output of svn info to grep. This works, but the output contains much more than I need: find . -maxdepth 1 -type d -print -exec svn info "{}" \; Any ideas?
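
    The pipe is interpreted by the outer shell, not by -exec; a common fix is to run the pipeline inside a small shell for each directory:

      find . -maxdepth 1 -type d -print -exec sh -c 'svn info "$1" | grep URL' _ {} \;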

    Read the article

  • How to tell Google that I have changed my website URLs?

    - by Momen M El Zalabany
    I have made major updates to my website and renamed all my URLs. The problem is: how can I tell Google that I have renamed all the URLs, so that Google refreshes its index? I have uploaded the sitemap via Google Webmaster Tools many times. My website URL: http://www.pndmasr.com My sitemap: http://www.pndmasr.com/sitemap.xml But still, every time I search Google for "pndmasr" I get old page results; I have waited more than 3 days with the same problem. Any solutions? Is there a problem with my sitemap?

    Read the article

  • Disqus thread migration. Gotchas?

    - by sramsay
    I've been migrating a site to a new domain. The site itself is pretty straightforward (it uses Jekyll), and everything has gone fine -- except migration of Disqus threads. I've had partial success -- some of the threads have migrated successfully, but not all. I've tried the domain migration wizard (which caught a few), the URL mapper (which caught a few), and the 301 redirect crawler (which caught a few). But the remaining threads just won't move, no matter which method I use. So I suppose I'm asking if there are any "gotchas" I should know about with this. When you execute any of these migration tools, it says it will "take awhile." Does that mean hours? Days? I can't tell if it's working, and there's no logging or error reporting that I can see.

    Read the article

  • Extracting meta tag attributes using wget [migrated]

    - by Amit
    I have a file with some URLs, one per line. I need to extract the "keywords" present in each page's meta tags, i.e. if there is a meta tag for "keywords", I want to get its "content" value. Example: if the web page has the meta tag <meta name="keywords" content="wikipedia,encyclopedia">, then for that URL I want "wikipedia,encyclopedia" to be extracted. One approach is to download the web page using wget and then parse it using some standard HTML parser. I was wondering, is there any better way to do this without downloading the entire web page?
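
    A sketch of the parsing half in Python's standard library; it still has to fetch the page, but stops reading the response as soon as the head section has gone by, rather than pulling the whole body:

      from html.parser import HTMLParser
      import urllib.request

      class KeywordsParser(HTMLParser):
          """Collect the content attribute of <meta name="keywords" ...>."""
          def __init__(self):
              super().__init__()
              self.keywords = None
          def handle_starttag(self, tag, attrs):
              a = dict(attrs)
              if tag == "meta" and a.get("name", "").lower() == "keywords":
                  self.keywords = a.get("content")

      def keywords_for(url):
          head = b""
          with urllib.request.urlopen(url) as resp:
              while b"</head>" not in head:     # stop once the head has been read
                  chunk = resp.read(4096)
                  if not chunk:
                      break
                  head += chunk
          parser = KeywordsParser()
          parser.feed(head.decode("utf-8", errors="replace"))
          return parser.keywords

      print(keywords_for("http://en.wikipedia.org/wiki/Main_Page"))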

    Read the article

  • SAML Request / Response decoding.

    - by Shawn Cicoria
    When you're working with web SSO integration, it's sometimes helpful to be able to decode the tokens that get passed around via the browser by the various participants in the trust (RP, STS, etc.). With SAML tokens, sometimes they're simply base64 encoded when they're in the POST body; other times they're part of the query string, in which case they end up being deflated, base64 encoded, then URL encoded. I always end up putting together some simple tool that does this for me, so this is an effort to make it more permanent. It's a simple WinForms application built on .NET Framework 4.0. Download
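
    For the query-string case, decoding just reverses those steps; a sketch in Python (the tool itself is .NET, this only illustrates the transformation):

      import base64
      import urllib.parse
      import zlib

      def decode_saml_redirect(param: str) -> str:
          """URL-decode, base64-decode, then inflate a SAMLRequest/SAMLResponse value."""
          raw = base64.b64decode(urllib.parse.unquote(param))
          return zlib.decompress(raw, -15).decode("utf-8")   # -15 = raw DEFLATE stream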

    Read the article

  • Will duplicate international (i18n) content hinder SEO rankings?

    - by Rhys
    Google clearly states that duplicate content within a single domain, or across multiple domains, is not advised. This is understood, but I am not sure of any exceptions for sites with region-specific content that is often replicated across locales. For example, a site's /en-us/about page could be identical to /en-uk/about, whereas most likely /en-ja/about is unique. Are GYM smart enough to understand that the initial URL segment is a locale specifier? Is there any robots.txt or header, etc., trickery that I should include to outline the site's international structure?
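
    There is documented markup for exactly this situation: rel="alternate" hreflang annotations declaring a page's locale variants. A sketch using the question's paths (example.com stands in for the real domain; note that hreflang wants ISO codes such as en-gb even when the URL path says en-uk):

      <link rel="alternate" hreflang="en-us" href="http://example.com/en-us/about" />
      <link rel="alternate" hreflang="en-gb" href="http://example.com/en-uk/about" />
      <link rel="alternate" hreflang="ja"    href="http://example.com/en-ja/about" />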

    Read the article

  • JavaScript malware analysis

    - by begueradj
    I want to test websites for the presence of JavaScript malware. I plan to develop a Python program that sends the URL of a given website to a virtual machine, where the dynamic execution of any malicious JavaScript embedded in the website's page is monitored. My questions: Should my VM be Windows or Linux? What if the malware damages my VM: is there a hint how to avoid that? Or should I launch a new VM automatically instead? If I use a telnet client library to communicate with the VM: must I implement a server within the VM to deal with my queries, or can I get around this? I am just looking for hints and general ideas. Thank you for any help.
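
    On the "what if the malware damages my VM" point, snapshots are the usual answer; a sketch driving VirtualBox's VBoxManage CLI from Python (the VM and snapshot names are placeholders, and the in-VM agent is left as a stub):

      import subprocess

      VM, SNAPSHOT = "analysis-vm", "clean"   # placeholder names

      def analyze(url: str) -> None:
          # roll the VM back to a known-good state before every sample
          subprocess.run(["VBoxManage", "snapshot", VM, "restore", SNAPSHOT], check=True)
          subprocess.run(["VBoxManage", "startvm", VM, "--type", "headless"], check=True)
          # ... hand `url` to the monitoring agent inside the VM here ...
          subprocess.run(["VBoxManage", "controlvm", VM, "poweroff"], check=True)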

    Read the article

  • SEO Benefits of adding a Tumblr feed to site

    - by Paul
    A client of ours has a CMS-driven blog in his hotel site. He would like to use the blog to add depth to his site and gain SEO benefits relating to the blog's content. The current blog is a basic header/text field and doesn't contain any tagging/meta features. Unfortunately we don't have a .NET developer in our team to alter the existing blog and add meta/tagging, and there isn't budget to hire one, so I considered using a Tumblr blog: setting it up externally, giving it a blog.hotelname.com address, and feeding it into the existing page via Tumblr's JS, which basically does a document.write into the page, which we can style. I understand from a previous post (Poor CMS blog vs Tumblr embed) that as a general rule most search engines ignore JS-created content, but will the above approach act as an improvement on the existing system for now, as the blog will be set up externally with its own URL and also feed into the existing site? Cheers Paul

    Read the article

  • IIS isn't propagating domain

    - by ErocM
    I called GoDaddy and 'verified' that my settings for the two IPs were correct: ns1.asezo.com = xx.xx.xx.15 ns2.asezo.com = xx.xx.xx.16 Then I set the nameservers of asezo.com to the ns1/ns2 above, which GoDaddy tech support says is right. When I try to visit my site, it says "Oops! Google Chrome could not find asezo.com." When I try to ping the website, it times out. I have the bindings in IIS for the default website as: http - xx.xx.xx.15 - 80 www.asezo.com and http - xx.xx.xx.15 - 80 asezo.com And I'm still getting nothing. I can go directly to the IP http://xx.xx.xx.15/ and it gives me the IIS default website; I just can't get the domain name to resolve. What am I doing wrong?
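
    Since the IIS bindings look sane, a quick way to check whether this is DNS rather than IIS (run from any machine):

      nslookup asezo.com ns1.asezo.com   # ask the authoritative server directly
      nslookup asezo.com 8.8.8.8         # ask a public resolver; if only this one
                                         # fails, the delegation hasn't propagated yet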

    Read the article

  • Github Feed affecting my WordPress installation? [on hold]

    - by saul
    Any idea how this fork is affecting my site? I went to check my website log stats and realized this may be the cause of a strange redirect constantly happening on my WordPress installation. Here's a line I found in my log: 54.81.91.95 - - [07/May/2014:22:52:08 -0400] "GET /category/selfie/feed/ HTTP/1.1" 200 1826 "-" "feedzirra http://github.com/pauldix/feedzirra/tree/master" And this is the GitHub fork (or whatever these are called): https://github.com/feedjira/feedjira/tree/master Basically, I think every time I update my categories (selfie in this case), I get redirected to install.php, probably triggered by some GET request from that feed. To the best of my knowledge, this feed parser fetches all URLs with this structure, blocking them, kind of like a DDoS attack? Any ideas how to go about it?

    Read the article

  • Tracking logged in vs. non-logged in users in Google Analytics

    - by Justin
    I am building a social media site that is similar in structure to twitter and facebook.com, where unauthenticated users who go to https://mysite.com will see a login + sign-up page, and authenticated users who go to https://mysite.com will see their timeline. My question is: what is the best practice (using Google Analytics) for tracking these two different types of users, who are viewing completely different content but are visiting the same URL? I tried searching the Google Analytics docs but couldn't find what they suggest for this scenario. Perhaps I just don't know what keywords to search for. Thanks in advance for any help.
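
    One documented approach in Universal Analytics is a custom dimension that labels every hit with the user's state; a sketch (dimension1 must first be created in the GA admin UI, UA-XXXXX-Y is the usual property-ID placeholder, and userIsLoggedIn is your own flag):

      ga('create', 'UA-XXXXX-Y', 'auto');
      ga('set', 'dimension1', userIsLoggedIn ? 'logged-in' : 'anonymous');
      ga('send', 'pageview');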

    Read the article

  • How can I rewrite a subdomain to go to a specific file in a specific folder?

    - by FasterHorses
    I've searched for an answer and have tested a few solutions, but nothing has worked so far. I'm trying to get a URL like this: http://baseball.sports.com to rewrite to... http://pro.sports.com/baseball-index.php However, I still need to keep the domain the same (http://baseball.sports.com). The reason being, I have about 5 subdomains (baseball, football, soccer, etc.) that I want to run off the same code base (pro.sports.com). Everything is on the same server. I'd be happy to answer any other questions that would help me get a resolution. I truly appreciate any direction that can be given to me to solve this. Thanks! --Nick
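
    A mod_rewrite sketch of one way to express this for the index page, assuming all the sport subdomains share pro.sports.com's document root; the rewrite is internal ([L], no R flag), so the address bar keeps the subdomain:

      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^(baseball|football|soccer)\.sports\.com$ [NC]
      RewriteRule ^$ /%1-index.php [L]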

    Read the article

  • How to make Evolution mail work with my work email address?

    - by Fady
    This is the first time I've written here, and the first time I've used a mail client other than Outlook. I tried to add my enterprise email address to Evolution mail; I tried both server types, Exchange MAPI and Microsoft Exchange. With Exchange MAPI I get this error message: "Authentication failed. MapiLogonProvider: Failed to login into the server" With Microsoft Exchange I get this error: "Could not connect to server . Make sure the URL is correct and try again." Although I'm sure of all the settings: Server: IP address of the mail server Username: Domainname\Username Domain: domain name My system is Ubuntu Release 11.04 (natty), Kernel Linux 2.6.38-15-generic, GNOME 2.32.1, Evolution 2.32.2. Any kind of help is appreciated, and thanks in advance.

    Read the article

  • Upload image file: is compression on client side already possible?

    - by Chris
    When offering photo file uploading, the user will usually have badly compressed and huge (10+ megapixel) JPEG files from their camera or phone. On the server side, these files get re-compressed to something like 800x600px and JPEG quality 7 or 8. Is it (already) possible to do that re-compression on the client side, so that I would only need to transmit some 100kB (800x600px) and not 3MB or more? Something like: (1) With JavaScript's new FileSystem API ( http://slides.html5rocks.com/#filewriter ) it would be possible to read the photo file's data into client-side JS. (2) Then it would be necessary to re-encode the JPEG data, which is possible, but I could not find any library for that (yet). Does anybody know of such a library? (3) The last step would be to POST the re-compressed JPEG data to the server for storage, and get a URL to the stored photo file back from the server for inclusion in the client's HTML. I am looking for a jQuery plugin, other JS library, or example web page that does this.
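
    A canvas-based sketch that covers steps (1)-(3) without a separate JPEG encoder library, in browsers that support canvas.toDataURL with a JPEG quality argument:

      function recompress(file, maxW, maxH, quality, done) {
        var img = new Image();
        img.onload = function () {
          var scale = Math.min(maxW / img.width, maxH / img.height, 1);
          var canvas = document.createElement('canvas');
          canvas.width  = Math.round(img.width * scale);
          canvas.height = Math.round(img.height * scale);
          canvas.getContext('2d').drawImage(img, 0, 0, canvas.width, canvas.height);
          done(canvas.toDataURL('image/jpeg', quality)); // e.g. quality = 0.7
        };
        img.src = URL.createObjectURL(file);             // read the File directly
      }
      // The resulting data URL (or a Blob built from it) is then POSTed to the server.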

    Read the article
