Search Results

Search found 51988 results on 2080 pages for 'http headers'.

Page 450/2080 | < Previous Page | 446 447 448 449 450 451 452 453 454 455 456 457  | Next Page >

  • Read data from a folder in main domain folder (CPanel\WHM)

    - by Memphis Raines
    I have defined a host on my cPanel/WHM server and put all my websites under one hosting account. The main domain is domain.com, and all other websites are add-on domains: domain.com --folder --domain1 --domain2 --domain3 ... What I need is for the server to read files from another folder when domain.com is called in the browser. For example, a request to http://domain.com should serve the content of http://domain.com/folder, BUT I don't mean a redirection; I want the server to do this in the background without showing visitors the real path. I couldn't do this with Domain Wildcard Redirection because it produced an error. How can I do this? With .htaccess, or something else? A sketch follows below.
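    A minimal .htaccess sketch for this kind of internal rewrite, assuming mod_rewrite is enabled and the target folder really is named "folder" (adjust the host and folder names to your setup):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?domain\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/folder/
        RewriteRule ^(.*)$ /folder/$1 [L]

    Because the rewrite is internal (there is no R flag), visitors keep seeing http://domain.com in the address bar while the files are actually served from /folder.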

    Read the article

  • Is it possible to track redirects to external sites from our subdomains?

    - by ChaBuku
    I have a handful of subdomains set up as redirects because we are using them for QR codes. I want to be able to track the QR code redirects (which are already set up and printed, so there's no changing them at this point) and see the effectiveness of each. Here are two examples: http://qr.glorkianwarrior.com and http://ad.glorkianwarrior.com are set up to forward to our iTunes page (later this year they may forward to Google Play or a specific landing page). Is there any way on my server to track the redirect from the subdomain to iTunes and see where traffic is coming from first? The redirects are currently set up through cPanel as subdomains. Edit: From the research I've seen, I can't track a 301 directly. If I redirect to an internal page and then do a timed redirect to the iTunes link, how long will it take for the tracking script to record a hit?
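    One hedged option that avoids client-side tracking entirely: every QR scan still hits your server before the redirect fires, so those hits can be counted in a dedicated access log. A sketch for the subdomain's virtual host (the directives are standard Apache; the log path and iTunes URL are placeholders):

        <VirtualHost *:80>
            ServerName qr.glorkianwarrior.com
            CustomLog /var/log/apache2/qr-redirects.log combined
            Redirect 302 / https://itunes.apple.com/app/idXXXXXXXXX
        </VirtualHost>

    The combined log format records the referrer and User-Agent for each scan, so you can see where traffic comes from without adding a timed intermediate page.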

    Read the article

  • What is the SEO-recommended method for using underscores and dashes in URLs that contain geographic locations?

    - by ElHaix
    In reading through this article: In Subfolder & File Names, Use Dashes, Not Underscores. Good: http://www.domain.com/sub-folder/file-name.htm Bad: http://www.domain.com/sub_folder/file_name.htm In my URLs, I may have one or two city names, ending with the province/state: Burnaby_New_Westminister-BC/[some search term]. My URL rules are currently defined such that everything after the dash is the prov/state. Some geographic locations already contain dashes: Notre-Dame-de-Grâce (in QC), which I would convert to ~/Notre_Dame_de_Grace-QC/ I thought of placing the prov/state after another "/", however in some cases the province/state name may not exist, thus ~/Notre_Dame_de_Grace/, so the first term after the domain name contains the geo location {city, city_name-state}. I am now revisiting this, and wondering if this rule set should change, and if so, what is the recommended way of implementing it? -- UPDATE -- After reviewing this video, I see that I should be using dashes rather than underscores. However, since I still want to have my geo locations in the first URL section, is there anything wrong with using a double-dash separator, i.e. /city-name--state/ ? A rewrite sketch follows below.
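    For what it's worth, a double-dash separator is straightforward to parse server-side. A hedged .htaccess sketch (the search.php handler and parameter names are made up for illustration):

        RewriteEngine On
        # /notre-dame-de-grace--qc/some-search-term -> hypothetical search handler
        RewriteRule ^([a-z0-9-]+?)--([a-z]{2})/(.*)$ /search.php?city=$1&prov=$2&term=$3 [L,QSA,NC]
        # Fallback when no province/state segment is present
        RewriteRule ^([a-z0-9-]+)/(.*)$ /search.php?city=$1&term=$2 [L,QSA,NC]

    The non-greedy first group stops at the double dash, so dashes inside the city name are preserved while the two-letter prov/state code is still split out.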

    Read the article

  • URL slugs: ideal length, and the real SEO effects of these slugs

    - by tattvamasi
    This question is addressed widely on SO and outside it, but for some reason, instead of taking it as a good load of great advice, all this information is confusing me.
    ** Problem ** I already had "prettified" URLs on one of my sites. I had taken out the query strings and rewritten the URLs, and the links were short enough for me, but there was a problem: the ID of the item or post in the URL isn't good for users. One of the users asked if there's a way to get rid of the numbers, and I thought it was better for users to see a clue of the page content in the URL.
    ** Solution ** With this in mind, I am trying it on a section of the site. Armed with 301 redirects, some parsing work, and a lot of patience, I have added URL slugs to some blog entries, and the slug of the URL reports the title of the article (something close to http://example.com/my-news/terribly-boring-and-long-url-that-replaces-the-number-I-liked-so-much/).
    ** Problems after Solution ** The problem, as I see it, is that the URL of those blog articles is now certainly very descriptive, but it is also impossible to remember. So this brings me back to the same issue I had before: if numbers say nothing and can't be remembered, what's the use of these slugs? I prefer to see http://example.com/my-news/1/ rather than http://example.com/my-news/terribly-boring-and-long-url-that-replaces-the-number-I-liked-so-much/. To avoid forcing my users to memorize my URLs, I have added a script that finds the closest match to the URL you type and redirects there. This is something I like, because the page now acts as a sort of little search engine, and users can play with the URLs to find articles.
    ** Open questions ** I still have some open questions, and don't seem to be able to find an answer, because answers tend to contradict one another. 1) How many characters should a URL ideally be? I've read the magic number 115 and am sticking to that, but am not sure. 2) Is this really good for SEO? One of those blog articles I have redirected, with the ID number in the URL and all, ranked second on Google. I've just found this question, and the answer seems to be consistent with what I think: URL slug and SEO - structure (but see this other question with the opposite opinion). 3) To ask with a specific example: would this URL risk being penalized? Is it acceptable? Is it too long? StackOverflow seems to have comparably long URLs, but I'm not sure it's a winning strategy in my case. I just wanted to help my users without running into Google's algorithms.
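    For reference, slug generation itself is the small part of this; a minimal JavaScript sketch (the function name and the 80-character cap are assumptions, not anything from the question):

        // Turn an article title into a URL slug: strip accents, lowercase,
        // collapse everything non-alphanumeric into dashes, cap the length.
        function slugify(title, maxLength = 80) {
          return title
            .normalize("NFKD")                     // decompose accented characters
            .replace(/[\u0300-\u036f]/g, "")       // drop the combining marks: "Grâce" -> "Grace"
            .toLowerCase()
            .replace(/[^a-z0-9]+/g, "-")           // collapse separators into single dashes
            .replace(/^-+|-+$/g, "")               // trim leading/trailing dashes
            .slice(0, maxLength);
        }
        // slugify("Notre-Dame-de-Grâce") -> "notre-dame-de-grace"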

    Read the article

  • Footer not showing in website depending on which item is loaded [on hold]

    - by samyb8
    I designed a website that is having an issue; I have checked the HTML tagging carefully and cannot fix it. If you go to this item: http://www.tahara.es/store/headbands/11/Ivory-Turquoise-headband you will see the footer display normally. However, if you go to this other item: http://www.tahara.es/store/headscarves/15/Grey-and-ivory-with-stoned-flower-Headscarf the footer does not show. Any clue what I am missing or adding? The footer DIV starts like this: <div id="footer">
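    When a footer disappears on only some pages, the usual suspect is an unclosed or stray tag in the page-specific markup above it. A quick, hedged way to check is to run each page through HTML Tidy and compare the warnings (curl and tidy are assumed to be installed):

        curl -s "http://www.tahara.es/store/headscarves/15/Grey-and-ivory-with-stoned-flower-Headscarf" | tidy -errors -quiet
        curl -s "http://www.tahara.es/store/headbands/11/Ivory-Turquoise-headband" | tidy -errors -quiet

    Any "missing </div>" or "discarding unexpected" warning that appears only for the broken page points at the tag swallowing the footer.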

    Read the article

  • How to pass information across domains to ask for newsletter only once?

    - by Michal Stefanow
    Let's assume the following scenario. I have two sites: example1.com example2.com When a user visits 1 there is a prompt "please sign up to the newsletter". The same thing happens when the user visits 2. However, when navigating from 1 to 2, I don't want the signup form to be shown. My first thought was 3rd-party cookies, but it seems that they are blocked / not working: http://stackoverflow.com/questions/4701922/how-does-facebook-set-cross-domain-cookies-for-iframes-on-canvas-pages?rq=1 http://stackoverflow.com/questions/172223/how-do-i-set-cookies-from-outside-domains-inside-iframes-in-safari?rq=1 Another thought is to append #noshow to each URL, but that would require some work - for instance a script that would intercept click / tap events and modify the URL structure depending on the address (but that seems hacky). I wonder if you know a robust, well-established solution to this issue? Thanks
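    A minimal JavaScript sketch of the second idea (a flag carried on the link plus localStorage so it survives later visits); the parameter name, storage key, and showNewsletterSignup() are made up for illustration:

        // On every page: remember the flag if we arrived with it, then decide.
        const params = new URLSearchParams(window.location.search);
        if (params.has("nl_seen")) {
          localStorage.setItem("newsletterPromptSeen", "1");   // hypothetical key
        }
        if (localStorage.getItem("newsletterPromptSeen") !== "1") {
          showNewsletterSignup();   // whatever currently renders the prompt on your site
          localStorage.setItem("newsletterPromptSeen", "1");
        }

        // Decorate outbound links to the sister site so the flag travels with the user.
        document.querySelectorAll('a[href*="example2.com"]').forEach(a => {
          const url = new URL(a.href);
          url.searchParams.set("nl_seen", "1");
          a.href = url.toString();
        });

    This avoids third-party cookies entirely; the trade-off is that the flag only propagates when the user actually follows a decorated link from one site to the other.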

    Read the article

  • Directory access control with Apache: do I need to use a specific .htaccess?

    - by Mirror51
    I have an Apache web server, and in the Apache configuration I have:
        Alias /backups "/backups"
        <Directory "/backups">
            AllowOverride None
            Options Indexes
            Order allow,deny
            Allow from all
        </Directory>
    I can access files via http://127.0.0.1/backups. The problem is that everyone can access that. I have a web interface, e.g. http://localhost/adminm, that is protected with .htaccess and a password. Now I don't want a separate .htaccess and .htpasswd for /backups, and I don't want a second password prompt when a user clicks on /backups in the web interface. Is there any way to use the same .htaccess and .htpasswd for the backups directory?
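    You don't need a second .htaccess at all: the same auth directives can live directly in the <Directory> block, pointing at the existing .htpasswd. A hedged sketch (the .htpasswd path and realm name below are assumptions and must match whatever the admin interface already uses):

        <Directory "/backups">
            AllowOverride None
            Options Indexes
            AuthType Basic
            AuthName "Admin Area"
            AuthUserFile /path/to/existing/.htpasswd
            Require valid-user
        </Directory>

    Browsers cache Basic-auth credentials per realm, so as long as AuthName and AuthUserFile match the admin area's settings, a user who is already logged in there should not be prompted again for /backups.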

    Read the article

  • Making a public webcam: which protocol and which codec? (Using VLC)

    - by gsedej
    Hi! I want to use my old (1 GHz) PC as a webcam video stream server (like those road cameras you can watch online). I thought of using VLC and already tried HTTP output, but it was not really good: too CPU hungry, too big a stream (kBps), not stable... I have been reading the VLC how-tos, but there is still a question. Which output should I use - HTTP, RTSP, UDP? I want to stream to more than one computer at the same time (multicast). Which codec would be good? The PC is not fast, so it shouldn't be a CPU-hungry codec. MPEG-2, MPEG-4, Xvid? How much video buffer should I use (vb=?)? What about setting the IP and ports? So I need some help with ideas, but if someone can put together a VLC command line, that's even better :) The computer has a direct internet connection and its own IP.
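    A hedged starting point, assuming a Linux v4l2 capture device and modest MPEG-4 settings to keep CPU use down (the device path, port, resolution, and bitrate are values to tune, not recommendations):

        vlc -I dummy v4l2:///dev/video0 \
          --sout '#transcode{vcodec=mp4v,vb=512,fps=15,width=640,height=480}:std{access=http,mux=ts,dst=:8080/webcam.ts}'

    HTTP output lets several clients connect to the same stream; for true multicast you would swap the std block for access=udp with a multicast destination such as dst=239.255.12.42:1234, at the cost of needing multicast-capable routing between you and the viewers.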

    Read the article

  • Some post-VS2010 Launch Resources

    Here are some useful links related to the Vermont .NET VS2010 launch meeting on Monday night, which drew our record-breaking attendance! :) MSDN Visual Studio Developer Center: msdn.microsoft.com/vstudio - VS2010 comparison of the various SKUs: http://www.microsoft.com/visualstudio/en-us/products - VS2010 trial downloads: http://www.microsoft.com/visualstudio/en-us/download - From MicrosoftFeed.com, VS2010 wallpapers for the hardcore: 10+ Beautiful Microsoft Visual Studio 2010 Wallpapers

    Read the article

  • SharePoint Search Problem: The start address sps3://server cannot be crawled.

    - by Clara Oscura
    With this post, I'm going to start a series on problems I have encountered with SharePoint search. Error: The start address sps3://luapp105 cannot be crawled. Context: Application 'Search_Service_Application', Catalog 'Portal_Content' Details:  Access is denied. Verify that either the Default Content Access Account has access to this repository, or add a crawl rule to crawl this repository. If the repository being crawled is a SharePoint repository, verify that the account you are using has "Full Read" permissions on the SharePoint Web Application being crawled.   (0x80041205) (Event ID: 14, Task Category: Gatherer) Solution: give appropriate permissions to User Profile Synchronisation Service http://social.technet.microsoft.com/Forums/en-US/sharepoint2010setup/thread/64cdf879-f01e-4595-bc52-15975fefd18d http://www.dotnetmafia.com/blogs/dotnettipoftheday/archive/2010/03/29/how-to-set-up-people-search-in-sharepoint-2010.aspx

    Read the article

  • How can C/C++ compile times be improved? Apple proposes a module system to replace headers

    One of the most frequently criticized problems of the C and C++ languages is compile time, which tends to be rather long. This is mostly due to the use of headers. Apple's developers have just published a rather interesting document that introduces a module system for C and C++ in order to improve compile times. As an example, Apple cites the tiny "Hello world" source code in C: only four lines of code. Yet the header file required to compile it is 173 times larger than the application itself...

    Read the article

  • BleachBit: How to Completely Clear URL History in Firefox?

    - by tSquirrel
    14.04 / Firefox 29.0. I've been using BleachBit to clear usage/file history, and for the most part it works great. However, it doesn't seem to clear the website hostnames out of the URL bar at all. These addresses are not bookmarked. Also, the full URL isn't preserved, just the hostname. Steps: (1) visit http://www.bluesnews.com/some_random_URL_string; (2) exit Firefox; (3) run BleachBit with ALL Firefox options selected; (4) restart Firefox; (5) check history: completely empty, other than bookmarked sites (www.bluesnews is NOT bookmarked); (6) type "blue", which Firefox automatically completes as "http://www.bluesnews.com/". Alternate step #3: use Firefox's built-in "Clear History" and select ALL entries with a time frame of "Everything" - same result as above. My inquiry in the BleachBit forums hasn't been answered. I found Dan's proposed solution, however changing autocomplete in about:config only turns the function off; it doesn't actually stop storing URLs.
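    For what it's worth, Firefox keeps address-bar history in places.sqlite, so a hedged way to see whether anything is actually left behind after a BleachBit run is to inspect that database with sqlite3 while Firefox is closed (the profile path varies per install):

        sqlite3 ~/.mozilla/firefox/*.default/places.sqlite \
          "SELECT url FROM moz_places ORDER BY last_visit_date DESC LIMIT 20;"

    If the hostnames still show up there, the cleaner isn't touching that table; if they don't, the completions are coming from somewhere else (e.g. bookmarks or a session store).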

    Read the article

  • Windows 8 Location Services

    - by ryanabr
    I spent the afternoon with the Geolocator object in WinRT on the Windows 8 platform. I have also been doing Windows Phone 7 development, and first had to wrap my head around the fact that, while similar, it is not the same as the GeoCoordinateWatcher in that environment. I found a nice example here: http://code.msdn.microsoft.com/windowsapps/Geolocation-2483de66 But the behavior of my app wasn't the same. Even after ensuring that location services were enabled by following these instructions: http://msdn.microsoft.com/en-us/library/windows/desktop/hh768219.aspx Location Services was still disabled. From everything I read, it sounded like the first time you try to use the Geolocator object, the user would be prompted to allow "Access to your location". After nosing around I found the issue: you need to add the location service as a Capability in the Package.appxmanifest file. After checking the box, I was prompted to allow access to location services, as expected, the first time I needed to use it.
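    If you edit Package.appxmanifest as XML rather than through the designer, the capability the post refers to looks like this (a minimal sketch; the rest of the manifest is omitted):

        <Capabilities>
          <DeviceCapability Name="location" />
        </Capabilities>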

    Read the article

  • Help with tracking a subdomain

    - by roobus
    I currently have my app's marketing/external website at the root level, e.g. http://example.com My web app itself is hosted at http://app.example.com What's the best strategy to set up Google Analytics tracking for both of them? Should I create a separate web property? Also, what's the difference between creating a new web property and a new profile? UPDATE: I want to be able to track conversion from a page on the root domain to a sign-up page on the app subdomain.
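    With the classic ga.js tracker of that era, the usual approach was a single web property shared across subdomains by widening the cookie domain. A hedged sketch (the UA number is a placeholder):

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXXXX-1']);    // placeholder property ID
        _gaq.push(['_setDomainName', 'example.com']);   // share the cookie across example.com and app.example.com
        _gaq.push(['_trackPageview']);

    With the same property on both sites, a goal on the sign-up page can count conversions that started on the marketing site. A profile is just a filtered view of one property's data, whereas a new property is a separate tracking ID with its own data.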

    Read the article

  • How to reduce the time it takes to load my web game? [closed]

    - by Danial
    I created a puzzle game with Unity and uploaded it to one server. This works fine, but I bought a new server and uploaded my game to it as well. There, the loading time is much longer. These are the servers: http://pinheadsinteractive.com/Mozzie/ (fast) http://operation-mozzie-free.com/ (slow) The Unity files are exactly the same from one server to the next. My client is dissatisfied with the new, slow loading time. So, how can I reduce the time my Unity game takes to load? In some cases users could not even load the game at all. For the moment, I'm using an iframe on the new server as a workaround, but the issue remains unsolved.
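    Since the files are identical on both hosts, a first hedged step is to rule Unity out and compare the raw download speed of the .unity3d data file from each server with curl (the file name below is a guess; use whatever your page actually loads):

        curl -o /dev/null -w 'time: %{time_total}s  speed: %{speed_download} B/s\n' \
          http://operation-mozzie-free.com/game.unity3d
        curl -o /dev/null -w 'time: %{time_total}s  speed: %{speed_download} B/s\n' \
          http://pinheadsinteractive.com/Mozzie/game.unity3d

    If the slow host transfers the file at a fraction of the speed, the problem is bandwidth or server configuration on that host rather than anything in the Unity build itself.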

    Read the article

  • How to create a JMS durable subscriber in WebLogic Server?

    - by lmestre
    WebLogic Server provides a set of examples that are very helpful for getting started with WebLogic Server. Here you can check how to install the examples: http://docs.oracle.com/cd/E23943_01/doc.1111/e14142/prepare.htm After you have installed the examples, you can find the example you want to review, in this case TopicReceive, here: wlserver_10.3/samples/server/examples/src/examples/jms/topic To review details of the specific example, you can open: wlserver_10.3/samples/server/examples/src/examples/jms/topic/instructions.html To create a durable subscriber, you can just set the client ID and invoke createDurableSubscriber instead of calling createSubscriber, i.e.:
        tconFactory = (TopicConnectionFactory)
            PortableRemoteObject.narrow(ctx.lookup(JMS_FACTORY),
                                        TopicConnectionFactory.class);
        tcon = tconFactory.createTopicConnection();
        // Set the client ID for this durable subscriber
        tcon.setClientID("GT2");
        tsession = tcon.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
        topic = (Topic)
            PortableRemoteObject.narrow(ctx.lookup(topicName),
                                        Topic.class);
        // Create the durable subscription
        tsubscriber = tsession.createDurableSubscriber(topic, "Test");
        tsubscriber.setMessageListener(this);
        tcon.start();
    Enjoy! You can read more about this here:
        http://docs.oracle.com/cd/E23943_01/web.1111/e13727/advpubsub.htm#CHDEBABC
        http://docs.oracle.com/cd/E23943_01/web.1111/e13727/manage_apps.htm#i1097671
        http://docs.oracle.com/cd/E23943_01/apirefs.1111/e13943/WebLogic.Messaging.ISession.CreateDurableSubscriber_overload_2.html

    Read the article

  • Alternatives to OAuth?

    - by sdolgy
    The web industry is shifting / has shifted towards using OAuth when extending API services to external consumers and developers. There is some elegance in simplicity... and well, the 3-step OAuth process isn't too bad... I just find it the best of a bad bunch of options. Are there alternatives out there that could be better and more secure? The security concern is derived from the following URLs: http://www.infoq.com/news/2010/09/oauth2-bad-for-web http://hueniverse.com/2010/09/oauth-2-0-without-signatures-is-bad-for-the-web/

    Read the article

  • How do I install drivers for a Konica Minolta 200?

    - by th3pr0ph3t
    This copy machine / scanner / network printer works with Windows, but no drivers are available for Linux. When Ubuntu supports a printer it works fine, but this one is not supported. I found the drivers at: http://onyxftp.mykonicaminolta.com/download/SearchResults.aspx?productname=bizhub%20200 but I don't know how to install them, nor which one to download. How can I install this driver? EDIT: The file with the driver is here: http://onyxftp.mykonicaminolta.com/DownloadFile/Download.ashx?fileid=18571&productid=865 Inside the archive there is a .deb package that installs correctly but doesn't work. So the question now is: "How can I make it work?"
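    A hedged sequence to try once the .deb is installed: register the printer with CUPS explicitly, pointing at the PPD the package dropped. The package file name, PPD path, queue name, and printer address below are assumptions to adjust:

        sudo dpkg -i KonicaMinolta-bizhub200.deb                      # hypothetical package file name
        find /usr/share/ppd /usr/share/cups -iname '*bizhub*200*'     # locate the PPD the package installed
        sudo lpadmin -p bizhub200 -E \
          -v socket://192.168.1.50:9100 \
          -P /usr/share/ppd/KONICA_MINOLTA/bizhub200.ppd
        sudo service cups restart

    If the bizhub only speaks LPD rather than raw port 9100, the device URI would instead look like lpd://192.168.1.50/print.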

    Read the article

  • Help on PHP CURL script [closed]

    - by Sumeet Jain
    This script uses a cookie.txt in the same folder, chmoded to 777. The problem I am facing is that I have many accounts to log in to. If I have 5 accounts and create cookie1.txt, cookie2.txt and so on, the script works with the post data, but I want it to stay logged in permanently and keep posting data. Can anyone tell me how to do this? The code that works for login and posting data is at http://pastebin.com/zn3gfdF2 The code I need should be something like this (I tried using the same cookie.txt, but I guess it expires): http://pastebin.com/45bRENLN Please help me with handling the cookies, or suggest how to modify the code without using cookie files.

    Read the article

  • Bug? Flash of white when changing orientation on iOS Safari [migrated]

    - by Baumr
    What causes the flash of white to the right of a responsive design when changing orientation from portrait to landscape on iOS? Try it in Safari on iOS 6: websites like this don't do it: http://html5boilerplate.com But this one does: http://www.initializr.com Something to do with re-processing (CPU lag) to fit a wider screen? It doesn't happen in Chrome for iOS 6... Update: I just removed all img elements from my testing site, but it still happens. This seems to happen with a lot of different websites out there. Is it a bug in their code, or a Safari for iOS bug? Others are completely immune to it...
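    One commonly suggested mitigation - not a confirmed fix for this particular behaviour - is to give the root element a background colour matching the design, so that any area Safari exposes during the rotation repaint isn't white. A minimal sketch (the colour is a placeholder):

        html {
            background-color: #2b2b2b;  /* match the page's own background */
        }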

    Read the article

  • Google indexing pages very slowly [duplicate]

    - by Clark
    Is there anything I can do to speed up the time it takes for Google to index my pages? I believe it is currently indexing them on its own schedule, which is every 2-3 days, and when working in music and media I need the latest posts indexed fairly quickly. My robots.txt file is:
        User-agent: *
        Disallow: /wp-admin/
        Disallow: /wp-content/
        Disallow: /wp-includes/
        sitemap: http://vipes.us/sitemapindex.xml
    If I am understanding this correctly, I would submit this URL to Google: http://vipes.us/sitemapindex.xml. But in doing so I still only get some of my pages indexed?
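    Besides submitting the sitemap in Webmaster Tools, you can ping Google whenever a new post is published. A hedged one-liner using the ping endpoint Google documented at the time (the sitemap URL is passed URL-encoded):

        curl "http://www.google.com/webmasters/tools/ping?sitemap=http%3A%2F%2Fvipes.us%2Fsitemapindex.xml"

    Pinging nudges the crawler to re-fetch the sitemap sooner, but it doesn't guarantee that every listed page will be indexed.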

    Read the article

  • Update Manager Not working

    - by Deena
    Hi, when I press the Check button in Update Manager, it does not look for available updates and I get the error below: "Failed to download repository information - Check your Internet connection."
        W:GPG error: http://archive.canonical.com oneiric Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]>
        W:GPG error: http://archive.canonical.com lucid Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]>
        W:Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric_main_source_Sources Hash Sum mismatch
        E:Some index files failed to download. They have been ignored, or old ones used instead.
    Ubuntu version: 11.10
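    BADSIG and "Hash Sum mismatch" errors are usually caused by corrupted package lists rather than the signing keys themselves. A commonly used fix is to clear the apt lists and refresh them (safe to run, though it re-downloads all repository indexes):

        sudo apt-get clean
        sudo rm -rf /var/lib/apt/lists/*
        sudo mkdir -p /var/lib/apt/lists/partial
        sudo apt-get update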

    Read the article

  • Cannot make NVIDIA driver work with Ubuntu 12.10

    - by user1293231
    I seem to have a problem similar to many others, but I didn't manage to get it solved: I have a Lenovo N581 with an NVIDIA GeForce 610M, have just installed a fresh 64-bit Ubuntu 12.10 plus KDE, and am trying to get my NVIDIA card working. I have tried all the workarounds posted: purge nvidia, install the kernel source/headers and then reinstall nvidia-current-updates (or just nvidia-current), run "sudo nvidia-xconfig". It does create an xorg.conf but doesn't do much (no Module section, by the way). The result is that my system (jockey) tells me that the driver is there but not in use, and I only get a 640x480 resolution. If I try to launch nvidia-settings, it does indeed tell me that the NVIDIA driver is not in use. I do all this under KDE, but I guess it doesn't matter at this stage. Any hint on how to resolve this? I feel stuck and cannot use any of the acceleration, which is partly why I got this laptop in the first place... thanks for any help/advice you may provide!
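    For reference, the purge-and-reinstall cycle described above, as commands (note that on a laptop the GeForce 610M is usually an Optimus hybrid chip, so if the plain driver never activates, the bumblebee packages may be what is actually needed):

        sudo apt-get purge 'nvidia*'
        sudo apt-get install linux-headers-$(uname -r) nvidia-current-updates
        sudo nvidia-xconfig
        sudo reboot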

    Read the article

  • APress Deal of the Day 23/Aug/2014 - Pro Windows 8 Development with HTML5 and JavaScript

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/08/23/apress-deal-of-the-day-23aug2014---pro-windows-8.aspx Today's $10 Deal of the Day from Apress at http://www.apress.com/9781430244011 is Pro Windows 8 Development with HTML5 and JavaScript. "Apps are at the heart of Windows 8, bringing rich and engaging experiences to both tablet and desktop users. Windows 8 uses the Windows Runtime (WinRT), a complete reimagining of Windows development that supports multiple programming languages and is built on HTML5, CSS and JavaScript. These applications are the future of Windows development, and JavaScript is the perfect language to take advantage of this exciting and flexible environment."

    Read the article

  • APress Deal of the Day - 19/Nov/2011 - Beginning GIMP

    - by TATWORTH
    Today's $10 Deal of the Day from Apress at http://www.apress.com/9781430210702 is "Beginning GIMP". "In this fully-updated second edition, author and long-time member of the GIMP community Akkana Peck introduces the GIMP and shows you everything about it that you'll want to know, including how to prepare images for display on web pages, touch up digital photos, tap into powerful filters, effects, and plug-ins, and automate tasks using scripts." For those of you unfamiliar with GIMP, it is the GNU Image Manipulation Program and it is available for free from http://www.gimp.org/downloads/

    Read the article
