Search Results

Search found 51988 results on 2080 pages for 'http headers'.


  • Better control on code updates

    - by yes123
    I will briefly explain my situation. I have a website in PHP, powered by a custom framework plus some plug-ins made ad hoc for it. I am the only developer. Until now I have just tested any changes locally, then uploaded the PHP files via FTP. I don't feel comfortable with this anymore. The code base has grown quite a lot, and I need some sort of system that helps keep track of changes (line by line) and can restore an old version easily if something goes wrong. Are there any good solutions for this? Note: I have never used anything like version control or Subversion because I think they are too much for this situation (I am the only developer and I just need basic features). Note 2: something with a nice web interface would be perfect; I can pay for a good service too. So far I have found:

        http://beanstalkapp.com/
        http://github.com/
        http://www.codespaces.com/
        http://codesion.com/
        https://bitbucket.org/
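    For reference, the hosted services above (GitHub, Bitbucket, Beanstalk) are front ends over version control systems such as Git, and the basic single-developer workflow is small. A minimal sketch with Git; the paths, file names, and commit messages here are placeholders, not from the question:

        cd /path/to/website          # the PHP project root
        git init                     # turn the directory into a repository
        git add .
        git commit -m "Initial import of framework and plug-ins"

        # after editing locally:
        git diff                     # review every change, line by line
        git commit -am "Describe the change"

        # restore a single file to its last committed state:
        git checkout -- plugins/example-plugin.php

        # undo a bad commit entirely:
        git log --oneline            # find the commit id
        git revert <commit-id>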

    Read the article

  • Installing Realtek rtl-8192ce on Ubuntu 9.4

    - by dutchman79
    I followed the steps below to install the rtl8192ce driver on my Ubuntu 9.4 system, but I still got errors, nothing installed, and I can't connect to the modem to get onto the Internet. Can someone please help me?

    Move the downloaded file to your home directory using your file manager or terminal:

        mv [destination of downloaded file] /home/[username]

    Move to the home directory and unpack the file with the following command, or right-click it and select "Extract here" (note that the parentheses in the file name must be quoted or escaped in the shell):

        cd /home/user
        tar xvjf "rtl_92ce_92se_92de_8723ae_88ee_linux_mac80211_0012.0207.2013(1).tar.bz2"

    Enter the directory we just extracted:

        cd "rtl_92ce_92se_92de_8723ae_88ee_linux_mac80211_0012.0207.2013(1)"

    Install the dependencies needed to compile the driver:

        sudo apt-get install gcc build-essential linux-headers-generic linux-headers-$(uname -r)

    Start the compilation and install:

        make
        sudo make install

    Load the module:

        sudo modprobe rtl8192ce

    If all went right, your system should be running the wireless driver.

    Read the article

  • How to prevent Google Analytics from adding a second slash between domain and page specific URL when viewing a page?

    - by Jeromy Anglim
    I have a blog at http://foo.tumblr.com. I sometimes go to Site Content - All Pages in Google Analytics, navigate to a page listing, and click the icon that takes me to that page on my blog. However, instead of opening http://foo.tumblr.com/post/1234/blah.html, Google Analytics opens http://foo.tumblr.com//post/1234/blah.html (i.e., it adds a second slash between the domain and the page-specific component of the URL). How can I stop Google Analytics from doing this?

    Read the article

  • Other link relationships and impact to the SEO

    - by haha
    Example of other link relationships:

        <head>
          <link rel='index' title='Main Title' href='http://domain.com/' />
          <link rel='start' title='Part Three' href='http://domain.com/part-3/' />
          <link rel='prev' title='Part Two' href='http://domain.com/part-2/' />
          <link rel='next' title='Part Four' href='http://domain.com/part-4/' />
        </head>

    Question: do these relationships have a big impact on getting my site a nice rank in search engines?

    Read the article

  • Switch to https

    - by Mike
    I'm looking to use an .htaccess file with mod_rewrite to switch the protocol from http:// to https:// when someone hits my website. For instance, once someone goes to http://www.mywebsite.com/, I'd like the browser to switch to https://www.mywebsite.com/. The same goes for http://mywebsite.com/ to https://mywebsite.com/. This is the code I've been using; I've experienced some odd things with it, so if anyone could tell me whether this is the right way to do it, or has a better way, please share it. Thanks in advance.

        RewriteEngine On
        RewriteCond %{SERVER_PORT} !=443
        RewriteRule ^(.*)$ https://www.ebaillv.com/$1 [R=301,L]
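    For comparison, a common variant of this redirect tests %{HTTPS} rather than the port. A sketch only, using the placeholder domain from the question (note that the rule above redirects to a different domain, ebaillv.com, than the examples mention, which may be worth checking):

        RewriteEngine On
        # send any plain-HTTP request to the same URL over HTTPS
        RewriteCond %{HTTPS} !=on
        RewriteRule ^(.*)$ https://www.mywebsite.com/$1 [R=301,L]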

    Read the article

  • That Tool is cURLy

    When you just use IE, Firefox or Chrome it can be easy to forget that HTTP is about more than just going to check the latest tech news at Engadget. It is a full and rich protocol, and a great way to experience that richness is the powerful command-line utility cURL. cURL has a lot of options, but the syntax starts out simple. You can retrieve the contents of a web page with a simple curl http://blogs.claritycon.com/. The result should be the full text of the web page, tags and all. From there, you can use -X to specify the HTTP verb (POST, PUT, DELETE, PATCH, etc.) and -d to specify the payload of a POST or PUT. I have found cURL to be incredibly useful for two scenarios. First, as a good way to test basic web services. Second, while working a bit with CouchDB and another document-based database, cURL has helped me learn more about RESTful APIs, including different verbs and response codes. cURL is a mainstay in our environments and programming languages precisely because it is simple, powerful and discoverable. I encourage more .NET developers to take a look, bask on the command line for a while and enjoy the plain text of the web.

    -- Relevant Links --

    It's not always the case with manuals, but the manual for cURL is quite useful: http://curl.haxx.se/docs/manual.html

    To make your command line look a little nicer (and more powerful) on Windows, check out Console and add some transparency effects: http://sourceforge.net/projects/console/
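    A few invocations in the spirit of the article; the CouchDB host, database and document id below are hypothetical:

        # fetch a page, tags and all
        curl http://blogs.claritycon.com/

        # create a document in a CouchDB-style REST API with POST and a payload
        curl -X POST -H "Content-Type: application/json" \
             -d '{"title":"hello"}' http://localhost:5984/mydb

        # other verbs work the same way
        curl -X DELETE "http://localhost:5984/mydb/some-doc?rev=1-abc123"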

    Read the article

  • Alias a Virtual Directory or Application as Root on IIS 7

    - by manyxcxi
    Our current IIS setup has two applications running on different paths, at (for example) http://server/sub-a and http://server/sub-b. I want to alias http://server/sub-a as root so that just going to http://server/ brings up the contents of sub-a. The problem I face is that when I initially set up a reverse proxy it negatively affected http://server/sub-b. I know this is a fairly common problem; how have you solved it? 99.9% of my experience is with Apache, so I feel a tad lost in the GUI world of IIS.
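    One possible direction, sketched under the assumption that the IIS URL Rewrite module is installed; the rule name and regex are illustrative, and sub-b is explicitly excluded so it keeps working:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="ServeSubAAtRoot" stopProcessing="true">
                <!-- rewrite everything except requests already aimed at sub-a or sub-b -->
                <match url="^(?!sub-a|sub-b)(.*)$" />
                <action type="Rewrite" url="sub-a/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>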

    Read the article

  • SearchServer2008Express Search Webservice

    - by Mike Koerner
    I was working on calling the Search Server 2008 Express search webservice from PowerShell. I kept getting:

        <ResponsePacket xmlns="urn:Microsoft.Search.Response"><Response domain=""><Status>ERROR_NO_RESPONSE</Status><DebugErrorMessage>The search request was unable to connect to the Search Service.</DebugErrorMessage></Response></ResponsePacket>

    I checked the user authorization, the webservice search status, even the WSDL. Turns out the URL for the SearchServer2008 search webservice was incorrect. I was calling

        $URI = "http://ss2008/_vti_bin/spsearch.asmx?WSDL"

    and it should have been

        $URI = "http://ss2008/_vti_bin/search.asmx?WSDL"

    Here is my sample PowerShell script:

        # WSS Documentation http://msdn.microsoft.com/en-us/library/bb862916.aspx
        $error.clear()
        # Bad SearchServer2008Express Search URL
        $URI = "http://ss2008/_vti_bin/spsearch.asmx?WSDL"
        # Good SearchServer2008Express Search URL
        $URI = "http://ss2008/_vti_bin/search.asmx?WSDL"
        $search = New-WebServiceProxy -uri $URI -namespace WSS -class Search -UseDefaultCredential

        $queryXml = "<QueryPacket Revision='1000'>
          <Query>
            <SupportedFormats>
              <Format revision='1'>urn:Microsoft.Search.Response.Document.Document</Format>
            </SupportedFormats>
            <Context>
              <QueryText language='en-US' type='MSSQLFT'>SELECT Title, Path, Description, Write, Rank, Size FROM Scope() WHERE CONTAINS('Microsoft')</QueryText>
              <!-- <QueryText language='en-US' type='TEXT'>Microsoft</QueryText> -->
            </Context>
          </Query>
        </QueryPacket>"

        $statusResponse = $search.Status()
        write-host '$statusResponse:' $statusResponse

        $GetPortalSearchInfo = $search.GetPortalSearchInfo()
        write-host '$GetPortalSearchInfo:' $GetPortalSearchInfo

        $queryResult = $search.Query($queryXml)
        write-host '$queryResult:' $queryResult

    Read the article

  • Serious problem with my sound system, no sound hardware detected suddenly. Please help

    - by Aravind
    I'm quite new to Ubuntu but recently started using it. Two days back I had a problem with muting the laptop speaker when the headphone jack is plugged in; to resolve this I searched around and somehow got it working with the ALSA mixer. FYI, this is the output of the ALSA script: http://www.alsa-project.org/db/?f=9ec8099800aca2cb74ee35c2bf58125e45ca9f43 But since this morning there is no sound and no sound hardware is detected. It looks like this: http://i.imgur.com/gnK8R.png http://i.imgur.com/nxgvU.png Please help. Regards, Aravind

    Read the article

  • New Gencode Sql to Linq

    Hi all, I have a code generator for C#. It generates 3-tier code and includes LINQ to SQL (MS SQL), plain MS SQL, and Access. You can use it and give ideas so I can make it better. Download links below:

        http://depositfiles.com/files/38hcd9xf8
    or
        http://www.easy-share.com/1910377507/Ge ... artent.rar

    An example and a guide are included; you can also download them from dofactory.com or the links below:

        http://www.easy-share.com/1910377763/Guide Do Patterns In Action 3.5.pdf
        http://depo...

    Thank you for using it!

    Read the article

  • How to compile the Asus Syntek webcam driver stk11xx

    - by berot3
    I have an Asus V1S, and some time ago I was able to easily compile and install this driver. But now compiling fails. I hope you know what I can do, if there is anything that can be done. After

        svn checkout https://syntekdriver.svn.sourceforge.net/svnroot/syntekdriver syntekdriver
        cd ./syntekdriver/trunk/driver
        make -f Makefile.standalone

    I got:

        make -C /lib/modules/2.6.36.2/build SUBDIRS=/home/berot3/syntekdriver/trunk/driver modules
        make[1]: Entering directory '/usr/src/linux-headers-2.6.38-8-generic'
          CC [M]  /home/berot3/syntekdriver/trunk/driver/stk11xx-usb.o
          CC [M]  /home/berot3/syntekdriver/trunk/driver/stk11xx-v4l.o
        /home/berot3/syntekdriver/trunk/driver/stk11xx-v4l.c:43:28: fatal error: linux/videodev.h: File not found!
        compilation terminated.
        make[2]: *** [/home/berot3/syntekdriver/trunk/driver/stk11xx-v4l.o] Error 1
        make[1]: *** [_module_/home/berot3/syntekdriver/trunk/driver] Error 2
        make[1]: Leaving directory '/usr/src/linux-headers-2.6.38-8-generic'
        make: *** [driver] Error 2

    The same happens with http://bookeldor-net.info/merdier/Makefile-syntekdriver and http://webcache.googleusercontent.com/search?q=cache:OM0zVNlYiVQJ:bookeldor-net.info/merdier/Makefile-syntekdriver+http://bookeldor-net.info/merdier/Makefile-syntekdriver&hl=de&strip=1

    Read the article

  • How to wipe RAM on shutdown (prevent Cold Boot Attacks)?

    - by proper
    My system is encrypted using full disk encryption, i.e. everything except /boot is encrypted using dm-crypt/LUKS. I am concerned about cold boot attacks. Prior work:

        https://tails.boum.org/contribute/design/memory_erasure/
        http://tails.boum.org/forum/Ram_Wipe_Script/
        http://dee.su/liberte-security
        http://forum.dee.su/topic/stand-alone-implementation-of-your-ram-wipe-scripts

    Can you please provide instructions on how to wipe the RAM once Ubuntu is shut down/restarted? Thanks for your efforts!
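    For illustration only: one building block the Tails and Liberté designs linked above rely on is an overwrite pass over free memory at shutdown. A rough sketch, assuming the secure-delete package (which provides sdmem) is available; hooking it into the actual shutdown sequence is distribution-specific and not shown here:

        sudo apt-get install secure-delete
        # overwrite free RAM; -l reduces the number of passes, -v prints progress
        sudo sdmem -v -l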

    Read the article

  • How to make RewriteCond+RewriteRule change domain2/folder1 to domain1/folder1

    - by gman
    There are actually two questions here. One is: how do I make RewriteCond+RewriteRule change domain2/folder1 to domain1/folder1? Actually, what I want is that any domain other than domain1 that tries to access folder1 gets switched to domain1. So for example domain2.com/domain1/foo -> domain1.com/domain1/foo, as well as domain3.com/domain1/foo -> domain1.com/domain1/foo. This is what I tried:

        RewriteCond %{HTTP_HOST} !^domain1\.com$ [NC]
        RewriteCond %{REQUEST_URI} ^/folder1/
        RewriteRule ^/folder1/(.*)$ http://domain1.com/folder1/$1 [L,R=permanent]

    But that doesn't work. Next I tried a simpler rule to see if I could narrow down the issue:

        RewriteCond ${HTTP_HOST} domain2\.com [NC]
        RewriteRule ^(.*)$ http://google.com/ [L]

    I thought that would make ANY request to domain2.com go to google.com, so I tried http://domain2.com/foo, but I get domain2.com/foo, not google.com. If I go to http://domain2.com I get Google. Why don't I get there if I go to http://domain2.com/foo? What am I not understanding about mod_rewrite?
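    For context, a sketch of how the first redirect is often written for per-directory (.htaccess) context, where the leading slash is stripped from the path before the pattern is matched; the domains are the question's placeholders:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^domain1\.com$ [NC]
        # no leading slash in the pattern in .htaccess context
        RewriteRule ^folder1/(.*)$ http://domain1.com/folder1/$1 [L,R=301]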

    Read the article

  • Updating Google sitemap for mobile

    - by dimo414
    I have a series of utilities to generate Google sitemaps for my whole site. These files are massive and slow to build. We want to start telling Google these pages are mobile-crawlable too, by adding them to mobile sitemaps, but the documentation is unclear on whether I need to specify physically different files for my mobile URLs than for my normal ones. If this is my current sitemap:

        <?xml version="1.0" encoding="UTF-8" ?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://mobile.example.com/article100.html</loc>
          </url>
        </urlset>

    can I simply change it to:

        <?xml version="1.0" encoding="UTF-8" ?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
                xmlns:mobile="http://www.google.com/schemas/sitemap-mobile/1.0">
          <url>
            <loc>http://mobile.example.com/article100.html</loc>
            <mobile:mobile/>
          </url>
        </urlset>

    Or do I need to create new files with the additional markup, alongside my existing files?

    Read the article

  • Trouble Updating to Version 13.10 (from Version 13.04)

    - by user206783
    While trying to update to version 13.10 (German) from version 13.04, I received the following problem messages (translated from German):

        W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/natty-backports/main/source/Sources  404 Not Found
        W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/natty-backports/restricted/source/Sources  404 Not Found
        W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/natty-backports/universe/source/Sources  404 Not Found
        W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/natty-backports/multiverse/source/Sources  404 Not Found
        E: Some index files could not be downloaded. They have been ignored, or old ones used instead.

    The update then rolls back. Does anyone have a solution?
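    As a side note, the failing URLs point at natty backports (Ubuntu 11.04), which suggests stale entries in the APT sources. A sketch of how one might locate them, assuming they live in the standard sources files:

        grep -rn "natty" /etc/apt/sources.list /etc/apt/sources.list.d/
        # comment out or delete any natty-backports lines found, then:
        sudo apt-get update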

    Read the article

  • .htaccess redirect to subfolder in different domain, maintaining old domain in the URL

    - by Naoise Golden
    Redirects have been widely discussed and most problems solved, so I am sorry for opening yet another post about this, but none of the codes I am trying work. I have a WordPress site hosted at http://mydomain.com/clientsdomain.com/wordpress. I would like to temporarily redirect http://clientsdomain.com/ to the above-mentioned URL, maintaining the clientsdomain.com domain in the URL. So for example http://clientsdomain.com/some/page would be pointing to http://mydomain.com/clientsdomain.com/wordpress/some/page. Is this even possible with .htaccess? Or maybe some configuration or plugin option within WordPress?
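    Keeping the original domain in the address bar means serving the content by proxy rather than by redirect. A sketch, assuming mod_proxy is enabled on the server answering for clientsdomain.com:

        RewriteEngine On
        # the [P] flag hands the request to mod_proxy instead of redirecting the browser
        RewriteRule ^(.*)$ http://mydomain.com/clientsdomain.com/wordpress/$1 [P,L]

    (WordPress would also need its site URL set to clientsdomain.com for internal links to match.)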

    Read the article

  • Ecommerce item deleted by user: 301 redirect to HOME PAGE or 404 Not Found?

    - by Marco Demaio
    I know this question is somewhat similar to this one, where they recommend using 404, but after reading this other one, where they suggest using 301 when changing site URLs (in that specific case due to a redesign/refactoring), I got a bit confused, and I hope someone can clarify this specific example:

    Let's say I have an ecommerce site, and the final user inserted some interesting items in the site; the ecommerce webapp created the item pages at the URLs http://...?id=20, http://...?id=30, etc. Now let's say some of these items attracted many external links from other sites, because people found them very interesting and linked to them. After some years the final user deletes those items, so obviously the pages/URLs http://...?id=20, http://...?id=30, etc. no longer exist, but many pages on the web still link to them. What should the ecommerce site do now, just show a 404 page for those items? But wouldn't this lose all the Google PR passed by the external links to the item pages? So isn't it better to 301-redirect to the HOME PAGE, which at least passes the PR there?

    EDIT: Well, according to the answers, the best thing to do so far is a 404/410. To make this question more complete, I would like to ask about a special case, just to make sure I understood properly. Let's say the user creates those items again (the ones he previously deleted above); maybe he changes their names and descriptions a bit, but they are basically the same items. The webapp has no way to know these newly added items are the old items, so it obviously creates them as new items with new URLs, http://...?id=100, http://...?id=101. Does it make sense at this point to 301-redirect the old URLs to the new ones?

    MORE EDIT (it would be VERY IMPORTANT TO UNDERSTAND): According to the clever answers received so far, it seems that in the special case explained in my last EDIT I could use 301, since it's nothing deceptive: the new page is basically a replacement for the old page in terms of content. This is done to keep the PR passed from external links, and also for a better user experience. But besides the user experience, which is debatable (*1), in order to preserve PR from external broken links, why not just always use 301? In my understanding Google dislikes duplicated content, but are we sure that a 301 redirect to the HOME PAGE is seen as duplicated content by Google? Google itself suggests 301-redirecting index.html to the document root, so if they considered 301 duplicated content, wouldn't that be duplicated content too? Why do they suggest it? Let me provoke you: why not just add a 301 to the HOME PAGE for every not-found page?

    (*1) As a user, when I follow a broken URL from some external link to a website's page, I would stick around more on that website if I got redirected to the HOME PAGE rather than seeing a 404 page, where I would think the website doesn't even exist anymore and might not even try its HOME PAGE.
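    On the mechanics discussed above: both options come down to the HTTP status code served for a removed item's URL. A sketch in .htaccess terms, with a hypothetical script name and the question's example ids, of serving 410 Gone for a deleted item and 301 for an item recreated under a new id:

        # 410 Gone for an item that was deleted outright ([G] implies [L])
        RewriteCond %{QUERY_STRING} ^id=20$
        RewriteRule ^product\.php$ - [G]

        # 301 for an item recreated under a new id (the ?id=100 substitution
        # replaces the original query string)
        RewriteCond %{QUERY_STRING} ^id=30$
        RewriteRule ^product\.php$ /product.php?id=100 [R=301,L]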

    Read the article

  • Is it a good idea to add robots "noindex" meta tags to deep low content pages, e.g. product model data

    - by Cognize
    I'm considering adding robots "noindex, follow" meta tags to the very numerous product data pages that are linked from the product style pages in our online store. For example, each product style has a page with full text content on the product:

        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE

    Then many data pages with technical data for each model code are linked from the product style page:

        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-1
        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-2
        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-3

    It is these technical data pages that I intend to add the noindex code to, as I imagine this might stop them cannibalizing keyword authority from more important, content-rich pages on the site. Any advice appreciated.
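    For reference, the tag in question, placed in the <head> of each technical data page:

        <meta name="robots" content="noindex, follow" />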

    Read the article

  • "apt-get --print-uris" to list URIs for downloading extra-functionality packages for Amarok/rekonq

    - by munir
    Hello. I do not have an internet connection at home, so I use "apt-get --print-uris" to list the download URIs, take the list to another PC with an internet connection, and use wget to download the packages. I am having difficulty listing the URIs that the extra-functionality packages for Amarok (like MP3 codecs) would be downloaded from. The KDE daemon shows me that these packages are needed for Amarok's extra functionality, but I don't know how to list their URIs. "apt-get --print-uris upgrade" does not list the URIs needed for Amarok/rekonq.
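    A sketch of the usual pattern for a specific package rather than a whole upgrade; the package name here is a placeholder for whatever the codec package is actually called:

        # print what an install would fetch, without installing anything;
        # each URI line starts with a single-quoted URL, so extract field 2
        apt-get --print-uris --yes install some-codec-package \
          | grep "^'" | cut -d"'" -f2 > uris.txt

        # on the machine with internet access:
        wget --input-file=uris.txt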

    Read the article
