Search Results

Search found 29495 results on 1180 pages for 'cross site scripting'.

Page 1034/1180

  • Empty $upstream_http_location variable if response was cached

    - by Ivaldi
    I would like to cache the response of a redirect: cache the request to a site that returns a redirect, and also cache the second request that returns the actual content. So far my config looks like this:

        location = /proxy {
            error_page 301 302 307 = @redir;
            resolver 8.8.8.8;
            proxy_pass $arg_url;
            proxy_intercept_errors on;
            proxy_cache pcache;
            proxy_cache_key $arg_url;
            proxy_cache_valid 200 301 302 307 1d;
            proxy_cache_min_uses 1;
            proxy_ignore_client_abort on;
            proxy_ignore_headers Set-Cookie Expires Cache-Control;
        }

        location @redir {
            resolver 8.8.8.8;
            # we need to assign $upstream_http_location to another var in order to use it with proxy_pass
            set $target $upstream_http_location;
            proxy_pass $target;
            proxy_cache predirects;
            proxy_cache_key $upstream_http_location;
            proxy_cache_valid 200 301 302 307 1d;
            proxy_cache_min_uses 1;
            proxy_ignore_headers Set-Cookie Expires Cache-Control;
        }

    It works for the first request, or without the 30x codes in proxy_cache_valid in the /proxy block, but $target and $upstream_http_location are empty if the response was served from the cache. Is there a clean solution for caching both requests? Thanks!
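
    A hedged workaround, following the asker's own observation that things work when the 30x codes are left out of proxy_cache_valid: cache only the final 200 in /proxy, so the redirect is re-fetched (and $upstream_http_location populated) on every request, while the content fetched in @redir stays cached. A minimal sketch:

        location = /proxy {
            error_page 301 302 307 = @redir;
            resolver 8.8.8.8;
            proxy_pass $arg_url;
            proxy_intercept_errors on;
            proxy_cache pcache;
            proxy_cache_key $arg_url;
            # cache only the final content here; let every 30x pass through,
            # so the Location header (and $upstream_http_location) is always live
            proxy_cache_valid 200 1d;
            proxy_ignore_headers Set-Cookie Expires Cache-Control;
        }

    The redirect itself then costs one extra upstream round-trip per request, but the (presumably larger) content behind it is still served from the cache.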


  • Unable to install drivers for any device on Windows 8.1

    - by saadj55
    Windows 8.1 is unable to install drivers. First, I wanted to install Android ADB drivers for my Chinese Android phone. I downloaded the relevant drivers from the manufacturer's site, enabled USB debugging on the phone, and tried to install them, but Windows failed to find any drivers in the driver folder. After searching on Google, I learned that I have to edit the win_usb.inf file in order to install the drivers, so I did the editing part as well, adding these lines:

        %SingleAdbInterface%    = USB_Install, USB\VID_0BB4&PID_0C03
        %CompositeAdbInterface% = USB_Install, USB\VID_0BB4&PID_0C03

    to both the NTamd64 and NTx86 sections, but the problem persists. The phone's current hardware ID is VID_0BB4&PID_0C03. I noticed that I need a hardware ID with &MI_01 appended at the end, but Windows detects the device with the plain ID above. I also uninstalled my webcam drivers to check whether they could be installed back, and I cannot install drivers for the webcam either; Windows detects the camera without the &MI_01 part. Please help. I can install neither the webcam nor my Android phone.
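
    Two hedged things to check. First, editing a .inf invalidates the driver package's signature, and Windows 8.1 blocks unsigned driver installs by default, so the modified driver usually only installs after booting with driver signature enforcement disabled (the Startup Settings menu under Advanced Startup). Second, if the device really enumerates a composite ADB interface, the edited sections would look roughly like this; the section names vary by driver package, so treat them as placeholders:

        [Google.NTx86]
        %SingleAdbInterface%    = USB_Install, USB\VID_0BB4&PID_0C03
        %CompositeAdbInterface% = USB_Install, USB\VID_0BB4&PID_0C03&MI_01

        [Google.NTamd64]
        %SingleAdbInterface%    = USB_Install, USB\VID_0BB4&PID_0C03
        %CompositeAdbInterface% = USB_Install, USB\VID_0BB4&PID_0C03&MI_01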


  • Let varnish send old data from cache while it's fetching a new one?

    - by mark
    I'm caching dynamically generated pages (PHP-FPM, nginx) and have Varnish in front of them; this works very well. However, once the cache timeout is reached, I see this:

    1. new client requests page
    2. Varnish recognizes the cache timeout
    3. client waits
    4. Varnish fetches the new page from the backend
    5. Varnish delivers the new page to the client (and has the page cached, too, so the next request gets it instantly)

    What I would like instead:

    1. client requests page
    2. Varnish recognizes the timeout
    3. Varnish delivers the old page to the client
    4. Varnish fetches the new page from the backend and puts it into the cache

    In my case it's not a site where outdated information is a big problem, especially not with a cache timeout of a few minutes. I don't want to punish users by making them wait in line; I'd rather deliver something immediately. Is that possible in some way? To illustrate, here's sample output of running siege for 5 minutes against my server, which was configured to cache for one minute:

        HTTP/1.1,200,  1.97, 12710,/,1,2013-06-24 00:21:06
        ...
        HTTP/1.1,200,  1.88, 12710,/,1,2013-06-24 00:21:20
        ...
        HTTP/1.1,200,  1.93, 12710,/,1,2013-06-24 00:22:08
        ...
        HTTP/1.1,200,  1.89, 12710,/,1,2013-06-24 00:22:22
        ...
        HTTP/1.1,200,  1.94, 12710,/,1,2013-06-24 00:23:10
        ...
        HTTP/1.1,200,  1.91, 12709,/,1,2013-06-24 00:23:23
        ...
        HTTP/1.1,200,  1.93, 12710,/,1,2013-06-24 00:24:12
        ...

    I left out the hundreds of requests that ran in 0.02 s or so. But it still concerns me that some users have to wait almost 2 seconds for their raw HTML. Can't we do any better here? (I came across "Varnish send while cache"; it sounded similar, but it's not exactly what I'm trying to do.)
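
    This is what Varnish's grace mode is for. A minimal sketch, assuming Varnish 3.x syntax (the question predates Varnish 4, where background fetches made this fully asynchronous):

        sub vcl_recv {
            # be willing to serve objects up to 30 minutes past their TTL
            set req.grace = 30m;
        }

        sub vcl_fetch {
            # keep expired objects around for 30 minutes so grace can use them
            set beresp.grace = 30m;
        }

    With this, when the TTL expires only one request goes to the backend while other clients keep receiving the stale copy; in Varnish 3 that first client still waits for the fetch, whereas Varnish 4's beresp.grace serves stale to everyone and refreshes in the background.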


  • How do you permanently disable the 'This Connection is Untrusted' page on Firefox

    - by TheIronChef9
    I'm going insane. Can someone please help me COMPLETELY DISABLE the 'This Connection is Untrusted' page in Firefox? Facts:

    - I am running Firefox 23.0 on an Ubuntu machine (downloaded and installed Ubuntu today).
    - It is a work computer and I have to use my employer's proxy.
    - Visiting webpages/webapps like Gmail or Google brings up the 'This Connection is Untrusted' page, and I have to go through the whole tedious task of selecting 'I Understand the Risks', adding exceptions, etc.
    - The fact is, I don't care about the risks. I would rather this computer melt into the ground than have to see that page ever again. I want to dance naked in untrusted pages and not give a damn about the consequences. I just never want to see that page again. Ever.
    - For some sites (e.g. Wikipedia), the CSS doesn't load and I end up seeing them in plain text. As a result these sites are completely useless. I wasted hours trying to solve this for stackoverflow.com.
    - These issues happen in Firefox on my Windows XP machine as well (also using the same proxy).
    - I don't want to export/import certificates or create exceptions for every site that shows this bloody page. I just want this page gone. I don't want Firefox to tell me what's safe and what's not.
    - My system time and date are correct. I've also tried the lies on this page, with no good results.

    Edit: I've also tried going into Advanced > Certificates > Validation and unchecking the 'Use the Online Certificate Status Protocol (OCSP) to confirm the current validity of certificates' checkbox. Nothing changed, even after restarting Firefox or rebooting. I need help. Thanks.
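
    Since the warnings come from the employer's proxy re-signing TLS traffic, the one fix that actually removes them everywhere is importing the proxy's CA certificate into Firefox's trust store. A hedged sketch using NSS's certutil (package libnss3-tools on Ubuntu); the certificate file name and profile directory are placeholders:

        # obtain the proxy's CA certificate from the IT department (or export it
        # from an intercepted site's certificate chain), then trust it as an SSL CA:
        certutil -A -n "Employer proxy CA" -t "C,," \
                 -i proxy-ca.pem -d ~/.mozilla/firefox/xxxxxxxx.default

    After a Firefox restart, every certificate the proxy forges validates normally, so the warning page (and the broken CSS on proxied sites) disappears without per-site exceptions.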


  • .htaccess with addondomain and https ssl

    - by admon
    I have a main domain and an addon domain. Questions:

    1) When surfing to ftp.addondomain.com or mail.addondomain.com, the browser ends up at the main domain for some reason. (Normally this would not be a problem, but I still want complete separation.) Do you know the syntax to redirect (.*).addondomain.com to addondomain.com in the .htaccess file, and where to put the code: in the addon domain's .htaccess or in the main domain's .htaccess? I.e. any_words.addondomain.com should be forwarded to addondomain.com, so all of these:

        dsdhf.addondomain.com
        ftp.addondomain.com
        mail.addondomain.com

    should be forwarded to addondomain.com (i.e. without the prefix).

    2) Same question for https. The main domain has SSL; the addon domain does not. For some reason, when surfing to https://addondomain.com you get http://maindomain.com (the address bar shows https://addondomain.com, but the pages you see are the main domain's). I would like that if a user surfs to https://addondomain.com then (since there is no SSL for the addon domain) they end up at http://addondomain.com, or alternatively get an error message. I do not want them redirected to the main domain. Please write what I should add to the .htaccess, and let me know where the code goes: in the addon domain's .htaccess or in the main domain's. Thanks.
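
    A hedged sketch of what the addon domain's .htaccess (in its document root) could look like; addondomain.com is a placeholder:

        RewriteEngine On

        # 1) any prefix (www, ftp, mail, ...) -> bare addon domain
        RewriteCond %{HTTP_HOST} ^[^.]+\.addondomain\.com$ [NC]
        RewriteRule ^ http://addondomain.com%{REQUEST_URI} [R=301,L]

        # 2) https -> http for the addon domain
        RewriteCond %{HTTPS} on
        RewriteCond %{HTTP_HOST} ^(www\.)?addondomain\.com$ [NC]
        RewriteRule ^ http://addondomain.com%{REQUEST_URI} [R=301,L]

    Two caveats: the SSL handshake happens before .htaccess runs, so the browser will still show a certificate warning before rule 2 can redirect; and if the https request is actually served out of the main domain's document root (which the symptoms suggest), rule 2 has to live in the main domain's .htaccess instead.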


  • Is there a way to run a command before Puppet implements a change?

    - by Patrick
    I want to have Puppet run a specific command before performing any type of change. I am aware of the prerun_command option in the main puppet.conf, but this is not what I'm looking for: I want the command to run only if something is about to change, not on every Puppet run. Here's the scenario. Say I have a bunch of web servers behind a load balancer, and I want Puppet to update the web site files. To prevent issues where some files have been updated but others haven't, with the mixed versions causing problems, I want to take the server out of the load balancer pool first. I could write a script which, when run, tells the load balancer to remove the box from the pool; Puppet could then make the change and use postrun_command to put the box back in the pool once complete. But I need a way to run that removal script only when a change is actually coming. The only solution I can think of is to keep two copies of the files on the box: a staging copy which Puppet updates, with a notify that triggers the removal script, which then copies from staging into the live location. But I was hoping for something more generic that would work for any change being performed (upgrading a package, restarting a service, creating a user, anything).
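
    One hedged, generic approach: wrap the agent in a script that does a dry run first. With --detailed-exitcodes, puppet agent exits with 2 when there are changes to apply, so a --noop pass can decide whether to drain the node at all. The pool scripts here are hypothetical placeholders:

        #!/bin/bash
        # Dry run: with --detailed-exitcodes, exit code 2 means changes are pending.
        puppet agent --test --noop --detailed-exitcodes
        if [ $? -eq 2 ]; then
            /usr/local/bin/remove-from-pool.sh   # hypothetical drain script
            puppet agent --test                  # apply for real
            /usr/local/bin/add-to-pool.sh        # hypothetical re-add script
        fi

    The catch: the catalog could change between the dry run and the real run, so this is a strong heuristic rather than a guarantee.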


  • How do I back up Hyper-V VMs with Windows Server backup on Windows Server 2008 R2?

    - by Chris
    I've searched this site and Google, and I CAN find information about how to back up Hyper-V virtual machines using Windows Server Backup from the Hyper-V host on Windows Server 2008: you set up a registry key to enable the Hyper-V VSS writer, and then you can take online backups of your VMs. However, all the information I have found is about a year old, and none of it has been updated for Windows Server 2008 R2. I tried to run the "FixIt" .msi found at http://support.microsoft.com/kb/958662, but it said it was not applicable to my operating system. So I am thinking either Windows Server 2008 R2 already has its VSS support for Hyper-V enabled, or it still needs to be enabled but the FixIt package doesn't feel comfortable operating on an OS that wasn't RTM at the time. I went ahead and scheduled a Windows Server Backup job for 9pm tomorrow. It said it would take 86 GB, which means it MUST be counting those VMs. But will this backup fail? Can anyone confirm whether you have to apply the same registry changes on R2?
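
    For reference, the registry change that KB958662 applies on 2008 RTM registers the Hyper-V VSS writer with Windows Server Backup; whether R2 still needs it is exactly the open question here, but it is harmless to check whether the key already exists. A hedged sketch of creating it by hand (the GUID is the Hyper-V writer ID from the KB):

        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\WindowsServerBackup\Application Support\{66841CD4-6DED-4F4B-8F17-FD23F8DDC3DE}" ^
            /v "Application Identifier" /t REG_SZ /d "Hyper-V"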


  • visually documenting web server configuration and infrastructure

    - by Alex Ciarlillo
    I have just finished a large reorganization and update of our institution's web server(s). This server hosts 3 virtual hosts, 3-4 blogs, 2 wikis, some legacy static HTML pages, and many hosted documents (PDF, .jpg, .xls). I have organized the site into a structure something like:

        /var/www/sites/vhost1, vhost2, vhost3
            .../wordpress/blogX
            .../mediawiki/wikiX

    Data is in a separate directory structure so I can run a cron task over it to make sure it is all writeable and such; I then symlink to these data directories from each application:

        /var/www/data/vhost1, vhost2, vhost3
            .../wordpress/blogX/uploads
            .../mediawiki/wikiX/images

    All Apache configs are in /etc/httpd/conf.d/vhosts.d/vhost1,2,3.conf. On top of this there is also a testing server which mirrors this setup; once changes are fully tested, they are rsynced down to the live server. All the WordPress and MediaWiki installs come straight from SVN, and updates are done by switching branches or running "svn up". So my question is: how can I best document this to share with a) co-workers, b) a possible future replacement, and c) myself 6 months from now? Obviously I can make a wiki page or spreadsheet and fill it with text, but I am looking for a more visual representation that I can use to explain the architecture to less technical people. Ideally it would be awesome if this visual representation could then be expanded to show more technical details.
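
    One hedged option that fits the "visual but expandable" requirement: keep the diagram as code with Graphviz, so it lives in SVN next to the configs and can grow extra detail per audience. A minimal sketch using the paths above:

        digraph webserver {
            rankdir=LR;
            "vhost1.conf" -> "/var/www/sites/vhost1";
            "/var/www/sites/vhost1/wordpress/blogX" -> "/var/www/data/vhost1/wordpress/blogX/uploads" [label="symlink"];
            "testing server" -> "live server" [label="rsync"];
            "SVN" -> "/var/www/sites" [label="svn up"];
        }

    Rendering is one command (dot -Tpng arch.dot -o arch.png), and a simplified top-level graph for non-technical people can reuse the nodes of a detailed one.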


  • Different subnets routing with just one layer 3 switch

    - by GustavoFSx
    Our current network looks like this:

        Location 1: 2 layer-2 switches | subnet 192.168.1.0/24 | firewall for our VPN
        Location 2: 1 layer-2 switch  | subnet 192.168.3.0/24 | firewall for our VPN
        Location 3: 1 layer-2 switch  | subnet 192.168.5.0/24 | firewall for our VPN

    We just got a direct fiber connection between locations 1 and 2, and we also got a new HP V1910 24G layer-3 switch. I tried to follow the instructions on this site, but I can't get it to work. I think our network should look like this:

        Location 1: HP switch, fiber to location 2 | subnet 192.168.1.0/24 | firewall for our VPN
        Location 2: 1 layer-2 switch | subnet 192.168.3.0/24 | fiber to location 1
        Location 3: 1 layer-2 switch | subnet 192.168.5.0/24 | firewall for our VPN

    So, how can I get routing working for location 2? Its old gateway was a firewall device at 192.168.3.1. I'm thinking of creating a VLAN interface at 192.168.3.1 on the HP switch for location 2. But how do I handle that on the HP switch, which has the direct fiber connection to location 2's switch? Please help; I'm not very good with networking.
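
    A hedged sketch of the idea. The V1910 is primarily web-managed, so these Comware-style lines map to settings under Network > VLAN and Network > VLAN Interface in the web UI; the VLAN number and uplink port are placeholders:

        # create a VLAN for location 2 and put the fiber port into it
        vlan 30
        interface GigabitEthernet1/0/24
         port access vlan 30
        # give the switch a routed interface in each subnet
        interface Vlan-interface1
         ip address 192.168.1.1 255.255.255.0
        interface Vlan-interface30
         ip address 192.168.3.1 255.255.255.0

    With both VLAN interfaces up, the switch routes between 192.168.1.0/24 and 192.168.3.0/24 by itself; hosts at location 2 keep 192.168.3.1 as their gateway, and the location 1 firewall needs a route to 192.168.3.0/24 via the switch so return and internet traffic flows.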


  • Scaling a node.js application, nginx as a base server, but varnish or redis for caching?

    - by AntelopeSalad
    I'm not close to being well versed in nginx or Varnish, but this is my setup at the moment: I have a node.js server running which serves either JSON, HTML templates, or socket.io events. In front of node I have nginx, which serves all static content (CSS, JS, etc.). At this point I would like to cache both static and dynamic content in memory. It's my understanding that Varnish can cache static content quite well without touching my application code, and that it's capable of caching dynamic content too, as long as there are no cookie headers. I already use Redis for session data and planned to use it for other things in the future, like keeping track of non-crucial but fun stats. I just have no idea how I should handle caching everything on the site. I think it comes down to these options, but there might be more:

    1. Throw Varnish in front of nginx and let Varnish cache static pages: no app code changes. Redis would cache dynamic DB calls, which would require modifying my app code.
    2. Ignore Varnish completely and let Redis handle caching everything, then use one of the nginx-redis modules. I'm not sure if this would require a lot of app code changes (for the static files).

    I'm not having any luck finding benchmarks that compare nginx+varnish against nginx+redis, and I'm too inexperienced to benchmark it myself (high chance of my configs being awful). I'm basically looking for the solution that would be the most efficient in terms of requests per second and scalable in the future (throw new hardware at the problem + maybe adjust some values in a config = new servers up and running semi-painlessly).
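
    On the "no cookie headers" point: the standard Varnish pattern is to strip cookies where they don't matter, so static assets cache even on a session-heavy site. A hedged sketch in Varnish 3.x syntax; the extension list is an assumption:

        sub vcl_recv {
            if (req.url ~ "\.(css|js|png|jpg|gif|ico|woff)$") {
                unset req.http.Cookie;          # cookies would otherwise bypass the cache
            }
        }

        sub vcl_fetch {
            if (req.url ~ "\.(css|js|png|jpg|gif|ico|woff)$") {
                unset beresp.http.Set-Cookie;   # never cache a Set-Cookie response
                set beresp.ttl = 1h;
            }
        }

    The JSON/HTML endpoints that don't vary per user can get the same treatment, while socket.io traffic should bypass caching entirely (return(pipe) for it in vcl_recv).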


  • How to make my Ubuntu an internet gateway for my Android phone

    - by yacine
    I want to use my school's internet connection on my Android phone. The problem is that they have a Squid proxy, and many applications on my phone don't use the proxy at all. The obvious solution would be to install a transparent proxy on the Android itself to force all applications to connect through it, but that requires rooting the phone, and I don't want to do that: it's not really my phone, and rooting is a little risky. Another, safer solution is to make my computer act as a gateway, so I put my Ubuntu machine's IP in the gateway setting of the phone. I'm running a small proxy (cntlm) on my Ubuntu box, so I redirect the Android traffic to it with iptables as follows:

        iptables -t nat -A PREROUTING -s 10.0.1.118 -p tcp -j REDIRECT --to-ports 8888
        iptables -t nat -A PREROUTING -s 10.0.1.118 -p udp -j REDIRECT --to-ports 8888

    10.0.1.118 is the IP of the phone; 8888 is the port of cntlm (the proxy on my PC). Now, on the phone: when I enter www.google.com in the browser I get nothing ("website not found", Firefox's error message). But when I enter http://74.125.143.101 (an IP of Google) I get an error message from the school proxy, so it worked in some way; my PC redirected the phone's traffic to the Squid proxy. The error message is "The requested URL could not be retrieved" while trying to process the request:

        GET / HTTP/1.1
        Host: 74.125.143.101
        User-Agent: ...

    I think the problem is in the GET line: it should be GET http://74.125.143.101/ HTTP/1.1. But I don't understand what's happening, and I'm a certified CCNA.
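
    The diagnosis looks right, hedged: cntlm (like Squid) expects proxy-style requests (GET http://host/ ...), but REDIRECTed traffic carries plain origin-style requests (GET / plus a Host header), and redirecting UDP to the proxy port also breaks the phone's DNS, which is why the hostname fails while the raw IP partly works. One known fix is to put redsocks in between; its http-relay mode rewrites plain requests into proxy form. A sketch, reusing the cntlm port from above:

        /etc/redsocks.conf (excerpt):

            redsocks {
                local_ip = 0.0.0.0;
                local_port = 12345;
                ip = 127.0.0.1;       // cntlm on this machine
                port = 8888;
                type = http-relay;    // turn "GET /" into "GET http://host/"
            }

        # redirect only web traffic from the phone, and leave DNS alone
        iptables -t nat -A PREROUTING -s 10.0.1.118 -p tcp --dport 80 -j REDIRECT --to-ports 12345

    HTTPS needs a second redsocks section with type = http-connect on another port, since CONNECT tunnels can't be expressed as http-relay.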


  • Why am I getting a Sharepoint error on a simple "hello world" web page?

    - by Fetchez la vache
    I've been granted admin access to an internal IIS server on which I need to set up a web site. Before doing anything technical I wanted to ensure that I could access the server, but when attempting to access a simple page (one that does not refer to SharePoint) at http://localhost/index.html while logged onto the server directly, I get:

        Parser Error
        Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately.

        Parser Error Message: Could not load file or assembly 'Microsoft.SharePoint' or one of its dependencies. The system cannot find the file specified.

        Source Error:
        Line 1: <%@ Assembly Name="Microsoft.SharePoint"%><%@ Application Language="C#" Inherits="Microsoft.SharePoint.ApplicationRuntime.SPHttpApplication" %>

        Source File: /global.asax    Line: 1

        Assembly Load Trace: The following information can be helpful to determine why the assembly 'Microsoft.SharePoint' could not be loaded.
        WRN: Assembly binding logging is turned OFF. To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1. Note: There is some performance penalty associated with assembly bind failure logging. To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog].

        Version Information: Microsoft .NET Framework Version: 2.0.50727.5456; ASP.NET Version: 2.0.50727.5456

    To be quite honest I know zip about SharePoint, so why am I getting a SharePoint error on a basic "hello world" HTML page? Cheers :)

    Update: I've since supposedly uninstalled SharePoint, but am still getting this error. Any ideas welcome!
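
    Hedged explanation: the Source File line shows that the site's root still contains SharePoint's global.asax, which every page in that IIS application inherits, so even a plain .html request passes through the SharePoint HTTP application. Two sketches of a way out, with placeholder names: delete or rename the global.asax in the site root if SharePoint is truly gone, or host the new page in its own IIS application, since a child application does not inherit the parent's global.asax:

        %windir%\system32\inetsrv\appcmd add app /site.name:"Default Web Site" ^
            /path:/hello /physicalPath:C:\inetpub\hello

    (If SharePoint web.config entries still cause trouble in the child app, they would need to be cleared there too; global.asax, however, stops at the application boundary.)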


  • Why can't I connect to my home SSH (SFTP) server? What am I doing wrong?

    - by Rolo
    I am new to the topic of creating an SFTP server on one's computer. I would like to be able to access a folder on my Windows XP computer via SFTP from another computer or a phone. Here is what I have done so far: I have installed SSH Windows, and everything is set up correctly because I can access the folder on my PC via WinSCP. However, I cannot access it from my phone; it doesn't connect. The phone can be on the same wireless network as the Windows XP computer, but I would prefer to be able to access this when not on the same network. From what I have read and understood, the following is the information needed to connect:

    1) Host name: my computer's IP address, which I get by typing ipconfig in a cmd prompt. (On the computer itself I simply use localhost or 127.0.0.1.)
    2) Port number: 22. (I have also added this to my router in the port forwarding section.)
    3) Username: my Windows XP username. This, however, is my full name, including my middle initial followed by a period. I am wondering if this is causing problems when connecting from my phone, since the name has spaces and punctuation (the period).
    4) Password: the password of my Windows XP account.

    Extra info: by "phone" I mean an Android phone, and I am using an FTP/SFTP app to access my PC via the phone's cellular network (I also tried wireless, but that didn't work either). I have tried more than one program; one tells me "Connection timed out" and another tells me "timeout: socket is not established". Also, I know that I can use the noip service, but I prefer to connect this way first. Since I am new to this, I would also like to understand what exactly noip is doing and whether they would see my files as they are transferred from phone to PC. Thanking you in advance for your help.
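
    Two hedged checks. First, ipconfig shows the LAN address (e.g. 192.168.x.x), which is unreachable from the cellular network; from outside, the phone has to use the router's public IP (shown on the router's status page), with port 22 forwarded to the XP box. Second, to rule out the spaces-and-period username, it is easy to test with a dedicated plain account; names and password here are placeholders:

        net user sftpuser Str0ngPass! /add
        :: then allow "sftpuser" in the SSH server's user settings and connect with:
        ::   Host: <router public IP>   Port: 22   User: sftpuser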


  • Which components should I invest in for a backup machine?

    - by Senthil
    I am a freelance developer. I have a PC, a laptop, and an old testing and file server machine, and I might add one or two more in the future. I want an on-site backup machine that can handle backups of ALL these machines: file backups, MySQL backups, backup of a Subversion repository, etc. When building the machine, which components should I invest more in? For example, the cabinet should have lots of room for expansion, and hard disk capacity should be large; but I guess hard disk speed need not be high(?). What about the other components, like RAM, PSU, processor, network card, and cooling? How much relative importance do these have in a backup machine? Which of these components should be high-end or large, and which ones need not be? Some idea of the load: there will be TBs of data. File backups and Subversion repository backups will be done at least daily, and MySQL backups weekly. Assume 3 machines at the moment and somewhere around 10 machines in the future.
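
    For sizing intuition, a hedged sketch of what such a box typically runs nightly; hostnames and paths are placeholders. All three jobs are disk- and network-bound, which is why capacity and a decent NIC matter more than CPU or RAM:

        # /etc/crontab on the backup machine (hypothetical)
        # file backups, daily
        0 1 * * * backup rsync -a --delete pc:/srv/projects/ /backup/pc/projects/
        # subversion, daily (dump piped over ssh)
        0 2 * * * backup ssh fileserver svnadmin dump -q /srv/svn/repo | gzip > /backup/svn/repo-$(date +\%F).svndump.gz
        0 3 * * 0 backup ssh fileserver mysqldump --all-databases | gzip > /backup/db/all-$(date +\%F).sql.gz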


  • My scanner isn't responding. It is connected to a SCSI interface using Windows XP

    - by Bob
    I have a Microtek ScanMaker 9600XL scanner. The best part about it is its 12x17-inch bed. Wowza! The worst part is that I've had it working at one point, with the same cable, same card, and same computer, but have since re-installed Windows XP. Currently the scanner turns on and blinks the Power and Ready lights; they should be solid. I've done my best to find documentation, but all I've really found is the content on the Microtek site. I've tried turning the scanner on and then the PC, and turning the PC on and then the scanner. When I launch the software, a dialog pops up saying "ScanWizard Pro can't find any scanners! Use SCSI Check to find a scanner." I know the scanner has a pair of little buttons on the back that cycle a counter up and down; I think it goes 0-7. Any thoughts on what that does, or how to proceed with troubleshooting? My next step is to try each of those numbers, booting the PC first and then the scanner first for each number...


  • Sharepoint web part fails intermittently

    - by pringly
    I have a MOSS 2007 environment: two web servers and a DB server, load balanced between the two web servers. I deployed a web part recently, which worked fine for a while but then failed on web server 2 after a day. When it fails, it gives the error message:

        A Web Part or Web Form Control on this Page cannot be displayed or imported. The type could not be found or it is not registered as safe.

    Once it has failed, it stays that way until an IIS reset is done. The other web server never fails. I tried to force the second web server to fail in order to recreate the issue and have been unable to: I placed it under heavy HTTP traffic and it handled that fine, but after I put it back in the pool it failed again after about 7 hours. Also, if I remove the web part's .dll from the affected web server, the web part does not stop working. Is that normal behavior? I checked the site's bin directory and the global assembly cache, and there is no other copy of the .dll anywhere else on the server. Also, when checking the web part gallery: if the web part has failed, it still appears in the gallery, but when trying to add a new web part, the .dll isn't listed. I have no idea how to continue troubleshooting from here, or how to fix it. Any ideas?
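
    Hedged pointer: "not registered as safe" usually means the SafeControl entry in that web application's web.config does not match the assembly actually on the server (version, public key token, or namespace), and a mismatch on only one load-balanced node would produce exactly this one-sided, intermittent pattern. For comparison, an entry looks like this (assembly name, namespace, and token are placeholders); it should be byte-identical on both servers:

        <SafeControl Assembly="MyWebPart, Version=1.0.0.0, Culture=neutral, PublicKeyToken=9f4da00116c38ec5"
                     Namespace="MyCompany.WebParts" TypeName="*" Safe="True" />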


  • Mac always boots with incorrect display gamma (for years now including Lion)

    - by Alex Wayne
    I think somewhere, something got installed, but I have no idea what or how to fix it :( Basically, my old MacBook Pro running 10.5 Leopard had a problem where on boot it would show everything on the screen in a sort of crunched color space: everything below 15% white would be pure black, everything above 85% white would be pure white, and all colors looked a touch more saturated. It's garish. To fix it, I found that I could boot into almost any fullscreen 3D game. When the game launches, the colors are still off, but when I then quit the game and return to the desktop, everything is normal again. I've noticed Blizzard games work most reliably for this (World of Warcraft or StarCraft 2). This problem has followed me through the years. When I upgraded to an iMac I migrated everything over to it, and the issue now happens on the iMac too. I then got a new MacBook Pro for work and migrated my iMac over to that, and it has the problem too. I had thought it was an OS bug, but upgrading to 10.6 Snow Leopard didn't fix it and neither did 10.7 Lion. Furthermore, I can't find any reference on any forum or help site to anyone else having this problem. If anyone has any idea which processes, settings, or apps I should look at to figure out why this is happening, I would appreciate it! It looks sort of irresponsible when I open my laptop in the office to work and then boot up StarCraft 2 fullscreen...


  • Setting up a virtual ftp directory that points to another computer

    - by AngryHacker
    I have IIS 5 sitting on an old Windows 2000 Professional box. It has an FTP site that allows me to access files; it works great, no problem at all. However, now I need to set up a virtual directory that points to a share on another computer on the network (running Windows XP Tablet Edition). The share requires a user name and password. The network is a simple workgroup (I don't have any domains or any of that). What is the correct procedure for this? I've tried adding the share via its UNC path and typing in the user ID and password when asked, but when I finished, the virtual directory showed up with an error in the IIS manager and I couldn't access it. I also mapped the share to a drive letter and tried to set up a virtual directory with that drive: same result. Is there something simple I am missing? Would upgrading any part of the picture help at all?
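
    A hedged sketch of doing the same thing from the command line on IIS 5, using the stock adsutil.vbs admin script; the metabase path, virtual directory name, and credentials are placeholders. In a workgroup, the user named here must exist with the same password as a local account on the XP machine that owns the share:

        cscript C:\Inetpub\AdminScripts\adsutil.vbs SET MSFTPSVC/1/ROOT/files/Path "\\TABLETPC\share"
        cscript C:\Inetpub\AdminScripts\adsutil.vbs SET MSFTPSVC/1/ROOT/files/UNCUserName "TABLETPC\ftpuser"
        cscript C:\Inetpub\AdminScripts\adsutil.vbs SET MSFTPSVC/1/ROOT/files/UNCPassword "secret"

    Mapped drive letters, for what it's worth, won't help: they belong to the logged-on user's session, and the IIS service account never sees them, which would explain the second failure.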


  • ProxyPass for specific vhost

    - by Steve Robbins
    I have a web server that is set up to dynamically serve different document roots for different domains:

        <VirtualHost *:80>
          <IfModule mod_rewrite.c>
            # Stage sites :: www.[document root].server.company.com => /home/www/[document root]
            RewriteCond %{HTTP_HOST} ^www\.[^.]+\.server\.company\.com$
            RewriteRule ^(.+) %{HTTP_HOST}$1 [C]
            RewriteRule ^www\.([^.]+)\.server\.company\.com(.*) /home/www/$1/$2 [L]
          </IfModule>
        </VirtualHost>

    This makes www.foo.server.company.com serve the document root /home/www/foo on server.company.com. For one of these sites, I need to add a ProxyPass, but I only want it applied to that one site. I tried something like:

        <VirtualHost *:80>
          <Directory /home/www/foo>
            UseCanonicalName Off
            ProxyPreserveHost On
            ProxyRequests Off
            ProxyPass /services http://www-test.foo.com/services
            ProxyPassReverse /services http://www-test.foo.com/services
          </Directory>
        </VirtualHost>

    But then I get these errors:

        ProxyPreserveHost not allowed here
        ProxyPass|ProxyPassMatch can not have a path when defined in a location.

    How can I set up a ProxyPass for a single virtual host?
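
    The errors are Apache telling you the context is wrong: ProxyPreserveHost is only valid at server or virtual-host level, and a ProxyPass inside <Location> (or <Directory>) must omit the path argument. A hedged sketch of a dedicated vhost for just that one hostname, placed before the generic catch-all so it wins name-based matching:

        <VirtualHost *:80>
            ServerName www.foo.server.company.com
            DocumentRoot /home/www/foo

            ProxyPreserveHost On
            ProxyRequests Off
            ProxyPass        /services http://www-test.foo.com/services
            ProxyPassReverse /services http://www-test.foo.com/services
        </VirtualHost>

    All other www.*.server.company.com hosts keep falling through to the rewrite-based vhost.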


  • ASP.NET, IIS7 and IE8 caching?

    - by jdege
    We're suddenly having problems with some of our sites serving old versions of .css and .js files to the browser. Generally, these problems go away when the user clears the browser cache. Is there something we can do, either in code or in IIS7, to convince the browser not to use the stale cached files? In our weirdest case, we have one customer whose users hit our site and get an old version of a JS file. They clear the cache, load the page, get the current version, and the page runs fine. Then they load the page again and suddenly have the old version again. Any ideas as to how that might happen? I can think of three:

    1. The browser somehow holds on to the old version when the user clears the cache, and puts it back in the cache before the second page load.
    2. One of our servers has an old version of the file: the first page load after a cache clear pulls it from a server with the current version, while the second and subsequent loads pull it from the server with the old version.
    3. The first load after a cache clear goes straight to our servers, while subsequent loads pull the file from the cache on the customer's web proxy.

    I have to say, all three of those scenarios seem outlandishly unlikely, but it's a repeatable behavior. Any ideas?
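
    A hedged, standard mitigation regardless of which cache is at fault (browser, load-balanced node, or intermediate proxy): version the static URLs, so each deployment changes the URL and every cache treats the file as new. In ASP.NET this can be as small as an assembly-driven query string; the type name here is a placeholder:

        <%-- e.g. in the master page; v changes on every deploy --%>
        <script src='/scripts/app.js?v=<%= typeof(SomeAppType).Assembly.GetName().Version %>'></script>
        <link rel="stylesheet" href='/styles/site.css?v=<%= typeof(SomeAppType).Assembly.GetName().Version %>' />

    A version-stamped path (/v123/app.js) is safer still, since some proxies ignore query strings when caching. This would also neatly explain scenario 2: it makes a mixed-version server farm visibly consistent per page.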


  • The requested operation has failed! (cannot find answer)

    - by Geoff
    I know this problem is plastered all over the web, but I've been searching and trying for hours with no luck. Can someone please give me some help? I originally installed Apache 2.0.64 along with PHP 5.2.17. I went through all of the steps in this tutorial with no luck, and found that the culprit was the LoadModule line. A lot of what I found on the web referred to PHP 5 with Apache 2.2, and since there seemed to be more info on Apache 2.2, I removed Apache 2.0.64 and installed 2.2. I added the LoadModule line to the conf file, but got the same problem. I then followed the steps in this tutorial, because it was slightly different and covered some things I hadn't tried yet, but I still get the same problem. If I comment out the LoadModule line it works fine, but otherwise I get "The requested operation has failed!". This is what I ended up keeping, since it only requires commenting out one line:

        LoadModule php5_module "c:/php/php5apache2_2.dll"
        <IfModule mod_php5.c>
          AddType application/x-httpd-php .php
          PHPIniDir "c:/php"
          DirectoryIndex index.php
        </IfModule>

    Edit: How can I stop getting this error message?

    Update: Please note that I took note of the message on the PHP site stating that if PHP 5.2 is to be run with Apache, the VC6 build should be used, not VC9. I had VC9, so I replaced it with VC6; the file is labeled php-5.2.17-nts-Win32-VC6-x86.zip.
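
    Hedged diagnosis based on that last file name: the "nts" in php-5.2.17-nts-Win32-VC6-x86.zip marks the non-thread-safe build, which is intended for CGI/FastCGI and cannot be loaded as an Apache module; the Apache SAPI needs the thread-safe package (php-5.2.17-Win32-VC6-x86.zip, no "nts"), which is the one that ships php5apache2_2.dll. With the TS build unpacked to c:\php, the quoted config should load as-is:

        LoadModule php5_module "c:/php/php5apache2_2.dll"
        AddType application/x-httpd-php .php
        PHPIniDir "c:/php"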


  • Creating MS Word 2010 Relative Links?

    - by leeand00
    Okay, here is what I've tried so far for creating relative links in my MS Word documents:

    1. In my document, from the ribbon, I select the File tab.
    2. I select Info from the sidebar.
    3. I click the Properties drop-down in the right-hand column (a bit difficult to find initially, since it looks like text, not a drop-down, but it's there).
    4. I click Advanced Properties, and the <document-name>.docx Properties dialog appears.
    5. I enter .\ to specify that I want a relative path for the links in my document, and click OK.
    6. I go back into my document, select some text, and make a link out of it by clicking the Insert tab of the ribbon and then Hyperlink.
    7. I select a document from the current folder and strip the full path from it, leaving just the name of the .docx file to which I wish to link. Then I click OK.
    8. The link appears. I try to follow it using Ctrl+Click and am informed that the address of the site is not valid, and to check the address and try again.

    What could I possibly be doing wrong here? I just want a relative link. It's so easy to do this in HTML.
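
    If the Hyperlink Base route keeps failing, a hedged fallback is to rewrite the stored addresses directly with a small VBA macro (Alt+F11); this simply strips the document's own folder from each hyperlink, leaving a relative remainder:

        Sub MakeLinksRelative()
            Dim h As Hyperlink
            For Each h In ActiveDocument.Hyperlinks
                ' drop the current document's folder prefix from absolute addresses
                h.Address = Replace(h.Address, ActiveDocument.Path & "\", "")
            Next h
        End Sub

    One caveat: Word may re-absolutize links on save depending on the "Update links on save" setting under Web Options, so that box is worth checking too.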


  • No Network Connection in WinXP image from Microsoft running on VirtualBox 3.1.6 OSE (Ubuntu 10.04) due to missing CD-ROM

    - by Bevor
    I'd like to test local websites in IE7 and IE8. To do that, I thought about using the free Microsoft images from http://www.microsoft.com/windowsxp/using/networking/setup/default.mspx and converted the VHDs to VDIs to make them run in VirtualBox (http://www.qc4blog.com/?p=721). This works fine. The problem is that in this Windows XP installation there is no network adapter configured. Actually, nothing at all is configured, because configuring it asks for the Windows XP CD-ROM. If I had a Windows XP CD-ROM, I would not need to run the Microsoft image, so is there some kind of workaround to get an internet connection? Meanwhile I have set "bridged" in VirtualBox, but this doesn't help, because "ipconfig /all" in the guest shows nothing, as nothing is configured. How can I get a connection to the local Apache on the host system? http://localhost would be enough. By the way: I can't install the Guest Additions. When I do, the 3-day trial period of the guest system is suddenly gone, so I can't use it anymore, and that makes it pointless.

    Update: I tried the Vista image and it gets an internet connection. From the Vista image I can reach my site at 192.168.1.3/mywebsite in the browser. So I don't really care about the WinXP issue anymore, but I would be glad if anyone still knows a solution.
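
    For reference, the VHD-to-VDI conversion from the linked walkthrough can be done with VBoxManage alone; a hedged one-liner, with placeholder file names:

        VBoxManage clonehd "Windows XP.vhd" "Windows XP.vdi" --format VDI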


  • What characters can safely be used for naming files on Unix/Linux?

    - by Eric DANNIELOU
    Until yesterday, I used only lower-case letters, numbers, dot (.) and underscore (_) for naming directories and files. Today I would like to start using more special characters. Which ones are safe (by safe I mean I will never have any problem)? (P.S.: I can't believe this question hasn't been asked already on this site, but I've searched for the word "naming" and read the canonical questions without success; most are about computer names.)

    Edit #1: By the way, I don't use upper-case letters in file names. I don't remember why, but since a few months ago I have had production problems with upper-case letters: some OSes do not support ASCII! Here's what happened yesterday at work. As usual, I had to create a self-signed SSL certificate, and as usual I used the name of the website for the files: www2.example.com.key, www2.example.com.crt, www2.example.com.csr. Then came the problem: generate a wildcard self-signed certificate. I did that and named the files example.com.key, example.com.crt and example.com.csr, which is misleading (it's a certificate for *.example.com). I came back home and started putting stars in Apache configuration file names to see if it works (on a useless home computer, not even staging). Stars in file names really scare me: some coworker/vendor script using rm, find, or xargs could lead to http://www.ucs.cam.ac.uk/support/unix-support/misc/horror, and one answer there already talks about disaster.

    Edit #2: I just figured out that : does not need to be escaped. Does anyone know why it is not used in file names?
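
    For a concrete baseline, hedged: POSIX defines the "portable filename character set" as exactly the characters already in use here plus the hyphen. Everything else is legal on most filesystems (only / and the NUL byte are forbidden), but it shifts the burden onto quoting in every script that ever touches the files. A small demonstration:

        # POSIX portable filename character set: A-Z a-z 0-9 . _ -
        touch 'a file with spaces and a *star*'   # perfectly legal
        for f in *; do echo "$f"; done            # safe: "$f" is quoted
        # an unquoted $f would word-split and re-glob, which is exactly how
        # star-named files wreck naive rm/find/xargs scripts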


  • SSO to multiple websites from Sharepoint website

    - by Aico
    We have an intranet based on SharePoint 2010. In this intranet we have several links to other web servers within the same Active Directory, for example a link to our Outlook Web Access site on our Exchange 2010 environment. Three different kinds of clients visit this SharePoint environment and the other web servers:

    1. Windows 7 clients that are members of the Active Directory
    2. Home PCs that connect through an SSL VPN appliance
    3. Standalone thin clients (Windows 7 Embedded) within the corporate network

    The goal is to let people sign in only once. For the first group this isn't a problem, because AD-integrated authentication works fine and the Windows logon is passed on to SharePoint and the other web servers. The second group is also working fine because of the LDAP integration of the SSL VPN appliance. The third group, however, is experiencing issues: they need to enter their credentials every time they click a link to another web server. First they enter credentials for the SharePoint environment; when clicking the link to their webmail they have to re-enter them, and so on. Can someone tell me the best way to get SSO working for the third group as well? Some extra information: we also have a Forefront TMG server in our environment. I read somewhere that Forefront might be part of a solution for this problem, but I'm not sure how. Maybe someone here can help me? Looking forward to some help. Best regards, Aico
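
    A hedged stopgap for the standalone thin clients, since they have no domain logon to pass through: pre-seed Windows Credential Manager on the embedded image, so stored credentials answer each server's authentication challenge without prompting. Hostnames and the account are placeholders:

        cmdkey /add:intranet.example.local /user:CORP\kioskuser /pass:Secret1
        cmdkey /add:webmail.example.local  /user:CORP\kioskuser /pass:Secret1

    For the credentials to be used silently, IE on those clients also has to treat the hosts as Local Intranet zone sites (where "Automatic logon" applies); whether that meets your security requirements for shared thin clients is a separate judgment call.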

