Search Results

Search found 19093 results on 764 pages for 'max path'.


  • Debugging nginx URL rewrite: How do I figure out where the problem is?

    - by pjmorse
    I have a specific URL pattern on a site which needs to be redirected to the HTTPS version. This is a Django site; Nginx checks each URL in memcached, and if it doesn't find a cached version it proxies the request to Apache/mod_python for Django to render the page. The relevant configuration block is:

        rewrite ^/certificate https://mysite.com/certificate ;
        rewrite ^/([a-zA-Z]{2})/certificate https://mysite.com/certificate ;

    ...and it doesn't appear to be working at all. Nginx is:

        $ nginx -V
        nginx version: nginx/0.7.65
        built by gcc 4.2.4 (Ubuntu 4.2.4-1ubuntu4)
        TLS SNI support disabled
        configure arguments: --prefix=/usr/local/nginx --pid-path=/var/run/nginx.pid --with-http_gzip_static_module --with-http_ssl_module

    How can I figure out whether the problem is my patterns not matching or a more obscure configuration problem? (The site is localized to three languages, and the localization is in the URL string, e.g. /US/news/, /DE/about, etc. It tracks localization in the session as well, defaulting to US, so if you just request /news, Django will rewrite it to /US/news unless the user has a cookie indicating a different localization. Django handles this, though, not Nginx.)
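
    A low-tech way to answer "are my patterns matching?" is nginx's own rewrite logging, which records every rewrite evaluation at notice level. A minimal sketch, assuming a typical log path:

        error_log /var/log/nginx/error.log notice;
        rewrite_log on;

    After a reload, a request such as curl -I http://mysite.com/certificate should leave a trace in the error log for each rewrite rule that was tried, and the response headers show whether a redirect is actually being issued.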

    Read the article

  • nginx won't serve an error_page in a subdirectory of the document root

    - by Brandan
    (Cross-posted from Stack Overflow; could possibly be migrated from there.) Here's a snippet of my nginx configuration:

        server {
            error_page 500 /errors/500.html;
        }

    When I cause a 500 in my application, Chrome just shows its default 500 page (Firefox and Safari show a blank page) rather than my custom error page. I know the file exists because I can visit http://server/errors/500.html and I see the page. I can also move the file to the document root and change the configuration to this:

        server {
            error_page 500 /500.html;
        }

    and nginx serves the page correctly, so it doesn't seem like something else is misconfigured on the server. I've also tried:

        server {
            error_page 500 $document_root/errors/500.html;
        }

    and:

        server {
            error_page 500 http://$http_host/errors/500.html;
        }

    and:

        server {
            error_page 500 /500.html;
            location = /500.html {
                root /path/to/errors/;
            }
        }

    with no luck. Is this expected behavior? Do error pages have to exist at the document root, or am I missing something obvious?

    Update 1: This also fails:

        server {
            error_page 500 /foo.html;
        }

    when foo.html does indeed exist in the document root. It almost seems like something else is overwriting my configuration, but this block is the only place anywhere in /etc/nginx/* that references the error_page directive. Is there any other place that could set nginx configuration?
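
    Worth noting: if the 500 is produced by an upstream application (proxied or FastCGI) rather than by nginx itself, nginx passes the upstream's error body through untouched unless told to intercept it. A sketch of that configuration:

        server {
            proxy_intercept_errors on;    # for apps behind proxy_pass
            fastcgi_intercept_errors on;  # for apps behind fastcgi_pass
            error_page 500 /errors/500.html;
        }

    With interception on, nginx discards the upstream body for any status code that has an error_page defined and serves the custom page instead.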

    Read the article

  • Explorer constantly hanging

    - by user978122
    So I'm running Windows 7 Ultimate (x64) on an AMD 8150 on an Asus Crosshair V motherboard with all the latest and greatest patches, and I am experiencing frequent Explorer freezes. I've included the information I've grepped from the Event Viewer below:

        The program Explorer.EXE version 6.1.7601.17567 stopped interacting with Windows and was closed.
        To see if more information about the problem is available, check the problem history in the
        Action Center control panel.
        Process ID: 13a4
        Start Time: 01cdb2968999c6fd
        Termination Time: 0
        Application Path: C:\Windows\Explorer.EXE
        Report Id: c000ba44-1e8b-11e2-9ae7-000272ddf2b0

        System
          Provider [Name]: Application Hang
          EventID: 1002 [Qualifiers: 0]
          Level: 2, Task: 101, Keywords: 0x80000000000000
          TimeCreated [SystemTime]: 2012-10-25T10:08:44.000000000Z
          EventRecordID: 14626
          Channel: Application
          Computer: RyanMain-PC
        EventData
          Explorer.EXE 6.1.7601.17567 13a4 01cdb2968999c6fd 0 C:\Windows\Explorer.EXE
          c000ba44-1e8b-11e2-9ae7-000272ddf2b0
          430072006F00730073002D0074006800720065006100640000000000

        Binary data:
        In Words
          0000: 00720043 0073006F 002D0073 00680074
          0008: 00650072 00640061 00000000
        In Bytes
          0000: 43 00 72 00 6F 00 73 00   C.r.o.s.
          0008: 73 00 2D 00 74 00 68 00   s.-.t.h.
          0010: 72 00 65 00 61 00 64 00   r.e.a.d.
          0018: 00 00 00 00               ....

    Any idea what "Cross-thread" here means?

    Read the article

  • Samba/Winbind issues joining an Active Directory domain

    - by Frap
    I'm currently in the process of setting up winbind/samba and getting a few issues. I can test connectivity with wbinfo fine:

        [root@buildmirror ~]# wbinfo -u
        hostname
        username
        administrator
        guest
        krbtgt
        username
        [root@buildmirror ~]# wbinfo -a username%password
        plaintext password authentication succeeded
        challenge/response password authentication succeeded

    however when I do a getent I don't get any AD accounts returned:

        [root@buildmirror ~]# getent passwd
        root:x:0:0:root:/root:/bin/bash
        bin:x:1:1:bin:/bin:/sbin/nologin
        daemon:x:2:2:daemon:/sbin:/sbin/nologin
        adm:x:3:4:adm:/var/adm:/sbin/nologin
        lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
        sync:x:5:0:sync:/sbin:/bin/sync
        shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
        halt:x:7:0:halt:/sbin:/sbin/halt
        mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
        uucp:x:10:14:uucp:/var/spool/uucp:/sbin/nologin
        operator:x:11:0:operator:/root:/sbin/nologin
        puppet:x:52:52:Puppet:/var/lib/puppet:/sbin/nologin

    my nsswitch looks like this:

        passwd:  files winbind
        shadow:  files winbind
        group:   files winbind
        #hosts:  db files nisplus nis dns
        hosts:   files dns

    and I'm definitely joined to the domain:

        [root@buildmirror ~]# net ads info
        LDAP server: 192.168.4.4
        LDAP server name: pdc.domain.local
        Realm: domain.local
        Bind Path: dc=DOMAIN,dc=LOCAL
        LDAP port: 389
        Server time: Sun, 05 Aug 2012 17:11:27 BST
        KDC server: 192.168.4.4
        Server time offset: -1

    So what am I missing?
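
    Since wbinfo works but getent returns nothing, the usual suspects are enumeration being off and the idmap range. A minimal smb.conf sketch (the idmap ranges are assumptions; adjust to the environment):

        [global]
            winbind enum users = yes
            winbind enum groups = yes
            # getent also needs a working idmap so AD SIDs map to UIDs/GIDs
            idmap uid = 10000-20000
            idmap gid = 10000-20000

    Note that enumerating a large domain can be slow; getent passwd 'DOMAIN\username' tests a single lookup without relying on enumeration at all.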

    Read the article

  • Cron job checking for changes in Git repository

    - by HNygard
    We have just moved our server configs to a Git repository, so there should not be any changes in any of the repository folders. I was thinking about how I could set up a cron job to check for any uncommitted changes. How could a cron job be set up to check for changes in a Git repository? Grepping the output of the git status command might just do it, but grep and cron jobs are not my strong side. Here are some sample outputs from git status.

    Standing in the folder containing the git repository (e.g. /path/gitrepo/) with changed files:

        $ git status
        # On branch master
        # Changes not staged for commit:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   apache2/sites-enabled/000-default
        #
        # Untracked files:
        #   (use "git add <file>..." to include in what will be committed)
        #
        #       apache2/conf.d/test
        no changes added to commit (use "git add" and/or "git commit -a")

    Standing in the folder when there are no changes:

        $ git status
        # On branch master
        nothing to commit (working directory clean)

    Update: Being synced up with origin is not important. There should be no local changes. Local files that must be in place go into the .gitignore file. In addition to the server configs there are also git repos for content (static web sites, web apps, WordPress, etc.). None of the repositories should have local changes. We might use Puppet in the long run, since it's being used for development of one of the web apps.
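
    A small script around git status --porcelain (whose output is empty exactly when the tree is clean) keeps grep out of it entirely. A minimal sketch, assuming the repository path, and relying on cron's usual mail-on-output behavior as the alert channel:

        #!/bin/bash
        # check-git-clean.sh - report uncommitted changes; silent when clean
        REPO=/path/gitrepo        # assumption: the real repository path
        cd "$REPO" || exit 2
        CHANGES=$(git status --porcelain)
        if [ -n "$CHANGES" ]; then
            echo "Uncommitted changes in $REPO:"
            echo "$CHANGES"
            exit 1
        fi

    A crontab line like 0 * * * * /usr/local/bin/check-git-clean.sh would then mail the owner only when something has drifted.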

    Read the article

  • restrict access to IIS virtual directory from root website

    - by senthilkumar-c
    Hi, I have two domains (domain1.com and domain2.com). Both of them use the same Windows hosting service with IIS7. One of the domains is being called the "primary domain" by my hosting provider, and it always points to the root folder that I was given. For the other domain, I have created a virtual directory in IIS and pointed it there. The folder structure is like this:

        root
        --Default.aspx
        --domain2folder
        ----Default.aspx

    So, if I type domain1.com, I see the regular Default.aspx. But if I type domain2.com, I am shown the contents of domain2folder as if it were a separate web application - I think that is what an IIS virtual directory is meant for. Well and good. But the problem is, when I type http://domain1.com/domain2folder/, I see domain2's website! I don't want that to be shown when I use the path like that from domain1. Only if they use domain2.com should users be able to see those contents. How can I do that? Hope I am making sense. Thanks.
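
    If the host has the URL Rewrite module installed (an add-on, so not guaranteed on shared hosting), one approach is a host-header-conditioned blocking rule in the root site's web.config. A sketch; the rule name and 403 response are illustrative, domain names as in the question:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Block domain2folder on domain1" stopProcessing="true">
                  <match url="^domain2folder(/.*)?$" />
                  <conditions>
                    <add input="{HTTP_HOST}" pattern="^(www\.)?domain1\.com$" />
                  </conditions>
                  <action type="CustomResponse" statusCode="403" statusReason="Forbidden"
                          statusDescription="Not available on this host" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    Requests for domain2.com never match the condition, so the virtual directory keeps working there.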

    Read the article

  • trouble executing php scripts with nginx

    - by lovesh
    My nginx config looks like this:

        server {
            listen 80;
            server_name localhost;

            location / {
                root /var/www;
                index index.php index.html;
                autoindex on;
            }

            location /folder1 {
                root /var/www/folder1;
                index index.php index.html index.htm;
                try_files $uri $uri/ index.php?$query_string;
            }

            location /folder2 {
                root /var/www/folder2;
                index index.php index.html index.htm;
                try_files $uri $uri/ index.php?$query_string;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

    The problem with the above setup is that I am not able to execute php files. Now, as per my understanding of nginx config rules, when I am in my webroot (/), which is /var/www, the value of $document_root becomes /var/www, so when I request localhost/hi.php the fastcgi_param SCRIPT_FILENAME becomes /var/www/hi.php, and that is the actual path of the php script. Similarly, when I request localhost/folder1/hi.php, $document_root becomes /var/www/folder1 because this is specified as the root in folder1's location block, so again the fastcgi_param SCRIPT_FILENAME becomes /var/www/folder1/hi.php. But because the above configuration does not work, there is something wrong with my understanding. Please help?
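
    The misunderstanding is most likely in how root behaves: nginx appends the full request URI to root, not just the part after the location prefix, so location /folder1 { root /var/www/folder1; } maps /folder1/hi.php to /var/www/folder1/folder1/hi.php (alias, not root, is the directive that strips the prefix). The php location also defines no root of its own, so $document_root there falls back to the compiled-in default. Since everything already lives under /var/www, one server-level root is enough; a sketch, assuming PHP-FPM really is listening on 127.0.0.1:9000:

        server {
            listen 80;
            server_name localhost;
            root /var/www;              # /folder1/hi.php -> /var/www/folder1/hi.php
            index index.php index.html;

            location / {
                try_files $uri $uri/ /index.php?$query_string;
            }

            location ~ \.php$ {
                try_files $uri =404;    # don't hand nonexistent paths to PHP
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }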

    Read the article

  • How do I configure Jetty (via jettyrunner) so that it names a character set in the Content-Type response header?

    - by Pointy
    I use Jetty (via the oh-so-handy Jetty Runner) for day-to-day web application testing. One thing I've recently stumbled on is the fact that I don't get a character set called out in the "Content-Type" response header all the time. I do get it in response to my application's XMLHttpRequest transactions, but not for plain old pages loaded by <a> links or whatever.

    I've read a little bit about how to set up a Jetty config file, but I've never been able to completely understand that; all servlet containers are complicated, and while Jetty is pretty simple it's just weird enough that I don't grok the overall idea. Thus, all I do to launch my app is to run the Jetty Runner .jar file with a couple of simple arguments to set up the port number and logfile path, and then I just give it the .war file to run. It works great, except for the missing character set :-) Anybody have a quick sample config file that might fix this?

    Edit: if it matters, I'm running Jetty 7.0.0 RC3; I've also tried a slightly newer version (still 7.something) with exactly the same issue. All my testing is on Ubuntu.

    Read the article

  • Windows: your computer hangs; you can use Win+R (the Run dialog), but performance is so degraded that Task Manager is unusable

    - by John Sullivan
    The question is: what processes are available to try to recover from total system instability, before pulling the plug, when we can do nothing but run programs or batches in the path from the Run dialog (Win+R), and performance is so dead that Task Manager / Process Explorer / other programs with visual GUIs are not usable?

    I am not a Windows expert, but ideally someone out there has written a program that does more or less stuff like this: immediately set (or perhaps I can set from the Run prompt) its priority to extremely high, then evaluate performance bottlenecks. E.g. is CPU at 100%? If so, identify the offending program(s) or problems. Attempt and log fixes, then provide crude feedback asking the user if his performance has stabilized enough to abort; wait a few seconds; if no feedback, continue; etc. Eventually try to do any "system cleanup" if the program decides it cannot recover, and perhaps finally provide a series of beeps to the user, or what have you, to say "OK, I give up, time to pull the plug". Ideally create a log, when able.

    These kinds of horrible hangs are a situation where surely trying something, anything, is better than nothing -- as long as that something is intelligent -- when the alternative is ripping out the power cord. Again, I am not a Windows expert, so perhaps there is a much more elegant "hands on" approach I am not aware of.
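
    In that spirit, one crude thing that can be launched straight from the Run dialog is a one-shot snapshot of the top CPU consumers written to a file, so there is at least a log to read after a reboot. A sketch, assuming PowerShell is installed and C:\ is writable:

        powershell -NoProfile -Command "Get-Process | Sort-Object CPU -Descending | Select-Object -First 15 Name,Id,CPU | Out-File C:\hang-triage.txt"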

    Read the article

  • Windows Server 2003 IPSec Tunnel Connected, But Not Working (Possibly NAT/RRAS Related)

    - by Kevinoid
    Configuration

    I have set up a "raw" IPSec tunnel between a Windows Server 2003 (SBS) machine and a Netgear FVG318 according to the instructions in Microsoft KB816514. The configuration is as follows (using the same conventions as the article):

        NetA        | SBS2003   | FVG318   | NetB
        10.0.0.0/24 | 216.x.x.x | 69.y.y.y | 10.0.254.0/24

    Both the Main Mode and Quick Mode Security Associations are successfully completed and appear in the IP Security Monitor. I am also able to ping the SBS2003 server on its private address from any computer on NetB.

    The Problem

    Any traffic sent from a computer on NetA to NetB, or from SBS2003 to NetB (excluding ICMP ping responses), is sent out on the public network interface outside the IPSec tunnel (no encryption or header authentication, as if the tunnel were not there). Pings sent from a computer on NetB to a computer on NetA successfully reach computers on NetA, but the responses are silently discarded by SBS2003 (they do not go out in the clear and do not generate any encrypted traffic).

    Possible Solutions

    Incorrect configuration: I could have mistyped something somewhere, or KB816514 could be incorrect in some way. I have tried very hard to eliminate the first option: I have re-created the configuration several times and tried tweaking and adjusting all the settings I could without success (most prevent the SA from being established).

    NAT/RRAS: I have seen multiple posts elsewhere suggesting that this could be due to interaction between NAT and the IPSec filters. Possibly the NetA private addresses get rewritten to 216.x.x.x before being compared with the Quick Mode IPSec filters and don't get tunneled because of the mismatch. In fact, The Cable Guy article from June 2005, "TCP/IP Packet Processing Paths", suggests that this is the case (see steps 2 and 4 of the Transit Traffic path). If this is the case, is there a way to exclude NetA-NetB traffic from NAT?

    Any thoughts, ideas, suggestions, and/or comments are appreciated.

    Read the article

  • Alias for Drupal "Sites" folder with Apache on Windows Server 2008

    - by sgtbeano
    I'm having to move a number of sites from a LAMP stack to a WAMP one, provided by Zend, and I've hit a problem. Our architecture is a number of load-balanced web servers which have their own local webapp drives which are kept in sync, with one server performing as the master copy. There is then a separate DFS share provided to all web servers from our pillar SAN. Usually a Drupal install under our LAMP cluster would have the main Drupal web app in a local HTDOCS mount for each server, and the SITES directory within Drupal would then be symlinked out to the DFS or NFS share so that there is a common FILES and TMP directory.

    The problem I'm having is that there seems to be no equivalent of symlinks on Win Server 2008; shortcuts have a .lnk at the end, making Apache see them as a distinct file. So I've tried using an alias call in the vhost file like this:

        <Location /drupal-626/sites>
            Order deny, allow
            Allow from all
        </Location>
        Alias /drupal-626/sites "Z:\Path to alternate sites directory"

    The root for this test is http://main-domain-url/drupal-626/. Unfortunately this isn't working, so I'm wondering if any of you have a solution which would work? Many thanks for taking the time to read this.
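
    For what it's worth, Windows Server 2008 does have real NTFS symbolic links via the mklink built-in (distinct from .lnk shortcuts), and Apache treats the result as an ordinary directory. A sketch, with assumed paths:

        rem run in cmd.exe on each web server; /D creates a directory symlink
        mklink /D "C:\htdocs\drupal-626\sites" "Z:\path\to\alternate\sites"

    Separately, Apache 2.2 normally expects its Order argument without a space (Order deny,allow); the stray space in the snippet above is likely to make the server reject that block.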

    Read the article

  • Eclipse Indigo freezes on 'Open Type' search

    - by NickGreen
    When I'm trying to search for a Java class with Ctrl+Shift+T (the Open Type popup), Eclipse freezes as soon as I type one character. It usually takes about 8 seconds to unfreeze, but sometimes it won't come back at all. When it freezes, I see that the Eclipse process takes about 1 GB of memory and the CPU is at about 100%! I've tried creating a new workspace, adjusting the eclipse.ini (perm size, different memory values), starting with -clean, and at last reinstalling the whole IDE. Nothing helps. My eclipse.ini:

        -startup
        plugins/org.eclipse.equinox.launcher_1.2.0.v20110502.jar
        --launcher.library
        plugins/org.eclipse.equinox.launcher.gtk.linux.x86_64_1.1.100.v20110505
        -product
        org.eclipse.epp.package.jee.product
        --launcher.defaultAction
        openFile
        -showsplash
        org.eclipse.platform
        --launcher.XXMaxPermSize
        768m
        --launcher.defaultAction
        openFile
        -vmargs
        -server
        -Dosgi.requiredJavaVersion=1.5
        -Xmn128m
        -Xms1024m
        -Xmx1024m
        -Xss2m
        -XX:PermSize=128m
        -XX:MaxPermSize=128m
        -XX:+UseParallelGC
        -Djava.library.path=/usr/lib/jni

    I'm using the following plugins: JRebel and m2e. I'm desperate for a solution because this problem causes me a great deal of time loss.

    System: Ubuntu 12.04 LTS 64-bit, 4 GB memory, Intel Core i7 860 @ 2.8 GHz. Hope somebody knows a solution. Thank you for your time.
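
    One commonly suggested step for Open Type hangs is forcing the JDT search index to be rebuilt, since a corrupted index can peg the CPU on the first keystroke. A sketch, assuming the workspace lives at ~/workspace and Eclipse is closed first:

        # indexes are regenerated on the next Eclipse start
        rm -rf ~/workspace/.metadata/.plugins/org.eclipse.jdt.core/*.index
        rm -f  ~/workspace/.metadata/.plugins/org.eclipse.jdt.core/savedIndexNames.txt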

    Read the article

  • 403 Forbidden on Apache (CentOS) Server

    - by pouya
    This is my VM setup:

        HOST: Windows 7 Ultimate 32-bit
        GUEST: CentOS 6.3 i386
        Virtualization software: Oracle VirtualBox 4.1.22
        Networking: NAT -> (PORT FORWARD: HOST:8080 => GUEST:80)
        Shared folder: centos

    All the project files go into the shared folder, and for each project a virtual host conf file is created in /etc/httpd/conf.d/, like /etc/httpd/conf.d/$domain. I wasn't able to see anything in my browser before disabling both the Windows firewall and iptables in CentOS. After that, if I type for example http://www.$domain:8080/, all I see is:

        Forbidden
        You don't have permission to access / on this server.
        Apache/2.2.15 (CentOS) Server at www.$domain.com Port 8080

    A sample virtual host conf file:

        <VirtualHost *:80>
            #General
            DocumentRoot /media/sf_centos/path/to/public_html
            ServerAdmin webmaster@$domain
            ServerName www.$domain
            ServerAlias $domain *.$domain

            #Logging
            ErrorLog /var/log/httpd/$domain-error.log
            CustomLog /var/log/httpd/$domain-access.log combined

            #mod rewrite
            RewriteEngine On
            RewriteLog /var/log/httpd/$domain-rewrite.log
            RewriteLogLevel 0
        </VirtualHost>

    The centos shared folder is available to the guest at /media/sf_centos. These are the file permissions for sf_centos:

        drwxrwx--- root vboxsf

    (the vboxsf group includes apache and root)

    So these are my questions:

    1- How do I solve the Forbidden problem?
    2- How do I set up both host and guest firewalls?
    3- How can I improve this development environment to simulate a production environment as much as possible, especially security improvements?
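
    Since the DocumentRoot sits on a vboxsf mount, two likely culprits on a stock CentOS 6 are SELinux (enforcing by default, and vboxsf content cannot easily be relabeled for httpd) and group membership on that drwxrwx--- directory. A hedged checklist:

        # 1. test the SELinux theory (temporary; revert with: sudo setenforce 1)
        getenforce
        sudo setenforce 0

        # 2. confirm apache really is in the vboxsf group, then restart httpd
        id apache
        sudo usermod -a -G vboxsf apache
        sudo service httpd restart

    If setenforce 0 makes the Forbidden go away, the cleaner long-term fix is moving the docroot onto a native filesystem rather than leaving SELinux disabled.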

    Read the article

  • Bypass network stack. Which options do we have? Pros and cons of each option

    - by javapowered
    I'm writing a trading application. I want to bypass the network stack in Linux, but I don't know how this can be done. I'm looking for a complete list of options, with the pros and cons of each. The only option I know of is to buy a Solarflare network card, which supports OpenOnload. What other options should I consider, and what are the pros and cons of each? The question is pretty simple: what is the best way to bypass the network stack?

    Update:

        OpenOnload: achieves performance improvements in part by performing network processing at user-level, bypassing the OS kernel entirely on the data path.

        Intel DDIO: allows Intel® Ethernet Controllers and adapters to talk directly with the processor cache of the Intel® Xeon® processor E5.

    What's the key difference between these technologies? Do they do roughly the same things? I like Intel DDIO much better because it's much easier to use, while OpenOnload requires a lot of installation and tuning. Is a good OpenOnload application much faster than a good Intel DDIO application?

    Read the article

  • sub application and virtual directory file permissions

    - by Zeus
    I have a website set up in IIS7, exampledomain.com. Under the application exampledomain.com lives a sub-application, cms. In a rather convoluted way, we have content in our cms system in this sub-app, under cms\content\{generatedfoldername}. So to access an image in this content, the full URL would be http://www.exampledomain.com/cms/cms/content/{generatedfoldername}/image.jpg (yes, cms twice...), and this works just fine.

    Now, we have a virtual directory under the parent website, called stuff, which points at the content of the cms. So I should be able to get to the image using the URL http://www.exampledomain.com/stuff/{generatedfoldername}/image.jpg. Unfortunately this gives a server 500 error: "There is a problem with the resource you are looking for, and it cannot be displayed." While you do have to log into the cms system to access any of the admin pages within, I don't think the image files are protected by login, or else the first example URL wouldn't work, right? Also, it's a server 500 error rather than a 403. I'm sure I must be missing something obvious here: will the virtual directory be using the permissions defined in the parent application, or the sub-application to which it is pointing? Or are there some other permissions I may have missed?

    Sorry, that was a bit long; thanks for reading all the way down here! (I also must point out that I'm pretty new to the server management stuff.)

    Edit: also, we have <location path="." inheritInChildApplications="false"> specified in the web.config of the parent app, so it's hopefully not the issue described in this config file hierarchy article.
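
    Before guessing at permissions, it usually pays to make IIS7 say what the 500 actually is. A sketch of a temporary web.config switch for the parent site (revert once the real error is visible; detailed errors should not stay enabled for remote users):

        <configuration>
          <system.webServer>
            <httpErrors errorMode="Detailed" />
          </system.webServer>
        </configuration>

    Failing that, IIS7's Failed Request Tracing will log the failing module and error code for requests to /stuff/.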

    Read the article

  • Apache forwarding to tomcat shows a blank page

    - by MNS
    I have an application running on Tomcat at http://www.example.com:9090/mycontext. The host name in server.xml points to www.example.com; I do not have localhost anymore. I am using Apache to forward requests to Tomcat using mod_proxy. Things work fine as long as the ProxyPass path is /mycontext: the server name set up in the virtual host is www.abc.com, and http://www.abc.com/mycontext works fine. However, I would like to ignore the context path and simply use http://www.abc.com/ to forward requests to http://www.example.com:9090/mycontext. When I do this, Apache shows me a blank page. What am I missing here? I have not changed anything in server.xml except the default host, which is now www.example.com.

        <VirtualHost *:80>
            ServerName www.abc.com
            ProxyRequests Off
            ProxyPreserveHost On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / http://www.example.com:9090/mycontext
            ProxyPassReverse / http://www.example.com:9090/mycontext
        </VirtualHost>

    Thanks
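
    Two things are worth checking here. With ProxyPreserveHost On, Tomcat receives Host: www.abc.com, which no longer matches the www.example.com <Host> in server.xml, so requests can land on an empty default host. And in mod_proxy, when the ProxyPass source path ends in a slash the target should too, otherwise paths get glued together oddly. A sketch:

        <VirtualHost *:80>
            ServerName www.abc.com
            ProxyRequests Off
            ProxyPreserveHost Off
            ProxyPass / http://www.example.com:9090/mycontext/
            ProxyPassReverse / http://www.example.com:9090/mycontext/
        </VirtualHost>

    If the application generates absolute links containing /mycontext, those will still point outside the proxied space; that part usually needs mod_proxy_html or a change to the app's context path.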

    Read the article

  • Using screen to monitor non-interactive scripts (or some other solution)

    - by Michael
    I have some autonomous scripts that run commands on remote machines over ssh. These scripts rely on getting stdout, stderr, and the return code of each command run. I want to be able to monitor the progress of the scripts on each target machine so that I can see if something has hung and possibly intervene if necessary. My initial idea was to have the scripts run commands in a screen session, so that the person monitoring could simply attach to the session with screen -x. However, it was hard to do that from a script since screen is an interactive program. I can send a command to the screen session with screen -S session -X stuff "command^M", but then I don't get the output and return code that I need back. My second idea was to put script /path/to/log in ~/.bash_profile and log the entire session to a file. Then the monitoring person could simply tail the log file. However, this doesn't provide the interactivity that I was looking for. Any ideas on how to solve this problem?
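
    One way to keep stdout/stderr and the return code for the script while still leaving a live trace a person can follow is to tee everything into a log file. A minimal sketch (host, command, and log path are assumptions):

        #!/bin/bash
        set -o pipefail   # pipeline status reflects ssh's failure, not tee's success
        LOG=/var/log/remote-run.log
        ssh user@target 'some_command' 2>&1 | tee -a "$LOG"
        RC=$?
        echo "[$(date)] some_command rc=$RC" >> "$LOG"

    The monitoring person then just runs tail -f /var/log/remote-run.log, which gives the visibility of the screen approach without fighting screen's interactivity.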

    Read the article

  • ProxyPass for specific vhost

    - by Steve Robbins
    I have a web server that is set up to dynamically serve different document roots for different domains:

        <VirtualHost *:80>
            <IfModule mod_rewrite.c>
                # Stage sites :: www.[document root].server.company.com => /home/www/[document root]
                RewriteCond %{HTTP_HOST} ^www\.[^.]+\.server\.company\.com$
                RewriteRule ^(.+) %{HTTP_HOST}$1 [C]
                RewriteRule ^www\.([^.]+)\.server\.company\.com(.*) /home/www/$1/$2 [L]
            </IfModule>
        </VirtualHost>

    This makes it so that www.foo.server.company.com will serve the document root of server.company.com:/home/www/foo/. For one of these sites, I need to add a ProxyPass, but I only want it to be applied to that one site. I tried something like:

        <VirtualHost *:80>
            <Directory /home/www/foo>
                UseCanonicalName Off
                ProxyPreserveHost On
                ProxyRequests Off
                ProxyPass /services http://www-test.foo.com/services
                ProxyPassReverse /services http://www-test.foo.com/services
            </Directory>
        </VirtualHost>

    But then I get these errors:

        ProxyPreserveHost not allowed here
        ProxyPass|ProxyPassMatch can not have a path when defined in a location.

    How can I set up a ProxyPass for a single virtual host?
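
    Those errors point at scope: ProxyPreserveHost is valid at server or virtual-host level (not in <Directory>), and a ProxyPass inside a <Location> must not repeat the path. Since vhosts are selected by Host header, one option is a dedicated vhost whose ServerName matches only that site, leaving the catch-all rewrite vhost for everything else. A sketch using the names from the question:

        <VirtualHost *:80>
            ServerName www.foo.server.company.com
            DocumentRoot /home/www/foo
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass /services http://www-test.foo.com/services
            ProxyPassReverse /services http://www-test.foo.com/services
        </VirtualHost>

    With name-based virtual hosting, requests for that host name hit this vhost and its ProxyPass; all other *.server.company.com names continue to fall through to the generic rewrite block.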

    Read the article

  • Finding bluetooth link key in Win7, to double pair a device on dualboot computer

    - by Ilari Kajaste
    How can I dig up the Bluetooth link key for a paired device in Win7? Is this something that is dependent on the Bluetooth stack I'm using (Toshiba), or is there a generic place to store these in Win7?

    Note: I'm not talking about the six-digit code usually typed by the user during pairing - that is worthless, since it's discarded after the pairing process. What I mean is the 128-bit link key that the devices exchange during pairing and use thereafter to encrypt all their Bluetooth traffic.

    Background: I dual-boot Win7 / Ubuntu on my laptop, and I would like to have my phone paired to both OSes. Since the dual-booting computer has only one Bluetooth adapter and thus only one Bluetooth address, I cannot do two pairings to the phone, since on the second pairing (Windows) the phone just replaces the previous pairing (Linux) to the same Bluetooth address. A thread on the Ubuntu forums pointed me to what I have to do: pair first on Linux, then on Windows, and then replace the link key on the Linux side with the one Windows negotiated. I can find the Linux-side pairing key in /var/lib/bluetooth/[BD_ADDR]/linkkeys - no problems there. However, on the Windows side I can't find the key. According to the forum post, on the Windows side the key should be in:

        SYSTEM\ControlSet002\services\BTHPORT\Parameters\Keys\[BD_ADDR]

    but while that registry key does exist, it has no subkeys. (And a similar registry path in ControlSet001 didn't have any subkeys either.) One thing I've been instructed to do is to capture all events during pairing with Sysinternals Process Monitor. I did this, but I haven't been able to find any useful information in the captured events, not even by exporting the data to a huge XML and grepping that with the BD_ADDRs (with or without colons).

    So how could I find the link key for a paired device in Win7? Some reference information: Wikipedia: Bluetooth, Security Now: Bluetooth security.
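
    One detail that trips people up: the ...\BTHPORT\Parameters\Keys subtree is ACL'd so that only the SYSTEM account can read it, and regedit running as a mere administrator shows it as empty. A common workaround is launching regedit as SYSTEM with Sysinternals PsExec; a sketch:

        rem from an elevated command prompt; -s = run as SYSTEM, -i = interactive
        psexec -s -i regedit.exe

    Viewed that way, the per-device link keys (if the stack stores them there - the Microsoft stack does; Toshiba's may keep its own store) appear as REG_BINARY values named after each BD_ADDR.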

    Read the article

  • Hyper-V VSS writer not making current copies

    - by Martinnj
    I'm using diskshadow to back up live Hyper-V machines on a Windows 2008 server. The backup consists of 3 scripts: the first creates the shadow copies and exposes them, the second uses robocopy to copy them to a remote location, and the third unexposes the shadow copies again. The first script - the one that runs correctly but fails to do what it's supposed to:

        # DiskShadow script file to backup VM from a Hyper-V host
        # First, delete any shadow copies of the drives. System Drives needs to be included.
        Delete Shadows volume C:
        Delete Shadows volume D:
        Delete Shadows volume E:
        # Ensure that shadow copies will persist after DiskShadow has run
        set context persistent
        # make sure the path already exists
        set verbose on
        begin backup
        add volume D: alias VirtualDisk
        add volume C: alias SystemDrive
        # verify the "Microsoft Hyper-V VSS Writer" writer will be included in the snapshot
        # NOTE: The writer GUID is exclusive for this install/machine, must be changed on other machines!
        writer verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
        create
        end backup
        # Backup is exposed as drive X: make sure your drive letter X is not in use
        Expose %VirtualDisk% X:
        Exit

    The next is just a robocopy and then an unexpose. Now, when I run the above script, I get no errors from it, except that the "BITS" writer has been excluded because none of its components are included. That's okay because I really only need the Hyper-V writer. I also double-checked the GUID for the writer; it's correct. During the time when the Hyper-V writer becomes active, 2 things will happen on the guest machines: the Debian/Linux machine will go to a saved state and restore when done, all fine; the Windows guests will be "creating vss snapshot-sets" or something similar.

    Then X: gets exposed and I can copy the .vhd files over. The problem is, for some reason, the VHD files I get over seem to be old copies: they miss files, users and updates that are on the actual machines. I also tried putting the machines in a saved state manually; it didn't change the outcome. I hope someone here has an idea of how to solve this.
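
    When the copied VHDs come out stale, the first thing worth confirming is that the Hyper-V writer is actually healthy at snapshot time, and that the guests' Integration Services (which provide the in-guest VSS requestor for online backups) match the host version. A quick check on the host:

        vssadmin list writers
        rem look for "Microsoft Hyper-V VSS Writer": State should be Stable
        rem and Last error should be No error

    If the writer reports an error, or a guest lacks current Integration Services, the backup can quietly fall back to an older or offline-style snapshot rather than a fresh one.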

    Read the article

  • IIS 6.0 https not working "connection was reset"

    - by cad
    Application server: Windows Server 2003 SP2 with IIS 6.0.

    IIS has a "Default Web Site" (port 18000, SSL 443, ID=1) with a certificate created by me. I have a specific site called "scj.galaxy.Weekly" (port 80, SSL 443, ID=1272369728) that is working fine. I have an entry in windows/system32/drivers/etc/hosts that links galaxy.Weekly.scjdev.ds to the server IP, on both my local machine and the application server.

    These sites work:

        http://scj.galaxy.weekly/test.html
        http://scj.galaxy.weekly/test.aspx

    But https://scj.galaxy.weekly/test.html fails. The error message is:

        The connection was reset
        The connection to the server was reset while the page was loading.

    The certificate was working fine for months. It was created with something similar to this:

        Selfssl /N:CN=*.scjdev.ds /V:3650 /S:1 /P:443

    I have tried several options and none of them are working:

    1) Create a certificate only in "Default Web Site" and link it to SecureBindings from the command prompt:

        cscript adsutil.vbs set /w3svc/1272369728/SecureBindings ":443:galaxy.Weekly.scjdev.ds"

    2) Create a certificate only in the "Galaxy" site and link it to SecureBindings.

    3) Create a certificate in both and link them to SecureBindings.

    Probably I am missing a step or something, but I can't see it. Here is the relevant config of the Galaxy site:

        <IIsWebServer Location="/LM/W3SVC/1272369729" AuthFlags="0"
            LogPluginClsid="{FF160663-DE82-11CF-BC0A-00AA006111E0}"
            SSLCertHash="c36a514a0be90fbc121d9c19bb052842289d5aee" SSLStoreName="MY"
            SecureBindings=":443:galaxy.Weekly.scjdev.ds" ServerAutoStart="TRUE"
            ServerBindings=":80:galaxy.Weekly.scjdev.ds"
            ServerComment="galaxy.Weekly.scjdev.ds">
        </IIsWebServer>
        <IIsWebVirtualDir Location="/LM/W3SVC/1272369729/root"
            AccessFlags="AccessRead | AccessScript" AppFriendlyName="Default Application"
            AppIsolated="2" AppRoot="/LM/W3SVC/1272369729/Root"
            AuthFlags="AuthAnonymous | AuthNTLM" DefaultDoc="Default.aspx"
            DirBrowseFlags="EnableDirBrowsing | DirBrowseShowDate | DirBrowseShowTime |
                DirBrowseShowSize | DirBrowseShowExtension | DirBrowseShowLongDate"
            Path="D:\Webs\Galaxysite" ScriptMaps="some config...">
        </IIsWebVirtualDir>
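
    One way to take the metabase out of the equation is to ask http.sys directly what is bound on port 443; on Server 2003 the httpcfg utility from the Support Tools does that. A sketch:

        httpcfg query ssl
        rem each entry shows IP:port plus the certificate hash; the hash should
        rem match the SSLCertHash shown in the metabase dump above

    If 443 has no entry, or the hash belongs to an old certificate, a reset connection on HTTPS is exactly the symptom you'd expect.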

    Read the article

  • Microsoft Server 2003 Explorer shows duplicate local shares

    - by user52167
    Hi folks, I am new here and I could really use some advice, please. I am having a problem with our file server. When I try to browse the shared folders using Explorer, several of the shared folders all appear to have the same name. Whenever I attempt to rename one of the affected folders, all the affected folders' names also change. Our file server is Windows Server 2003 R2, and I am logged on directly to the server using Remote Desktop.

    When I open the folder, all is as it should be: the proper content is there, and the address bar displays the correct folder name and path. The share names are correct, so everything that needs to access the shared folders/files can do so. Also, when I browse to the folder using the command line, all is as it should be there too. The only issue seems to be the incorrect display name when browsing using Explorer. Can anyone offer any advice or help as to how to resolve this issue, please? It would be most appreciated. Thanks
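
    The symptoms described (correct path in the address bar, correct name at the command line, identical display names only in Explorer) are the classic signature of a desktop.ini carrying a LocalizedResourceName entry, which Explorer displays instead of the real folder name. A hedged check, with an assumed share path:

        rem desktop.ini is hidden+system, so list it explicitly
        dir /a:hs "D:\Shares\*\desktop.ini"
        rem inspect a suspect file; if LocalizedResourceName is set,
        rem remove that line (or delete the file) and refresh Explorer
        type "D:\Shares\SomeFolder\desktop.ini"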

    Read the article

  • Last step in HDD Recovery (fixing windows)

    - by Atom Computing
    My dad's hard drive got corrupted, which was a result of many bad sectors. Anyway, I made a clone of the drive and have now repaired it totally (recreating the MBR and MFT) and run a series of chkdsks on it. I can now see all the files and folders on it, and it is all intact. I currently have it as a slave in my computer (where I was doing all the repairs).

    When putting it back into its own computer, it comes up with "A disk read error occurred: Press Ctrl + Alt + Del to Restart". I don't know why this is happening, but I think it might have something to do with file permissions. I have tried a start-up recovery with the Vista boot CD and it found no problems. When trying to apply file permissions (and creating file perms for the SYSTEM group, as it didn't have any for the SYSTEM group), it couldn't apply them for some of the System32 folder files. I have tried applying them as admin and with as powerful privileges as I can get. All to no avail.

    When it is in my PC, I can boot it up (I added it into my bootloader), and it boots up fine, except that when it logs in it comes up with the error:

        Rundll32.exe - Windows cannot access the specified device, path or file.
        You may not have the appropriate permissions to access the item

    This message keeps coming back and nothing loads at all. Any help would be greatly received, as I have got so far with the data recovery and want to avoid a reformat at all costs, due to the vast number of programs installed, and I don't have much time on my hands!
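
    Since the remaining damage looks like broken system files or ACLs rather than disk structure, one relatively safe next step is an offline system file check from the Vista DVD's recovery command prompt, which verifies and repairs protected files without booting the damaged install. A sketch; the drive letters as seen from the recovery environment are assumptions:

        sfc /scannow /offbootdir=C:\ /offwindir=C:\Windows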

    Read the article

  • screen scraper templates for various websites

    - by intuited
    I'm looking specifically for a convenient way to locally archive posts from this and other similar sites. I'd like to separate the question itself from the answers, or maybe crop the question and store it, keeping the page title. Obviously I don't need to store the menu or the various other site interface chrome. The best way to do this would seem to be to associate an XSLT template with a match on the URL and use that template to pull the various relevant pieces of information and format them.

    My two-part question:

    1. Is there a tool specifically built for this task? I.e. something that takes a URL and checks it against a map of path-matching expressions to templates, and outputs the result of applying the template to that resource? xmlto seems to be most of the way there, and could probably just be called from a script that does the pattern-matching, but something already integrated would be more convenient.

    2. Is such a URL_pattern-to-XSLT_template map publicly available somewhere?

    Question 2.5: Is it legal to do this with sites like this one that have public licenses on their content?
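
    Absent an integrated tool, the dispatch part is a few lines of shell around xsltproc. A minimal sketch: the template file names are assumptions, and tidy is used to coerce real-world HTML into XML first:

        #!/bin/bash
        # archive.sh URL - choose an XSLT by URL pattern, apply it, print the result
        URL=$1
        case "$URL" in
            *stackoverflow.com/questions/*|*serverfault.com/questions/*)
                TEMPLATE=question.xsl ;;           # assumption: your template file
            *)  echo "no template for $URL" >&2; exit 1 ;;
        esac
        # tidy turns messy HTML into XML that xsltproc will accept on stdin
        curl -s "$URL" | tidy -q -asxml --numeric-entities yes 2>/dev/null \
            | xsltproc "$TEMPLATE" -

    The map asked about in question 2 would then just be more case branches, or a lookup file feeding the same structure.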

    Read the article

  • Fix elements within objects at their position? (Prevent movement when resizing)

    - by Skadier
    I would like to know if it is possible to fix elements at their absolute position within custom elements in InDesign CS5. I created a kind of speech bubble, and I would like to place a stripline within this bubble to separate two content areas. Just a little scheme to show the desired layout, as pseudo-markup:

        <speech-bubble>
          <textbox>HEADER SECTION</textbox>
          <stripline>
          <textbox>Some other text</textbox>
        </speech-bubble>

    I created something like this, but with two separate elements which aren't connected, so I have to select both of them in order to move the whole bubble. Then I tried to connect them using Object > Paths > Create linked path, but then the stripline moves and the HEADER SECTION moves too. All in all, I would like to have a speech bubble which can be resized in order to hold more text, but it shouldn't make the HEADER SECTION larger or move the stripline. Hope you understand what I mean :D Thanks in advance!

    Read the article
