Search Results

Search found 87891 results on 3516 pages for 'server migration'.


  • Running an app remotely from a shared folder with PsExec

    - by Stephane
    I am actually not sure this is possible, but let's see: I have a script that runs on a build server; call it server A. It drops the binaries to a shared folder on server B, and I want to run the program on server C. Using caspol I can allow the executable to be run remotely, which means that from B I can run \\C\shared\my.exe. What I want is, starting from A, to run \\C\shared\my.exe on B:

        SysInternals\PsExec.exe -u username -p password -accepteula \\ServerC -i 0 -d -w \\ServerB\Nightly\Server \\ServerB\Nightly\Server\server.exe

    The user has all the necessary rights, but the -w (working directory) option apparently wants a path local to the server I point to. Any idea?
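    One possible workaround sketch, relying on cmd's pushd, which automatically maps a UNC path to a temporary drive letter so that -w can be dropped; this assumes the supplied account can reach \\ServerB from the target machine:

        REM Sketch only: pushd maps \\ServerB\Nightly\Server to a temp drive letter
        REM on the target and changes into it before launching the program.
        SysInternals\PsExec.exe -u username -p password -accepteula \\ServerC -i 0 -d cmd /c "pushd \\ServerB\Nightly\Server && server.exe"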

    Read the article

  • How do I set up an anonymous autoversioning mod_dav_svn server?

    - by Chris R
    I would like to create a DAV SVN server with autoversioning and no access control of any kind. I have experimented with several variations on this, but every one of them eventually runs into the error "Anonymous lock creation is not allowed." So, as a fallback option, I would like to configure my SVN Location with default credentials. Is this possible? Is there a better way to do what I'm trying to do?
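    One hedged possibility for "default credentials" is mod_authn_anon, which hands every visitor a fixed REMOTE_USER so that locks have an owner; a sketch, assuming the module is loaded and with a placeholder repository path:

        # Sketch: anonymous visitors authenticate as the fixed user "anonymous"
        <Location /svn>
            DAV svn
            SVNPath /var/svn/repo
            SVNAutoversioning on
            AuthType Basic
            AuthName "Subversion"
            AuthBasicProvider anon
            Anonymous anonymous
            Anonymous_NoUserID on
            Anonymous_MustGiveEmail off
            Anonymous_VerifyEmail off
            Require valid-user
        </Location>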

    Read the article

  • Spam in Whois: How is it done and how do I protect my domain?

    - by user2964971
    Yes, there are answered questions regarding spam in whois, but it is still unclear to me: How do they do it? How should I respond? What precautions can I take? For example, the whois output for google.com includes:

        [...]
        Server Name: GOOGLE.COM.ZOMBIED.AND.HACKED.BY.WWW.WEB-HACK.COM
        IP Address: 217.107.217.167
        Registrar: DOMAINCONTEXT, INC.
        Whois Server: whois.domaincontext.com
        Referral URL: http://www.domaincontext.com

        Server Name: GOOGLE.COM.ZZZZZ.GET.LAID.AT.WWW.SWINGINGCOMMUNITY.COM
        IP Address: 69.41.185.195
        Registrar: TUCOWS DOMAINS INC.
        Whois Server: whois.tucows.com
        Referral URL: http://domainhelp.opensrs.net

        Server Name: GOOGLE.COM.ZZZZZZZZZZZZZ.GET.ONE.MILLION.DOLLARS.AT.WWW.UNIMUNDI.COM
        IP Address: 209.126.190.70
        Registrar: PDR LTD. D/B/A PUBLICDOMAINREGISTRY.COM
        Whois Server: whois.PublicDomainRegistry.com
        Referral URL: http://www.PublicDomainRegistry.com

        Server Name: GOOGLE.COM.ZZZZZZZZZZZZZZZZZZZZZZZZZZ.HAVENDATA.COM
        IP Address: 50.23.75.44
        Registrar: PDR LTD. D/B/A PUBLICDOMAINREGISTRY.COM
        Whois Server: whois.PublicDomainRegistry.com
        Referral URL: http://www.PublicDomainRegistry.com

        Server Name: GOOGLE.COMMAS2CHAPTERS.COM
        IP Address: 216.239.32.21
        Registrar: CRAZY DOMAINS FZ-LLC
        Whois Server: whois.crazydomains.com
        Referral URL: http://www.crazydomains.com
        [...]
        >>> Last update of whois database: Thu, 05 Jun 2014 02:10:51 UTC <<<
        [...]
        >>> Last update of WHOIS database: 2014-06-04T19:04:53-0700 <<<
        [...]

    Read the article

  • Web application and remote storage of files

    - by Matt
    I have a web application that can store lots and lots of files on the server, i.e. users upload data to it. The files are stored below a particular storage path. The web host will be an IBM xSeries 345; however, its disks are really expensive, so we would like to put the files on a less expensive server. Now here is the question: should I use an NFS mount on the IBM server pointing to a path on the storage server, or should I write some scripts to upload the files to the storage server instead? Both the storage server and the web host are on the same network; only the web server is visible to the world. Is NFS performance suitable for an expected low to moderately loaded server?
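    If the NFS route wins, the web-host side is a single mount; a sketch with placeholder host name, export, and mount point:

        # /etc/fstab on the web host; tune rsize/wsize for the LAN if needed
        storage01:/export/uploads  /var/www/storage  nfs  rw,hard,noatime  0  0

    For upload files written and read mostly sequentially over the same LAN, NFS throughput is generally adequate at low to moderate load; heavy small-file metadata traffic is where it tends to hurt.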

    Read the article

  • How do you mimic a UK server on a VM?

    - by Strop
    The product I'm working on has both a US and a UK version. We have a service that is not picking up the correct resx file on our UK test server. After tracking it down through the code, we found that CurrentCulture is correctly set to "en-GB", but CurrentUICulture is still "en-US". Obviously we don't have something set correctly. How do you correctly set up a UK VM from the US?
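    If the service is an ASP.NET application (an assumption here), the cultures can be pinned in web.config instead of being inherited from the OS, which sidesteps the VM setup entirely:

        <!-- Sketch: forces both CurrentCulture and CurrentUICulture to en-GB -->
        <system.web>
            <globalization culture="en-GB" uiCulture="en-GB" />
        </system.web>

    The symptom fits the defaults: CurrentCulture follows the regional settings (easily switched to UK), while CurrentUICulture follows the installed OS UI language, which stays en-US on a US-built image.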

    Read the article

  • How do I fix the Apache error "client denied by server configuration"?

    - by explorex
    I am using cPanel and Apache, and I am seeing the following error in my error_log:

        [Wed Feb 02 09:06:04 2011] [error] [client 110.34.4.242] client denied by server configuration: /home/websmart/public_html/.htaccess

    My project is based on PHP 5.3 using the Zend Framework. My .htaccess file contains:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^.*$ index.php [NC,L]

    Can anyone tell me what causes this error, and how I should alter my .htaccess file to resolve it?
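    That error usually means a <Directory> block in the server config, not the .htaccess itself, is denying access to the path in the message; a sketch of the common Apache 2.2 fix for the vhost:

        <Directory /home/websmart/public_html>
            Options FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>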

    Read the article

  • Which web server architecture do you think is better?

    - by ngache
    Two candidate setups: (1) use Apache to serve the dynamic requests that need to be processed by PHP, with nginx serving the static files; (2) use nginx to serve all requests. So the key point is: which of them is more efficient at serving dynamic requests? (We have no doubt that nginx is much better than Apache at serving static files.)
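    A minimal sketch of the first setup, with the back-end port and paths as assumptions:

        server {
            listen 80;
            root /var/www/html;

            # Static assets served by nginx straight from disk
            location ~* \.(css|js|jpe?g|png|gif|ico)$ {
                expires 30d;
            }

            # Everything else is handed to Apache+mod_php on a local port
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }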

    Read the article

  • Apache stops responding to http requests -- https continues to work

    - by Apropos
    Okay, very strange problem here. I recently updated from Apache 2.2.17 to 2.4.2, mostly to try to get name-based SSL VirtualHosts working (although they should have been working on 2.2.17). The server is Windows 2008 R2 (so x64 by definition) running PHP 5.4.3 and MySQL 5.1.40 (outdated, I know). When I launch the server, it initially works fine: it responds to all requests, VirtualHosts all in order. However, after an uncertain amount of time (usually only a few minutes, but sometimes hours), it stops responding to regular HTTP requests on any VirtualHost. HTTPS continues to work. There are no errors in the log, and nothing appears in the access logs when I attempt to connect. The intermittent nature makes the source of this hard to pin down. Removing all SSL-based VirtualHosts seemingly increased stability (still responding to HTTP requests twelve hours later), though that could be mere coincidence. The entirety of the SSL VirtualHost is as follows, should there happen to be a problem with it:

        <VirtualHost *:443>
            DocumentRoot "C:\Server\www\virtualhosts\mysite.net"
            ErrorLog logs/ssl.mysite.net-error_log
            CustomLog logs/ssl.mysite.net-access_log common env=!dontlog
            SSLEngine on
            SSLProtocol all -SSLv2
            SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM
            SSLCertificateFile C:/Server/bin/apache/apache2.4.2/conf/ssl/server.crt
            SSLCertificateKeyFile C:/Server/bin/apache/apache2.4.2/conf/ssl/server.key
            SSLCertificateChainFile C:/Server/bin/apache/Apache2.4.2/conf/ssl/sub.class1.server.ca.pem
            SSLCACertificateFile C:/Server/bin/apache/Apache2.4.2/conf/ssl/ca.pem
        </VirtualHost>

    Any ideas what I'm missing?
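    One commonly reported workaround for Apache 2.4.x on Windows intermittently ignoring plain-HTTP connections, offered here as a sketch to try rather than a confirmed fix for this case, is disabling the AcceptEx/sendfile optimizations in httpd.conf:

        # Workaround sketch for Win32 accept()/sendfile quirks; easy to revert
        AcceptFilter http none
        AcceptFilter https none
        EnableSendfile off
        EnableMMAP off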

    Read the article

  • How can non-admins view which files are open on a file server?

    - by Josh
    I'm on a Windows workstation, and I want a list of the files that are open over the network on a Windows server. The Shared Folders MMC snap-in does this visually, and Sysinternals' PsFile does it from the command line, but by default only for admins. I want to let regular users do this too. What permissions do I need to grant them?

    Read the article

  • How do I get an email notification when a process in a screen session on Ubuntu stops?

    - by Manoj2411
    I have an application running on Ubuntu 12.04.2 LTS, and I run supporting application servers, such as a mailman server or a faye server, inside a screen session. The problem is that at times a process running in screen stops, and my application crashes because of it. I want to be notified whenever the faye server or mailman server running in screen stops. I am using DigitalOcean and have already set up a postfix server.
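    Since postfix is already in place, a cron-driven watchdog is one minimal sketch; the process pattern and address are placeholders, and the mail command assumes mailutils or bsd-mailx is installed:

        #!/bin/bash
        # Run from cron every minute; sends a mail for each minute the process is down.
        if ! pgrep -f "faye" > /dev/null; then
            echo "faye server is down on $(hostname) at $(date)" \
                | mail -s "faye server stopped" you@example.com
        fi

    Tools such as monit provide the same check, plus automatic restarts, out of the box.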

    Read the article

  • How to implement dynamic web blacklists in ISA Server 2006?

    - by Massimo
    I'm looking for a way to implement web site blacklisting in ISA server 2006. I know how to manually define a destination set and block access to it, and I also know how to import XML lists. What I'm looking for is some publicly available and actively updated blacklist (i.e. "porn sites", or "gamble sites") from some trustworthy source, and for a way to automatically get updated versions when they are released and use them in ISA. Can this be done, and how?

    Read the article

  • ScriptAlias makes requests match too many Location blocks. What is going on?

    - by brain99
    We wish to restrict access on our development server to users who have a valid SSL client certificate. We are running Apache 2.2.16 on Debian 6. However, for some sections (mainly git-http, set up with gitolite on https://my.server/git/) we need an exception, since many git clients don't support SSL client certificates. I have succeeded in requiring client certificate authentication for the server, and in adding exceptions for some locations. However, it seems this does not work for git. The current setup is as follows:

        SSLCACertificateFile ssl-certs/client-ca-certs.crt
        <Location />
            SSLVerifyClient require
            SSLVerifyDepth 2
        </Location>
        # this works
        <Location /foo>
            SSLVerifyClient none
        </Location>
        # this does not
        <Location /git>
            SSLVerifyClient none
        </Location>

    I have also tried an alternative solution, with the same results:

        # require authentication everywhere except /git and /foo
        <LocationMatch "^/(?!git|foo)">
            SSLVerifyClient require
            SSLVerifyDepth 2
        </LocationMatch>

    In both cases, a user without a client certificate can perfectly well access my.server/foo/, but not my.server/git/ (access is refused because no valid client certificate is given). If I disable SSL client certificate authentication completely, my.server/git/ works fine.

    The ScriptAlias problem: gitolite is set up using the ScriptAlias directive, and I have found that the problem occurs with any similar ScriptAlias:

        # Gitolite
        ScriptAlias /git/ /path/to/gitolite-shell/
        ScriptAlias /gitmob/ /path/to/gitolite-shell/
        # My test
        ScriptAlias /test/ /path/to/test/script/

    Note that /path/to/test/script is a file, not a directory; the same goes for /path/to/gitolite-shell/. My test script simply prints out the environment, super simple:

        #!/usr/bin/perl
        print "Content-type:text/plain\n\n";
        print "TEST\n";
        @keys = sort(keys %ENV);
        foreach (@keys) {
            print "$_ => $ENV{$_}\n";
        }

    It seems that if I go to https://my.server/test/someLocation, any SSLVerifyClient directives are applied that sit in Location blocks matching /test/someLocation or just /someLocation. If I have the following config:

        <LocationMatch "^/f">
            SSLVerifyClient require
            SSLVerifyDepth 2
        </LocationMatch>

    then https://my.server/test/foo requires a client certificate, but https://my.server/test/somethingElse/foo does not. Note that this only seems to apply to the SSL configuration. The following has no effect whatsoever on https://my.server/test/foo, although it does block access to https://my.server/foo:

        <LocationMatch "^/f">
            Order allow,deny
            Deny from all
        </LocationMatch>

    This presents a major problem for cases where I have some project running at https://my.server/project (which has to require SSL client certificate authorization) and a git repository for that project at https://my.server/git/project (which cannot require an SSL client certificate). Since the /git/project URL also gets matched against /project Location blocks, such a configuration seems impossible given my current findings.

    Question: why is this happening, and how do I solve my problem? In the end, I want to require SSL client certificate authorization for the whole server except for /git and /someLocation, with as little configuration as possible (so I don't have to modify the configuration each time something new is deployed or a new git repository is added). Note: I rewrote my question (instead of just adding more updates at the bottom) to take into account my new findings and hopefully make this clearer.

    Read the article

  • DNS NS and domain clarification

    - by thejartender
    I am really trying to get my home web server up, and I don't seem to be succeeding. The web server within my host system is running my web application, which is viewable over the WAN at the current ISP IP, 88.89.190.171, indicating that the web app is fine and that the router ports are forwarded. I have set up a DNS server on this system with a single name server in the network, and I can ping it with ping ns.thejarbar.org. I have registered this private name server at my current hosting provider. My domain (thejarbar.org) is obviously registered, and I have pointed it to my name server. My question is whether it is simply a matter of waiting for propagation before I can ping my domain. Another way of asking this: does the fact that my name server is discoverable indicate that I have set it up correctly to be used? I have tested with dig and dig -x on the host and have A records for the name server. The server is not the authoritative server, so I am concerned that this may be the reason why my site is not discoverable. Is there anything else I may still need to do? I currently have only one name server, but should this succeed I will be purchasing a more stable secondary system to host my development applications. This is my best chance at getting work (freelance development, due to illness), and this I feel is the last step I need to succeed. Please note that this is temporarily a home server; I will most likely be using it as part of a professional setup very soon, so I will likely have to repeat this question in a professional context in a few weeks, as nothing will be different other than having the server running elsewhere. I am using bind9 on Ubuntu 12.10, and my records are:

        $TTL 3D
        @ IN SOA ns.thejarbar.org. email. (
            13112012
            28800
            3600
            604800
            38400 );
        thejarbar.org. IN A 10.0.0.42
        @ IN NS ns.thejarbar,org.
        yuccalaptop IN A 10.0.0.19
        ns IN A 10.0.0.42
        gw IN A 10.0.0.138
        www IN CNAME thejarbar.org.

        $TTL 3D
        0.0.10.in-addr.arpa. IN SOA ns.thejarbar.org. email. (
            13112012
            28800
            3600
            604800
            38400 );
        0.0.10.in-addr.arpa. IN NS ns.thejarbar.org.
        42 IN PTR thejarbar.org.
        19 IN PTR yuccalaptop.thejarbar.org.
        138 IN PTR gw.thejarbar.org.

    My localhost IP is 10.0.0.42; I wish for this to be my host and name server.
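    Two quick checks from an outside machine narrow this down (names as in the question):

        dig NS thejarbar.org @8.8.8.8          # is the registry delegation visible yet?
        dig A thejarbar.org @ns.thejarbar.org  # does your server answer with the aa flag?

    If the second query is answered authoritatively while the first still returns nothing, the zone works and the delegation simply has not propagated. Note also that the zone publishes 10.0.0.x addresses, which outside clients cannot reach; names meant to be public need the public IP.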

    Read the article

  • Nginx - Redirect any Subdomain to File without Rewriting

    - by Waffle
    Recently I switched from Apache to nginx to increase performance on a web server running Ubuntu 11.10. I have been having trouble figuring out how certain things work in nginx compared to Apache, but one issue has been stumping me and I have not been able to find the answer online. I need to redirect (not rewrite) any sub-domain to a file, but that file needs to receive the sub-domain part of the URL in order to do a database look-up on it. So far I have been able to get any sub-domain to rewrite to that file, but the text of the sub-domain is then lost. For example, I would like test.server.com to redirect to server.com/resolve.php but still remain test.server.com in the address bar. If that is not possible, the minimum I would need is something like test.server.com going to server.com/resolve.php?=test. One of these options must be possible in nginx. My config as it stands right now looks something like this:

        server {
            listen 80; ## listen for ipv4; this line is default and implied
            listen [::]:80 default ipv6only=on; ## listen for ipv6
            root /usr/share/nginx/www;
            index index.php index.html index.htm;
            # Make site accessible from http://localhost/
            server_name www.server.com server.com;
            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to index.html
                try_files $uri $uri/ /index.html;
            }
            location /doc {
                root /usr/share;
                autoindex on;
                allow 127.0.0.1;
            }
            location /images {
                root /usr/share;
                autoindex off;
            }
            #error_page 404 /404.html;
            # redirect server error pages to the static page /50x.html
            #error_page 500 502 503 504 /50x.html;
            #location = /50x.html {
            #    root /usr/share/nginx/www;
            #}
            # proxy the PHP scripts to Apache listening on 127.0.0.1:80
            #location ~ \.php$ {
            #    proxy_pass http://127.0.0.1;
            #}
            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }
            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #location ~ /\.ht {
            #    deny all;
            #}
        }

        server {
            listen 80 default;
            server_name *.server.com;
            rewrite ^ http://www.server.com/resolve.php;
        }

    As I said before, I am very new to nginx, so I have a feeling the answer is pretty simple, but no examples online seem to deal with redirects without rewrites, or with rewriting that keeps the sub-domain. Any help would be most appreciated, and if anyone has a better idea to accomplish what I need, I am also open to it. Thank you very much.
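    A sketch of one way to do this with the directives already used above, in place of the final catch-all server block: a regex server_name captures the sub-domain, and an internal rewrite keeps the browser on test.server.com (the SUBDOMAIN parameter name is an invention for illustration):

        server {
            listen 80;
            server_name ~^(?<sub>.+)\.server\.com$;
            root /usr/share/nginx/www;

            # Internal rewrite: no redirect is sent, so the URL stays test.server.com
            location / {
                rewrite ^ /resolve.php last;
            }

            location ~ \.php$ {
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SUBDOMAIN $sub;  # resolve.php can also parse $_SERVER['HTTP_HOST']
            }
        }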

    Read the article

  • Issues with nginx + Passenger production setup: loading/request time delays

    - by Dani Cela
    Having a bit of an issue relating to request time. I have nginx as a proxy server in front of a Ruby on Rails app running Passenger, and a PostgreSQL database server running on its own VM, separate from the nginx/application server. My issue is that when I access my products page, which does a lot of database queries, the request takes maybe 3-4 seconds. The second I flood the web server with requests, I choke it out and requests take almost 20-30 seconds to process. The Rails server and database server do not crash, and the usage is not that high: each server has more than enough memory, and even CPU usage on the Rails server stays under 85%, which is high but not maxed out. Is my problem related to my nginx proxy server? I don't really know how to fully explain this, so if you have a question please ask it and I can clarify what I mean. EDIT: to see exactly what I mean regarding the database query, see http://207.245.4.215/products
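    If every Passenger worker is stuck on the slow products query, later requests queue behind it, which matches the 20-30 second pile-up. A sketch for the nginx http block to buy headroom while the query itself gets optimized (values are illustrative):

        # Passenger defaults to a pool of 6 application processes;
        # one 3-4s query per request saturates that quickly.
        passenger_max_pool_size 12;
        passenger_min_instances 4;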

    Read the article

  • How do I simulate .htaccess-style auth on IIS without access to the server config?

    - by bgy
    Hi, I need to put a website on a Windows server running IIS, and I'd like to use basic authentication the way I would with Apache, via .htaccess / .htpasswd. I have read here and there that I could do this through the admin tabs of IIS, but I'm not the administrator and I only have FTP access. It seems there is a 'web.config' file where I could do this. Is there a way to set up such things within a config file? I'm not used to working with IIS...
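    For reference, a web.config sketch of IIS 7+ basic authentication; the big caveat is that IIS locks these sections by default, so this only works if the host has unlocked them (an assumption here):

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <security>
              <authentication>
                <!-- Only honored if the host unlocked the authentication sections -->
                <anonymousAuthentication enabled="false" />
                <basicAuthentication enabled="true" />
              </authentication>
            </security>
          </system.webServer>
        </configuration>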

    Read the article

  • Is there a middle ground in monthly server costs?

    - by user41655
    Right now I'm paying a monthly fee of $12.95 for a server, and I'm definitely getting what I pay for (no customer support, outages, slowdowns, glitches, etc.). I know that the good companies (Rackspace) charge much more, hundreds a month. I'm wondering if there is a middle ground somewhere that might be as good as Rackspace but still better than what I have; $50 a month, maybe? If it exists, will the extra $40 a month be worth it?

    Read the article

  • A tale of two user ids: Why does NFS not recognize a new user id?

    - by user76177
    I have two servers running RHEL6. The main server, which I will refer to as server, is a database server. The application server, which I will refer to as client, mounts a directory from server via NFS. There is a user, appuser, on both client and server; however, appuser's id on client is 502, while appuser's id on server is 506. Both users need read and write access to the NFS share. To facilitate this, I made the share owned by appuser on server. Of course, client does not recognize that ownership, since appuser has a different id there. So I did the following: (1) changed the id of the user in /etc/passwd on client to 506; (2) changed the ownership of appuser's $HOME on client back to appuser so that I could log in. Now, when I look at the NFS share from the client side, I see that it is owned by 502, the OLD id for appuser on client. I can't change ownership of the NFS share from client, since that volume physically resides on server. I need the NFS share to show ownership by appuser from both server and client. What step have I missed since changing the appuser id on client? NOTE: I have not rebooted client or done anything else yet.
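    NFSv3 exchanges raw numeric ids, while NFSv4 maps names through rpc.idmapd, which caches its answers; a diagnostic sketch (the mount point is a placeholder, service names are RHEL6's):

        # On both machines: confirm what uid the name now maps to
        id appuser
        # On the client: -n prints raw numeric uids exactly as NFS reports them
        ls -ln /mnt/share | head
        # If the share is mounted with NFSv4, a stale cached mapping can survive
        # the /etc/passwd edit; restart the mapper and remount to clear it
        service rpcidmapd restart
        umount /mnt/share && mount /mnt/share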

    Read the article
