Search Results

Search found 9715 results on 389 pages for 'servers'.


  • How to use symbolic links in Windows Server 2008 R2 across the network (mklink)

    - by server info
    I have one server (Srv1) which holds data with file shares, and its storage is full. Now I have a second server (Srv2) which has a lot more space. I would like to transfer all the data from Srv1 to Srv2 and have links to the new destination. I found mklink very useful here, but unfortunately it does not work over the network, which the documentation also points out. People rely heavily on the paths, so it would be helpful if someone had a pointer for me on how to handle symbolic links across the network with Windows servers. I am running Windows Server 2008. Thanks for any help.
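
    One approach, as a minimal sketch (the share name \\Srv2\Data and the local path are assumptions, not taken from the question): after moving the data, create a local directory symlink on Srv1 that points at the UNC path on Srv2, and enable remote-to-remote symlink evaluation so clients may follow it:

        rem run on Srv1, in an elevated command prompt
        fsutil behavior set SymlinkEvaluation R2R:1 R2L:1
        mklink /D D:\Shares\Data \\Srv2\Data

    The more common production answer is a DFS Namespace, which gives users one stable UNC path no matter which server currently holds the data.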

    Read the article

  • How do I associate server traffic to a domain hosted on that server?

    - by morley
    I have three or four Linux servers, each of which hosts anywhere from 5 to 50 domains. Each domain has its own folder: /www/projectname/web/ Logs go in: /www/projectname/log However, if there's a traffic spike (or, as I see it on my end, a memory usage spike), I'm not sure how to figure out which domain is responsible for the traffic without running tail -f on each of the projects and making an educated guess based on how fast things scroll. There's got to be a better way! There probably is, but I haven't seen it. And the last time I checked, bandwidth monitors only report system-wide load. So if anyone knows how to do this the right way, please let me know. Thanks!
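
    One low-tech starting point, sketched under the assumption that each project keeps an access log at /www/projectname/log/access.log: count request lines per log and rank them, so the spiking domain floats to the top:

        # rank projects by total requests logged (log path layout assumed)
        for log in /www/*/log/access.log; do
            printf '%10d  %s\n' "$(wc -l < "$log")" "$log"
        done | sort -rn | head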

    Read the article

  • Setting up scripts in Amazon EC2 Cloud

    - by racket99
    Hello, I am currently running a few Perl and Python scripts on a Windows PC and would like to port them over to Amazon EC2 servers running 64-bit Linux. The scripts are basic web scrapers that go to a variety of websites, get data and then save it daily as CSV files. I would like to install these in the cloud and get them running in an automated way so that they will run without my intervention. Also, given that I don't want to lose all the data if the instance crashes, I should also upload the CSV files to Amazon S3. Any idea how I can do this? I am not terribly versed in Linux, nor do I know Perl/Python well. What is the best way for me to tackle this?
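
    The usual pattern is a cron entry that runs the scraper and then copies the result to S3. A sketch, assuming the AWS CLI is installed and configured; the script path, output path and bucket name are made up:

        # crontab -e  -- run daily at 06:00 (% must be escaped inside crontab)
        0 6 * * * /usr/bin/python /home/ec2-user/scraper.py && /usr/bin/aws s3 cp /home/ec2-user/out/$(date +\%F).csv s3://my-scraper-bucket/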

    Read the article

  • Configure fallback redis server

    - by snøreven
    I am using Redis as a cache server. Can I somehow configure multiple Redis servers so that the cache stays fully functional (read/write) even if some of them go offline? I looked into master-slave, but the problem I see there is that if the master fails and I allow writes to the slaves, those writes get overwritten once the master is up again, and the master then just serves the old data. The only solution I could come up with was disabling write-to-disk, but that sucks, as I lose everything if I have to restart the master. And I guess the slaves wouldn't be synced anymore if the master is gone.
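
    Redis Sentinel (added in later Redis releases) addresses exactly this: it monitors the master, promotes a slave when the master fails, and reconfigures the old master to rejoin as a slave instead of overwriting the promoted data. A minimal sketch with placeholder addresses:

        # sentinel.conf -- run one sentinel per host; the final 2 is the quorum
        sentinel monitor mycache 10.0.0.1 6379 2
        sentinel down-after-milliseconds mycache 5000
        sentinel failover-timeout mycache 60000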

    Read the article

  • nginx conditional Accept header

    - by manu_v
    Some mobile devices send the following incorrect request to our servers:

        GET / HTTP/1.0
        Accept:
        User-Agent: xxx

    The empty Accept header causes our Ruby on Rails server to throw back a 500 error. In Apache, the following directive allows us to rewrite the header before sending it to the RoR application server, in order to cope with the broken devices:

        RequestHeader edit Accept ^$ "*/*" early

    We're currently setting up nginx, but achieving the same workaround is proving difficult. We are able to set:

        proxy_set_header Accept */*;

    However, this seems to apply unconditionally. Whenever we try:

        if ($http_accept !~ ".") { proxy_set_header Accept */*; }

    it complains: "proxy_set_header" directive is not allowed here. So, using nginx, how can we set the HTTP Accept header to */* when it is empty, before sending the request to the application server?
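
    The map directive is the standard way around this restriction, since map is evaluated per request and its result can feed proxy_set_header unconditionally. A sketch:

        # in the http {} block: default to the client's own Accept header,
        # but substitute */* when it is empty
        map $http_accept $fixed_accept {
            default $http_accept;
            ""      "*/*";
        }

        # in the location {} that proxies to the Rails server
        proxy_set_header Accept $fixed_accept;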

    Read the article

  • Test if the master DNS has transferred its copy to the slave

    - by su55
    Hello, I set up my master and slave using FreeBSD. I'm currently running BIND 9.x, and so far everything is working successfully. Just one small problem: I can't get the master copy of my DNS zone to transfer to the slave server. I included transfer-allow {192.168.1.111;}; // this is the slave server's IP. I ran the rndc reload command to check, but I don't see the copy in /etc/named/master/. Any help would be appreciated, and if you would like the layout of my DNS, I can provide that too.
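
    Two quick checks, sketched with placeholder names and addresses. First, the slave only writes a copy if its named.conf defines the zone as type slave with a writable file path (note also that BIND spells the master-side option allow-transfer, not transfer-allow):

        // on the slave, in named.conf
        zone "example.com" {
            type slave;
            file "/etc/named/slave/example.com.db";
            masters { 192.168.1.110; };
        };

    Second, you can request a transfer by hand from the slave to see whether the master permits it:

        dig @192.168.1.110 example.com AXFR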

    Read the article

  • University Assignment: Datacenter/Networking Infrastructure for Hosting Company [closed]

    - by TCB13
    My university assigned me to design, on paper, a data center for a web hosting company. The company should provide the following services: shared web hosting; dedicated servers; VPS (Virtual Private Server). The bandwidth (as requested) is limited to 10 Gbps. Is there any good book or other material I can read (max 100 pages) about how to design a good data center for hosting: what the best practices are, what should be done from a (logical) network perspective, what security policies should be implemented, and how the data center should be built (physically)? Thank you ;)

    Read the article

  • Advantages of a deployment tool over shell

    - by Jimmy
    Currently I have all of my deployment scripts in shell, which install about 10 programs and configure them. The way I see it, shell is a fantastic tool for this:

      - Modular: only one program per script, so I can spread the programs across different servers.
      - Simple: shell scripts are extremely simple and don't need any other software installed.
      - One-click: I only have to run the shell script once and everything is set up.
      - Agnostic: most programmers can figure out shell, and don't need to know how to use a specific program.
      - Versioning: since my code is on GitHub, a simple git pull and a restart of supervisor will run my latest code.

    My question is: with all of these advantages, why do people constantly tell me to use a tool such as Ansible or Chef, and not shell?
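
    The usual counter-argument is idempotence: a configuration-management run declares an end state and can be re-run safely after a partial failure, whereas a shell script re-executes every step. A minimal Ansible sketch of that idea (host group and package names are invented):

        # site.yml -- declares state rather than scripting steps
        - hosts: webservers
          become: yes
          tasks:
            - name: install nginx
              apt:
                name: nginx
                state: present
            - name: ensure nginx is running and starts on boot
              service:
                name: nginx
                state: started
                enabled: yes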

    Read the article

  • ESX - Initialization for vmfs2 failed with -1

    - by ov4
    I recently purchased a bunch of R710s from Dell and installed VMware ESX on them. The installation process finishes without a problem; however, once ESX boots, the following error displays at the bottom in red letters:

        00:00:02:21.493 cpu11:4119) mod:2971: Initialization for vmfs2 failed with -1

    I installed on two different servers and the same error appears. The curious thing is that if I install ESXi 3.5 or 4 there is no problem. What does this error mean, and how can I resolve it?

    Read the article

  • DHCPLoc.exe or equivalent for Windows 7?

    - by Bart B
    There seems to be some DHCP funny business going on, so I need to run something to show me what's happening at the DHCP level. Before I upgraded my machine to Windows 7, I used DHCPLoc.exe from the Windows XP Support Tools, and it worked like a charm. I can't seem to find Support Tools for Windows 7, and trying to use the XP tools in compatibility mode doesn't work (I tried; it fails to open a receiving socket). I need a tool to monitor DHCP traffic, and ideally one that lets me filter out DHCP traffic from our trusted DHCP servers and only show me unauthorised DHCP traffic.
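
    One built-in fallback on Windows 7, sketched with an arbitrary output path: netsh packet tracing can capture the traffic, and the resulting .etl file opens in Microsoft Network Monitor, where it can be filtered down to DHCP and to the untrusted server addresses:

        netsh trace start capture=yes tracefile=C:\temp\dhcp.etl
        rem ...reproduce the DHCP oddness, then:
        netsh trace stop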

    Read the article

  • DNS security (hijacking?)

    - by Jongsma
    I am hosting my website on Linode and am also using their DNS/name servers (ns1.linode.com etc.). It occurred to me that I never had to authenticate that the domain is mine when I added it to the DNS manager, or at any other point. I now wonder whether it would be possible for other Linode users to 'hijack' my domain by simply adding the same domain zone and pointing it to their own server. I wouldn't know how Linode could determine which records are the real/authentic ones. How can I be sure this doesn't happen?
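
    A quick way to audit what Linode's nameservers actually serve for a domain (example.com is a placeholder) is to query them directly:

        dig @ns1.linode.com example.com ANY +noall +answer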

    Read the article

  • Do all domains on the same shared hosting server have the same IP or ID

    - by silow
    Here's what I've got: siteA.com and siteB.com are hosted on HostGator, on the same account of a shared hosting server (not VPS or dedicated). script.php is an external site that each of these 2 sites accesses. I noticed that when siteA.com or siteB.com accesses the outside script.php, the script identifies them both as 1a.12.12ab.static.theplanet.com (apparently because HostGator uses theplanet.com servers). The fact that they're identified as the same value isn't surprising, because after all they're hosted on the same account /home/user123/public_html. What I'm wondering about is other websites that are hosted on the same shared hosting server but under other accounts: websites under another developer's control that just happen to share the same hardware (hosting server). Do they also have the exact same identifier 1a.12.12ab.static.theplanet.com, or does that change by account?
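
    For reference, a minimal sketch of how a script like script.php might be identifying callers: what it sees is the calling server's outbound IP (shared by every account on that box) and its reverse-DNS name:

        <?php
        // log the calling server's IP and its reverse-DNS name
        $ip = $_SERVER['REMOTE_ADDR'];
        echo $ip . ' -> ' . gethostbyaddr($ip) . "\n";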

    Read the article

  • IRC Services with failover support?

    - by insertjokehere
    I run a single-server (call it 'Server A') IRC 'network', and thanks to the generosity of some friends, I have been given a second server ('Server B') that I can run an IRCd on in order to provide redundancy in case Server A crashes. This is fine; I can set up a round-robin DNS with the servers linked. The problem I have is what to do about services. Does anyone know of a way to get the services to 'fail over' in case of a server failure? E.g., Server A starts off running the services, but suddenly crashes. Server B detects this and starts its own copy of the services (ideally with the same configuration and data as the services on Server A). One solution that comes to mind is to write a bot that each server runs, which sits in a channel periodically checking whether the bot from the other server is in the channel. If it is, then all is well. If not, then fail over. I would prefer not to have to code this myself, though. We are currently using UnrealIRCd and Anope services on Linux.
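
    For what it's worth, a rough sketch of the heartbeat idea as a cron job on Server B; the hostname, port and services path are placeholders, and Anope's data directory would still need to be synced across separately (e.g. with rsync):

        #!/bin/sh
        # start local services only if Server A's ircd has stopped answering
        if ! nc -z -w 5 serverA.example.net 6667; then
            pgrep -f anope >/dev/null || /home/ircd/anope/services
        fi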

    Read the article

  • CHROOT for shell script testing

    - by Josh
    I am looking at setting up a shell script in order to properly document and automate the process I am using to set up a few servers we have. In order to test the shell script through its different stages, I was thinking a chroot would be ideal, since I can wipe out the "virtual root" and recreate it on the fly. I have never used chroot before, however. I was just curious what exact steps I would need to follow to create a chroot (with the basic core functions needed to install Apache/PHP/etc.), and then to destroy it?
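
    On a Debian-family host, one common recipe (a sketch; the target path is arbitrary) is debootstrap to populate the tree, chroot in to run the script, and rm -rf to wipe it afterwards:

        mkdir -p /srv/testroot
        debootstrap stable /srv/testroot http://deb.debian.org/debian
        chroot /srv/testroot /bin/bash    # run the setup script inside, then `exit`
        rm -rf /srv/testroot              # wipe the "virtual root" and start over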

    Read the article

  • SSH issues: Read from socket failed: Connection reset by peer

    - by nitins
    I compiled OpenSSH_6.6p1 on one of our servers. I am able to log in via SSH to the upgraded server, but I am not able to connect from it to other servers running OpenSSH_6.6p1 or OpenSSH_5.8. While connecting I get the error below:

        Read from socket failed: Connection reset by peer

    In the logs on the destination server, I see:

        sshd: fatal: Read from socket failed: Connection reset by peer [preauth]

    I tried specifying the cipher_spec [ ssh -c aes128-ctr destination-server ] as mentioned here and was able to connect. How can I configure ssh to use this cipher by default? And why is the cipher required here?
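
    To make a cipher list the default for outgoing connections, set Ciphers in the client configuration; a sketch (put it in ~/.ssh/config for one user, or /etc/ssh/ssh_config system-wide):

        Host *
            Ciphers aes128-ctr,aes192-ctr,aes256-ctr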

    Read the article

  • Recommended resource to understand Internet conventions IPs, CNAMES, *, MX etc

    - by Petras
    I am a programmer who has been creating websites for many years in shared hosting environments. To make a website live, I logged into wherever the domain was hosted and updated the name servers. Sometimes I didn't want POP email, so I changed an A record. I never really understood what this meant, but it worked. Now we have a dedicated server and I have to fill out all of this to make it live. Plus, I was told I had to complete zones at my domain host. I would really like to know what all this means. What is a * record? What is an @? How does the Internet work with regard to all these conventions? Is there a good, approachable book on this topic?
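
    For orientation, an illustrative zone-file snippet (all names and addresses invented) showing what @, *, A, CNAME and MX records look like:

        ; example.com zone -- illustrative only
        @       IN  A       203.0.113.10       ; "@" = the zone apex, example.com itself
        www     IN  CNAME   example.com.       ; alias www to the apex
        @       IN  MX  10  mail.example.com.  ; where mail for the domain goes
        mail    IN  A       203.0.113.11
        *       IN  A       203.0.113.10       ; wildcard: any otherwise-undefined name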

    Read the article

  • Server suddenly running out of entropy

    - by Creshal
    Since a reboot yesterday, one of our virtual servers (Debian Lenny, virtualized with Xen) is constantly running out of entropy, leading to timeouts etc. when trying to connect over SSH / TLS-enabled protocols. Is there any way to check which process(es) is(/are) eating up all the entropy?

    Edit: What I tried:

      - Adding additional entropy sources: time_entropyd, rng-tools feeding urandom back into random, pseudorandom file accesses – netted about 1 MiB additional entropy per second; problems still persisted.
      - Checking for unusual activity via lsof, netstat and tcpdump – nothing. No noticeable load or anything.
      - Stopping daemons, restarting permanent sessions, rebooting the entire VM – no change in behaviour.

    What in the end worked: waiting. Since about yesterday noon, there are no connection problems anymore. Entropy is still somewhat low (128 Bytes peak), but TLS/SSH sessions have no noticeable delay anymore. I'm slowly switching our clients back to TLS (all five of them!), but I don't expect any change in behaviour now.
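
    Two standard Linux checks for this: watch the kernel's entropy estimate, and list which processes hold the blocking device open:

        # watch the pool estimate tick up and down
        watch -n1 cat /proc/sys/kernel/random/entropy_avail

        # list processes that currently have /dev/random open
        fuser -v /dev/random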

    Read the article

  • SSH rsa key works with external IP not internal IP

    - by Ian
    I am using Rackspace cloud hosting. I have 2 servers behind a load balancer; each server has an external IP and an internal IP. I want to set up a sync job that uses SSH to transfer files. I made an RSA key, and I can successfully SSH from Server A into Server B using the external IP of Server B, without being prompted for a password. If I try to do the same using the internal IP, it prompts me for a password. I want to be able to use the key instead of the password. Why is this? Is there something special I have to do during key generation so it works for both IPs? Any help is appreciated.
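
    Key generation is IP-agnostic, so verbose output is the quickest way to see what differs between the two paths; the internal address below is a placeholder. Watch whether the key is offered at all and whether the host key matches the one cached for the external IP:

        ssh -vvv user@10.180.0.2 2>&1 | grep -Ei 'offering|identity|host key'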

    Read the article

  • MySQL Cluster Failover doesn't work

    - by Lukasz
    I have two servers. First server, 10.100.15.150:

      1. one mgm server
      2. one ndbd
      3. one mysql api

    Second server, 10.100.15.160:

      1. one ndbd
      2. one mysql api

    When I start all 'parts' of the cluster, it looks like this:

        Cluster Configuration
        [ndbd(NDB)]     2 node(s)
        id=21  @10.100.15.150  (mysql-5.1.56 ndb-7.1.17, Nodegroup: 0)
        id=22  @10.100.15.160  (mysql-5.1.56 ndb-7.1.17, Nodegroup: 0, Master)
        [ndb_mgmd(MGM)] 1 node(s)
        id=3   @10.100.15.150  (mysql-5.1.56 ndb-7.1.17)
        [mysqld(API)]   2 node(s)
        id=11  @10.100.15.150  (mysql-5.1.56 ndb-7.1.17)
        id=12  @10.100.15.160  (mysql-5.1.56 ndb-7.1.17)

    When I shut down the first machine, 10.100.15.150, the ndbd process on the second machine is also shut down, so I cannot use that data node and the cluster fails... How must I configure this cluster to get failover working? Thx
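
    One likely culprit, inferred from the symptoms rather than from your config files: the single ndb_mgmd on 10.100.15.150 is the cluster's arbitrator, so when that host dies, the surviving data node on .160 shuts itself down rather than risk a split brain. A common remedy is a management node on each host (better still, an arbitrator on a third machine); a config.ini sketch with illustrative node IDs:

        [ndb_mgmd]
        NodeId=3
        HostName=10.100.15.150

        [ndb_mgmd]
        NodeId=4
        HostName=10.100.15.160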

    Read the article

  • Amazon EC2- many micro-instances vs single small/medium instance

    - by shashankaholic
    I have a chat application using a stack of Openfire, Tomcat 6 and MySQL. Currently, I have installed all these servers on a single Linux micro-instance (613 MB memory). Even with a low user base of 10-20 I am encountering CPU overload, which is quite obvious here. As I am new to Amazon EC2, can somebody suggest how to scale up my architecture according to traffic?

      - Should I use separate micro-instances for every app server (Openfire, MySQL, Tomcat 6)?
      - Should I use a single small or medium instance for the whole server stack?

    Some factors in context: high reliance on MySQL; high memory usage due to file transfer; a web application interacting with other Amazon services like S3 and SES.

    Read the article

  • Should I limit end-user gigabit ports to avoid saturating uplink/trunk connections?

    - by Joel Coel
    We have a campus with 16 buildings and older 850nm 1Gbps fiber links between the buildings, that all come to a core switch for our servers that also uses 1Gbps ports. We're finally starting to replace our aging 10/100 end-user switches, and much of what we're looking at are 1 Gbps units. My question is, since the trunk/uplink lines are still 1Gbps, if I were to install 1 Gbps switches for end users, should I limit the ports to 100Mbps until I can also upgrade the trunks to avoid allowing a bad-behaving host to saturate a trunk line (since we're a college, we have plenty of mis-behaving hosts) and thereby create a DoS situation for a building, or will TCP congestion control typically take care of that for me? What if we have a lot of UDP traffic (games, video chats, even a small amount of bittorrent)?

    Read the article

  • Rsync : execute permission required

    - by user651488
    I'm using rsync between two servers to transfer files. The problem is that some files are not transferred. I get this error:

        rsync: readlink "/var/www/index.html" failed: Permission denied (13)

    So I checked permissions on the server, and after some tests I noticed a file is transferred only if it has these permissions: R-W! If the file has these permissions: R--, rsync can't download it!? Command:

        /usr/bin/rsync -avzr -e "/usr/bin/ssh -i /home/replication/thishost-rsync-key" [email protected]:/var/www/index.html ./

    Is this a bug in rsync? I can't find any information about this problem. Thanks for your help. Debian Etch 2.6.30, rsync 2.6.9, protocol version 29.
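
    A quick way to rule out the usual cause, sketched on the assumption that the remote side runs as the unprivileged replication user: the file needs read permission for that user, and every directory above it needs the execute (search) bit; namei walks the path and shows where access is lost:

        # on the source server
        sudo -u replication cat /var/www/index.html >/dev/null   # can the rsync user read it?
        namei -l /var/www/index.html                             # permissions of each path component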

    Read the article

  • MDaemon vs Exchange (2007-2010). Which way should we choose ?

    - by Deniz
    We are on the verge of a mail server decision. We currently use 2 mail servers: MDaemon 10 and Exchange 2003. We are planning to move to a single company- and customer-wide solution. Our main candidates are MDaemon 11 and Exchange 2007 or 2010. We would like to learn from other users' experiences with these solutions: server-side experiences, user-side experiences, TCO, support options, etc. And are there other solutions (maybe MDaemon 11 + Exchange, or anything else) you could suggest?

    Read the article

  • Can't access dfs namespace over vpn

    - by cpf
    Hi Serverfault, I've recently configured 2 servers in AD at the same domain level. They are physically separated and permanently connected through a site-to-site VPN for DFS replication. All is well, but when users connect to either site through VPN (from home, e.g.) they can't use the domain-based path: \\domain.com\data. Internally this works perfectly, and resolving domain.com when connected through the VPN returns the correct IP. I've tried Google to figure things out; I was able to find that more people have this issue, but no real solution. Can anyone explain why this is happening? A solution would be especially helpful! Thanks in advance.
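
    Two checks worth running from a VPN-connected client, sketched with your domain name; dfsutil ships with the RSAT tools:

        nslookup domain.com

        rem show which namespace referrals this client has cached
        dfsutil cache referral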

    Read the article

  • Web-based source control management software [closed]

    - by tom smith
    Hi. Not sure if this is the right place, but hopefully someone has thoughts on a solution/vendor. I'm starting to spec out a project that will require multiple (50-100) developers to be able to manipulate source files/scripts for a large-scale project. The idea is for each app to go through a dev/review/test process, where users can select (or be assigned) the role they will have for a given app. I'm looking for web-based version control, issue tracking, user roles/access, workflow functionality, etc. Ideally, the process will also allow the reviewed/validated app to be exported to a separate system for testing on the test server/environment. This can be hosted on our servers, or we can do the colo process. I've checked out Atlassian/CollabNet, but any thoughts you can provide would be appreciated as well. Thanks.

    Read the article
