Search Results

Search found 12497 results on 500 pages for 'linked servers'.

  • IBM x3620 Server takes a long time to boot past UEFI to OS

    - by Joel Coel
    I have a pair of IBM System x3620 servers. These servers do fine once they finally reach the point where the operating system takes over, but it takes them forever to get past the new-fangled UEFI boot system... a good five minutes or so; maybe longer. I haven't timed it, but it's the kind of thing where you go get a cup of coffee while you wait and it's still going when you come back. Normally the only time I shut these down is for a monthly maintenance cycle (usually just windows updates), and so it's not a big deal. But in the case where I might have an outage I'd sure like to get that 5 minutes back. Is there anything I can do to tell them to just go ahead and boot already?

  • Odd domain switching behavior in Firefox and Chrome

    - by Jeremy Detrempe
    We have different development servers and a production server. Testing is done on the development servers. As a QA engineer, I'm switching between these servers quite often throughout the day. In Chrome, sometimes I need to reload a page a few times to get it to pull from the newly switched server. In Firefox, sometimes I need to quit the browser in order to get it to pull from the newly switched server. (We have small tags that indicate which server you are pulling from, which is how I know in-browser.) Why does that happen? I'd love to know how that happens (maybe what it's called?) and what the best way to deal with it is. (I know that Firefox has an extension for domain switching; is that the best solution?)

  • SQL Server log shipping

    - by voam
    I am setting up log shipping between two SQL Server 2008 servers connected over a VPN. As far as I know, as long as the SQL Agent is able to access the share on the primary/secondary servers, everything will work. When I set up log shipping on the primary server using SQL Management Studio, before I can set any of the "Secondary Database Settings" it asks me to connect to the secondary server. But I really don't want to open up the connection to the secondary server. I will be initializing this secondary database from a backup, so as long as the transaction logs get copied everything should work. How do I work around the GUI not enabling any of the settings for the secondary server until I actually connect to it? Thanks in advance!

  • NAT rules between 2 network interfaces (with iptables)

    - by Simone Falcini
    This is the current network that I have:

        UBUNTU:
          eth0: ip 212.83.10.10, bcast 212.83.10.10, netmask 255.255.255.255, gateway 62.x.x.x
          eth1: ip 192.168.1.1, bcast 192.168.1.255, netmask 255.255.255.0, gateway ?
        CENTOS:
          eth0: ip 192.168.1.2, bcast 192.168.1.255, netmask 255.255.255.0, gateway 192.168.1.1

    I basically want specific NAT rules from the internet to specific internal servers depending on the port:
      - connections incoming on port 80 must be redirected to 192.168.1.2:80
      - connections incoming on port 3306 must be redirected to 192.168.1.3:3306
      - and so on...
    I also need one NAT rule to allow the servers in the subnet 192.168.1.x to browse the internet, i.e. requests coming in on eth1 need to be routed out through eth0 to reach the internet. Can I do this on the UBUNTU machine with iptables? Thanks!
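
    A minimal iptables sketch of the kind of setup being described, assuming eth0 faces the internet and eth1 the 192.168.1.0/24 subnet (interface names, ports and internal addresses are taken from the question; everything else is an assumption to adjust):

        #!/bin/sh
        # Allow the kernel to forward packets between eth0 (WAN) and eth1 (LAN)
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # Port-forward incoming connections to the right internal servers
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80   -j DNAT --to-destination 192.168.1.2:80
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 3306 -j DNAT --to-destination 192.168.1.3:3306

        # Let the forwarded traffic (and replies) through the FORWARD chain
        iptables -A FORWARD -i eth0 -o eth1 -p tcp -d 192.168.1.2 --dport 80   -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -p tcp -d 192.168.1.3 --dport 3306 -j ACCEPT
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Masquerade the 192.168.1.x subnet behind the single public IP for outbound browsing
        iptables -t nat -A POSTROUTING -o eth0 -s 192.168.1.0/24 -j MASQUERADE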

  • Windows 2003 Storage Server Hanging on Large File Transfers

    - by user25272
    In one of our offices we have a Dell PowerVault 745N NAS device which acts as the main file server. It's running 32-bit Windows 2003 Storage Server SP2 with 3GB RAM. The server holds around 60 users' HOME folders, which are mapped via AD. The office clients are a mix of XP SP3, Vista and Windows 7. Occasionally the server will completely hang when transferring large files. When the hang happens the console becomes unresponsive, with only the mouse active and blank wallpaper. Sometimes stopping the copy frees the server, sometimes not. The hanging can last around 20 minutes. During this time other servers also become unresponsive with blank wallpaper at the console. If you do manage to get onto another server, the taskbar and run commands are unresponsive. This also extends to the client computers, sometimes with Explorer crashing. I'm guessing this is due to the HOME folder mapping. Eventually the NAS server will free up and everything will be back to normal. The server is configured as follows:

        PERC 4/DC:
          DATA 2 - 12 SCSI HDD - RAID5
          SHADOWCOPY - 2 SCSI HDD - RAID1
        CERC SATA:
          DATA 11 - 4 SATA HDD - RAID5
          OS - 4 SATA HDD - RAID5

    All the drivers and firmware are up to date. I've been through all the diagnostics with Dell and the hardware has come up clean, including full HDD tests on the arrays. The server has NOD32 installed as the AV, but the hanging happens even when it is uninstalled. There are no errors in the event log when this happens and we don't have any errors logged on any of our ProCurve switches. DNS is fine on the domain and AD, from what I can tell, is running happily. There are no DFS or NFS shares set up either; all the shares are standard Windows shares. On the NIC I have:
      - unchecked the "allow the computer to turn off this device to save power" box under Power Management
      - set Link Speed and Duplex to Auto-negotiate 1000
      - increased the Receive Descriptors buffer from 256 to 352 (reserves more CPU resource for handling data)
    I've run network traces using Network Monitor and have found the following:

        417  8.078125  {SMB:192, NbtSS:25, TCP:24, IPv4:23}  192.168.2.244  192.168.5.35  SMB  SMB:R; Nt Create Andx - NT Status: System - Error, Code = (52) STATUS_OBJECT_NAME_NOT_FOUND

    I've tried different cabling, NICs and switch ports, all with the same result. Transferring files from other servers on the domain is fine. All I haven't done is run CHKDSK on the drives to look for any file system errors. On the Vista clients I have also run "netsh interface tcp set global autotuning=disabled" with no result. Could it be that the server has a faulty drive, or that the I/O is too much for it to handle? Any ideas why the hang would cause issues with the other servers on the LAN? Many thanks.

  • How to maintain an EXT3 filesystem

    - by Reinoud van Santen
    Lately I have had several servers encounter a write error on an EXT3 filesystem and, as a result, remount the filesystem read-only. Understandably, on a production server this causes severe problems. On a reboot the filesystems were fixed, but on large partitions this takes a lot of time. After the filesystem was fixed, correcting several errors, the server runs well again. What can I do to minimize the rate at which this happens? I can't seem to find much information on periodically checking the filesystem(s) on a running server. Is it possible to change the way in which EXT3 / the system handles write errors? What would be a sane solution? All the servers this concerns are running CentOS Linux 5.4 or 5.5.
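
    A few ext3 maintenance knobs that are commonly used in this situation (a sketch only; it tunes how and when checks happen rather than fixing whatever causes the write errors, and the device names are placeholders):

        # Show the current error behaviour, mount count and check interval
        tune2fs -l /dev/sda1 | egrep -i 'error|mount|check'

        # Choose what ext3 does on a write error: continue, remount-ro or panic
        tune2fs -e remount-ro /dev/sda1

        # Force a full fsck at a reboot you schedule, instead of an unplanned one
        touch /forcefsck && reboot

        # Check the volume every 30 mounts or 30 days, whichever comes first,
        # so the long fsck happens during planned maintenance windows
        tune2fs -c 30 -i 30d /dev/sda1

        # Look for failing hardware underneath the filesystem
        smartctl -a /dev/sda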

  • reverse proxy for only one internal server

    - by hrost
    I have configured a reverse proxy and it is working OK for one internal server, for example our mail server. Now I would like to know if it is possible to configure a reverse proxy for only one server/application (in this case our web intranet). Our problem is that the intranet calls other applications on the same intranet server and on other internal servers, and the only way I know to publish these resources is to reverse-proxy every application server through our DMZ Apache. What I want is for the DMZ reverse proxy to expose only the intranet, with the other applications called by the intranet server directly and not through the reverse proxy. I want this setup for security reasons, to allow external access to only one server. The proxy runs on Debian Squeeze with Apache 2.2. Is it possible? How?
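
    A minimal Apache 2.2 sketch of a DMZ virtual host that publishes only the intranet backend; the file name, hostname and backend address below are placeholders, not taken from the question. It could live in something like /etc/apache2/sites-available/intranet:

        <VirtualHost *:80>
            ServerName intranet.example.com
            # Never act as an open forward proxy
            ProxyRequests Off
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            # Only this one backend is published; nothing else on the LAN is proxied
            ProxyPass        / http://10.0.0.10/
            ProxyPassReverse / http://10.0.0.10/
        </VirtualHost>

    Enabled with "a2enmod proxy proxy_http", "a2ensite intranet" and an Apache reload, the DMZ box only ever talks to that one backend; the intranet server keeps calling the other internal applications directly, so they never need to be exposed through the DMZ.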

  • Appropriate Network switch for small server cluster

    - by Chris Dutrow
    Need to build a small business server cluster for the purpose of crunching data. It will not host a web site that needs to be available 24/7. It does need to support servers that host Redis, a Cassandra database cluster, and a Python web server. The operating system will most likely be CentOS 6.4. Other servers in the cluster should be able to communicate very fast with each other, especially the Redis server. This will probably require the use of internal IP addresses. We will need to use multi-data-center replication to synchronize the Cassandra cluster with the one that we currently have hosted in the cloud. We have been looking into network switches and are unsure of the appropriate specifications that we should be looking for. Does the switch need to be "managed" or can it be "unmanaged"? Does the switch need to support IPv6 or just IPv4? Do we need an enterprise-level Cisco switch, or can we go with something like a $200 D-Link managed (or unmanaged) small business switch? Thanks so much!

  • Locking down a server for shared internet hosting.

    - by Wil
    Basically I control several servers and I only host either static websites or scripts which I have designed, so I trust them up to a point. However, I have a few customers who want to start using scripts such as WordPress or many others - and they want full control over their account. I have started to do the basics - for example in php.ini I have locked things down and restricted commands such as proc. However, there is obviously a lot more I can do. Right now, using NTFS permissions, I am trying to lock down the server by running application pools and individual sites under their own users, but I feel like I am hitting brick walls... (see my old question on Server Fault). At the moment, the only routes I can think of are either to implement an off-the-shelf control panel - which will be expensive and, quite frankly, over the top - or to follow the Microsoft guide - which is really for an entire infrastructure, not for someone who just wants to lock down a few servers. Does anyone have any guides that can put me on the correct path?

  • How to store etckeeper repositories on a central server via git

    - by andreash
    Hello, I would like to have one central git repository for all my servers' etckeeper .git repos. Here the suggestion was to use a file in /etc/etckeeper/commit.d, which basically looks like this, assuming that a git repo had been set up in somedir on somehost:

        #!/bin/sh
        cd /etc
        git push faruser@farhost:somedir

    The problem with this is that it would be really nice to have all servers in the same repo on the central server. I tried

        git push faruser@farhost:somedir/server1

    but that failed. As you can see, I've never worked with git before ... Any ideas on how this can be accomplished are greatly appreciated :) Cheers, Andreas
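
    One way to keep every machine in a single repository on the central host is one branch per server in a shared bare repo. A sketch, where the repo path, the hook file name and passwordless SSH for the pushing user are all assumptions:

        # Once, on the central server: a single bare repo for all machines
        ssh faruser@farhost 'git init --bare /srv/etckeeper.git'

        # On each managed server, e.g. as /etc/etckeeper/commit.d/60-push (hypothetical name):
        #!/bin/sh
        set -e
        cd /etc
        # push this host's master branch into a branch named after the host
        git push faruser@farhost:/srv/etckeeper.git master:refs/heads/$(hostname -s)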

  • What's more cost effective, hosting your web startup on FOSS or Windows?

    - by user37899
    Hi, not coming from the Windows world, I'm confused about licensing; I think my knowledge may be out of date. Before we gave up on Windows web servers (IIS 2), we used to have to pay for Client Access Licences, which worked out quite expensive. Is it cheaper to host 1000's of users on Windows than to use free open source software tools? http://serverfault.com/questions/124329/network-load-balancing-efficience-and-limits. This post suggests I can pay $15 a month for unlimited users. I certainly hope that this is an unbiased view; I am a professional and use the right technology for the right job, and I hope I am not feeling the wrath of a Windows (or Linux, for that matter) fanboy. Perhaps a Microsoft certified licensing person can clear this up. Should I be recommending Windows servers and products over LAMP to startups? Cheers!

  • Ganglia: divide colors by roles

    - by com
    Sorry for a silly question, I am still a newbie to Ganglia. In Ganglia I monitor a few important metrics for MySQL (seconds behind master, etc.). In addition I have a few bunches of MySQL servers (every bunch has its own tasks, but all of the bunches should be tested for seconds behind master). I am interested in whether it is possible to show all metrics on one page with different colors for different bunches. Right now in the metric "seconds behind master" I see all MySQL servers with colors for the different states (red is critical, gray is OK). Can I set a color on a graph according to its bunch? Thanks!

  • Changing DNS - Forgot to change MX records for one month - Is there a way to retrieve emails that we

    - by Chris Altman
    We have our own domain. Our email is hosted by Google Apps. We switched web servers and name servers to a new provider. In the switch, I forgot to move the MX record. When people tried to send emails to our domain, there was no bounce back. We fixed the MX record and now receive email. Is there any way to retrieve the emails that were sent in the month when there was no MX record? I doubt it, because there was no MX record on our name server. Where would the emails have gone, since they did not bounce back?

  • Finding Missing UDP Frames Using Wireshark + Custom Dissector (for CQS)

    - by John Dibling
    How do you use Wireshark to identify missing UDP frames? I have written a custom dissector for the CQS feed (reference page). One of our servers gaps when receiving this feed. According to Wireshark, some UDP frames are never received. I know that the frames were sent because all of our other servers are gap-free. A CQS frame consists of multiple messages, each having its own sequence number. My custom dissector provides the following data to Wireshark:

        cqs.frame_gaps         - the number of gaps within a UDP frame (always zero)
        cqs.frame_first_seq    - the first sequence number in a UDP frame
        cqs.frame_expected_seq - the first sequence number expected in the next UDP frame
        cqs.frame_msg_count    - the number of messages in this UDP frame

    And I am displaying each of these values in custom columns, as shown in the screenshot. A typical CQS log will consist of millions of rows, so I can't just eyeball it. Is there any way I can get Wireshark to tell me which frames are missing?
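
    Since the dissector already exposes the first and expected-next sequence numbers, one approach (a sketch, assuming the field names above and a capture file on disk) is to let tshark dump those fields and flag any frame that does not start where the previous frame said it should:

        # frame number, first seq in this frame, expected first seq of the next frame
        tshark -r cqs_capture.pcap -T fields -E separator=, \
               -e frame.number -e cqs.frame_first_seq -e cqs.frame_expected_seq |
        awk -F, 'prev != "" && $2 != prev {
                     printf "gap before frame %s: expected %s, got %s\n", $1, prev, $2
                 }
                 { prev = $3 }'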

  • Adding a Windows Server 2012 Essentials server to an existing domain, without migrating the AD

    - by TiernanO
    I have an existing Active Directory in house, a mix between a Win2K8R2 and Win2K3 domain, and I would like to test Windows Server 2012 Essentials BETA on the network. When walking through the install, it gives me the option of a new domain, or migrating from an existing domain. When I click existing, it tells me I can only have one SBS server running on a domain at a time... So, I don't have any existing SBS servers in house (both are full Standard or Enterprise editions), but I do plan on keeping at least one of these extra servers running... So, how do I get a 2012 Essentials server to join a domain, and not migrate the existing domain? Or if I do migrate, can I still get one of the other boxes to act as a secondary controller?

  • My server appears to have been hacked: scanssh run by Zabbix, is it normal?

    - by Niro
    I'm running a few EC2/Scalr instances with Zabbix monitoring. I received complaints about one of my servers port-scanning other servers: the logs show it is accessing port 22 on consecutive IP addresses. I looked at the process list and saw scanssh running under the user zabbix. My question is: is scanssh part of Zabbix? Is it supposed to run? I have active autodiscovery on Zabbix, but it is looking at other IP addresses and definitely not port 20. Is it possible that something in the config of the Zabbix agent is controlling it rather than the settings on the Zabbix server? What can I do to find out whether Zabbix is somehow misbehaving or it is a hacker? Any advice is highly appreciated.
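
    A few quick checks that can help tell a legitimate Zabbix-driven discovery from a compromise (a sketch; the paths are the usual defaults and may differ on Scalr images):

        # Which binary is it, and what spawned it?
        ps -eo pid,ppid,user,lstart,args | grep [s]canssh
        ls -l /proc/$(pgrep -o scanssh)/exe /proc/$(pgrep -o scanssh)/cwd

        # Installed by a package, or dropped by hand?
        rpm -qf "$(readlink /proc/$(pgrep -o scanssh)/exe)" 2>/dev/null || \
            dpkg -S "$(readlink /proc/$(pgrep -o scanssh)/exe)" 2>/dev/null

        # Does the agent config define a UserParameter (or is there a cron job) that runs it?
        grep -ri userparameter /etc/zabbix/
        crontab -l -u zabbix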

  • Shared storage solution for our SQL Server backups

    - by Gokhan
    We have 3 clustered SQL servers with 5+ multi-terabyte databases, and their backup files (compressed using Quest LiteSpeed) are hitting over 600 GB each. We are required to keep at least a week or two (if we can) of weekly full backups, then 6 days of differential backups, and a week or two's worth of log backups locally. We are currently limited to 2 TB volumes from our SAN team; we can have multiple volumes, but they are expensive ($200 per raw TB per month) and having to deal with many backup volumes instead of a single big volume is difficult. I think it would be good if we could have shared network storage of 20 TB+, RAID 10 or so, for all our servers to keep the backups on; another department would copy them to tape from the network storage and delete files according to the retention period. If this box came with a built-in operating system (even Unix, as a complete file storage system) that would be good. What do you guys think, does this make sense to you? Is there any manufacturer that sells a storage product like that which would work in a clustered environment? Thank you.

  • Getting error while running apt-get update

    - by pradeepchhetri
    I am getting the following error while running apt-get update on all of the servers:

        W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8B48AA6247928553
        W: Some index files failed to download, they have been ignored, or old ones used instead.

    The solution available is to log in to each of the servers and run the following commands to import the GPG key for that repo:

        sudo gpg --keyserver hkp://subkeys.pgp.net --recv-keys 8B48AA6247928553
        sudo gpg --export --armor 8B48AA6247928553 | sudo apt-key add -

    But this requires logging into all of the individual servers and running the above two commands. I am looking for a way to fix this by working on the apt repo server instead. Is there any way to do that?
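
    If the repository itself is signed correctly, the missing piece on each client is just the repository's public key, so the repo server alone cannot make the warning go away. Short of configuration management, the key can be exported once and pushed out from a single admin host rather than logging in everywhere by hand (a sketch; the host list and sudo/SSH access are assumptions):

        # On the admin/repo server: fetch and export the public key once
        gpg --keyserver hkp://subkeys.pgp.net --recv-keys 8B48AA6247928553
        gpg --export --armor 8B48AA6247928553 > repo.key

        # Push it to every client and add it to apt's keyring in one pass
        for host in web01 web02 db01; do      # hypothetical host list
            scp repo.key "$host":/tmp/repo.key
            ssh -t "$host" 'sudo apt-key add /tmp/repo.key && sudo apt-get update -qq'
        done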

  • Ping Unknown Host on CentOS at EC2

    - by organicveggie
    Weird problem. We have a collection of servers running CentOS 5 on EC2. The setup includes two DNS servers and two LDAP servers. DNS has a CNAME pointing at the primary LDAP server. One machine (and only one machine) is giving me problems. I can ssh into the server using LDAP authentication. But once I'm on the machine, ping won't resolve the LDAP host even though DNS seems to work fine. Here's ping:

        $ ping ldap.mycompany.ec2
        ping: unknown host ldap.mycompany.ec2

    Here's the output of dig:

        $ dig ldap.mycompany.ec2

        ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> ldap.studyblue.ec2
        ;; global options:  printcmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2893
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;ldap.mycompany.ec2.                    IN      A

        ;; ANSWER SECTION:
        ldap.mycompany.ec2.             3600    IN      CNAME   ec2-hostname.compute-1.amazonaws.com.
        ec2-hostname.compute-1.amazonaws.com. 55 IN     A       aaa.bbb.ccc.ddd

        ;; Query time: 12 msec
        ;; SERVER: 10.32.159.xxx#53(10.32.159.xxx)
        ;; WHEN: Tue May 31 11:16:30 2011
        ;; MSG SIZE  rcvd: 107

    And here is resolv.conf:

        $ cat /etc/resolv.conf
        search mycompany.ec2
        nameserver 10.32.159.xxx
        nameserver 10.244.19.yyy

    And here is my hosts file:

        $ cat /etc/hosts
        10.122.15.zzz   bamboo4 bamboo4.mycompany.ec2
        127.0.0.1       localhost localhost.localdomain

    And here's nsswitch.conf:

        $ cat /etc/nsswitch.conf
        passwd:     files ldap
        shadow:     files ldap
        group:      files ldap
        sudoers:    ldap files
        hosts:      files dns
        bootparams: nisplus [NOTFOUND=return] files
        ethers:     files
        netmasks:   files
        networks:   files
        protocols:  files
        rpc:        files
        services:   files
        netgroup:   files ldap
        publickey:  nisplus
        automount:  files ldap
        aliases:    files nisplus

    So DNS works the way I would expect. And I can ping the ldap server by ip address. And I can even access the box with SSH using LDAP authentication. Any suggestions?
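
    Since dig talks to the nameservers directly while ping resolves through the NSS "hosts" line, a reasonable first step (a sketch; nscd may not even be installed on this instance) is to compare the two paths and rule out a stale resolver cache:

        # Resolve through the NSS 'hosts' path (files dns) the way ping does,
        # instead of querying DNS directly the way dig does
        getent hosts ldap.mycompany.ec2

        # Also try the fully-qualified form, to see whether the 'search' suffix
        # handling in resolv.conf is part of the problem
        getent hosts ldap.mycompany.ec2.

        # A stale nscd cache can serve old negative answers on CentOS 5;
        # if it is running, invalidate the hosts table and test again
        /etc/init.d/nscd status && nscd -i hosts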

  • once VPNed into pfSense, unable to hit the public URLs of my websites - they are routed to the pfSense box

    - by Sean
    I have a pfSense box set up as the firewall/router/VPN appliance at my colo. Once I VPN into the colo (either PPTP or OpenVPN; PPTP preferred due to multiple clients and ease of configuration), I am able to hit all my servers by their private 10.10.10.x IPs and am able to browse the public internet without issue. When I try to hit the URL of a domain hosted by one of my servers, I am prompted for credentials. If I log in using the pfSense credentials, I'm connected to pfSense as if I'd used its internal IP. If I hack my hosts file to point the URL at the server's private IP it works fine, but this is obviously not a good solution. To recap:
      - not connected to VPN: www.myurl.com works
      - connected to VPN: www.myurl.com never makes it to the correct server, but is sent only to the pfSense box
    I'm sure it's something small that I've missed in the pfSense config.

  • How do I force a specific MTU for only certain TCP ports?

    - by Dave S.
    Background: I have a set of embedded hardware deployed in the field. These remote machines connect back to my servers at AWS running Ubuntu and I use the iptables mangle chain to lower the MTU to 500 so these devices are happy. For reference, this is the iptables rule I am using:

        -A POSTROUTING -p tcp --sport 12345 --tcp-flags SYN,RST SYN -o eth0 -j TCPMSS --set-mss 500

    Current Problem: I'm trying to spin up some servers on the Joyent Cloud using SmartOS, but I can't find any information on selectively changing the MTU like I can on Linux (e.g. all info I've found is on changing it globally, which is not what I want). How would I do it so that all connections on TCP port 12345 get the MTU I want?

  • FTP script download from Linux to Windows

    - by user53864
    I'm using the following FTP script on Windows XP to download zip files from Ubuntu cloud servers. A zip file is created every day on the Ubuntu servers and I download it to Windows via this FTP script. I run this script manually every day, as I have to edit the last line (mget /usr/backup_02-11-2010.zip) of the script to match today's date. I want to edit this script so that it will download only today's zip file at the scheduled time, without needing to edit it every day. The date appended to the zip files is in the format dd-mm-yyyy. Need help...

        open server-ip-here
        username-here
        user-password-here
        lcd C:\Backup\files
        bin
        hash
        prompt
        mget /usr/backup_02-11-2010.zip

  • What is the specific advantage of a blade server for virtualisation?

    - by ChrisZZ
    We are planning to implement a VDI solution. We had some discussions about blade vs. rack. As we are only planning to implement 75-100 clients, we calculated that we would need 2 servers with dual 8-core processors, plus a shared storage server. This calculation is based on a paper by Oracle that says 12 active virtual machines per core. Now, for buying two servers, a blade does not scale financially. But the blade has some other advantages: a) the interconnectivity between the blades is super-fast; b) I/O virtualisation. Are there other advantages that we should consider that would make up for the price - and are these advantages so important that we should think about investing in the blade?

  • Alerting when a RAID Array disk fails locally on VMWare ESX or ESXi System

    - by Tim K
    With ESX and ESXi, we recently had two systems where the boot partition became degraded due to a failed disk. The only alert we managed to capture was the visual alert on the Dell servers; we failed to receive any electronic alerts regarding the failed or degraded array. Does anyone have any experience with monitoring for these types of failures? In both cases, the servers were running a RAID 5 SCSI configuration (5 disks on one system, 3 disks on the other), which, if we were running a Windows Server OS, would have had an alert created in the Event Viewer. Where would I begin to look for a solution? Can it be configured in vCenter or vFoglight?
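
    One commonly used route on Dell hardware is OpenManage Server Administrator (the OMSA agent on classic ESX, or the OMSA VIB on ESXi queried from a management station); once omreport is available, array health can be polled and alerted on with something as simple as a cron job. A rough sketch, assuming controller 0 and a hypothetical alert address:

        #!/bin/sh
        # Poll the virtual disks on PERC controller 0 via Dell OMSA and send mail
        # if any of them has left the healthy state.
        REPORT=$(omreport storage vdisk controller=0)
        if echo "$REPORT" | grep -qiE 'Degraded|Failed|Rebuilding'; then
            echo "$REPORT" | mail -s "RAID problem on $(hostname)" admin@example.com
        fi

    OMSA can also forward the same events as SNMP traps, which is usually the cleaner way to surface them in vCenter or an existing monitoring system.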
