Search Results

Search found 9816 results on 393 pages for 'blade servers'.

Page 208/393

  • /var/run/httpd.pid missing...

    - by user38043
    Recently, httpd stopped working on one of our web servers and I haven't been able to find the problem. Today I sat down and went through every directory referenced in httpd.conf and found an issue: /var/run/httpd.pid is missing from the folder. All other files are there and seem to be fine. I cannot create a new file with the same name in vi, and I have no idea what could have caused this. I imagine it was caused by a cold reboot at some stage, as no other extraordinary processes were running on this server at the time it went down. I am running CentOS 3. How can I reinstate this file?
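
    The PID file is normally written by httpd itself at startup (at the path given by the PidFile directive), so it usually does not need to be recreated by hand. A minimal sketch of checking and regenerating it, assuming a stock init-script installation on CentOS 3:

        # confirm the configured location of the PID file
        grep -i '^PidFile' /etc/httpd/conf/httpd.conf

        # restart httpd; a successful start rewrites /var/run/httpd.pid
        service httpd restart

        # verify the file exists again and matches the running parent process
        cat /var/run/httpd.pid
        ps -p "$(cat /var/run/httpd.pid)" -o pid,cmd

    If httpd refuses to start at all, the missing PID file is a symptom rather than the cause, and the startup errors in /var/log/httpd/error_log are the place to look.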

    Read the article

  • Remove Downed Exchange Server from First Administrative Group

    - by Campo
    I had a server die. It is gone forever. It is still listed in the Servers folder of the first administrative group in Exchange System Manager. When I click "All Tasks > Remove Server" I get the following error: 'The server "SERVERNAME" cannot be removed because: One or more users currently use a mailbox on this server. These users must be moved to a mailbox store on a different server or be mail disabled before uninstalling this server.' (Facility: Exchange System Manager, ID no: c103f492.) Any help would be much appreciated. I cannot access the mailbox stores anymore, and I do not care about the lost mailboxes. We deleted the inactive and old users as well, so I am stumped on this one. I just need to remove the old machine. Thanks!

    Read the article

  • Why is RDP Licensing licensing the same device multiple times? [closed]

    - by NeerPatel
    Possible Duplicate: Can you help me with my software licensing issue? I've got a Citrix XenApp 6.5 farm running on Win 2008 R2 servers. I purchased 300 Device RDP/RemoteApp licenses for ~200 users. We went with Device licenses because most of the end users use the same machines. After one month of operation, we started to run out of licenses. It turns out the licensing service is consuming multiple licenses for the same machine. I can revoke licenses, but there is a limit to how many I can revoke. Is this working as intended? The only explanation I can come up with is that the licensing service is giving a license to a device for every server it connects to in our Citrix farm.

    Read the article

  • BIND authoritative name server: SERVFAIL?

    - by Luca Tettamanti
    I have a BIND 9.6 instance that acts as a caching NS for the whole building and is also authoritative for an internal zone ("example" below):

        zone "example" {
            type master;
            file "example";
            update-policy { grant dhcp-update subdomain example. A TXT; };
        };

    Due to a rogue switch we lost connectivity with the rest of the world, and the NS started answering SERVFAIL; what surprised me was that the server was also unable to respond to queries for the example domain. What is the reason for this behavior? Shouldn't the NS be able to answer, since it has authoritative data? Edit: the rest of the configuration is the standard one shipped with Debian: hints for the root servers and the zones for localhost and broadcast.
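
    Whether the daemon really has the zone loaded can be checked locally with a non-recursive query; a quick sketch, assuming dig is installed on the name server itself:

        # ask the local server directly, with recursion disabled, so only
        # authoritative data can satisfy the query
        dig @127.0.0.1 example. SOA +norecurse

        # an authoritative answer carries the "aa" flag in the header;
        # SERVFAIL here means the zone itself failed to load
        dig @127.0.0.1 host.example. A +norecurse +noall +comments +answer

    If the zone answers fine locally, the SERVFAIL seen by clients points at the resolving path rather than at the authoritative data.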

    Read the article

  • Software for failover across multiple external hosts

    - by Lin
    I have multiple webservers with the same content, hosted across different providers. However, I can't seem to find a nice, simple failover solution. Load-balancing software (Pound, HAProxy, etc.) is unnecessary, and I need the flexibility to manage 100+ domains, so the paid DNS failover solutions I've found are too expensive. So far the simplest solution I've thought of is just to set a very low TTL (30 min - 1 hr) in each zone entry on my nameservers (running BIND), then continuously monitor each server and temporarily remove failed servers from zone entries. But this seems like something that should already exist. I only have root access to different VPSes running CentOS. Any suggestions? Thanks!
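
    For reference, the DIY approach described above (health check plus dynamic zone updates) can be scripted against BIND with nsupdate; a rough sketch, assuming a TSIG key, hypothetical hostnames and addresses, and a zone that permits dynamic updates:

        #!/bin/bash
        # crude DNS failover: drop a web server's A record when its health check fails
        ZONE=example.com            # hypothetical zone
        NAME=www.example.com        # record shared by all web servers
        SERVER_IP=203.0.113.10      # one of the backend servers to monitor
        KEY=/etc/bind/failover.key  # TSIG key permitted to update the zone

        if curl -fsS -m 5 "http://${SERVER_IP}/" > /dev/null; then
            action="update add ${NAME}. 300 A ${SERVER_IP}"
        else
            action="update delete ${NAME}. A ${SERVER_IP}"
        fi

        printf 'server 127.0.0.1\nzone %s\n%s\nsend\n' "$ZONE" "$action" | nsupdate -k "$KEY"

    With a 300-second TTL the change propagates reasonably quickly; the trade-off is that every failover is only as fast as the monitoring interval plus the TTL.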

    Read the article

  • Differencing Disks in VirtualBox

    - by PhilPursglove
    I'm struggling to understand how to do differencing disks in VirtualBox v3.1.0. I've created a Windows Server 2008 VM, but now I want to use that as a base image for a number of other servers. The help file has a description of what differencing disks are, but I can't find where it actually tells you how to do it. In the Storage dialog for a server I found the Differencing Disks checkbox, but when I check it, I'd expect it to then ask which image should be the parent so that I could select my base image. Any pointers you can offer would be greatly appreciated!

    Read the article

  • Unable to SSH to EC2

    - by Walker
    I downloaded the cert-xxx.pem and pk-xxx.pem files and also the keypair.pem, and moved them all to the /.ssh folder on my Ubuntu client machine. This is what I get when I try to SSH with -v:

        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /root/.ssh/identity
        debug1: Trying private key: /root/.ssh/id_rsa
        debug1: Trying private key: /root/.ssh/id_dsa
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    I am new to administering servers and I want to know if I should be trying to convert the pem files to id_rsa and id_dsa. I am not really sure if that is possible, but I don't know how else to get id_rsa or id_dsa from those pem files, or whether there is any workaround. I managed to get access to EC2 the first time; this is my second try and I am unsuccessful so far. Any help is appreciated. Regards, Walker
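
    For what it's worth, the keypair .pem is the only one of those three files that SSH uses (cert-xxx.pem and pk-xxx.pem are API credentials, not SSH keys); it does not need to be converted, just passed explicitly with -i. A sketch, with a hypothetical instance hostname and assuming the AMI's default login is root (many AMIs use ubuntu or ec2-user instead):

        chmod 600 ~/.ssh/keypair.pem
        ssh -v -i ~/.ssh/keypair.pem root@ec2-xx-xx-xx-xx.compute-1.amazonaws.com

    The verbose output above shows ssh trying only the default identity files, which is what happens when no -i is given and no matching entry exists in ~/.ssh/config.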

    Read the article

  • FTPS SSH Host Key after IP Address Change

    - by David George
    I have a Secure FTP (FTPS) server that my remote sites upload files to daily via scripted routines. I have had issues in the past when upgrading hardware and deploying new servers caused the RSA fingerprint to change for that server. Then all my remote sites can't connect until I have the old key removed (usually via ssh-keygen -R myserver.com). I now have to change the IP address for myserver.com, and I wondered if there is any way to proactively generate new host keys so that when the server address changes, all my FTPS client remote sites don't break?
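
    The host key itself does not have to change when the IP address does; what breaks clients is the missing known_hosts entry for the new address (or a new key on new hardware). One way to get ahead of it is to publish the current key under the new address before cutover; a sketch, assuming OpenSSH clients and a hypothetical new IP address:

        # from each client (or pushed out centrally), record the server's current
        # RSA host key under both the hostname and the new IP address
        ssh-keyscan -t rsa myserver.com 203.0.113.20 >> ~/.ssh/known_hosts

        # when replacing hardware, copying /etc/ssh/ssh_host_*_key* from the old
        # server to the new one keeps the fingerprint identical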

    Read the article

  • Apache error_log showing 'which' command output

    - by Unai Rodriguez
    Apache's error_log shows lines like the following:

        which: no ruby in (/sbin:/usr/sbin:/bin:/usr/bin)
        which: no locate in (/sbin:/usr/sbin:/bin:/usr/bin)
        which: no suidperl in (/sbin:/usr/sbin:/bin:/usr/bin)
        which: no get in (/sbin:/usr/sbin:/bin:/usr/bin)
        which: no fetch in (/sbin:/usr/sbin:/bin:/usr/bin)
        which: no links in (/sbin:/usr/sbin:/bin:/usr/bin)
        which: no lynx in (/sbin:/usr/sbin:/bin:/usr/bin)
        which: no lwp-mirror in (/sbin:/usr/sbin:/bin:/usr/bin)
        which: no lwp-download in (/sbin:/usr/sbin:/bin:/usr/bin)
        which: no kav in (/sbin:/usr/sbin:/bin:/usr/bin)

    The architecture is: Internet - Load Balancer - Varnish - Apache. There are several web servers behind the load balancer, and I have checked at least one of them with rkhunter and couldn't find anything suspicious. Versions: CentOS 5.7, Varnish 2.1.5, Apache 2.2.3, PHP 5.2.17. Does this mean that someone has executed the command which through Apache? How can that happen? Thank you so much.
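
    Those lines are the standard stderr output of the which utility when a command is not found, and anything spawned by a PHP or CGI script inherits httpd's stderr, which goes to error_log. A small sketch of reproducing the message, assuming the CentOS which package and the default apache user:

        # 'which' writes exactly this style of message to stderr when the
        # command is absent from the search path
        su -s /bin/bash -c 'which ruby lynx kav' apache

        # from a web context, e.g. a PHP one-liner dropped on the server,
        # the same stderr ends up in Apache's error_log:
        #   <?php shell_exec('which ruby lynx kav'); ?>

    httpd itself does not run which on its own, so something executed it via a script or shell under the web server.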

    Read the article

  • How can I connect JConsole to WebLogic using the WL SSL Listen Port

    - by Mircea Vutcovici
    I would like to be able to use JConsole on remote WebLogic servers via the multiplexer port over SSL. Is it possible to do this without making any configuration changes to WebLogic, only by adding some jars (e.g. wljmxclient.jar) or parameters to JConsole? I've tried variations of the following command without success:

        $JAVA_HOME/bin/jconsole \
            -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/tools.jar:$WL_HOME/server/lib/wljmxclient.jar \
            -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote \
            -debug \
            service:jmx:rmi:///jndi/iiop://server_name:7441/jmxrmi

    I think one of the problems is that SSL is not enabled in JConsole.

    Read the article

  • Best Solution for Load Balancing NFS File Access?

    - by DairyKnight
    I'm trying to find an optimal solution for accessing the NFS file share in my company. We have a central file server in North America with 30-50 GB of updated data every day, and it's very slow for our Europe and Asia branches to access directly. Therefore, I'm trying to set up two replica servers on those continents. I'm currently using rsync, but wonder if there is a better solution that acts more like a distributed RAID, allowing the user to transparently access a file whether it is synced or not, with the request dispatched to the remote server if the file is not yet synced. I'm now looking into DRBD, but it does not seem to have that auto-dispatching functionality. Does anyone know of a better solution?

    Read the article

  • Cannot create a linked server between SQL Server 2008 on a desktop and my laptop

    - by norlando
    I'm having an issue getting a linked server to connect between a desktop and my laptop. Both have SQL Server 2008, and the link is from the desktop to my laptop. Also, both computers run Windows 7. I don't have any issues creating the linked server from my laptop to the desktop. The error I'm getting is "Login failed for user '[UserName]'. (Microsoft SQL Server, Error: 18456)." I left the user name out for security reasons. The user is an sa on both SQL servers and an admin on both computers. Does anyone have an idea what could be stopping me from creating the linked server from the desktop to my laptop?

    Read the article

  • pcap stream rotation and pruning

    - by pilcrow
    Some of my servers collect a lot of packet data. Is there a utility (or patch to tcpdump(1)) to log a pcap stream to disk which:

    1. Rotates based on size of data written
    2. Prunes written files, keeping only the N most recent
    3. Does not re-use output filenames
    4. Is self-contained (ruling out, e.g., a rotation with external pruning via crond(8)+tmpwatch(8))

    Basically I want a multilog or svlogd that groks the pcap record format. The -W filecount option of tcpdump-4.0.0 "prunes" by recycling old filenames, which violates #3 above, forcing me to consult mtimes to determine recency and providing no guarantees against surprise truncation of the log file. The -G option introduces strftime(2)-specifier support in output filenames, which would give me at least second precision in file names, but I can't figure out how to get pruning to work with this scheme.
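
    Not quite self-contained, but closer than cron: newer tcpdump builds can run a post-rotate command via -z each time -G or -C closes a file, which can be pointed at a small pruning script. A sketch with hypothetical paths (check that your tcpdump version documents -z):

        # rotate hourly into uniquely named files; -z runs the pruning script
        # after each file is closed
        tcpdump -i eth0 -G 3600 -z /usr/local/bin/prune-pcaps \
            -w '/var/log/pcap/trace-%Y%m%d-%H%M%S.pcap'

        # contents of /usr/local/bin/prune-pcaps -- keep only the 48 newest files
        #!/bin/sh
        ls -1t /var/log/pcap/trace-*.pcap | tail -n +49 | xargs -r rm -f

    This still relies on an external script, but it is triggered by tcpdump itself rather than by a scheduler, so there is no separate cron job to keep in sync.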

    Read the article

  • Amazon EC2 - Wildcard Sub-Domain

    - by Sharanc25
    I'm running an EC2 instance on Ubuntu with a LAMP stack. I configured my httpd.conf file to support wildcard sub-domains, but it didn't work. My httpd.conf file:

        NameVirtualHost *
        <VirtualHost *>
            DocumentRoot /www/example
            ServerName example.com
            ServerAlias *.example.com
        </VirtualHost>

    I tried all possible solutions but they didn't work. Finally I used Amazon Route 53 to set up a wildcard DNS record pointing all *.example.com to example.com. My questions are: is it okay if I use Route 53 instead of the httpd.conf file for wildcard sub-domains? Is there an error in my httpd.conf file? (Note: I used the same httpd.conf settings with another hosting provider and it worked perfectly there.) Additional information:

        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:80 is a NameVirtualHost
            default server example.com (/etc/apache2/httpd.conf:1)
            port 80 namevhost example.com (/etc/apache2/httpd.conf:1)
            port 80 namevhost ip-xx-xxx-xx-xxx.ec2.internal (/etc/apache2/sites-enabled/000-default:1)
        Syntax OK
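
    Route 53 and httpd.conf solve two different halves of the problem: the wildcard DNS record only makes *.example.com resolve to the instance, while the ServerAlias decides which vhost answers. The Apache half can be tested independently of DNS; a quick sketch, using a hypothetical sub-domain and the instance's public address:

        # send a request with an arbitrary sub-domain in the Host header,
        # bypassing DNS entirely
        curl -s -H 'Host: foo.example.com' http://<instance-public-ip>/ | head

        # then confirm the wildcard record actually resolves
        dig +short foo.example.com

    If the Host-header test serves the right content, the vhost block is fine and only DNS was missing, so handling the wildcard in Route 53 is the normal arrangement.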

    Read the article

  • Stop single NLB node at command line

    - by Patrik Hägne
    We have an NLB cluster set up for our public web servers. I'm trying to stop the local host in the cluster from the command line using NLB.EXE. When I run "nlb stop" it seems that all nodes are stopped, but I only want the local node (the server I'm running the command prompt on) to be stopped in the cluster. When I try specifying the node using the command "nlb stop 192.168.182.104:HOSTNAME" it fails, saying "Did not receive response from the cluster". Am I not specifying the cluster and the host correctly?

    Read the article

  • Improving SAS multipath to JBOD performance on Linux

    - by user36825
    Hello all, I'm trying to optimize a storage setup on some Sun hardware with Linux. Any thoughts would be greatly appreciated. We have the following hardware:

        Sun Blade X6270
        2x LSISAS1068E SAS controllers
        2x Sun J4400 JBODs with 1 TB disks (24 disks per JBOD)
        Fedora Core 12
        2.6.33 release kernel from FC13 (also tried the latest 2.6.31 kernel from FC12, same results)

    Here's the datasheet for the SAS hardware: http://www.sun.com/storage/storage_networking/hba/sas/PCIe.pdf It's using PCI Express 1.0a, 8x lanes. With a bandwidth of 250 MB/sec per lane, we should be able to do 2000 MB/sec per SAS controller. Each controller can do 3 Gb/sec per port and has two 4-port PHYs. We connect both PHYs from a controller to a JBOD. So between the JBOD and the controller we have 2 PHYs * 4 SAS ports * 3 Gb/sec = 24 Gb/sec of bandwidth, which is more than the PCI Express bandwidth. With write caching enabled and when doing big writes, each disk can sustain about 80 MB/sec (near the start of the disk). With 24 disks, that means we should be able to do 1920 MB/sec per JBOD. My multipath configuration:

        multipath {
            rr_min_io 100
            uid 0
            path_grouping_policy multibus
            failback manual
            path_selector "round-robin 0"
            rr_weight priorities
            alias somealias
            no_path_retry queue
            mode 0644
            gid 0
            wwid somewwid
        }

    I tried values of 50, 100, and 1000 for rr_min_io, but it doesn't seem to make much difference. Along with varying rr_min_io I tried adding some delay between starting the dd's to prevent all of them writing over the same PHY at the same time, but this didn't make any difference, so I think the I/Os are getting properly spread out. According to /proc/interrupts, the SAS controllers are using an "IR-IO-APIC-fasteoi" interrupt scheme. For some reason only core #0 in the machine is handling these interrupts. I can improve performance slightly by assigning a separate core to handle the interrupts for each SAS controller:

        echo 2 > /proc/irq/24/smp_affinity
        echo 4 > /proc/irq/26/smp_affinity

    Using dd to write to the disk generates "Function call interrupts" (no idea what these are), which are handled by core #4, so I keep other processes off this core too. I run 48 dd's (one for each disk), assigning them to cores not dealing with interrupts, like so:

        taskset -c somecore dd if=/dev/zero of=/dev/mapper/mpathx oflag=direct bs=128M

    oflag=direct prevents any kind of buffer cache from getting involved. None of my cores seem maxed out. The cores dealing with interrupts are mostly idle and all the other cores are waiting on I/O as one would expect.
        Cpu0  : 0.0%us, 1.0%sy, 0.0%ni, 91.2%id,  7.5%wa, 0.0%hi, 0.2%si, 0.0%st
        Cpu1  : 0.0%us, 0.8%sy, 0.0%ni, 93.0%id,  0.2%wa, 0.0%hi, 6.0%si, 0.0%st
        Cpu2  : 0.0%us, 0.6%sy, 0.0%ni, 94.4%id,  0.1%wa, 0.0%hi, 4.8%si, 0.0%st
        Cpu3  : 0.0%us, 7.5%sy, 0.0%ni, 36.3%id, 56.1%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu4  : 0.0%us, 1.3%sy, 0.0%ni, 85.7%id,  4.9%wa, 0.0%hi, 8.1%si, 0.0%st
        Cpu5  : 0.1%us, 5.5%sy, 0.0%ni, 36.2%id, 58.3%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu6  : 0.0%us, 5.0%sy, 0.0%ni, 36.3%id, 58.7%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu7  : 0.0%us, 5.1%sy, 0.0%ni, 36.3%id, 58.5%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu8  : 0.1%us, 8.3%sy, 0.0%ni, 27.2%id, 64.4%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu9  : 0.1%us, 7.9%sy, 0.0%ni, 36.2%id, 55.8%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu10 : 0.0%us, 7.8%sy, 0.0%ni, 36.2%id, 56.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu11 : 0.0%us, 7.3%sy, 0.0%ni, 36.3%id, 56.4%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu12 : 0.0%us, 5.6%sy, 0.0%ni, 33.1%id, 61.2%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu13 : 0.1%us, 5.3%sy, 0.0%ni, 36.1%id, 58.5%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu14 : 0.0%us, 4.9%sy, 0.0%ni, 36.4%id, 58.7%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu15 : 0.1%us, 5.4%sy, 0.0%ni, 36.5%id, 58.1%wa, 0.0%hi, 0.0%si, 0.0%st

    Given all this, the throughput reported by running "dstat 10" is in the range of 2200-2300 MB/sec. Given the math above I would expect something in the range of 2*1920 ~= 3600+ MB/sec. Does anybody have any idea where my missing bandwidth went? Thanks!

    Read the article

  • "Breadcrumbs" for series of hostnames?

    - by Hamy
    Does anyone know of a shell that would show a series of breadcrumbs as I navigate into and out of various servers, like this: Home > Build Machine > Vagrant > Docker-base. Hopefully it could auto-detect logging in and out of various boxes and display the hostnames. Perhaps with a simple "no circular links" rule, one could just monitor the hostname, but I don't know if there is a shell that can easily act as a 'parent' to the other shells on these various systems so that it can query the hostname and/or other items. Any thoughts?

    Read the article

  • Yum error when updating / installing

    - by acctman
    Yum error: are the RHN servers down, or is there a problem on my server? Output of yum update:

        Loaded plugins: rhnplugin, security
        There was an error communicating with RHN.
        RHN support will be disabled.
        Error communicating with server. The message was:
        Error Message:
            RHN Proxy could not successfully connect its RHN parent. Please contact your system administrator.
        Error Class Code: 1000
        Error Class Info: RHN Proxy error.
        Explanation:
            An error has occurred while processing your request. If this problem persists please enter a bug report at bugzilla.redhat.com. If you choose to submit the bug report, please be sure to include details of what you were trying to do when this error occurred and details on how to reproduce this problem.
        Excluding Packages in global exclude list
        Finished
        Skipping security plugin, no data
        Setting up Update Process
        No Packages marked for Update

    Read the article

  • SQL Server to SQL Server linked server setup

    - by ScottStonehouse
    Please explain what is required to set up a SQL Server linked server.

        Server A is SQL 2005, Windows logins only
        Server B is the same (SQL 2005, Windows logins only)
        Server A runs Windows XP
        Server B runs Windows Server 2003
        Both SQL Server services run under the same domain account

    I am logged into my workstation with a domain account that has administrative rights on both SQL Servers. Note these are both SQL Server 2005 SP2 - I've had old hotfixes pointed out to me, but those are already applied. The issue I am having is this error: "Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. (Microsoft SQL Server, Error: 18456)"

    Read the article

  • Colocation near EC2

    - by brianreavis
    Does anyone know any colocation providers near Amazon's US EC2 facilities? I need to colocate a couple of servers that must connect to EC2 with the lowest latency possible. I can't even find where their facilities are... Any ideas of the best solution or places to start looking? (P.S. I'm well aware that EC2 instances can be configured to do pretty much anything. I have a special need that can't be deployed to EC2.)

    Read the article

  • CentOS will not boot. Error 13

    - by ipengineer
    I am having trouble with one of our CentOS servers. I migrated this server to XenServer, installed a new Xen kernel, and performed a mkinitrd with:

        mkinitrd --omit-scsi-modules --with=xennet --with=xenblk --preload=xenblk \
            initrd-2.6.18-308.4.1.el5xen-no-scsi.img 2.6.18-308.4.1.el5xen

    Now I am getting an error 13 on boot. Screenshot: http://postimage.org/image/k7js0l41v/ I can still boot with the PAE kernel. Does anyone have any idea on how to resolve this? My GRUB file looks like:

        default=0
        timeout=5
        splashimage=(hd0,0)/grub/splash.xpm.gz
        hiddenmenu
        title CentOS (2.6.18-308.4.1.el5xen)
            root (hd0,0)
            kernel /vmlinuz-2.6.18-308.4.1.el5xen ro root=/dev/hdb1 ramdisk_size=256000
            initrd /initrd-2.6.18-308.4.1.el5xen-no-scsi.img
        title CentOS (2.6.18-308.4.1.el5PAE)
            root (hd0,0)
            kernel /vmlinuz-2.6.18-308.4.1.el5PAE ro root=/dev/hdb1 ramdisk_size=256000
            initrd /initrd-2.6.18-308.4.1.el5PAE.img
        title CentOS (2.6.18-274.17.1.el5PAE)
            root (hd0,0)
            kernel /vmlinuz-2.6.18-274.17.1.el5PAE ro root=/dev/hdb1 ramdisk_size=256000
            initrd /initrd-2.6.18-274.17.1.el5PAE.img

    Read the article

  • Windows 2008R2 blocks outbound LDAP for non-admins?

    - by Jon Bailey
    I've got a Windows 2008 R2 terminal server with ~30 users on it. It's joined to a Samba-based domain. During the login script, we connect directly to the LDAP server to pull out certain profile information. This used to work just fine. Now it doesn't, but only for non-local-admin accounts. Local admins work fine. As a non-local-admin:

        Connections to ports 389 or 636 just terminate (Wireshark on the LDAP server reveals no connection attempt)
        Connections to other ports on the same server work fine
        Same thing on multiple LDAP servers
        Windows Firewall is disabled
        Can't find any other rules/policies that may block this

    Since this used to work, I suspect it broke during an update, but for the life of me I can't find what. EDIT: I just ran Wireshark on the machine and didn't see anything when connecting to the LDAP server in question (or any LDAP server for that matter). I can, however, see traffic when I connect to that server on another port.

    Read the article

  • How to enable key forwarding with ssh-agent?

    - by Lamnk
    I've used the ssh-agent plugin from oh-my-zsh to manage my SSH key. So far, so good: I only have to type the passphrase for my private key once when I start my shell, and public key authentication works great. The problem, however, is that key forwarding doesn't work. There are two servers, A and B, that I can log into with my public key. When I ssh into A and from there ssh into B, I must provide my password, which should not be the case. A is a CentOS 5.6 box, B is an Ubuntu 11.04 box. I have this in my local .ssh/config:

        Host *
            ForwardAgent yes

    OpenSSH on A is the standard openssh 4.3 package provided by CentOS. I also enabled ForwardAgent for the ssh client on A, but forwarding still doesn't work.
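
    A few checks can narrow down where the chain breaks; a sketch, using A and B as hypothetical hostnames:

        # confirm the local agent has the key loaded
        ssh-add -l

        # force forwarding on for this hop (overrides any config) and log in to A
        ssh -A user@A

        # on A: if forwarding reached this far, an agent socket is present and
        # the same key is listed even though no key files exist on A
        echo "$SSH_AUTH_SOCK"
        ssh-add -l

        # still on A, try the second hop with verbose output to see which
        # authentication methods are actually offered to B
        ssh -v user@B

    If ssh-add -l on A reports no agent, forwarding is being dropped on the first hop (client config or sshd on A); if the key is listed on A but B still asks for a password, the public key is simply not in B's authorized_keys for that account.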

    Read the article

  • Welcome files are not loaded! Need help with Railo, mappings and J2EE configuration!

    - by mrt181
    I have installed a J2EE server (tried it with GlassFish 3, Tomcat 6 and Resin 4) on Win7 64-bit and deployed Railo 3.1. I then added a virtual host to the J2EE server, e.g. in Resin:

        <host host-name="railo">
            C:/resin/webapps/railo

    In the Railo Admin I have added this mapping: Virtual "/", Physical "C:/webapps/". When I access http://railo:8080/ my index.cfm welcome file in C:/webapps/ is loaded (index.cfm is defined in Railo's web.xml). When I try to access http://railo:8080/test, which contains the same index.cfm, I get a 500 Servlet Exception: java.io.FileNotFoundException: C:\webapps\test (access denied) (on all J2EE servers I have tried so far). http://railo:8080/test/index.cfm works fine. I already tried adding index.cfm to Resin's welcome-file-list in app-default.xml, to no avail. I want to be able to access deployed apps without this URL: http://localhost:8080/app/ and instead use this: http://app:8080/

    Read the article

  • Mono Project: How to install a Mono framework compiled on CentOS onto Red Hat Linux?

    - by funwithcoding
    We have Red Hat Enterprise Linux servers at our workplace; however, we don't have Red Hat Linux desktops. So I used CentOS 5.4 to compile the Mono sources, generated the Mono framework for CentOS, tested it with some sample code, and I am satisfied. I want to transfer this compiled framework to Red Hat Enterprise Linux 5. How can I do that? Do I have to compile the Mono framework statically, or do I have to copy the linked libraries as well? I am not very familiar with Linux. Any help is highly appreciated.
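
    Since CentOS 5 is built to be binary-compatible with RHEL 5, the usual approach is to build Mono into its own prefix on the CentOS box and copy that whole tree across; a rough sketch, assuming an /opt/mono prefix (a path chosen here purely for illustration):

        # on the CentOS 5.4 build machine
        ./configure --prefix=/opt/mono
        make
        make install
        tar czf mono-opt.tar.gz -C / opt/mono

        # on the RHEL 5 server
        tar xzf mono-opt.tar.gz -C /
        export PATH=/opt/mono/bin:$PATH
        export LD_LIBRARY_PATH=/opt/mono/lib:$LD_LIBRARY_PATH
        /opt/mono/bin/mono --version

    Static linking is not required; what matters is that the RHEL box has the same system libraries the build linked against (glibc and friends match between CentOS 5 and RHEL 5), plus any optional dependencies that were present at configure time.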

    Read the article
