Search Results

Search found 21597 results on 864 pages for 'timer service'.


  • Dell PowerEdge 1600SC Server won't boot from Fedora 12 DVD because it has a CD-only drive

    - by studiohack23
    Dell PowerEdge 1600SC Server won't boot from the Fedora 12 DVD in the drive because it only supports CDs, as I found out after the fact. I'm a complete novice at servers, so if you need more detail, let me know and I'll try to provide it. This server is around 4-6 years old. It has "PXE" boot; not sure what that means. This particular server has 3 RAID hard drives. As far as I know, they have all been wiped. I looked up the service tag on Dell, and it lists "Compact Disk Drive, 650M, Internal, Half Height, 48X, Black, Hitachi LG Data Storage" as its CD drive. Thus, the CD drive does not support DVDs, so installation will have to be via a live CD. However, I'm trying to install Amahi Home Server (http://www.amahi.org/), and live CD/USB stick installs are not recommended unless one is an expert Linux user. Any suggestions as to how to get around this? PROBLEM SOLVED! THANKS for all the help!

    Read the article

  • Connecting to MySQL Server from PHP Command Line (MAMP)

    - by Austin White
    First of all, I'm using Mac OS X 10.6, MAMP 1.9, PHP 5.3.4, and MySQL 5.1.44. I'm in the process of setting up a video encoding service for a site using Chris Boulton's PHP-Resque and Redis. Once the worker process is fired and the videos have been encoded, I need to save their locations to a MySQL database. The PHP script is being run from the shell, so that is where the issue begins. I import the MySQL settings, and when it attempts to connect I get the following errors:

        Warning: mysqli::mysqli(): php_network_getaddresses: getaddrinfo failed: nodename nor servname provided, or not known in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 24
        Warning: mysqli::mysqli(): [2002] php_network_getaddresses: getaddrinfo failed: nodename nor servn (trying to connect via tcp://MYSQL_SERVER:3306) in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 24
        Warning: mysqli::mysqli(): (HY000/2002): php_network_getaddresses: getaddrinfo failed: nodename nor servname provided, or not known in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 24
        Warning: mysqli::set_charset(): Couldn't fetch MySQLi_Extended in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 32

    I realize that the error is occurring because it's trying to connect to tcp://MYSQL_SERVER:3306, when MySQL is on port 8889. I've been reading about Mac OS X and MAMP errors regarding mysql.sock, and I've gone through multiple forums and tried various fixes, but none have worked. I've tried

        PATH=/Applications/MAMP/Library/bin/:/Applications/MAMP/bin/php5.3/bin/:/opt/local/bin:/opt/local/sbin:$PATH

    and

        sudo ln -s /Applications/MAMP/tmp/mysql/mysql.sock /tmp/mysql.sock

    but neither has worked. I even ran a search on my machine for "3306" to find where it's being set, but because that's the normal default, I'm guessing it's not being set explicitly. Any clues on how to fix this rather challenging error?
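    One way to rule out the socket/port mismatch is to pass MAMP's socket and port to mysqli explicitly rather than relying on whatever php.ini the CLI loads (it is often a different one than Apache's). A minimal sketch, assuming MAMP's default port (8889), default socket path, and root/root credentials; the database name is a placeholder:

        <?php
        // Sketch only: connect using MAMP defaults (assumptions) instead of
        // the CLI php.ini fallback of tcp://MYSQL_SERVER:3306.
        $mysqli = new mysqli(
            'localhost',                                  // 'localhost' selects the socket transport
            'root',                                       // MAMP default user (assumption)
            'root',                                       // MAMP default password (assumption)
            'homefree',                                   // placeholder database name
            8889,                                         // MAMP's default MySQL port
            '/Applications/MAMP/tmp/mysql/mysql.sock'     // MAMP's default socket path
        );
        if ($mysqli->connect_error) {
            die('Connect failed: ' . $mysqli->connect_error . "\n");
        }
        echo 'Connected: ' . $mysqli->host_info . "\n";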

    Read the article

  • Setting up a localhost mail server on Mac OS X

    - by Thom
    I asked this over on Stack Overflow. They pointed me here. I would love to be able to test PHP webapps that require emailing registration info etc. on my Mac. I downloaded a version of CommuniGate Pro. I need to mail either to an account inside or outside (whichever is best) of the localhost. Again, this would be used for testing purposes to verify and debug my code prior to uploading to a hosting service. Any ideas, help and/or examples would be very much appreciated. If it would be easier I could go over to Windows XP; that would just mean setting up WAMP and transferring my files over from the Mac side via Dropbox. I got the local mail server to work, so I can send emails between accounts. However, I cannot seem to get the PHP code to work. I know that I am missing something. I see this has been asked before. I want to add that I am using XAMPP on Mac OS X 10.6.8. I tried changing the php.ini SMTP setting to macintosh-3.local.

        <?php
        function email($to, $subject, $body, $headers) {
            $headers = 'MIME-Version: 1.0' . "\r\n";
            $headers .= 'From: <[email protected]>' . "\r\n";
            mail($to, $subject, $body, $headers);
        }
        ?>
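    One detail worth knowing: on Mac OS X (and other Unix-like systems) PHP's mail() hands the message to the binary configured in sendmail_path; the SMTP and smtp_port settings in php.ini only apply on Windows, which may be why editing SMTP had no visible effect. A quick test harness for the setup above, with a hypothetical recipient address, is a sketch like:

        <?php
        // Sketch: check whether mail() accepts the message at all. On OS X
        // this exercises the sendmail_path binary, not the SMTP ini setting.
        $ok = mail('[email protected]', 'XAMPP mail test', 'Test body',
                   'From: <[email protected]>' . "\r\n");
        echo $ok ? "mail() accepted the message\n" : "mail() returned false\n";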

    Read the article

  • Symantec Endpoint Protection Virus Definitions

    - by Gus Denton
    I have done some Googling but I cannot get a definitive answer, certainly not from the Symantec KB. I have a virtualised Win 2003 R2 server, 32-bit. It has been provisioned to me with Symantec Endpoint Protection 11.0.62xxx CLIENT (not a definitions server). The directory C:\Program Files\Common Files\Symantec Shared\VirusDefs is 750MB. It doesn't contain .tmp directories, so it is NOT a corrupt definitions server. It does contain directories named with a date pattern YYYYMMDD.xxx. Some of these folders are 12 months old and I would like to recover the space. The Symantec forums are full of this stuff, but a lot of the postings contain links back to documents that are not specific to the Endpoint Protection client. It appears that I should be able to delete the older folders and all will be OK with a service restart; however, there is a warning about having Live Update Administrator installed. Firstly, I have no idea if I have this installed; how do I check? And secondly, can I just ditch these old files and restart? Regards, Gus Denton, Learning and Teaching, Uni of New South Wales, Sydney, Australia.

    For those trying to assist me, I thank you. I have followed some instructions found on the Symantec site and assumed that the response from Nixphoe would resolve my issue. It appears that, as I am on a provisioned VM from a central IT unit, I cannot run the Symantec commands (smc -stop) from the Run prompt with my admin creds. Basically I need to claw back some disk space from the C: drive, which is being filled up with WSUS patches and Symantec files. I have managed to delete one Symantec cache through the Live Update control panel and recovered 470MB. I suppose my last question for those more experienced than myself is: can I simply remove, say, the two oldest virus definition folders without completely foobaring Endpoint Protection and the server? Regards, Gus
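    For reference, the usual sequence on a machine where you do have local admin rights is to stop the client, remove the stale dated definition folders, and restart it. A sketch only, not a supported procedure; the folder name below is an example, the newest definition set must be left in place, and taking a snapshot or backup first is strongly advised:

        REM Sketch: stop the SEP client before touching VirusDefs.
        smc -stop
        REM Remove one dated definition folder (example name; keep the newest set).
        rd /s /q "C:\Program Files\Common Files\Symantec Shared\VirusDefs\20090501.003"
        smc -start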

    Read the article

  • Are there any wireless webcams/cameras that Windows will recognize as a capture device?

    - by Keithius
    I'd like to have a webcam in a different room from my computer, and the distance means USB is out of the question. I know there are many wireless cameras, but what I can't seem to find out is if any of them would be recognized by Windows as a capture device (just like a locally connected USB webcam). Most of the wireless cameras I can find (e.g., D-Link DCS920; Cisco-Linksys WVC54GCA, etc.) can all stream video directly from the camera itself, which is fine if you're using the camera as a "security" camera (for private use only), but not for other uses (say, sending the video to an online video streaming service, e.g., Ustream). It seems like this should be possible; after all, wireless (WiFi) printers with scanners are recognized by Windows. Are there any wireless (WiFi) cameras out there that would be recognized by Windows as a capture device in the same way as a USB webcam would? Alternatively, a camera that's not wireless (e.g., connects via Ethernet) would do the trick too - but I imagine if anyone is going to make a remote camera like this, they'd go the extra step and make it wireless, too.

    Read the article

  • Network Explorer Intermittently Fails to Display all Computers in Workgroup

    - by graf_ignotiev
    I run a small computer lab of 10 computers and occasionally, when using the network explorer (a.k.a. Network Browser), some or all of the remote computers will fail to appear. If I try to access a remote computer by its name I get an unspecified error (code 0x80004005), but I am still able to access it with the computer's IP address. The strangest part is that the problem will inexplicably go away after waiting a while. Each computer is running Windows 7 x64 Enterprise and has identical hardware, software and configuration. They are all on the same subnet and in the same workgroup. I've spent days researching the problem and have tried the following solutions:

    1. Updated the BIOS, chipset and network adapter drivers
    2. Changed power settings in Network Adapter Properties so that the computer will not turn it off
    3. Disabled the Computer Browser service
    4. Changed the DHCP node type to broadcast
    5. Reviewed the Event Viewer logs

    Steps 3 and 4 seem to have helped the problem a little bit, but not completely. I'm beginning to suspect that the problem might lie with our router, which is a ZyXEL ZyWALL 2WG, as the packets sent by Network Discovery may not be returning in time, but I wanted to get some perspective on the issue before I went any further.

    Read the article

  • Adding 2008 Server to 2008 Domain

    - by Phillip
    Hello, I'm trying to create a lab for testing before I deploy solutions. I'm not an experienced IT administrator, and therefore I come here for help. I'm running 2 virtual servers on the same machine on a local connection between those two. They are able to ping each other. Their names are TSDATA1 and TSDATA2, where TSDATA1 is the domain controller. I am able to ping between those two, with both "ping TSDATA1" and "ping 10.0.0.1", which is the IP address of TSDATA1. The IP address of TSDATA2 is 10.0.0.2. I'm trying to join the domain with TSDATA2, but I'm getting this error when trying:

        Note: This information is intended for a network administrator. If you are not your network's administrator, notify the administrator that you received this information, which has been recorded in the file C:\Windows\debug\dcdiag.txt.
        The following error occurred when DNS was queried for the service location (SRV) resource record used to locate an Active Directory Domain Controller for domain tsdata.local:
        The error was: "DNS name does not exist." (error code 0x0000232B RCODE_NAME_ERROR)
        The query was for the SRV record for _ldap._tcp.dc._msdcs.tsdata.local
        Common causes of this error include the following:
        - The DNS SRV records required to locate an AD DC for the domain are not registered in DNS. These records are registered with a DNS server automatically when an AD DC is added to a domain. They are updated by the AD DC at set intervals. This computer is configured to use DNS servers with the following IP addresses: 10.0.0.1
        - One or more of the following zones do not include delegation to its child zone: tsdata.local, local, . (the root zone)
        For information about correcting this problem, click Help.

    I've figured out it has something to do with DNS lookup, but I have no clue what to do. Can anyone help?
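    A quick sanity check, run on TSDATA2, is to query TSDATA1's DNS service directly for the exact SRV record the join is looking for. If this returns "non-existent domain" rather than a host and port, the DC's records never registered (or TSDATA2 is not actually using 10.0.0.1 as its DNS server):

        nslookup -type=SRV _ldap._tcp.dc._msdcs.tsdata.local 10.0.0.1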

    Read the article

  • RemoteApp .rdp embed creds?

    - by Chris_K
    Windows 2008 R2 server running Remote Desktop Services (what we used to call Terminal Services back in the olden days). This server is the entry point into a hosted application; you could call it Software as a Service, I suppose. We have 3rd-party clients connecting to use it. We're using RemoteApp Manager to build RemoteApp .rdp shortcuts to distribute to client workstations. These workstations are not in the same domain as the RDS server. There is no trust relationship between domains (nor will there be). There is a tightly controlled site-to-site VPN between workstations and the RDS server; we're quite confident we have access to the server locked down. The RemoteApp being run is an ERP application with its own authentication scheme. The issue? I'm trying to avoid the need to create AD logins for every end user when connecting to the RemoteApp server. In fact, since we're doing a RemoteApp and they have to authenticate to that app, I'd rather just not prompt them at all for AD creds. I certainly don't want them caught up in managing AD passwords (and periodic expirations) for accounts they only use to get to their ERP login. However, I can't figure out how to embed AD creds in a RemoteApp .rdp file. I don't really want to turn off all authentication on the RDS server at that level. Any good options? My goal is to make this as seamless as possible for the end users. Clarifying questions are welcome.
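    For reference, an .rdp file can carry the standard credential fields sketched below (the server name and account are placeholders). The catch is that the "password 51" value is a DPAPI blob encrypted for the user and machine that saved it, so a hash generated on one workstation will not decrypt on another; that limitation is worth weighing before trying to distribute pre-credentialed files:

        full address:s:rds.example.com
        username:s:RDSDOMAIN\erpuser
        password 51:b:<DPAPI-encrypted blob generated on the saving client>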

    Read the article

  • Networking Problem: MrxSmb event 50 "Delayed Write Failed" errors occurring all of a sudden

    - by Johnny Musso
    JUST THIS MONTH, we have started getting reports from a number of very stable clients that MrxSmb event ID 50 errors keep appearing in their system event logs. Otherwise, they do not appear to have any networking problems, except that there is a critical legacy application which seems to either be generating the MrxSmb errors or having errors occur because of them. The legacy application is comprised of 16-bit and 32-bit code and has not been changed or recompiled in many years. It has always been stable on Windows XP systems. The customers that have the problem usually have a small (5 clients or fewer) peer-to-peer network with all Windows XP systems. All service packs are loaded on the XP machines. Note: the only thing that seems to correct the problem is disabling opportunistic locking. I don't like this solution because it seems to slow down the network and sometimes causes record-locking issues between users (on some networks). Also, this seems to have just started happening, as if a Windows update for XP has caused it? However, I have removed recent updates and it did not correct the issue. Thanks in advance for any help you can provide.
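    For anyone needing the exact switches, the registry values documented for disabling oplocks on Windows XP (Microsoft KB 296264) are sketched below; the first applies to the redirector (client side), the second to shares the machine itself serves. Back up the registry and reboot after changing them:

        Windows Registry Editor Version 5.00

        ; Client (redirector) side: stop requesting oplocks.
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MRXSmb\Parameters]
        "OplocksDisabled"=dword:00000001

        ; Server side: stop granting oplocks on shares this machine serves.
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
        "EnableOplocks"=dword:00000000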

    Read the article

  • Should I persist images on EBS or S3?

    - by enes
    Hi, I am migrating my Java, Tomcat, MySQL server to AWS EC2. I have already attached an EBS volume for storing MySQL data. In my web application people may upload images, so I should persist them. There are 2 alternatives in my mind:

    1. Save uploaded images to the EBS volume.
    2. Use the S3 service.

    The following are my notes; please be skeptical about them, as my expertise is not in servers but software development.

    - EBS plus: S3 storage is more expensive ($0.15/GB vs. $0.10/GB).
    - S3 plus: Serving statics from EBS may influence my web server's performance negatively. Is this true? Does serving images affect server performance notably? With S3, my server will not be responsible for serving statics.
    - S3 plus: Serving statics from EBS may result in I/O cost; probably it will be minor.
    - EBS plus: People say EBS is faster.
    - S3 plus: People say S3 is safer for persistence.
    - EBS plus: No need to learn an API; it is straightforward to save the images to the EBS volume.

    In short, I cannot decide; I will be happy if you guide me. Thanks

    Read the article

  • Linux (DUP!) ping packets

    - by Darkmage
    I can't seem to figure out what is going on here. The Linux machine I am using is running as a VM on a Win7 machine using VirtualBox running as a service. If I ping the Win7 host, I get an OK result:

        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 192.168.1.100
        PING 192.168.1.100 (192.168.1.100) 10(38) bytes of data.
        18 bytes from 192.168.1.100: icmp_seq=1 ttl=128 time=1.78 ms
        18 bytes from 192.168.1.100: icmp_seq=2 ttl=128 time=1.68 ms

    If I ping localhost, I'm OK:

        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 localhost
        PING localhost (127.0.0.1) 10(38) bytes of data.
        18 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.255 ms
        18 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.221 ms

    But if I ping the gateway, I get DUP packets:

        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 192.168.1.1
        PING 192.168.1.1 (192.168.1.1) 10(38) bytes of data.
        18 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=1.27 ms
        18 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=1.46 ms (DUP!)
        18 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=22.1 ms
        18 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=22.4 ms (DUP!)

    If I ping another machine on the same LAN I still get DUPs. Pinging remote hosts also gives (DUP!) results:

        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 www.vg.no
        PING www.vg.no (195.88.55.16) 10(38) bytes of data.
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=1 ttl=245 time=10.0 ms
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=1 ttl=245 time=10.3 ms (DUP!)
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=2 ttl=245 time=10.3 ms
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=2 ttl=245 time=10.6 ms (DUP!)
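    One way to narrow this down is to capture ICMP inside the guest while a ping runs: if the capture shows each echo reply actually arriving twice on the wire, the duplication is happening on the host or bridged-adapter side rather than in the guest's own stack. A sketch, assuming the guest's interface is eth0:

        tcpdump -n -i eth0 icmp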

    Read the article

  • Error in Bind9 named.conf file. Bind won't start.

    - by tj111
    I'm trying to setup a DNS server on an Ubuntu Server machine (10.04). I configured an entry in named.conf.local to test it, but when trying to restart bind9 I get the following error:

        * Starting domain name service... bind9    [fail]

    So I checked the output of syslog and this is what I get:

        May 20 18:11:13 empression-server1 named[4700]: starting BIND 9.7.0-P1 -u bind
        May 20 18:11:13 empression-server1 named[4700]: built with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc/bind' '--localstatedir=/var' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-gnu-ld' '--with-dlz-postgres=no' '--with-dlz-mysql=no' '--with-dlz-bdb=yes' '--with-dlz-filesystem=yes' '--with-dlz-ldap=yes' '--with-dlz-stub=yes' '--with-geoip=/usr' '--enable-ipv6' 'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
        May 20 18:11:13 empression-server1 named[4700]: adjusted limit on open files from 1024 to 1048576
        May 20 18:11:13 empression-server1 named[4700]: found 4 CPUs, using 4 worker threads
        May 20 18:11:13 empression-server1 named[4700]: using up to 4096 sockets
        May 20 18:11:13 empression-server1 named[4700]: loading configuration from '/etc/bind/named.conf'
        May 20 18:11:13 empression-server1 named[4700]: /etc/bind/named.conf:10: missing ';' before 'include'
        May 20 18:11:13 empression-server1 named[4700]: loading configuration: failure
        May 20 18:11:13 empression-server1 named[4700]: exiting (due to fatal error)

    So it thinks I have an error in the default named.conf file, which is pretty ridiculous. I went through it and deleted a blank line just for the hell of it, but I can't see how it figures there's an error in there. Note that before this I did have an error in named.conf.local, but it showed up properly in syslog and I fixed it, so it is reporting the correct file. Here is the contents of named.conf:

        // This is the primary configuration file for the BIND DNS server named.
        //
        // Please read /usr/share/doc/bind9/README.Debian.gz for information on the
        // structure of BIND configuration files in Debian, *BEFORE* you customize
        // this configuration file.
        //
        // If you are just adding zones, please do that in /etc/bind/named.conf.local

        include "/etc/bind/named.conf.options";
        include "/etc/bind/named.conf.local";
        include "/etc/bind/named.conf.default-zones";
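    Worth knowing here: the BIND parser reports a missing ';' at the point where it resumes reading, so "named.conf:10: missing ';' before 'include'" usually means the last statement of the file included just before that line (here, named.conf.options) is unterminated. The bundled checker follows includes and will point at the real file and line:

        named-checkconf /etc/bind/named.conf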

    Read the article

  • Disabled FRS replication on a DFS link, but the targets still list the replica set in their FRS configuration

    - by Graeme Donaldson
    It's been a while since I've had to deal with the wonders of FRS, so I'm doing some testing to refresh my memory. I am stuck with FRS rather than DFS-R for the moment, since not all of my link targets are running R2. This is what I've done so far:

    1. Created a domain-based DFS root, hosted on 4 servers.
    2. Created a DFS link under the root, targeted at 2 servers. The shares on both servers were empty.
    3. Dropped about 500MB of data into the target folder on one server and waited for replication to complete.
    4. Added/removed/modified files on both targets and confirmed that changes are replicated within a few seconds.
    5. Deleted the contents of the target folder on 1 server and waited for the other server to replicate the deletion.

    All of this worked perfectly, so now I want to remove my DFS link since I only created it for testing purposes. This is where it gets weird. I'm pretty sure that in the past I've disabled replication on the DFS link and after a short amount of time each target would log an info event in the FRS event log, something along the lines of "this server is no longer a member of replica set X". I have waited about 3 hours and I haven't seen this happen. ntfrsutl ds tells me that the server is not a member of any set, which is expected because when I disable replication on the link, the AD attributes on the computer object are removed. The weird part is... ntfrsutl sets still shows me the replica set, with all the properties, etc. So it seems like the FRS-related attributes of the target server's AD object are gone, but the FRS service for some reason hasn't removed the replica set. Can anyone see what I have done wrong?

    Read the article

  • SQL 2008 Replication corrupt data problem

    - by Jonathan K
    We took a SQL 2000 database, took a Lightspeed backup, and restored it on a SQL 2008 active/passive cluster. We then set up replication to replicate the data back to SQL 2000, so 2008 is the publisher/distributor and 2000 is doing a pull subscription. Everything works well, except we occasionally get corrupt data in varchar/text fields on the subscriber. For example, we have a table with 4500 records. When we run this statement:

        update MedstaffProvider
        set Notes = 'Cell Phone: 360.123.4567 Answering Service: 360.123.9876'
        where LastName = 'smith'

    the record in the 2008 database is updated as expected. But in the subscriber database we'll get gibberish in the Notes field: óPÌ[1] T $Oé[1] ð²ñ. K

    Here's what we know:

    - This is repeatable, meaning we can run that same query all day long and get the same gibberish.
    - If you alter the update statement slightly, the data gets replicated just fine.
    - The collation on both databases is the same.
    - So far we've only detected the problem with text/varchar fields. (The Notes field above is text.)
    - Only one or two records in a table are impacted.
    - The table structure looks identical in both 2000/2008. We haven't made any changes.

    We have found one solution that fixes the problem: recreate the table in 2008 (say as MedStaffProvider2), insert all the data, drop the original table, rename the new table to the original name, and set up replication again. Running the exact same update statement then works as expected. Does anyone have any idea what might be happening here? Or are there any other techniques we can use to troubleshoot this? I've found a workaround, but would really like to understand why this is happening.

    Read the article

  • Fatal Execution Engine Error on Windows 2008 R2, IIS 7.5

    - by user66524
    Hi guys, we are running some ASP.NET (3.5) applications on Windows 2008 R2, IIS 7.5. Recently we got some event log entries that are difficult to interpret; we have no idea what they mean and hope someone can help.

    1. EventID: 1334 (9-1-2011 8:41:57). Error message:

        An error occurred during a process host idle check.
        Exception: System.AccessViolationException
        Message: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
        StackTrace:
        at System.Collections.Hashtable.GetEnumerator()
        at System.Web.Hosting.ApplicationManager.IsIdle()
        at System.Web.Hosting.ProcessHost.IsIdle()

    2. EventID: 1023 (9-1-2011 19:44:02). Error message:

        .NET Runtime version 2.0.50727.4952 - Fatal Execution Engine Error (742B851A) (80131506)

    3. EventID: 1000 (9-1-2011 19:44:03). Error message:

        Faulting application name: w3wp.exe, version: 7.5.7600.16385, time stamp: 0x4a5bcd2b
        Faulting module name: mscorwks.dll, version: 2.0.50727.4952, time stamp: 0x4bebd49a
        Exception code: 0xc0000005
        Fault offset: 0x0000c262
        Faulting process id: 0x%9
        Faulting application start time: 0x%10
        Faulting application path: %11
        Faulting module path: %12
        Report Id: %13

    4. EventID: 5011 (9-1-2011 19:44:03). Error message:

        A process serving application pool 'AppPoolName' suffered a fatal communication error with the Windows Process Activation Service. The process id was '2552'. The data field contains the error number.

    5. Some info: we got memory.hdmp (234MB) and minidump.mdmp (19.2MB) from Control Panel's Action Center, but I don't know how to use them :(

    Read the article

  • Limiting bandwidth on a Windows 7 machine

    - by Mihai Damian
    I need to limit the bandwidth on my Windows 7 x64 machine. In the past (on XP) I've been able to use NetLimiter for similar tasks. However, for some reason I can't get it to work anymore. For lower limits, the bandwidth tests are able to exceed the limit by 10-50%; higher limits seem to be ignored completely, and the bandwidth tests report download speeds of over 10 times the speed I set. I'm using speedtest.net and a similar service from my ISP for these tests. Anyway, I don't necessarily need a program as complex as NetLimiter, since I only need to throttle my machine's bandwidth, not a specific program's. In case you are wondering why in the world I'd want to cripple my Internet speed, there is a funny story behind this. Long story short, my modem gets random disconnects. Tech support comes in, says my Internet speed is abnormally high and I must be using some tools to somehow make it go faster than it's supposed to, and this messes up my modem. I check the connection with another computer and it seems that my PC is the only one in my network that gets abnormal speeds. I reinstall my OS; speed looks normal at first, but after I install the batch of 50 or so updates, it goes back to abnormally high speeds and the disconnect problems are not solved. Now I don't have a clue if the explanation the tech team gave me was just a strategy to lay the blame on someone else, but I was trying to give them the benefit of the doubt and see what happens if I really reduce my speed to their specification. Any help appreciated.

    Read the article

  • Backup and Archive Strategy Question

    - by OneNerd
    I am having trouble finding a backup strategy for our code assets that 'just works' without any manual intervention. The goal is to have an off-site backup (a synchronized one) so that when we check in files, create builds, etc. on the network drive, the entire folder structure is automatically synchronized and backed up (in real time, or once per day) at some off-site location, so if our office blows up we don't lose all of our data. I have looked into some online backup services, but have not yet had any success. Some are quirky/buggy; others limit file size and/or kinds of files (which doesn't work well for developer files). Everything gets checked in and saved to a single server (on a RAID mirror), so we just need to have a folder on that server backed up/synchronized to some off-site location. So my question is this: what are you using for your off-site backup strategy? What software, system, or service? Is there a be-all/end-all system of backing up your code assets that I just haven't found yet? Thanks
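    In case it helps frame answers: even without a third-party service, a scheduled one-way mirror from the server to an off-site share covers the basic requirement. A sketch only; the source share, target UNC path and log location are all placeholders:

        REM Sketch: nightly one-way mirror of the code share to an off-site target.
        REM /MIR mirrors the tree (including deletions), /Z restarts interrupted
        REM copies, /R and /W bound retries so a dead target doesn't hang the job.
        robocopy \\buildserver\code \\offsite-nas\backup\code /MIR /Z /R:2 /W:5 /LOG+:C:\logs\code-mirror.log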

    Read the article

  • Cooling for a small server room

    - by John Zwinck
    I have a server room about 12 feet square with an unfinished ceiling (exposed ducts and wiring). It houses a few servers (about ten, 1U and 2U) and some networking gear (four 1U switches, three routers, three modems, two cable boxes). With the door closed, it runs around 80 degrees Fahrenheit with half the servers turned on. When I turned on all the servers it reached 86 before I chickened out and propped the door open.

    The room is adjacent to air-conditioned office space, but does not itself have dedicated air conditioning. The ventilation for this room seems to be limited to one duct coming in at ceiling level, with a powered fan to draw air in, and one duct at ceiling level to allow air to flow out (it seems like it may just go into the drop ceiling cavity in the adjacent room). The adjacent office space stays fairly cool, but I'd prefer not to leave the door propped open all the time. There is both 110V and 208V service in the room, and plenty of power available. But there are no windows, and no floor drains (in a pinch we might be able to run a condensation hose through a small hole we'd drill in the wall to a nearby sink area, but only if absolutely necessary).

    I've considered portable A/C units, but I'm not sure on sizing, and a lot less sure how we would run the exhaust hose(s). I suppose we could point one at the existing room exhaust duct (air return), but substantially modifying the duct is probably a no-no. I've also considered installing a fan box in the door of the room, but I'm concerned that this will only drop the temperature a little. Even right now, with all the equipment on, the room is at 83 degrees with the door open. And the main building A/C turns off daily at 6 PM to conserve energy, so the adjacent room temperature rises at night. How would you cool this room? Let's say the goal is to bring the temperature with everything running from a steady state of around 90 degrees down to 75 (equivalently, to offset the heat produced by ten 1U servers).
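    For sizing, a rough heat-load estimate helps. Assuming each server draws on the order of 300 W under load (an assumption; measure with a meter or check the PSU ratings), the arithmetic is:

        10 servers x 300 W = 3,000 W of heat
        3,000 W x 3.412 BTU/hr per W ≈ 10,240 BTU/hr

    so a nominal 12,000 BTU/hr (1-ton) portable unit would cover the servers with some headroom, before accounting for the switches, routers, and heat leaking in from the unconditioned ceiling space.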

    Read the article

  • Failed to generate a user instance of SQL Server

    - by Goondocks
    I'm using Windows 7 Beta and trying to install a web application locally. This web site uses Microsoft SQL Server 2005 Express (SQLEXPRESS) and an MDB file in the web site's ~/App_Data folder. I was instructed to configure IIS7 to use the Classic .NET AppPool for this web application. Each time the web site loads, I receive the following error:

        There was an error trying to connect to the Database Server: Failed to generate a user instance of SQL Server due to failure in retrieving the user's local application data path. Please make sure the user has a local user profile on the computer. The connection will be closed.

    The Internet is packed with articles written on this subject. The prevailing wisdom seems to be:

    1. Configure the SQL Express service to use the Local System account.
    2. Delete the following directory: C:\Users\username\AppData\Microsoft\Microsoft SQL Server Data\SQLEXPRESS

    Neither of these fixes has made any impact. I have tinkered with permissions and settings for hours to no avail. Can anyone suggest a fix, or help me understand how to get more detailed information about the problem?

    Read the article

  • Windows XP Language, explorer.exe

    - by nmuntz
    Hi, I was given by my company a laptop with Windows XP Professional in Spanish. I would like to translate it to English, since I really DISLIKE using localized versions of programs. I have read about Windows MUI packs; however, you MUST have Windows XP Pro in English in order to translate it to another language; you can't translate it TO English from another language. Since reinstalling the OS using a Windows XP CD in English is not an option (I don't have the license nor the CD, and don't have domain privileges to rejoin my computer to the domain), I was wondering what the essential files are that contain localized strings of text. I was doing some research, and apparently explorer.exe has many of the Windows error messages and other strings. Will replacing my original explorer.exe with one from Windows XP in English be enough (and work) for having a "basic" English version of Windows? I'm mainly interested in having error messages, the Start menu, and Control Panel in English. Also, does it HAVE to be the same version as the service pack I'm running? Besides explorer.exe, are there any other essential files that I should try to get and replace? Do you see any "dangers" in replacing these files with English-version ones? Thanks in advance for your help.

    Read the article

  • how to design pound -> varnish -> jboss for ha + loadbalancing

    - by andreash
    Hello, I'm planning a new infrastructure for our web application. We have two JBossAS5 servers, running in a cluster. Session state will be replicated via JBoss Cache. In front of that, there should be some cache to speed up delivery of static elements. However, most of the traffic to our app will be via HTTPS. So far, I had been thinking of two Varnish caches in front of the JBossASs, each configured for load balancing to the two JBossASs via round-robin. Since Varnish doesn't handle HTTPS, there would need to be two Pound proxies in front of the Varnishes, dealing with the HTTPS. The two Pounds would be made highly available with Heartbeat/LinuxHA. The traffic to www.example.com would then go through our firewall, from there to the virtual IP of the Pounds, from there to the Varnishes, and from there to the JBossASs.

    Question 1: Does this make sense? Or is it overly complicated, and the same goal can be reached with simpler methods?

    Question 2: If my layout is fine, how do I configure the Pound-to-Varnish step? Should I (a) make the Varnish service highly available through Heartbeat/LinuxHA as well and direct traffic from Pound to the virtual IP of the Varnishes, or should I rather (b) configure two independent Varnishes and use load balancing in Pound to address the different Varnishes?

    Thanks a lot for your insight! Andreas.
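    For option (b), a Pound listener can name both Varnishes as backends and will round-robin between them by default. A minimal sketch only, with placeholder addresses, ports and certificate path:

        ListenHTTPS
            Address 0.0.0.0
            Port    443
            Cert    "/etc/pound/www.example.com.pem"
            Service
                BackEnd
                    Address 10.0.0.11   # Varnish #1 (placeholder)
                    Port    6081
                End
                BackEnd
                    Address 10.0.0.12   # Varnish #2 (placeholder)
                    Port    6081
                End
            End
        End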

    Read the article

  • Git push over http (using git-http-backend) and Apache is not working

    - by Ole_Brun
    I have desperately been trying to get push for git working through the "smart-http" mode using git-http-backend. However, after many hours of testing and troubleshooting, I am still left with:

        error: Cannot access URL http://localhost/git/hello.git/, return code 22
        fatal: git-http-push failed

    I am using the latest versions of Ubuntu (12.04), Apache2 (2.2.22) and Git (1.7.9.5) and have followed different tutorials found on the Internet, like this one: http://www.parallelsymmetry.com/howto/git.jsp. My VHost file currently looks like this:

        <VirtualHost *:80>
            SetEnv GIT_PROJECT_ROOT /var/www/git
            SetEnv GIT_HTTP_EXPORT_ALL
            SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER
            DocumentRoot /var/www/git

            ScriptAliasMatch \
                "(?x)^/(.*?)\.git/(HEAD | \
                info/refs | \
                objects/info/[^/]+ | \
                git-(upload|receive)-pack)$" \
                /usr/lib/git-core/git-http-backend/$1/$2

            <Directory /var/www/git>
                Options +ExecCGI +SymLinksIfOwnerMatch -MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    I have changed the ownership of the /var/www/git folder to root.www-data, and for my test repositories I have enabled anonymous push by doing git config http.receivepack true. I have also tried with authenticated users, but with the same outcome. The repositories were created using:

        sudo git init --bare --shared [repo-name]

    While looking at the Apache2 access.log, it appears to me that WebDAV is trying to be used, and that git-http-backend is never fired:

        127.0.0.1 - - [20/May/2012:23:04:53 +0200] "GET /git/hello.git/info/refs?service=git-receive-pack HTTP/1.1" 200 207 "-" "git/1.7.9.5"
        127.0.0.1 - - [20/May/2012:23:04:53 +0200] "GET /git/hello.git/HEAD HTTP/1.1" 200 232 "-" "git/1.7.9.5"
        127.0.0.1 - - [20/May/2012:23:04:53 +0200] "PROPFIND /git/hello.git/ HTTP/1.1" 405 563 "-" "git/1.7.9.5"

    What am I doing wrong? Is it an issue with the version of git and/or apache that I am using, perhaps? BTW: I have read all the git http related questions on ServerFault and StackOverflow, and none of them provided me with a solution, so please don't mark this as duplicate.
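    A quick way to see whether the smart protocol is actually being served (rather than the request falling through to the DocumentRoot) is to fetch the ref advertisement by hand. A smart-HTTP response carries Content-Type: application/x-git-receive-pack-advertisement; a plain text/plain response means the ScriptAliasMatch never matched and git falls back to the dumb/DAV path:

        curl -i "http://localhost/git/hello.git/info/refs?service=git-receive-pack"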

    Read the article

  • Munin with postgresql 9.2

    - by jreid9001
    I am trying to set up Munin to collect stats on a server with PostgreSQL 9.1 and 9.2 (the server is currently running 9.1; I have tested on a fresh VM with 9.2 to rule out some weird problem on the running server). I had to patch some of the plugins for 9.2 due to renamed columns (e.g. procpid to pid), but that's no problem. Munin is installed from the EPEL repos, Postgres from the official one; both are up to date. When I try to run munin-node-configure --suggest, I get this output:

        # The following plugins caused errors:
        # postgres_bgwriter:
        #       Junk printed to stderr
        # postgres_cache_:
        #       Junk printed to stderr
        # postgres_checkpoints:
        #       Junk printed to stderr
        # postgres_connections_:
        #       Junk printed to stderr
        # postgres_connections_db:
        #       Junk printed to stderr
        # postgres_locks_:
        #       Junk printed to stderr
        # postgres_querylength_:
        #       Junk printed to stderr
        # postgres_scans_:
        #       Junk printed to stderr
        # postgres_size_:
        #       Junk printed to stderr
        # postgres_transactions_:
        #       Junk printed to stderr
        # postgres_tuples_:
        #       Junk printed to stderr
        # postgres_users:
        #       Junk printed to stderr
        # postgres_xlog:
        #       Junk printed to stderr

    After a lot of searching around, I edited /etc/munin/plugin-conf.d/munin-node and added the following:

        [postgres*]
        user postgres

    This stops munin-node-configure complaining about stderr and lets me add the plugins, but when I telnet to the server on 4949 and try to fetch the stats, I just get "Bad exit". When I run the plugin individually via munin-run (e.g. munin-run postgres_size_ALL), it works completely fine. Looking at /var/log/munin/munin-node.log, this is the output:

        Error output from postgres_size_ALL:
        DBI connect('dbname=template1','',...) failed: could not connect to server: Permission denied
        Is the server running locally and accepting connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
        at /usr/share/perl5/vendor_perl/Munin/Plugin/Pgsql.pm line 377
        Service 'postgres_size_ALL' exited with status 1/0.

    I am now out of ideas... the socket definitely exists, and pg_hba.conf is set to allow all users/databases from localhost with trust.
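    Since munin-run works but the same plugin fails under munin-node, the difference is usually the execution environment rather than the plugin itself. One common workaround is to force the plugins onto TCP instead of the Unix socket by exporting libpq environment variables in the plugin configuration; a sketch, assuming the default port:

        [postgres*]
        user postgres
        env.PGHOST localhost
        env.PGPORT 5432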

    Read the article

  • Phusion Passenger (Apache, Sinatra) suddenly not working for a single site on my server

    - by Kerrick
    I've had Phusion Passenger working for a few of my sites for months. Then, today, it stopped working for a single site. I hadn't changed anything (I hadn't even SSH'ed into the server for a week), and everything is set up the way it should be for it to work. Plus, it's working fine for other sites! I'm about to pull my hair out trying to find out what's wrong, so I was hoping y'all could help. Passenger is not working on kerricklong.com; I only get the "It works!" Apache default page. If I look at the headers, it's not even serving the X-Powered-By: Phusion Passenger (mod_rails/mod_rack) header that I get on my other (currently working) Passenger-powered sites on the same server running Ubuntu Server 10.04. The following is in my /etc/apache2/sites-available/kerricklong.com file, but it's identical (with names and paths changed) to the configuration file for the site that is working:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName kerricklong.com
            ServerAlias *.kerricklong.com
            DocumentRoot /redacted/path/to/kerricklong.com/public
            ErrorLog /redacted/path/to/kerricklong.com/logs/error.log
            <Directory /redacted/path/to/kerricklong.com/public>
                Allow from all
                Options -MultiViews
                Include /etc/apache2/h5bp.conf
            </Directory>
            php_flag engine off
        </VirtualHost>

    I've got the necessary tmp/, logs/, and public/ directories, along with config.ru. I've also run sudo a2dissite then sudo a2ensite, sudo service apache2 restart, and rebooted the server to try to fix it. What gives?
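    Since Apache is falling back to its default page, a first check is whether the vhost is being parsed and matched at all. The following lists every vhost Apache knows about, which one is the default for *:80, and any syntax-level complaints (a diagnostic only, not a fix):

        apache2ctl -S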

    Read the article
