Search Results

Search found 18677 results on 748 pages for 'current'.

  • Installing xampp on a system that already has MySQL

    - by Charith
    I'm rather new to PHP and xampp. I have a computer with MySQL Server and MySQL Workbench installed, from my work with Java and NetBeans. Now I want to use this computer for developing PHP and other web stuff too, so I installed xampp successfully. But when I try to access phpMyAdmin, it gives me an error saying the MySQL server rejected its connection. I tried stopping my current MySQL service and installing it again, but xampp also ships its own MySQL server in its installation path. I tried configuring config.inc.php to use my existing MySQL installation, which lives on a separate path, but I failed. Can anyone please tell me how to configure xampp to use my existing MySQL server for everything and ignore the bundled one? I don't want two MySQL services running on my system and clashing in the future. I'd also be glad if anyone could explain what works best when you're developing Java, PHP, C and all the rest on the same machine. P.S.: I have set a password for my existing MySQL server (user = root), as one usually does when installing MySQL on its own.
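
    A hedged sketch of the usual phpMyAdmin change in config.inc.php (the host, port and auth values are assumptions for a default standalone MySQL install); with this in place, the bundled MySQL can simply be left stopped in the XAMPP control panel:

        <?php
        // point phpMyAdmin at the standalone MySQL server instead of xampp's bundled one
        $cfg['Servers'][$i]['host']      = '127.0.0.1'; // existing MySQL server
        $cfg['Servers'][$i]['port']      = '3306';      // assumed default port
        $cfg['Servers'][$i]['auth_type'] = 'cookie';    // prompt for the root password at login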

  • Fast distributed filesystem for large amounts of data with metadata in a database

    - by undefined hero
    My project uses several processing machines and one storage machine. Storage is currently organized as an MSSQL FileTable shared folder, and every file in storage has some metadata in the database. Processing machines execute tasks for which they need files from storage along with their metadata; after completing a task, a processing machine puts the resulting data back in storage, from where it is taken by another processing machine, which in turn generates some file and puts it back in storage, and so on. Everything was fine, but as the number of processing machines increased, I found myself bottlenecked by the storage machine's hard drive performance. So I want the processing machines to put files in a distributed FS, to lift load from the storage machine, so they can take data from each other and not only from the storage machine. Can you suggest a particular distributed FS that meets my needs? Or is there another way to solve this problem without one? The amount of data in the FS at any one time is several terabytes (storage can handle this, but the processing machines cannot). Data consistency is critical. The read/write policy is: once a file is written, it is constant and may only be removed, never modified. My current platform is Windows, but I'm ready to switch if there is a substantially more convenient solution on another one.

  • init.d script runs correctly but the process isn't alive once the system is fully booted

    - by thetrompf
    I have a problem with an init.d script:

        #!/bin/bash
        ES_HOME="/var/es/current"
        PID=$(ps ax | grep elasticsearch | grep $ES_HOME | grep -v grep | awk '{print $1}')
        #echo $PID
        #exit 0
        case "$1" in
        start)
            if [ -z "$PID" ]; then
                echo "Starting Elasticsearch"
                echo "Starting Elasticsearch" >> /var/tmp/elasticsearch
                su -m elasticsearch -c "${ES_HOME}/bin/elasticsearch"
                exit 0
            else
                echo "Elasticsearch already running"
                echo "Elasticsearch already running" >> /var/tmp/elasticsearch
                exit 0
            fi
            ;;
        stop)
            if [ -n "$PID" ]; then
                echo "Stopping Elasticsearch"
                kill ${PID}
                echo "Stopped Elasticsearch"
                exit 0
            else
                echo "Elasticsearch is not running"
                exit 0
            fi
            ;;
        esac

    The script runs just fine: as I can see in /var/tmp/elasticsearch, a new line is added after every boot. But if I run /etc/init.d/elasticsearch stop just after the server has booted, I get "Elasticsearch is not running", so somehow the process does not stay alive. My question is: why, and what am I doing wrong? Thanks in advance.
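
    A sketch of one common cause and fix, assuming this Elasticsearch version does not daemonize itself: a process started this way stays attached to the boot-time session and can be killed when init closes that session, so detach it explicitly (nohup and the log path are illustrative assumptions):

        # inside the start) branch, instead of the plain su line
        su -m elasticsearch -c "nohup ${ES_HOME}/bin/elasticsearch > /var/log/elasticsearch.boot.log 2>&1 &"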

  • Moving a Lotus database causes incoming mail failure

    - by Logman
    Hey all, I moved a couple of Lotus Notes databases to another HD (we ran out of space) by using a directory link/pointer on the Domino server: a text file with the .DIR extension containing the desired new path. That part works fine: I open a Lotus Notes client, use the ID and then OPEN the database; the directory link works, and the database opens from its new location with no problems. We then found out that we couldn't FORWARD any mail. We did the following to make that work: go to menu FILE | MOBILE | EDIT CURRENT LOCATION, go to the MAIL tab and enter the correct path under "Mail File:"; for example, it should read "mail\morespace\flabor" and not "mail\flabor" ("morespace" = the directory link/pointer). We can forward and reply to emails fine after that fix. The remaining problem is that the user receives no incoming email: Domino is still trying to deliver the user's incoming mail to the old db location "mail\flabor" instead of "mail\morespace\flabor", and the delivery error says the user does not exist. Is this a cache problem? We have restarted the server ("Q" at the prompt), though we have not completely shut it down. Thanks, Frank

  • Port Forwarding failing only to Ubuntu servers from Draytek router

    - by Rufinus
    I know this is a kind of unusual question, but Draytek support (which is very eager to solve the issue) seems to have reached its limits. Scenario: a Draytek Vigor multi-WAN router with current firmware, multiple WAN IP aliases on one of the WAN ports, and DMZ (or port forwarding, it doesn't matter) from a WAN IP alias to an internal host. Currently I have two internal hosts: 192.168.0.51 (Ubuntu) and 192.168.0.53 (Debian). Both should be accessible from outside via one of the WAN IP aliases, and both are accessible at their internal IPs at all times (!). If the router gets restarted, both external IPs forward to their internal hosts; but after anywhere from a few minutes up to 2 hours, the Ubuntu host is no longer reachable via its external interface, while the Debian host stays reachable. In what way does Ubuntu differ from Debian here? I know of at least one user with the exact same problem; see http://ubuntuforums.org/showthread.php?p=10994279. Any ideas? TIA. EDIT: Via ping diagnostics directly on the Vigor, 192.168.0.53 is pingable and 192.168.0.51 is not, yet both hosts are perfectly reachable from anywhere inside the network. If I restart Ubuntu's networking, it works again for a short time... I'm out of ideas. EDIT 2: After further investigation, I noticed that a ping from .51 to the network (or to a host on the Internet) is enough to make the port forwarding work again. So I will add a cron job as a keep-alive ping. This solves the problem, but the reason for this behaviour is still in the dark. Thanks to all commenters.
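
    A sketch of the keep-alive cron job mentioned in EDIT 2 (the five-minute interval and the ping target are assumptions; any host beyond the router will keep its NAT/ARP entry for .51 fresh):

        # /etc/cron.d/keepalive on the Ubuntu host (192.168.0.51)
        */5 * * * * root ping -c 1 -q 8.8.8.8 > /dev/null 2>&1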

  • Linux DHCPD MAC-address-based groups

    - by GruffTech
    Our current dhcpd.conf looks like the following:

        subnet 10.0.32.0 netmask 255.255.255.0 {
            range 10.0.32.100 10.0.32.254;
            option subnet-mask 255.255.255.0;
            option broadcast-address 10.0.32.255;
            option domain-name-servers 208.67.222.222, 208.67.220.220;
            option routers 10.0.32.5;
            host Dev-ABaird-W {
                hardware ethernet 00:1D:09:3E:49:13;
                fixed-address 10.0.32.94;
            }
            ... more static hosts ...
        }

    About as basic as it gets. The old router is 10.0.32.1. Our company wanted to implement a Squid proxy to better monitor web traffic at work and, if necessary, block large time-wasters, e.g. Facebook.com. However, we quickly realized that this change has played a mean prank on our Polycom SIP phones: occasionally our phones will not ring; the caller hears ringing (artificially created by our PBX), but the handset never rings. The ONLY thing that has changed in our network is the option routers line. So, since all Polycom MAC addresses begin with 00:04:F2, would it be possible in DHCP to say that any 00:04:F2:* MAC address gets option routers 10.0.32.1, and anything else must talk to our gateway?
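
    This is what ISC dhcpd's class matching is for; a sketch, assuming ISC dhcpd 3.x (the class name and the way the range is split between the pools are illustrative). The OUI is matched by taking three bytes of the hardware field at offset 1, which skips the leading hardware-type byte:

        class "polycom-phones" {
            match if substring (hardware, 1, 3) = 00:04:f2;
        }

        subnet 10.0.32.0 netmask 255.255.255.0 {
            option subnet-mask 255.255.255.0;
            option broadcast-address 10.0.32.255;
            option domain-name-servers 208.67.222.222, 208.67.220.220;
            pool {
                allow members of "polycom-phones";
                range 10.0.32.100 10.0.32.149;
                option routers 10.0.32.1;   # phones keep the old router
            }
            pool {
                deny members of "polycom-phones";
                range 10.0.32.150 10.0.32.254;
                option routers 10.0.32.5;   # everything else goes via the proxy gateway
            }
        }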

  • TV won't go into standby

    - by Robert
    When I select Start - Turn Off Computer - Standby, the 'Turn off computer' window closes, and then nothing else happens. I can start new applications, and Windows acts as if I had never selected standby; I once left it running for several hours afterwards to check. If a TV program is scheduled to record when I select standby, I get a window (from the Pinnacle TV software) asking if I'm sure, since there are programs scheduled to record, and the computer just keeps running after I select Yes, never going into standby. I mention that detail because it shows the standby process is at least starting. [This problem also happens if no TV program is scheduled, so the scheduler task is not running/in memory. It happens regardless of whether I'm watching TV, and regardless of whether Media Center is running (it usually isn't; I use Pinnacle to watch TV).] I looked at "How to troubleshoot hibernation and standby issues in Windows XP" (http://support.microsoft.com/kb/907477): ACPI is enabled, and "Standby" is an option in Power Options Properties, so it appears to be set up correctly. Windows XP SP3 Media Center Edition, all current updates installed.

  • Maximum limit of file pointers in PHP reached and not changeable

    - by mlaug
    I have a server with the current PHP 5.3.x version installed. We are running a really simple and small server in PHP, using sockets, that connects to a lot of clients, so we need to raise the open-file limit. That has already been done on the server for the user that runs the server:

        # ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 29879
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 8192
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 29879
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    and we compiled PHP with --enable-fd-setsize=8192. Still, once in a while we see the following in our logs:

        [19-Nov-2012 09:24:23 Europe/Berlin] PHP Warning: socket_select(): You MUST recompile PHP with a larger value of FD_SETSIZE. It is set to 1024, but you have descriptors numbered at least as high as 1024. --enable-fd-setsize=2048 is recommended, but you may want to set it to equal the maximum number of open files supported by your system, in order to avoid seeing this error again at a later date.

    Does anyone know how to configure the Unix server and PHP correctly so this works? I found a bug report, but it is from 2006 and marked as "not a bug": https://bugs.php.net/bug.php?id=37025&edit=1
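
    One commonly suggested workaround, a sketch and an assumption on my part (since the configure switch evidently did not reach the sockets extension), is to force the macro through CFLAGS when rebuilding:

        # rebuild PHP with FD_SETSIZE defined globally; 8192 matches the ulimit above
        CFLAGS="-DFD_SETSIZE=8192" ./configure --enable-sockets   # plus your existing configure options
        make && make install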

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job. These working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3 GB/min. These working-set jobs start out at perhaps 40 MB/min and, over the course of the backup job, slowly drop so low that the BE job-rate display under "current jobs" goes blank. Since we usually only pick up changed files for one day, the job is small and finishes overnight, so we don't worry about the slowness; but we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they had me run RALUS in debug mode, took that log and a VXgather from the BE host, and had no fix or workaround. To give an idea: the working-set job I mentioned has been running for the last 3.5 hours and has backed up just under 10 MEGAbytes. I'm posting this here to see if anybody in the "real world" has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless...

  • DPM 2007 clashing with existing SQL backup job

    - by Paul D'Ambra
    I've recently installed a DPM 2007 server on Server 2003 and have set up a protection group against a Server 2003 machine running SQL 2005 SP3. The SQL server in question has a full backup (as a SQL Agent job) once a day and transaction log backups hourly; these are zipped up and FTP'd to a server offsite by a scheduled task. Since adding the DPM job I'm receiving many error messages: "DPM tried to do a SQL log backup, either as part of a backup job or a recovery to latest point in time job. The SQL log backup job has detected a discontinuity in the SQL log chain for database SERVER_NAME\DB_Name since the last backup. All incremental backup jobs will fail until an express full backup runs." My google-fu suggests that I need to change the full backup my SQL Agent job is running to a COPY_ONLY backup. But I think this means I couldn't use that backup together with the transaction logs to restore the database if the building (including the DPM server) burned down. I'm sure I'm missing something obvious and thought I'd see what the hive mind suggests. Setting up a DPM server colocated elsewhere and having DPM stream the backup is an option, but it's obviously more expensive than the current setup. Many thanks in advance.
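
    For reference, the COPY_ONLY change is a one-line edit to the Agent job's T-SQL (the database name and path are placeholders):

        BACKUP DATABASE [DB_Name]
            TO DISK = N'D:\Backups\DB_Name_full.bak'  -- placeholder path
            WITH COPY_ONLY, INIT;

    As far as I know, log backups taken after a COPY_ONLY full can still be restored on top of it, since the log chain links log backups to each other rather than to a particular full backup.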

  • Move an SSD with Windows 7 to a new PCIe SATA controller card

    - by Shevek
    This is my current setup: an Abit AB9 QuadGT motherboard, whose Intel P965 chipset has an Intel ICH8 SATA Rev. 2.0 controller; a Kingston HyperX 3K 120 GB SSD, which supports SATA Rev. 3.0; and Windows 7 Ultimate x64. For some reason I was unable to install Windows with the controller set to AHCI mode, even with the correct driver from Intel, so it is currently in IDE mode. With the ICH8 controller being SATA Rev. 2.0 and running in IDE mode, the SSD is operating well under its published read/write speeds. I have ordered an Asus U3S6 controller card to add both SATA Rev. 3.0 and USB 3.0 to my computer. The motherboard has a PCIe x4 slot available, so I will hopefully achieve the full potential of my SSD. My question is this: can I move the SSD from the motherboard controller to the controller card without having to clean-install Windows? I am hoping that all I will need to do is make sure the controller card's drivers are available to Windows and set the registry as per this KB article. Will this work, or should I perform a clean installation?
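
    If the linked KB is the usual "enable the AHCI driver before switching controllers" fix, it amounts to a registry change like the following (a sketch and an assumption on my part; it covers only the stock Windows 7 AHCI driver, and the add-in card's own driver still has to be installed before the swap):

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\msahci]
        "Start"=dword:00000000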

  • Can I recover an rm -rf-ed Mercurial repository?

    - by WishCow
    I made the mistake of wiping out my entire project directory with a quick "rm -rf project". Of course, the .hg directory went with it. I had about 15-20 changesets that I have not pushed to anyone, and I would really, really like to get those back. The system is an Ubuntu machine, the partition where the delete happened is ext3, and the project consists mostly of PHP files. I know about the guideline to not write to the disk in question. My first idea was to use the tool named scalpel to get the PHP files back, diff them with the current version from the repo, and somehow carve the changes out. While it succeeded, it did not recover the file names (or there is a switch I'm missing), so I'm left with a few thousand sequentially named .php files; combing through them is not an option. Can a kind soul please save me and suggest a way to: a) get the repo back, or b) get the files back, with filenames? For those wondering how I did such a stupid thing: I was working on a file in Vim which I wanted to remove from the repository: :!hg rm % This complained that the file is in a subrepository, so I specified the following: :!hg rm % -R engine which complained that the file has modifications, use -f to force. And this is when, somehow, I made up the following command: :!rm -rf % -R engine Somehow, seeing "force" makes me do a rm -rf by reflex.
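
    A hedged sketch with extundelete, which, unlike scalpel's raw carving, reads the ext3 journal and can often recover names and paths (assumptions: the partition is /dev/sdXN and has not been written to since the deletion):

        # run from a different partition so nothing is written to the damaged one
        umount /dev/sdXN                        # or: mount -o remount,ro /dev/sdXN
        extundelete /dev/sdXN --restore-directory project/.hg
        # recovered files appear under ./RECOVERED_FILES/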

  • Add and remove letterhead in Word document

    - by Daniel Wolf
    Our company has letterheaded paper (pre-printed paper with our logo on it). Whenever we send something out by mail, we print it on that paper; when we send the same document via email, we convert it to a PDF file. Now the problem is: when converting a Word document to PDF, it should contain the letterhead; when printing the same document on paper, it should not (or else the letterhead would be printed twice). Currently we are using two different Word document templates, one with letterhead, one without. So whenever we want to add or remove the letterhead, we have to create a new document with the other template and copy and paste everything over. Nasty solution. What I'm looking for is some simple way to switch the letterhead on and off. What I've tried so far: Switching the template: there does not seem to be a simple way to switch the template for an existing document. Using a picture watermark: our letterhead goes all the way to the edge of the page (no printer supports this, of course, but it is fine for export to PDF), and apparently, depending on the current default printer, Word will not allow a borderless watermark, instead shifting the image around. Using the page header: when editing the page header, I can insert pictures at arbitrary positions, which is great; however, I could not find a way (short of macros) to enable/disable just the pictures in the header (the text should remain there).
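
    If macros are acceptable after all, a short one can flip the pictures' visibility while leaving the header text alone; a minimal sketch, assuming the logo is a floating shape in the primary header of section 1:

        Sub ToggleLetterhead()
            ' Flip the visibility of every floating shape in the first section's primary header
            Dim shp As Shape
            For Each shp In ActiveDocument.Sections(1).Headers(wdHeaderFooterPrimary).Shapes
                shp.Visible = Not shp.Visible
            Next shp
        End Sub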

  • Configuring CakePHP on Hostgator

    - by yaeger
    I have absolutely no idea what I am doing wrong here. I have followed just about every guide there is on installing CakePHP on shared hosting, starting over each time I followed a new one, and I am still having problems. Maybe someone can help me out, as I am out of options. Here is my current layout:

        /
          app
            webroot
          vendors
          lib
            cake
          public_html
            .htaccess
            index.php
          plugins

    I have configured the index.php file in public_html to point to the correct files, and I have done the same in the index.php file located in the webroot folder. I am getting an Internal Server Error (500), which says to check my logs for the specific error; however, no logs are being generated. If I remove the .htaccess file from the public_html folder, I get the following errors:

        Warning: require(/app/webroot/index.php) [function.require]: failed to open stream: No such file or directory in /home/user/public_html/index.php on line 40

        Fatal error: require() [function.require]: Failed opening required '/app/webroot/index.php' (include_path='.:/usr/lib/php:/usr/local/lib/php') in /home/user/public_html/index.php on line 40

    Line 40 is require APP_DIR . DS . WEBROOT_DIR . DS . 'index.php'; with DS = "/" and WEBROOT_DIR = "app". Does anyone have any suggestions? I am at a loss as to what I am doing wrong.
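
    The warning shows the require resolving to /app/webroot/index.php, i.e. the ROOT constant is empty. A hedged sketch of the defines that usually need adjusting in public_html/index.php (the absolute paths are assumptions based on the layout above):

        // public_html/index.php -- paths assumed from the tree above
        if (!defined('DS')) {
            define('DS', DIRECTORY_SEPARATOR);
        }
        define('ROOT', DS . 'home' . DS . 'user');           // must be the absolute account root, not empty
        define('APP_DIR', 'app');
        define('CAKE_CORE_INCLUDE_PATH', ROOT . DS . 'lib'); // where lib/cake lives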

  • FTP not listing directory NcFTP PASV

    - by Jacob Talbot
    I am attempting to set up Multicraft on my server. Everything runs smoothly, except that the FTP won't allow anyone to connect from a remote FTP client, although net2ftp works fine from a remote location. I have included the transcript from my FTP client, Transmit, below to give you an idea of what's going on. I have also disabled iptables, and still no luck either way.

        Transmit 4.1.7 (x86_64) Session Transcript [Version 10.8.2 (Build 12C54)] (21/10/12 11:23 PM)
        LibNcFTP 3.2.3 (July 23, 2009) compiled for UNIX
        220: Multicraft 1.7.1 FTP server
        Connected to ateam.bn-mc.net.
        Cmd: USER jacob.9
        331: Username ok, send password.
        Cmd: PASS xxxxxxxx
        230: Login successful
        Cmd: TYPE A
        200: Type set to: ASCII.
        Logged in to ateam.bn-mc.net as jacob.9.
        Cmd: SYST
        215: UNIX Type: L8
        Cmd: FEAT
        211: Features supported:
            EPRT
            EPSV
            MDTM
            MLSD
            MLST type*;perm*;size*;modify*;unique*;unix.mode;unix.uid;unix.gid;
            REST STREAM
            SIZE
            TVFS
            UTF8
        End FEAT.
        Cmd: OPTS UTF8 ON
        200: OK
        Cmd: PWD
        257: "/" is the current directory.
        Cmd: PASV
        Could not read reply from control connection -- timed out. (SReadline 1)

  • How to unblacklist an IP at Google?

    - by DJRayon
    I own a small business with two servers for web hosting. When setting up the primary server (CentOS 5.5 + WHM; the secondary is WHM DNS Only) I somewhat messed up the firewall, so hackers could send spam from my server. My primary IP is x.y.29.218. Anyway, I got blacklisted in several places. Those blacklistings are now gone, and have been for a week or so, but Google still has my IP blacklisted, and I'm suffering serious damage because of it: many clients want to switch away from my hosting, etc. I've since fixed the hole with the CSF firewall's SMTP_BLOCK option and also enabled the WHM SMTP Tweak. Currently, all I see in the Errors section of Main >> Email >> View Mail Statistics in WHM is rows and rows of the following message:

        removed-the-email-address-for-security R=lookuphost T=remote_smtp: SMTP error from remote mail server after end of data: host aspmx.l.google.com [a.b.39.27]:
        550-5.7.1 [x.y.29.218 1] Our system has detected an unusual rate of
        550-5.7.1 unsolicited mail originating from your IP address. To protect our
        550-5.7.1 users from spam, mail sent from your IP address has been blocked.
        550-5.7.1 Please visit http://www.google.com/mail/help/bulk_mail.html to review
        550 5.7.1 our Bulk Email Senders Guidelines. h24si3868764fas.171

    What are my options? I have one IP free: how can I configure Exim to send mail from that IP? My brain is constantly blowing up because of this problem. Please, someone with any knowledge of how to deal with this situation, give me some kind of help: any help, suggestions, etc. I've tried everything I know, and I still don't know much, because this is the first time I've dealt with real physical servers rather than some pre-setup VPS solution (I only just started webhosting). Many, many thanks to whoever has time to offer some help.
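
    On the Exim side, the source address for outgoing mail is controlled by the interface option on the smtp transport; a minimal sketch, assuming the spare IP is x.y.29.219 and is already bound to an interface on the server (under WHM this is normally done through the Exim configuration editor rather than by hand-editing exim.conf):

        remote_smtp:
          driver = smtp
          interface = x.y.29.219   # assumed spare IP; must exist on the server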

  • Home server: CPU virtualisation, what to choose?

    - by Huygens
    I'm looking for virtualisation solutions for storage and OS for a home server: a sort of private cloud where I manage the storage space independently of the VM one. This question focuses on VM (or compute instance) management and what would best suit my needs (I have another question related to storage management). My use cases are: a backup server, with rsync and other services running; a personal cloud server, a kind of self-owned Dropbox system à la ownCloud, with a couple of users foreseen; and a media server for streaming videos and displaying photos. Here are my environment and wishes. Server: HP ProLiant MicroServer with 8 GB RAM (AMD Turion dual core with AMD-V technology). OS types: only Linux (perhaps a *BSD VM in the future); the Linux distribution does not matter. I'm familiar with RHEL, Fedora, Suse and Ubuntu, but any other recommendation will be fine. 2-3 VMs foreseen: backup server, ownCloud server and media server (optional); these are only servers, so no graphical console is needed (I don't need VirtualBox). By VM I mean a virtualised environment like KVM, Xen, etc., or a compute instance like with OpenStack. Storage should be "virtualised/cloudified"; see my other question. A VM should be able to be migrated to another server in the future if its performance can no longer be fulfilled by the current server. It does not matter if installing such a setup is complicated, as long as the management tools allow for easy maintenance. I don't have Windows at home, so the solution should be Linux friendly; web-based management would be nice, but native apps are OK too. The system should be easy to extend by adding a new server and migrating some of the VMs to it, so it's really a kind of private cloud on which I could run some Linux OSes. I would prefer free (libre, as in free speech) and open-source tools, but it does not have to be free as in free beer. So: Xen, KVM, VirtualBox or OpenStack? What would you recommend?

  • How to move my data from my old MacBook Pro to my new one?

    - by Tim Büthe
    I just purchased a new MacBook Pro, and I already have a 2008 model. I wonder how to move all my data over to the new one. My first idea was to use my Time Machine backup and restore from it, which seems to be a good idea and should work just fine according to this link: http://blog.duncandavidson.com/2008/01/restoring-from-time-machine.html. But since my current MacBook has older software on it, like iLife '08 instead of iLife '09, I would have to upgrade that afterwards. Is this correct, or does Time Machine do some magic to exclude well-known software? And is it possible to reinstall or upgrade iLife with the included installation DVDs? My second idea is to just swap the hard drives instead of using the Time Machine backup. If it is not too complicated to remove the HDD, this should be the fastest way. This also has the benefit that the 2008 MacBook would then contain a brand-new installation, and I wouldn't have to remove all my stuff or reinstall Mac OS before giving it away. My question about this second idea: does Snow Leopard handle this correctly, i.e. I reboot with the new hardware and everything just works? So, in a nutshell: what would you do, restore from backup or swap drives? And what about the new software?

  • Why does redis report a limit of 1024 files even after updating limits.conf?

    - by esilver
    I see this error at the top of my redis.log file:

        Current maximum open files is 1024. maxclients has been reduced to 4064 to compensate for low ulimit.

    I have followed these steps to the letter (and rebooted). Moreover, I see this when I run ulimit:

        ubuntu@ip-XX-XXX-XXX-XXX:~$ ulimit -n
        65535

    Is this error specious? If not, what other steps do I need to perform? I am running redis 2.8.13 (tip of the tree) on Ubuntu LTS 14.04.1 (again, tip of the tree). Here is the user info:

        ubuntu@ip-XX-XXX-XXX-XXX:~$ ps aux | grep redis
        root      1027  0.0  0.0   66328    2112 ?  Ss  20:30  0:00 sudo -u ubuntu /usr/local/bin/redis-server /etc/redis/redis.conf
        ubuntu    1107 19.2 48.8 7629152 7531552 ?  Sl  20:30  2:21 /usr/local/bin/redis-server *:6379

    The server is therefore running as ubuntu. Here is my limits.conf file without comments:

        ubuntu@ip-XX-XXX-XXX-XXX:~$ cat /etc/security/limits.conf | sed '/^#/d;/^$/d'
        ubuntu soft nofile 65535
        ubuntu hard nofile 65535
        root soft nofile 65535
        root hard nofile 65535

    And here is the output of sysctl fs.file-max, which as a non-root user prints "permission denied" for a few unrelated keys but shows the same value as with sudo:

        ubuntu@ip-XX-XXX-XXX-XXX:~$ sudo sysctl -a | grep fs.file-max
        fs.file-max = 1528687

    Also, I see this error at the top of the redis.log file; I'm not sure if it's related. It makes sense that the ubuntu user isn't allowed to raise the maximum open files, but given the high ulimits I have tried to set, it shouldn't need to:

        [1050] 23 Aug 21:00:43.572 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
        [1050] 23 Aug 21:00:43.572 # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted.
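
    One hedged observation: pam_limits, which consumes limits.conf, only runs for PAM sessions such as logins, so a boot-time launcher that starts redis via sudo may never pick these values up. A minimal sketch of one workaround, raising the limit in the launcher itself while still root (the paths are taken from the ps output above):

        #!/bin/sh
        # raise the fd limit before dropping privileges; root may always raise it
        ulimit -n 65535
        exec sudo -u ubuntu /usr/local/bin/redis-server /etc/redis/redis.conf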

  • Cannot set video resolution above 640x480 after installing Windows XP SP2

    - by waanders
    I've installed Windows XP SP2 on a computer (there was no service pack on it at all before). Now the display settings have fallen back to 640x480 and 4-bit colour, and I can't change them: it's the only option on the Settings tab of Windows' Display dialog. The screen looks awful now; how can I solve this problem? UPDATE: It seems to be a problem with the video driver (thanks @Karan and @Hennes). I ran Speccy (PC-Wizard freezes the computer) and this is part of the log file:

        Summary
            Operating System: Microsoft Windows XP Professional 32-bit SP3
            CPU: Intel Celeron Willamette 0.18um Technology
            RAM: 512 MB DDR @ 133MHz (2.5-3-3-6)
            Motherboard: COMPAQ 0838h (FC-478)
            Graphics: Standard Monitor (640x480@1Hz)
            Hard Drives: 19.0GB Maxtor 2B020H1 (PATA)
            Optical Drives: No optical disk drives detected
            Audio: No audio card detected
        ...
        Graphics
            Monitor Name: Standard Monitor on
            Current Resolution: 640x480 pixels
            Work Resolution: 640x450 pixels
            State: enabled, primary
            Monitor Width: 640
            Monitor Height: 480
            Monitor BPP: 4 bits per pixel
            Monitor Frequency: 1 Hz
            Device: \\.\DISPLAY1
        OpenGL
            Version: 1.1.0
            Vendor: Microsoft Corporation
            Renderer: GDI Generic
            GLU Version: 1.2.2.0 Microsoft Corporation
            Values: GL_MAX_LIGHTS 8, GL_MAX_TEXTURE_SIZE 1024, GL_MAX_TEXTURE_STACK_DEPTH 10
            GL Extensions: GL_WIN_swap_hint GL_EXT_bgra GL_EXT_paletted_texture

  • Scheduling automatic backups for a virtual private web server running CentOS 6.3 and WHM

    - by Oliver Farrell
    I'm pretty new to administering my own VPS, but so far I'm finding it quite a compelling experience; there's something quite refreshing about having complete control over everything it does. One thing I would like to look at is a suitable backup solution (a few times a day). My current setup is as follows: I'm running a CentOS 6.3 VPS with a single 25 GB hard drive, solely for the purpose of hosting websites, and I'm using WHM & cPanel to administer them. I now plan on adding an additional hard disk and hooking it up to my VPS. What I'm not sure about is how I get the two disks talking and get the backup process going. I'm not a seasoned SSH-er, so I don't really know where to start. I'm hosting with Serverlove (one of the best hosting providers I've used) and am provided with a number of unique identifiers for each hard disk, so I imagine these may play a part in linking them together. I appreciate that this is a little vague (I'm clutching at straws), but any assistance is very much appreciated.
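
    A minimal sketch of the usual pattern once the second disk shows up as a block device (the device name, mount point and schedule are assumptions; cPanel's own backup feature can also be pointed at the new mount):

        # one-time setup: format and mount the new disk
        mkfs.ext4 /dev/xvdb          # assumed device name; check with fdisk -l
        mkdir -p /mnt/backup
        mount /dev/xvdb /mnt/backup

        # /etc/cron.d/site-backup -- sync the web data three times a day
        0 6,14,22 * * * root rsync -a --delete /home/ /mnt/backup/home/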

  • Backing up Excel Files to a different Directory

    - by Joe Taylor
    In Excel 2007, in the Save As box, there is an option to 'Create a Backup', which simply backs up the file whenever it is saved. Unfortunately it backs the file up to the same directory as the original. Is there a simple way to change this directory to another drive / folder? I have messed about with macros to do this, coming up with:

        Private Sub Workbook_BeforeClose(Cancel As Boolean)
        'Private Sub Workbook_BeforeSave(ByVal SaveAsUI As Boolean, Cancel As Boolean)
            'Saves the current file to a backup folder and the default folder
            'Note that any backup is overwritten
            Application.DisplayAlerts = False
            ActiveWorkbook.SaveCopyAs Filename:="T:\TEC_SERV\Backup file folder - DO NOT DELETE\" & _
                ActiveWorkbook.Name
            ActiveWorkbook.Save
            Application.DisplayAlerts = True
        End Sub

    This creates a backup of the file fine the first time; however, if it runs again I get:

        Run-Time Error '1004';
        Microsoft Office Excel cannot access the file 'T:\TEC_SERV\Backup file folder - DO NOT DELETE\Test Macro Sheet.xlsm'. There are several possible reasons:
        The file name or path does not exist
        The file is being used by another program
        The workbook you are trying to save has the same name as a...

    I know the path is correct, and I also know that the file is not open anywhere else. The workbook has the same name as the one I'm trying to save over, but it should just overwrite. I have posted the question about the coding on Stack Overflow but wondered if there is an easier way to do this. Any help would be much appreciated. Joe
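
    One common workaround, sketched here on the assumption that the stale copy in the backup folder is what blocks the overwrite, is to delete the old backup just before saving the new one:

        ' inside Workbook_BeforeClose, before the SaveCopyAs call
        On Error Resume Next    ' ignore error 53 if no backup exists yet
        Kill "T:\TEC_SERV\Backup file folder - DO NOT DELETE\" & ActiveWorkbook.Name
        On Error GoTo 0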

  • DNS Round-robin, Load Balancing, Load sharing, and failover in 2012

    - by user1089770
    I have been reading many posts on Server Fault as well as on other sites regarding all of this. What I understand is that multiple A records (round-robin DNS) can be used for both: (1) Load sharing (round-robin, but NOT load balancing). Many people say "load balancing", but I think no balancing takes place, because "balance" literally means "compare two (or more) and adjust", which is what real software or hardware load balancers do; browsers never do this. Instead they randomly select an IP and connect to it, with no knowledge of the current load of that server (quite possibly the IP they picked had the highest load!). (2) Automatic failover (recent browsers only). Yes, I think DNS can be used as a simple failover system (at least in 2012; I don't know when this behaviour actually came into effect). Please refer to: http://webmasters.stackexchange.com/questions/10927/using-multiple-a-records-for-my-domain-do-web-browsers-ever-try-more-than-one, "Browser-based DNS failover using multiple A records", and http://www.nber.org/sys-admin/dns-failover.html. I would like to make sure my assumptions/findings are right, so please let me know.
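
    For concreteness, round-robin here just means a zone with several A records for the same name; a minimal sketch (the name and addresses are placeholders):

        ; zone file fragment: the name server rotates the order of the records it hands out
        www  300  IN  A  192.0.2.10
        www  300  IN  A  192.0.2.11
        www  300  IN  A  192.0.2.12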

  • Turn off monitor (energy saving) while in text console mode (in Linux)

    - by Denilson Sá
    How do I configure the Linux text console to automatically turn off the monitor after some time? By "text console" I mean the thing you get with Ctrl+Alt+F[1-6], i.e. what you get whenever X11 is not running. And no, I'm not using any framebuffer console (it's a plain, good old 80x25 text mode). Many years ago I was using Slackware Linux, which booted up in text mode; you would manually run startx after logging in. The main login "screen" was the plain text-mode console, and I remember that the monitor used to turn off (energy-saving mode, indicated by a blinking LED) after some time. Now I'm using Gentoo with a similar setup: the machine boots in text mode, and I only rarely need to run startx. I say this because this is mostly my personal Linux server, so there is no need to keep X11 running all the time (which means: I don't want to use GDM/KDM or any other graphical login screen). But in this Gentoo text-mode console, the screen goes black after a while, yet the monitor does not enter any energy-saving mode (the LED stays lit); yes, I've waited long enough to verify this. Thus my question is: how can I configure my current system to behave like the old one? In other words, how do I make the text console trigger the monitor's energy-saving mode? (Maybe I should (cross-)post this question to http://unix.stackexchange.com/.)
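
    A minimal sketch with setterm, which emits the console's blank/DPMS escape sequences (the timings are examples; it has to run on the console itself, e.g. from a boot script, once per tty):

        # blank after 10 minutes, then enter DPMS powerdown at 15 minutes
        setterm -blank 10 -powersave powerdown -powerdown 15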

  • DAS vs SAN storage for serving 2 to 4 nodes

    - by Luke404
    We currently have 4 Linux nodes with local storage, arranged in two active/passive pairs with storage mirrored using DRBD, running virtual machines (using the Xen hypervisor) for typical hosting workloads (mail, web, a couple of VPS, etc.). We're approaching the (presumed) maximum IOPS of those servers, and we're planning to migrate to an external storage solution with two active nodes, with capacity for up to four active nodes. Since we're an all-Dell shop, I've done some research and found that the MD3200 / MD3200i products should be the ones we're looking for. We are pretty sure we won't be attaching more than 4 hosts to a single storage unit, and I'm wondering if there is any clear advantage to one or the other. In theory I should be able to attach 4 SAS hosts to a single MD3200 (single links on a single-controller MD3200, or dual redundant SAS links from each host to a dual-controller MD3200), or 4 iSCSI hosts to a single MD3200i (directly on its 4 GigE ports without any switch, again with dual links for the dual-controller option). Both setups should let us implement live VM migration, since all hosts can access all the LUNs at the same time, together with a shared filesystem like GFS2 or OCFS2; both should also allow full redundancy of the whole system (assuming dual controllers in the storage). One difference I can see is that the DAS solution is actually limited to 4 hosts, while the iSCSI one should be able to grow to more hosts (by adding two GigE switches to the mix). One point for the iSCSI solution is that it would let us start out with our current nodes and upgrade them at a later time (we can't add more SAS controllers, but they already have 4 GigE ports each). With the right (iSCSI|SAS) controllers I should be able to connect diskless nodes and boot them off the external storage, which I think is a good thing (getting rid of all local storage). On the other hand, I would have thought the SAS option would be cheaper, but it seems an MD3200 actually costs a little less than an MD3200i (?). (Please note: I've used Dell gear in my examples since that's what we're looking for, but I assume the same goes for other vendors.) I would like to know if my assumptions above are correct, and whether I'm missing any important difference between the two setups.
