Search Results

Search found 41598 results on 1664 pages for 'segmentation fault'.

  • how to set up ubuntu LAMP stack /var/www similar to the xampp htdocs for http://localhost

    - by kimsia
    In XAMPP, all anyone has to do is place a folder inside htdocs, and http://localhost/folder_name automatically works. If I set up a LAMP stack using sudo tasksel on Ubuntu, how do I configure it so that putting a folder inside /var/www works the same way, i.e. the folder becomes reachable at http://localhost/folder_name? Update: sorry all, I have chosen to do things in a different way, so this question is no longer relevant to me. I apologise. I have awarded upvotes to all answers here. Thanks for the suggestions.
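    A follow-up sketch for anyone landing here with the same question: on a stock Ubuntu LAMP install the default site's DocumentRoot is already /var/www, so a new folder is normally reachable at http://localhost/folder_name as soon as Apache can read it (folder_name is a placeholder):

        # minimal sketch, assuming the default Apache site with DocumentRoot /var/www
        sudo mkdir -p /var/www/folder_name
        echo '<?php phpinfo(); ?>' | sudo tee /var/www/folder_name/index.php
        sudo chown -R www-data:www-data /var/www/folder_name   # make sure Apache can read it
        # then browse to http://localhost/folder_name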

  • Sign multiple domains with single Domain Key (dk-filter)

    - by Lashae
    Motivation: The private shopping website GILT sends periodic update emails from giltgroupe.bounce.ed10.net, yet all of the mails are signed with the domain keys of giltgroupe.com:

        mailed-by: giltgroupe.bounce.ed10.net
        signed-by: giltgroupe.com

    My story: I can't manage to sign x.com's mail with y.com's domain key using dk-filter under Debian Lenny with Postfix. If I start the dk-filter service with the following arguments:

        DAEMON_OPTS="$DAEMON_OPTS -d x.com,y.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    dk-filter signs with domain x.com (d=x.com). If I change the daemon arguments to:

        DAEMON_OPTS="$DAEMON_OPTS -d x.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    then emails sent from y.com are not signed at all. The dk-keys.conf file is as follows:

        *:/var/dk-filter/y.com/mail

    I managed to do the same thing with DKIM, and it works perfectly; it is DK that doesn't seem to work. I have no problem signing y.com's mail with y.com's key and x.com's mail with x.com's key, which indicates there is no configuration problem. Do you have any experience/advice on signing mail from multiple domains with a single, specifically chosen domain key?

  • Unable to connect SQL Server instance from Visual Studio 2008 SP1 on Vista x64

    - by Shimmy
    Hi folks! I installed Visual Studio 2008 SP1 (with the integrated SQL Server from the installation package) on a Vista x64 machine, and when I try to add an MDF file to a project, or to App_Data when working on a web project, I get the following message:

        Connections to SQL Server Files (*.mdf) require SQL Server Express 2005 to function properly.
        Please verify the installation of the component or download from the URL:
        http://go.microsoft.com/fwlink/?linkID=49251

    Just to make sure: SQL Server 2005 Express is installed and I can connect to it via SSMS. Update: I am 90% sure that this is a Microsoft bug on x64 machines.

  • Weird .#filename files on remote ssh-connected systems after mcedit

    - by etranger
    I'm using MacFusion sshfs in combination with Midnight Commander, and when I edit remote text files with mcedit, weird symlinks are created on the remote system:

        $ ls -l .*
        lrwxr-xr-x 1 user group 34 Jun 27 01:54 .#filename.txt -> [email protected]

    where etranger is my local login name, and mbp is the hostname of my notebook running Mac OS. The symlinks can be removed by running rm on the remote side, but they cannot be deleted on the MacFusion-mounted volume and thus pollute the filesystem. I cannot figure out which part of the software stack is responsible for this, or how to fix it; any help is appreciated.

    EDIT: This appears to be mcedit lock-file behaviour, as documented here: https://dev.openwrt.org/ticket/8245 Apparently sshfs fails to remove the symlink to the lock file for some reason (the ".#" in the filename, perhaps), and it pollutes the filesystem. A quick workaround is possible using another bug of Midnight Commander: editing (F4) the broken symlink effectively converts it into the missing lock file it was supposed to point to, and removes the symlink itself. The newly created file can then be deleted normally.

    EDIT 2: Unchecking "Follow symlink" in MacFusion apparently allows sshfs to remove dead symlinks, so the problem disappears completely.
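    For bulk cleanup of lock symlinks that have already accumulated, a one-liner run on the remote side (not through the sshfs mount) should do; a sketch, assuming all the stale links match the .#* pattern:

        # remove dangling .#* lock symlinks under the current directory (remote shell)
        find . -name '.#*' -type l -exec rm -- {} +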

  • Godaddy domain and Bluehost web hosting

    - by Digital site
    I have a domain from GoDaddy and web hosting at Bluehost. I want to make this work, as some people say there is no need to transfer the domain from GoDaddy to Bluehost. I tried to work this out by adding the Bluehost name servers (ns1.bluehost.com and ns2.bluehost.com) at GoDaddy. This works, but I'm not sure it is 100% right yet. The reason I say that is that when I type my address into any browser as mydomain.com, it doesn't work; instead I get an error message stating that the server was not found or couldn't be connected to. However, when I write the domain name with the www. prefix, it works fine. The other problem is that when I search in Google or Yahoo, the domain shows up as mydomain.com, which is not good, because my clients think my site is down when they hit the error message, and most people don't know they have to add www. for the domain to work. I just want to make the bare domain work: mydomain.com
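    A quick way to see whether the bare (apex) domain is missing an A record while www has one is to compare the two lookups; a sketch using dig, with mydomain.com as the placeholder:

        dig +short mydomain.com A        # apex record — empty output means no A record
        dig +short www.mydomain.com A    # www record — this one presumably resolves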

  • How do I make Linux recognize a new SATA /dev/sda drive I hot swapped in without rebooting?

    - by Philip Durbin
    Hot swapping out a failed SATA /dev/sda drive worked fine, but when I went to swap in a new drive, it wasn't recognized:

        [root@fs-2 ~]# tail -18 /var/log/messages
        May 5 16:54:35 fs-2 kernel: ata1: exception Emask 0x10 SAct 0x0 SErr 0x50000 action 0xe frozen
        May 5 16:54:35 fs-2 kernel: ata1: SError: { PHYRdyChg CommWake }
        May 5 16:54:40 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
        May 5 16:54:45 fs-2 kernel: ata1: device not ready (errno=-16), forcing hardreset
        May 5 16:54:45 fs-2 kernel: ata1: soft resetting link
        May 5 16:54:50 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
        May 5 16:54:55 fs-2 kernel: ata1: SRST failed (errno=-16)
        May 5 16:54:55 fs-2 kernel: ata1: soft resetting link
        May 5 16:55:00 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
        May 5 16:55:05 fs-2 kernel: ata1: SRST failed (errno=-16)
        May 5 16:55:05 fs-2 kernel: ata1: soft resetting link
        May 5 16:55:10 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
        May 5 16:55:40 fs-2 kernel: ata1: SRST failed (errno=-16)
        May 5 16:55:40 fs-2 kernel: ata1: limiting SATA link speed to 1.5 Gbps
        May 5 16:55:40 fs-2 kernel: ata1: soft resetting link
        May 5 16:55:45 fs-2 kernel: ata1: SRST failed (errno=-16)
        May 5 16:55:45 fs-2 kernel: ata1: reset failed, giving up
        May 5 16:55:45 fs-2 kernel: ata1: EH complete

    I tried a couple of things to make the server find the new /dev/sda, such as rescan-scsi-bus.sh, but they didn't work:

        [root@fs-2 ~]# echo "---" > /sys/class/scsi_host/host0/scan
        -bash: echo: write error: Invalid argument

        [root@fs-2 ~]# /root/rescan-scsi-bus.sh -l
        [snip]
        0 new device(s) found.
        0 device(s) removed.

        [root@fs-2 ~]# ls /dev/sda
        ls: /dev/sda: No such file or directory

    I ended up rebooting the server. /dev/sda was recognized, I fixed the software RAID, and everything is fine now. But for next time: how can I make Linux recognize a new SATA drive I have hot swapped in without rebooting? The operating system in question is RHEL 5.3:

        [root@fs-2 ~]# cat /etc/redhat-release
        Red Hat Enterprise Linux Server release 5.3 (Tikanga)

    The hard drive is a Seagate Barracuda ES.2 SATA 3.0-Gb/s 500-GB, model ST3500320NS.
    Here is the lspci output:

        [root@fs-2 ~]# lspci
        00:00.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2)
        00:01.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3)
        00:01.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3)
        00:02.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1)
        00:02.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2)
        00:04.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1)
        00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:05.1 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:05.2 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:06.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2)
        00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
        00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
        00:0a.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0b.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0c.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0d.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0e.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
        00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
        00:19.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
        00:19.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
        00:19.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
        00:19.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
        03:00.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02)
        04:00.0 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)
        04:00.1 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)

    Update: In perhaps a dozen cases, we've been forced to reboot servers because hot swap hasn't "just worked." Thanks for the answers suggesting a closer look at the SATA controller. I've included the lspci output for the problematic system above (hostname: fs-2). I could still use some help understanding what exactly isn't supported, hardware-wise, in terms of hot swap on that system. Please let me know what other output besides lspci might be useful. The good news is that hot swap "just worked" today on one of our servers (hostname: www-1), which is very rare for us.
    Here is the lspci output:

        [root@www-1 ~]# lspci
        00:00.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2)
        00:01.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3)
        00:01.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3)
        00:02.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1)
        00:02.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2)
        00:04.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1)
        00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:05.1 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:05.2 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:06.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2)
        00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
        00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
        00:0b.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0c.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:18.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration
        00:18.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control
        00:18.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control
        00:19.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration
        00:19.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map
        00:19.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller
        00:19.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control
        00:19.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control
        03:00.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02)
        04:00.0 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)
        04:00.1 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)
        09:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1064ET PCI-Express Fusion-MPT SAS (rev 04)
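    An aside for future readers: the "write error: Invalid argument" from the first rescan attempt above may just be syntax, since the scan file expects three space-separated wildcard fields (channel, target, LUN), not a bare "---". A sketch of the usual libata/SCSI hot-swap sequence, assuming host0 is the affected port and sda the dead disk; whether the RHEL 5 kernel's sata_nv driver honors hot plug on this MCP55 controller is a separate question:

        # tell the kernel the old device is gone (if a stale entry remains)
        echo 1 > /sys/block/sda/device/delete
        # wildcard rescan: the three fields are channel, target and LUN
        echo "- - -" > /sys/class/scsi_host/host0/scan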

  • home, end, delete, pageup, pagedown with ksh

    - by Nicolas
    Hello. I want to use Home, End, Delete, Page Up, and Page Down with ksh. My TERM is xterm-color. These keys work fine with tcsh and zsh, but not with ksh (they print a tilde, ~). I found this:

        bind '^[[3'=prefix-2
        bind '^[[3~'=delete-char-forward
        bind '^[[1'=prefix-2
        bind '^[[1~'=beginning-of-line
        bind '^[[4'=prefix-2
        bind '^[[4~'=end-of-line

    But when I set one binding, the previous one stops working. How can I use these keys in ksh via a .kshrc? Thanks.
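    For completeness, here is how those same binds might be grouped in a ~/.kshrc; a sketch assuming a pdksh/mksh-style bind builtin (AT&T ksh93 has no bind builtin, which may itself be the problem) and that interactive shells actually read the file via ENV:

        # ~/.kshrc — sketch; requires a ksh with the pdksh/mksh 'bind' builtin
        # interactive shells only read this if ENV points at it, e.g. in ~/.profile:
        #   export ENV=$HOME/.kshrc
        case "$TERM" in
        xterm*)
            bind '^[[1'=prefix-2
            bind '^[[1~'=beginning-of-line
            bind '^[[3'=prefix-2
            bind '^[[3~'=delete-char-forward
            bind '^[[4'=prefix-2
            bind '^[[4~'=end-of-line
            ;;
        esac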

  • Having XP VM use my host OSX ssh tunnel to connect to a remote site?

    - by Manachi
    I am using Mac OS X and have Windows XP running on VMware Fusion. I'm creating an ssh tunnel from OS X to a remote server and then trying to have Windows XP use that tunnel. (I actually use a program called Proxifier on XP to route my XP SQL Server traffic through the tunnel.) Note that I can successfully create an ssh tunnel (on port 9333) from PuTTY on XP to the remote host, have Proxifier send SQL Server through it, and it all works correctly. However, when I set up the tunnel on OS X instead and point Proxifier on XP at the OS X tunnel rather than localhost, it doesn't seem to connect. Here is the OS X command I'm using to create the tunnel:

        ssh -i /my/key -p 9001 -D 9333 -g me@remotehostname

    Then I point Proxifier at macosxhostname:9333 (instead of the previous localhost:9333, which worked when using PuTTY). Any suggestions on what I may have missed? The XP firewall is turned off while I set this up.
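    One thing worth checking is which address the SOCKS listener is actually bound to: with some OpenSSH versions an explicit bind address is more reliable than -g for dynamic (-D) forwards. A sketch, keeping the hostnames from the question as placeholders:

        # bind the SOCKS listener on all interfaces explicitly, then verify
        ssh -i /my/key -p 9001 -D 0.0.0.0:9333 me@remotehostname
        netstat -an | grep 9333   # should show *.9333 LISTEN, not 127.0.0.1.9333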

  • backup of KVM VMs running on Ubuntu 12.04.1 Precise edition from a remote machine

    - by Dr. Death
    I am creating a library API that will back up all the VMs running on a KVM hypervisor. The VMs can be of any type. I take the backup from a remote machine and need to store it on a remote server. I have KVM and libvirt installed on my system. Some of my VMs are LVM-based and some are ordinary VMs running on KVM. I researched this and found an excellent Perl script for taking such backups: http://pof.eslack.org/2010/12/23/best-solution-to-fully-backup-kvm-virtual-machines/ Since I am developing this library in C++ I cannot use it directly, but it has given me a good understanding of how the process works.

    One thing I have not been able to sort out: if my VMs were not created using virt-manager or any other libvirt tool, then virsh list does not show them among the running VMs, even though they run perfectly on my KVM server. Is there a way to get these VMs into that list?

    Secondly, when I take the backup from the remote machine, my ssh session ends as soon as each libvirt command finishes, so for every command I need to ssh again. Is there a way to avoid ssh'ing each and every time? I have already set up RSA keys for ssh, but once a command finishes, control returns to the remote machine, which then tries to find the source VM location on its own local drives, and that fails. This is the main problem I am facing.

    Also, for the LVM-based VMs I am able to take a live backup, but the non-LVM machines get suspended, so I cannot back them up live. Since my library will run on the remote machine only, I might not know a VM's configuration on the KVM server, so the behaviour needs to be consistent for all VMs. Please share anything related to this issue that would help me take live backups of the non-LVM VMs as well. I'll keep you updated on my progress and research findings. Thanks in advance for your suggestions.
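    On the "VMs missing from virsh list" point, one common cause is querying the wrong libvirt URI (a per-user session instead of the system daemon); note, though, that guests started by invoking qemu directly, outside libvirt, will not appear under any URI. A remote URI also sidesteps re-ssh'ing per command, since one virsh session can hold the connection open. A sketch, with kvmhost as a placeholder hostname:

        # list guests known to the system-level libvirt daemon (not the per-user session)
        virsh -c qemu:///system list --all
        # from the remote machine, one ssh-backed connection serves a whole session
        virsh -c qemu+ssh://root@kvmhost/system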

  • Linux udev persistent net rule

    - by Anonymous
    I have a Linux system (Slackware Linux 13.0) with two network interfaces; let's call them NIC0 and NIC1. My goal is to make NIC0 appear as eth0 in the system. I know this can be achieved via udev rules that map network aliases to the MAC addresses of the interfaces; in Slackware the file /etc/udev/rules.d/70-persistent-net.rules contains such rules. The trickiest part of my problem is that I need to fake the MAC address of NIC0. I know I can change the MAC address of an interface dynamically with:

        ifconfig eth0 hw ether <new MAC address>

    Do you see the problem? This assumes the network interfaces are already set up. So my question is: if I had a udev rule only for NIC1 (the one that shall come up as eth1, with its original MAC address), would that be enough for the system to bring up the other interface (NIC0) as eth0 by default? That way I could change its MAC address later, after the udev machinery completes and the network aliases are in place.
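    For comparison, a sketch of what pinning both interfaces explicitly in 70-persistent-net.rules would look like, with placeholder MAC addresses; udev matches on the hardware address the kernel reports at device creation, i.e. before any runtime MAC change:

        # /etc/udev/rules.d/70-persistent-net.rules — sketch with placeholder MACs
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:00", KERNEL=="eth*", NAME="eth0"
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:01", KERNEL=="eth*", NAME="eth1"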

  • What Hypervisors support non-homogenous clusters?

    - by edude05
    I've been using Citrix XenServer for a while on a few machines that don't support hardware virtualization, as a test bed for various small servers. I recently experimented with moving the PV VMs between machines, but XenServer gives me errors saying, roughly, that I need homogeneous hardware for this to work. Because of this I haven't been able to set up XenMotion or any of the nice features that come with server pooling in XenServer. I'm considering moving away from XenServer, but I can't seem to find a hypervisor that explicitly supports non-homogeneous clusters. On a side note, we do have a few identically configured Dell 1950s that don't have any VM solution set up yet, so a solution that also lets us move PV guests to those would be great. Non-free solutions are OK as well. Which hypervisors allow this? Thanks!

  • Download Microsoft Windows OS for Test Environments

    - by Gavin
    I need to create development environments with Windows XP, Windows Server 2003, and Windows Server 2008 on VMware. Are there free (unrestricted) development-type versions of those operating systems distributed by Microsoft? Or is there some kind of cheap software network/membership I can sign up for? Other companies such as Oracle make it very easy to download dev copies of Solaris and OEL, but Microsoft seems a little too protective of its operating systems and doesn't make this easy.

  • Is there any way to detect when nginx has completed a graceful shutdown?

    - by Daniel Vandersluis
    I have a Ruby on Rails application running on Passenger and nginx, with one main web server and multiple application servers. I am trying to update my deployment process to minimize (or ideally, eliminate) any downtime caused by a deployment. The main roadblock right now is that Passenger takes some time to restart (i.e. reload the application), so to get around this I want to stagger my restarts so that only one app server gets restarted at a time. To do that without losing any long-running Passenger processes, I am thinking I need to gracefully shut down the app server's nginx instance, which will cause it to stop accepting new connections but continue processing the existing ones; HAProxy will then detect that the app server is down and route new requests to the other server. However, assuming there is a long-running process, I am not sure how to detect when the graceful shutdown has completed so that I can start nginx back up. Since the shutdown is triggered by sending a signal (i.e. kill -QUIT $( cat /var/run/nginx.pid )) and the kill command returns immediately, I cannot simply chain commands (i.e. kill ... && touch restarted), as the touch would execute right away even if nginx hasn't completed its shutdown. Is there any good way to do this?
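    One approach is to poll for the master process after sending SIGQUIT, since kill -0 tests for a process's existence without actually signalling it; a sketch, assuming the pid file path from the question:

        pid=$(cat /var/run/nginx.pid)
        kill -QUIT "$pid"
        # kill -0 succeeds only while the process still exists
        while kill -0 "$pid" 2>/dev/null; do
            sleep 1
        done
        touch restarted    # nginx has finished draining its connections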

  • Fedora 17 - Dropping into debug shell after attempted partitioning

    - by i.h4d35
    So I tried creating a new partition on Fedora 17 using fdisk, as follows:

        Command (m for help): n
        Command action
           e   extended
           p   primary partition (1-4)
        p
        Partition number (1-4): 1
        First cylinder (2048-823215039, default 2048):
        Using default value 2048
        Last cylinder or +size or +sizeM or +sizeK (1-9039, default 9039): +15G

    Once this was done, instead of formatting the partition I had created, I ran the partprobe command to write the changes to the partition table. On rebooting, the computer drops into the debug shell with the following error:

        dracut warning: unable to process initqueue
        dracut warning: /dev/disk/by-uuid/vg_mymachine does not exist
        dropping to debug shell
        dracut:/#

    When I try to run fsck on the said partition from the debug shell, it says /etc/fstab is not found, and inside /etc I see an fstab.empty file. Is it still possible to retrieve my data from the computer? Any help would be appreciated. Thanks in advance.

    Edit: I've also tried the following additional troubleshooting steps: I booted from the Fedora disc and tried rescue mode, which says no Linux partitions were detected. I tried to create an fstab file by combining the entries from blkid and the /etc/mtab file, using the UUIDs from the mtab file; it didn't work. As soon as I rebooted the machine, it promptly dropped me into the debug shell again, and the fstab file I had created wasn't in /etc anymore.

  • check_mk IPMI PCM sensor reading randomly fails

    - by Julian Kessel
    I use check_mk_agent to monitor a server with IPMI and the freeipmi tools installed. As far as I can see, the monitoring randomly finds no value returned for the IPMI sensor "Temperature_PCH_Temp". That's a problem, since it results in a CRITICAL state and triggers a notification. The interruption only ever lasts for one check; the following one is always OK. The temperature is nowhere near a limit, and neither the readings before the failure nor those after show a temperature trending toward a threshold. Does anyone have an idea what the reason for this behaviour could be, and how to prevent it?
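    Since the bad reading never survives to the next check, one pragmatic mitigation is to require consecutive failures before a notification goes out, so a single dropped sensor read stays in a SOFT state. A sketch in plain Nagios syntax; with check_mk generating the config, the equivalent would normally be set from main.mk (extra_service_conf) rather than by hand, so treat the snippet as illustrative only:

        define service {
            ...                        ; existing generated definition for Temperature_PCH_Temp
            max_check_attempts  3      ; go HARD (and notify) only after 3 failed checks in a row
        }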

  • Password best practices

    - by pcampbell
    Given the recent events with a 'hacker' learning and retrying passwords from website administrators, what can we suggest to everyone about best practices when it comes to passwords?

        - use unique passwords between sites (i.e. never re-use a password)
        - words found in the dictionary are to be avoided
        - consider using words or phrases from a non-English language
        - use pass phrases, and use the first letter of each word
        - l33tifying doesn't help very much

    Please suggest more!

  • Variable directory names over SCP

    - by nedm
    We have a backup routine that previously ran from one disk to another on the same server, but we recently moved the source data to a remote server and are trying to replicate the job via scp. We need to run the script on the target server, and we've set up key-based scp (no username/password required) between the two servers. Using scp to copy specific files and directories works perfectly:

        scp -r -p -B [email protected]:/mnt/disk1/bsource/filename.txt /mnt/disk2/btarget/

    However, our previous routine iterates through directories on the source disk to determine which files to copy, then runs each of them through gpg encryption. Is there any way to do this using only scp? Again, this script needs to run from the target server, and the user the job runs under only has scp (no ssh) access to the source system. The old job looked something like this:

        #Change to source dir
        cd /mnt/disk1

        #Create variable to store
        # directories named by date YYYYMMDD
        j="20000101/"

        #Iterate though directories in the current dir
        # to get the most recent folder name
        for i in $(ls -d */); do
          if [ "$j" \< "$i" ]; then
            j=${i%/*}
          fi
        done

        #Encrypt individual files from $j to target directory
        cd ./${j%%}/bsource/
        for k in $(ls -p | grep -v /$); do
          sudo /usr/bin/gpg -e -r "Backup Key" --batch --no-tty -o "/mnt/disk2/btarget/$k.gpg" "/mnt/disk1/$j/bsource/$k"
        done

    Can anyone suggest how to do this via scp from the target system? Thanks in advance.

  • Audit success in event log from not administrator IP - is that immediately a hack success indicator?

    - by Valentin Kuzub
    I checked the event log today, and among a mass of failed audit events I found some successes that originated from outside my country. They look a little odd: no process is specified, whereas when I log on via RDP the event names winlogon.exe. I am wondering whether this means my system was compromised, or whether there are benign explanations and it isn't necessarily that bad. I am using a VPS solution, if that's useful.
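    To dig further, it may help to pull the recent successful logons with their Logon Type field, since type 10 is an interactive RDP session while types such as 3 (network) can be generated by services without a local interactive process; a sketch using the built-in wevtutil (event ID 4624 applies to Server 2008 and later; it is 528 on 2003):

        rem last 20 successful logons, newest first, including Logon Type and source IP
        wevtutil qe Security /q:"*[System[(EventID=4624)]]" /c:20 /rd:true /f:text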

  • can't access SATA card config screen on boot, nor access the disks

    - by Ronald
    We've just upgraded our file server using an ASUS P6T WS Pro board, running FreeBSD-RELEASE 8.2 and using ZFS to manage 12 WD20EARS disks. Since our 3ware card has been giving us trouble, we started using the six on-board SATA connectors and got a SuperMicro USAS2-L8i to provide eight more ports. Mechanically the card is an awkward fit, but electrically it all seems OK. Upon boot, the LSI controller shows up and states that pressing Ctrl-C will bring up the LSI Config Utility. When I do that, the message changes to say the utility will be started after initialization; however, that never happens. There does seem to be an error message that's displayed too briefly to read, something about PCI and "not enough space". (That message is pushed off the screen by a hardware summary, and I've found no way to scroll back at that point.) The disks do not show up in any recognizable way after booting, either.

    I found a hint in another discussion to check the address mapping on either the card or the motherboard BIOS, but have found no way to do that. So what I tried on a hunch was to disable everything on-board, including the network adapters, FireWire controller, and SATA. After doing that, I can successfully launch the LSI Config Utility. As far as I can tell, all looks well in there, and when booting in that configuration it also displays a list of the disks connected to it, which looks just fine as well. The only problem now is that I can't boot that way, because I need the on-board SATA controller and network adapters; as soon as I re-enable any of them I'm back to square one. The discussion I mentioned about mapping addresses said to try D000, then D7FF, then DFFF, in order. The LSI Config Utility shows the card address as D000 but offers no way of changing it. Any tips or insights would be appreciated.

  • How to change mod_rewrite to avoid {REQUEST_FILENAME} in order to get around 255 character URL limit?

    - by Jeremy Reimer
    According to this answer: max length of url 257 characters for mod_rewrite? there is a 255-character hard limit, based on the file system, when using mod_rewrite. According to the accepted answer, there are two solutions: change the URL format of your application to a maximum of 255 characters between each slash, or move the rewrite rules into the Apache virtual host config and remove REQUEST_FILENAME. I cannot use the first method, so I am trying to figure out the second. I have put the rewrite rules into the Apache virtual host config as suggested. However, I cannot figure out how to remove REQUEST_FILENAME and still have my web application framework (Dragonfly) work. Here is the portion of the rewrite rules that I moved from .htaccess into the virtual host config file:

        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-l
        RewriteCond %{REQUEST_FILENAME} !-f [OR]
        # if you don't want Dragonfly to process html files, comment
        # out the line below (you may need to remove the [OR] above too).
        RewriteCond %{REQUEST_FILENAME} \.(html|nl)$
        # Main URL rewriting.
        RewriteRule (.*) index.cgi?$1 [L,QSA]

    I've tried removing %{REQUEST_FILENAME} and it just breaks the framework in various ways. How do I rewrite this without using %{REQUEST_FILENAME}?
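    One hedged possibility is to test %{REQUEST_URI} instead, which never touches the file system (the length limit stems from stat'ing REQUEST_FILENAME): the -d/-l/-f conditions only exist to exempt real files and directories, so if the static assets live under known extensions they can be whitelisted by pattern rather than by stat(). A sketch; the extension list is a placeholder, not from the Dragonfly docs:

        # pass through likely static assets by URI pattern instead of stat'ing the disk
        RewriteCond %{REQUEST_URI} !\.(css|js|png|jpg|gif|ico)$
        # Main URL rewriting.
        RewriteRule (.*) index.cgi?$1 [L,QSA]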

  • mysql mass insert data

    - by user12145
    Edit: I realized that if I construct a large multi-row query in memory, the speed increases by almost an order of magnitude:

        insert ignore into xxx (col1, col2) values ('a',1), ('b',1), ('c',1), ...

    Edit: since I have an index on the first column, the insert time creeps up as I insert more rows. Can I delay index maintenance until the end?

    Original: I'm using the following to batch-insert 10 million rows into a MySQL db (not all at once, since they don't all fit into memory), and it's too slow (taking many hours). Should I use LOAD DATA INFILE to improve performance? I would have to create a second file to store all 10 million rows, then load that into the db. Are there better ways?

        PreparedStatement st = con.prepareStatement(
            "insert ignore into xxx (col1, col2) values (?, 1)");
        Iterator<String> d = data.iterator();   // data holds the rows to insert
        while (d.hasNext()) {
            st.clearParameters();
            st.setString(1, d.next().toLowerCase());
            st.addBatch();
        }
        int[] updateCounts = st.executeBatch();
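    On both questions, a sketch of the usual answers, hedged since the table name and file path here are placeholders: LOAD DATA INFILE generally beats even multi-row INSERTs at this volume, and index maintenance can be deferred with ALTER TABLE ... DISABLE KEYS (which only affects non-unique indexes, and only on MyISAM tables; for InnoDB the common trick is to load the data first and add the secondary index afterwards):

        # dump the 10M rows to a tab-separated file first, then:
        mysql mydb -e "ALTER TABLE xxx DISABLE KEYS"
        mysql mydb -e "LOAD DATA LOCAL INFILE 'rows.tsv' IGNORE INTO TABLE xxx (col1) SET col2 = 1"
        mysql mydb -e "ALTER TABLE xxx ENABLE KEYS"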

  • Sending email using SMTP (Gmail) from Hudson CI

    - by jensendarren
    How can I set up Hudson CI so that it can send out emails from the server following a build failure? At the moment all I get is the following error:

        com.sun.mail.smtp.SMTPSendFailedException: 530 5.7.0 Must issue a STARTTLS command first

    One solution is to start Hudson as follows:

        java -Dmail.smtp.starttls.enable="true" -jar /usr/share/hudson/hudson.war

    However, I am already starting Hudson with:

        sudo /etc/init.d/hudson start

    I am thinking the solution is to set the system property mail.smtp.starttls.enable in a properties file somewhere, but I have no idea how to do that. What are my options? Thank you all in advance!
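    Most packaged Hudson init scripts read their JVM flags from an environment file rather than a properties file; the exact file and variable name vary by distribution, so the following is a sketch only (Debian-style packages typically use /etc/default/hudson, Red Hat-style /etc/sysconfig/hudson):

        # /etc/default/hudson (or /etc/sysconfig/hudson) — variable name may differ per package
        JAVA_ARGS="$JAVA_ARGS -Dmail.smtp.starttls.enable=true"
        # then restart:
        #   sudo /etc/init.d/hudson restart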

  • Which IMAP flags are reliably supported across most mail servers?

    - by Ben Butler-Cole
    I am writing an application that reacts to emails sent to a mailbox; it retrieves the emails via IMAP. It will be deployed on a number of systems where I do not control the mail server configuration. I would like to use IMAP flags to indicate which messages have been handled. Are the system flags (\Seen, \Answered, \Flagged, and so on) sufficiently widely supported that I can reasonably depend on them in my application? Are user-defined flags (keywords) sufficiently widely supported? (If the answer is "ha ha, not a chance", then I shall use folders instead.) Thanks -Ben

  • FileZilla Server Configuration Problems

    - by LiamB
    I've set up FileZilla Server on a Windows 2008 machine. I then created the user and password, and added a shared folder, which I set as the home directory. When I connect to the server from the client computer:

        Status:   Connecting to {IP}
        Status:   Connection established, waiting for welcome message...
        Response: 220-Welcome To {NAME} FTP
        Response: 220 {DOMAIN}
        Command:  USER {USER}
        Response: 331 Password required for {USER}
        Command:  PASS *********
        Response: 230 Logged on
        Status:   Connected
        Status:   Retrieving directory listing...
        Command:  PWD
        Response: 257 "/" is current directory.
        Command:  TYPE I
        Response: 200 Type set to I
        Command:  PASV
        Response: 227 Entering Passive Mode ({}DATA)
        Command:  MLSD

    The connection works fine; however, no remote directory listing ever arrives (it just shows "/"), and uploading any file fails. Any suggestions on how to debug this further?
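    The transcript stalling right after PASV/MLSD is the classic signature of the passive-mode data connection being blocked: only the control connection (port 21) is getting through. A sketch of opening a passive range on Server 2008's firewall, assuming the same range has also been configured in FileZilla Server's passive mode settings (the 50000-51000 range here is a placeholder):

        netsh advfirewall firewall add rule name="FTP passive range" dir=in action=allow protocol=TCP localport=50000-51000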
