Search Results

Search found 54052 results on 2163 pages for 'run configuration'.


  • Visual Studio hangs Windows

    - by Kronikarz
    I have Windows XP Pro SP3 with the latest updates, drivers, .NET, etc., on this hardware:

        Pentium 4 2.8GHz
        2GB RAM
        150GB HD
        ATI Radeon HD 3400

    Recently (as early as a week ago) Visual C++ (both 2005 Pro and 2008 Express) started hanging up my computer. Whenever either is run, after 5-20 minutes of work the computer freezes. Everything becomes unresponsive, including the mouse cursor; no combination of keys does anything. What's strange is that Winamp/Firefox continues to play whatever it was playing at the time (internet radio, an mp3 playlist, etc.). The only thing I can do is a hard reboot. I've run CCleaner and a full AVG antivirus scan, both of which found nothing suspicious. Does anyone know of a solution to this problem?

  • Innodb : cannot allocate the memory for the buffer pool

    - by mingyeow
    My InnoDB keeps crashing. This is the error message below. Does anyone know why this keeps happening?

        InnoDB: by InnoDB 49201616 bytes. Operating system errno: 12
        InnoDB: Check if you should increase the swap file or
        InnoDB: ulimits of your operating system.
        InnoDB: On FreeBSD check you have compiled the OS with
        InnoDB: a big enough maximum process size.
        InnoDB: Note that in most 32-bit computers the process
        InnoDB: memory space is limited to 2 GB or 4 GB.
        InnoDB: We keep retrying the allocation for 60 seconds...
        0 processes alive and '/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf ping' resulted in
        /usr/bin/mysqladmin: connect to server at 'localhost' failed
        error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'
        Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
        InnoDB: Fatal error: cannot allocate the memory for the buffer pool
        [ERROR] Default storage engine (InnoDB) is not available
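
    Errno 12 is ENOMEM: the OS refused mysqld's roughly 47 MB buffer-pool allocation, so either the machine is genuinely out of free memory and swap, or a ulimit / 32-bit address-space cap is in the way, exactly as the message suggests. A minimal my.cnf sketch for testing with a deliberately small pool (the 64M figure is an illustrative assumption, not a tuning recommendation):

        # /etc/mysql/my.cnf -- sketch; size the pool to what `free -m`
        # reports as actually available
        [mysqld]
        # the failed allocation was only ~47 MB, so also check `ulimit -v`
        # and swap before assuming the pool size itself is the problem
        innodb_buffer_pool_size = 64M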

  • Linux Mint does not start after renaming home directory

    - by RUBY
    I am new to Linux and was just trying to rename the only directory in home from rk to rhk. I messed up the whole thing and the settings, and created some new thing named rhk which I can't remember, as it all got messed up. Now I get nothing after Linux Mint 10 (Julia) boots up: no start menu, no panel, no taskbar, nothing. I tried to work in recovery mode and downloaded some 216MB of something (in "repair broken packages") hoping it might help, but it didn't. Moreover, whenever I boot it shows messages like:

        Could not update ICEauthority file /home/rk/.ICEauthority
        There is a problem with the configuration server. (/usr/lib/libgconf2-4/gconf-sanity-check-2 exited with status 256)
        The panel encountered a problem while loading "OAFIID:GNOME_mintMenu"
        The panel encountered a problem while loading "OAFIID:GNOME_IndicatorApplet"
        Nautilus could not create the following required folders: /home/rk/Desktop, /home/rk/.Nautilus

    Moreover, Alt+F2 gives "Run application" or "run with file", and nothing seems to be working.
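
    The errors say the account still expects its home at /home/rk while the directory on disk was renamed. The usual repair, from a root shell in recovery mode, is to let usermod move and re-point the home directory rather than renaming it by hand. A sketch, assuming the login name is rk and the home should end up at /home/rhk:

        # run as root from recovery mode; -d sets the new home, -m moves the
        # contents of the old one there
        usermod -d /home/rhk -m rk
        # fix ownership in case files were created as root while experimenting
        chown -R rk:rk /home/rhk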

  • Creating Multiple Users on Single PHP-FPM Pool

    - by Vince Kronlein
    I have PHP-FPM/FastCGI up and running on my cPanel/WHM server, but I'd like it to allow for multiple users off of a single pool. Getting all vhosts to run off a single pool is simple by adding this to the Apache include editor under Global Post Vhost:

        <IfModule mod_fastcgi.c>
          FastCGIExternalServer /usr/local/sbin/php-fpm -host 127.0.0.1:9000
          AddHandler php-fastcgi .php
          Action php-fastcgi /usr/local/sbin/php-fpm.fcgi
          ScriptAlias /usr/local/sbin/php-fpm.fcgi /usr/local/sbin/php-fpm
          <Directory /usr/local/sbin>
            Options ExecCGI FollowSymLinks
            SetHandler fastcgi-script
            Order allow,deny
            Allow from all
          </Directory>
        </IfModule>

    But I'd like to find a way to have PHP run under the owning user while sharing the pool. I manage and control all the domains that run under the pool, so I'm not concerned about the security of files per account; I just need to make sure all scripting is executed by the user who owns the files, instead of needing to change file permissions for each account or having to create tons of vhost include files.
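
    PHP-FPM's unit of user identity is the pool: every worker in a pool runs as that pool's user/group, so a single shared pool cannot execute scripts as different owners. The usual compromise is one small pool per account, which still keeps the Apache side generic. A sketch of the pool config, assuming a php-fpm.conf that reads multiple [pool] sections and two hypothetical accounts, alice and bob:

        ; one lightweight pool per account; each vhost only needs to point
        ; its FastCGIExternalServer at the right port
        [alice]
        listen = 127.0.0.1:9001
        user = alice
        group = alice
        pm = static
        pm.max_children = 4

        [bob]
        listen = 127.0.0.1:9002
        user = bob
        group = bob
        pm = static
        pm.max_children = 4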

  • LSI 9260-8i w/ 6 256gb SSDs - RAID 5, 6, 10, or bad idea overall?

    - by Michael Pearson
    We're provisioning a new production server for our reasonably busy website. Our choice of host has available a six-drive configuration with an LSI 9260-8i card. My initial thought was to fill all six bays with SSDs (Intel 520 256GB) and set them up in RAID. Good, bad, or terrible idea? Can the card handle it? Should we be using RAID 5, 6 or 10?

    This would be the first time the provider has filled all six slots of this rackmount with SSDs, so they're a bit hesitant, and I'm wondering whether somebody else with this card has done something similar in a production environment. We do about 43GB of writes per day and currently use about 300GB of storage. The server acts as webserver, database, and image store for approximately 1 million files. The plan is to underprovision the SSDs by approximately 10% to 20% to increase their overall lifespan and performance.

    The fallback option is 2x 480GB SSDs in RAID 1 plus another 2x 1TB HDDs in RAID 1. The motivation behind the six-SSD option is that the server rental cost difference between two SSDs and six SSDs is minimal compared to the overall cost of the rental. We do not have any special high-IOPS requirements. However, if the configuration is known to work, I don't see a good reason not to use it, and not to have to worry about managing separate 'fast and small' and 'slow and large' disks.

  • Regedit as Current User

    - by user1013264
    I'm trying to apply a registry fix for an Outlook/O365 issue on a user's account. The issue is that "regedit" is blocked by a domain GPO, but I'm able to run it using the local admin account. Question: when I run "regedit" as the local admin, am I modifying the registry for the local admin user or for the domain user who's actually logged onto the workstation? I'm trying to apply the following fix: http://support.microsoft.com/kb/2843677 Also, the path for the above-mentioned registry key should end in "\Preferences", which is what I'm unable to locate; I'm able to navigate up until \Outlook. The user is running Outlook 2010. Any suggestions would be appreciated. Thank you.
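
    Running regedit under the local admin account means HKEY_CURRENT_USER resolves to the admin's own hive, not the logged-on domain user's. One way around that is to load the target user's hive manually and edit it under HKEY_USERS. A sketch, assuming the domain user's profile lives at C:\Users\jdoe (a hypothetical name) and the user is logged off so NTUSER.DAT isn't locked:

        :: from an elevated cmd prompt as the local admin
        reg load HKU\TempUser C:\Users\jdoe\NTUSER.DAT

        :: a missing \Preferences key can simply be created; REG ADD creates
        :: intermediate keys as needed (14.0 = Office 2010)
        reg add "HKU\TempUser\Software\Microsoft\Office\14.0\Outlook\Preferences"

        :: ...apply the value from KB 2843677 here, then unload the hive
        reg unload HKU\TempUser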

  • How can I affect DNS caching in a PHP/Memcache application

    - by Niro
    On a very heavily loaded Ubuntu/PHP web server, I found that this PHP line sometimes takes ~5 seconds:

        $memcache->connect("int-aws_ec2.memcached.myapp.net", 11211);

    Replacing the hostname with the IP address decreases the server load from ~20 to 0. My question is: where are the settings that affect DNS caching for this? Are they at the server level or in the memcache library, and how can I change them?

    Additional info: Ubuntu 10.04 (Lucid); PHP 5.3.2-1ubuntu4.10; Apache/2.2.14 (Ubuntu); Amazon EC2.

    Even more info, per Celada's comment: the DNS handling for the memcache server is done by Scalr (the platform I use to manage the cloud resources). They have a client located on the instances and their own DNS servers.

        /etc/nsswitch.conf: hosts: files dns
        /etc/resolv.conf:
            nameserver 172.16.0.23
            domain ec2.internal
            search ec2.internal

    The domain is not in /etc/hosts. To check whether I run nscd I used /etc/init.d/nscd stop and received 'no such file', so I guess I don't run nscd. Thanks!
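
    With no nscd, nothing on the box caches lookups: glibc's stock resolver does no DNS caching of its own, so every connect() goes to the resolver in /etc/resolv.conf, and a slow or flaky resolver shows up exactly like this. A local caching daemon is the usual server-level fix. A sketch for Ubuntu 10.04, assuming the Scalr-provided nameserver should stay as the upstream:

        # install and start the name service cache daemon
        sudo apt-get install nscd
        sudo /etc/init.d/nscd start
        # watch the hit counters to confirm host lookups are being cached
        sudo nscd -g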

  • Setting a Static IP Running FreeBSD8 in VirtualBox hosted on Windows 7

    - by gvkv
    I'm using VirtualBox on Windows 7 (host) to run a FreeBSD (guest) based web server. I've assigned a static IP of 192.168.80.1 to the (virtualized) NIC, which is run in bridged mode. The problem is that when I ping an external server (such as google.com) I get a "No route to host" error:

        dimetro# ping google.com
        PING google.com (66.249.90.104): 56 data bytes
        ping: sendto: No route to host
        ...

    I can ping the BSD server from both another virtualized machine and my host machine, and from the server I can ping everything on the network. The router IP is 192.168.1.1/16.

    ADDENDUM: I have the following lines in /etc/rc.conf on the BSD VM to configure networking:

        defaultrouter="192.168.1.1"
        ifconfig_em0="inet 192.168.80.1 netmask 255.255.0.0"

  • Trouble setting up PATH for Java on Debian

    - by milkmansrevenge
    I am trying to get Oracle Java 7 update 3 working correctly on Debian 6. I have downloaded and set up the files in /usr/java/jre1.7.0_03. I have also added the following two lines at the end of /etc/bash.bashrc:

        export JAVA_HOME=/usr/java/jre1.7.0_03
        export PATH=$PATH:$JAVA_HOME/bin

    Logging in as other users and root is fine; Java can be found:

        chris@mc:~$ java -version
        java version "1.7.0_03"
        Java(TM) SE Runtime Environment (build 1.7.0_03-b04)
        Java HotSpot(TM) 64-Bit Server VM (build 22.1-b02, mixed mode)

    However, there are two cases where Java cannot be found, detailed below. Note that both of these worked fine when I previously installed OpenJDK Java 6 via aptitude, but I need Oracle Java 7 for various reasons.

    Most importantly, I cannot run commands as another user via su, despite the PATH showing that Java should be present. The user was created with adduser chris:

        root@mc:~# su chris -c "echo $PATH"
        /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/java/jre1.7.0_03/bin:/bin
        root@mc:~# su chris -c "java -version"
        bash: java: command not found
        root@mc:~# su chris -c "/usr/java/jre1.7.0_03/bin/java -version"
        java version "1.7.0_03"
        ...

    How can it be in the PATH but not be found? Update 05/04/2012: explained by Daniel; it's a non-interactive shell, so files such as /etc/profile and /etc/bash.bashrc are not executed. Doing a full switch to that user and running Java works:

        root@mc:~# su chris
        chris@mc:/root$ java -version
        java version "1.7.0_03"
        ...

    I also run a script on startup which exhibits similar but slightly different problems. The script is located at /etc/init.d/start-mystuff.sh and calls a jar:

        #!/bin/bash
        # /etc/init.d/start-mystuff.sh
        java -jar /opt/Mars.jar

    I can confirm that the script runs on startup and the exit code is 127, which indicates command not found. Inserting a line to print/save the PATH shows that it is:

        /sbin:/usr/sbin:/bin:/usr/bin

    This second problem isn't as important because I can just point directly at the Java executable in the script, but I am still curious!

    I have tried setting the full PATH and JAVA_HOME explicitly in /etc/environment, which didn't help. I have also tried setting them in /etc/profile, which doesn't seem to help either. And yes, I tried logging in and out again after setting PATH in the various locations. Anyway, long post for what will probably have a simple one-line solution :( Any help with this would be greatly appreciated; I have spent far too long trying to fix it by myself.

    Motivation: the first problem may seem obscure, but on my system I have users that are not allowed SSH access yet I still want to run processes as them. I have a ton of scripts operating in this way and don't want to have to change them all.
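
    Since neither su -c nor an init script runs the startup files that bash.bashrc belongs to, the robust fixes are to either force a login shell or stop relying on PATH at all. (Incidentally, su chris -c "echo $PATH" expands $PATH in root's own shell before su even runs, which is why it appeared to contain the Java directory.) A sketch of both approaches, assuming the same install path as above:

        # the dash forces a login shell, so /etc/profile is read
        su - chris -c "java -version"

        # or make the init script self-contained instead of PATH-dependent:
        #!/bin/bash
        # /etc/init.d/start-mystuff.sh
        JAVA_HOME=/usr/java/jre1.7.0_03
        exec "$JAVA_HOME/bin/java" -jar /opt/Mars.jar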

  • Public-to-Public IPSec tunnel: NAT confusion

    - by WuckaChucka
    I know this is possible -- and apparently fairly common with larger companies that don't/can't route private addresses for overlap reasons -- but I can't wrap my head around how to get this to work. I'm playing around with pfSense, Vyatta and a Cisco 5505 right now, hardware-wise. So here's my setup:

        WEST: Vyatta
          outside:   10.0.0.254/24
          inside:    172.16.0.1/24
          machine a: 172.16.0.200/24

        EAST: Cisco 5505
          outside:   10.0.0.210/24
          inside:    192.168.10.1
          machine b (webserver): 192.168.10.2

    So what we're trying to do is this: route traffic across the tunnel from machine A to machine B without using private addresses, i.e. 172.16.0.200 makes a TCP request to 10.0.0.210:80, and as far as EAST is concerned, it sees a source IP of 10.0.0.254. On WEST, I have your typical many-to-one source NAT to translate 172.16.0.0/24 to 10.0.0.254, and that's confirmed to be working. Also on WEST, I have the following IPSec config:

        Local IP:      10.0.0.254
        Peer IP:       10.0.0.210
        local subnet:  10.0.0.254/32
        remote subnet: 10.0.0.210/32

    I have the reversed configuration on EAST. What happens when I make a request from machine A to 10.0.0.210:80 is that the SNAT translates the private address of machine A to 10.0.0.254, and it's routed out (and discarded at the other end) without establishing the tunnel. What I assume is happening is that the inside interface on WEST receives a packet from 172.16.0.200, and since this doesn't match the local subnet defined in the tunnel configuration, it's not processed by the IPSec engine and the tunnel is not established. How do you make this work? It seems like a chicken-and-egg thing with the NAT and IPSec, and I just can't wrap my head around how it can be done. Can I say, "if a packet is received on the inside interface with a destination of 10.0.0.210, translate it to 10.0.0.254 before the IPSec engine inspects it"?
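
    On a Linux-based gateway such as Vyatta (in-kernel NETKEY/xfrm IPsec stack), the ordering usually works in your favor: Netfilter's POSTROUTING source NAT is applied before the outbound IPsec policy is matched against the rewritten packet, so a packet SNATed to 10.0.0.254 with destination 10.0.0.210 should then match the 10.0.0.254/32 -> 10.0.0.210/32 policy and get encrypted. A raw-iptables sketch of the rule the Vyatta source-NAT config would need to produce (an illustration of the required rule, not Vyatta CLI syntax):

        # SNAT LAN traffic bound for EAST's public address to WEST's public
        # address; the IPsec policy match then sees the rewritten source
        iptables -t nat -A POSTROUTING -s 172.16.0.0/24 -d 10.0.0.210/32 \
            -j SNAT --to-source 10.0.0.254

    If the tunnel still never comes up, the thing to verify is whether the existing many-to-one NAT rule is scoped to the WAN interface in a way that fires after, rather than before, the policy lookup on your particular build.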

  • Switching from prefork MPM to worker MPM + php-fpm on ubuntu

    - by Shane
    All the tutorials I found describe a fresh install of worker MPM + PHP-FPM. Since my WordPress blog is already up and running with prefork MPM, correct me if I'm wrong about the following simulated installation process. I'm on Ubuntu, and according to some tutorials these lines would do all the tricks:

        apt-get install apache2-mpm-worker libapache2-mod-fastcgi php5-fpm php5-gd
        a2enmod actions fastcgi alias

    Then you set up the configuration in /etc/apache2/conf.d/php5-fpm.conf:

        <IfModule mod_fastcgi.c>
          AddHandler php5-fcgi .php
          Action php5-fcgi /php5-fcgi
          Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
          FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -host 127.0.0.1:9000 -pass-header Authorization
        </IfModule>

    After all these, restart:

        service apache2 restart && service php5-fpm restart

    Questions:
    1) Would the switch cause any downtime for the sites previously running under prefork MPM?
    2) Do you have to change any existing configuration files (php, mysql, apache2), or do they take effect immediately after the switch without you doing anything?
    3) I already have APC up and running; do you have to re-install/re-configure it after the switch?
    4) How do you find out whether apache2 is working in worker MPM mode as expected?
    Thanks a lot!
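
    For question 4, Apache will report its compiled-in MPM directly, so there's no need to guess from the process tree. A sketch (on Ubuntu the control binary is apache2ctl):

        # prints a line like "Server MPM:     Worker"
        apache2ctl -V | grep -i mpm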

  • Installing Ubuntu to a USB drive

    - by Carl Smotricz
    I'm having a rough time getting Ubuntu to run from a 250 GB USB hard drive. I booted Ubuntu 9.10 from a CD and ran the regular install onto the attached USB drive. I used the "advanced" option at the drive-partitioning step to put the boot loader on /dev/sdb (the USB disk), but when I boot the machine it doesn't recognize that there's a boot loader on the USB drive (it offers to boot from two other devices, but not the USB disk).

    I also tried booting from the Ubuntu CD and using usb-creator-gtk to set up the USB drive. It seems to me this tool is meant for flash drives: I got a bootable USB disk, but it looked and worked like the CD, i.e. it gave me the options of "live CD" operation, installing, memtest, etc. That's not the way I want to run the system. Some help installing Ubuntu as a "full" running system, bootable from my USB drive, would be appreciated.
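
    When the installer's boot-loader step doesn't take, it's often quicker to re-run GRUB from the live CD against the USB disk than to reinstall. A sketch, assuming GRUB 2 (Ubuntu 9.10's default) and that the USB drive's root partition is /dev/sdb1; the device names are examples worth double-checking with fdisk -l first:

        # from the live CD: mount the installed system, then write GRUB to
        # the USB disk's MBR, pointing it at the mounted /boot
        sudo mount /dev/sdb1 /mnt
        sudo grub-install --root-directory=/mnt /dev/sdb

    It's also worth checking the BIOS boot menu: many boards of that era only list USB hard disks under a separate "USB-HDD" entry, which can look exactly like a missing boot loader.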

  • Installing sqlite gem fails on AWS Linux instance with sqlite-devel libraries installed

    - by Scott
    Hi, I'm running an instance built off ami-595a0a1c. I am trying to install the sqlite3 (or sqlite) gem and it's failing with the error below:

        $ sudo gem install sqlite3
        Building native extensions.  This could take a while...
        ERROR:  Error installing sqlite3:
            ERROR: Failed to build gem native extension.
        /usr/bin/ruby extconf.rb
        checking for sqlite3.h... no
        sqlite3.h is missing. Try 'port install sqlite3 +universal'
        or 'yum install sqlite3-devel' and check your shared library search path (the
        location where your sqlite3 shared library is located).
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of
        necessary libraries and/or headers.  Check the mkmf.log file for more
        details.  You may need configuration options.

        Provided configuration options:
            --with-opt-dir
            --without-opt-dir
            --with-opt-include
            --without-opt-include=${opt-dir}/include
            --with-opt-lib
            --without-opt-lib=${opt-dir}/lib
            --with-make-prog
            --without-make-prog
            --srcdir=.
            --curdir
            --ruby=/usr/bin/ruby
            --with-sqlite3-dir
            --without-sqlite3-dir
            --with-sqlite3-include
            --without-sqlite3-include=${sqlite3-dir}/include
            --with-sqlite3-lib
            --without-sqlite3-lib=${sqlite3-dir}/lib

        Gem files will remain installed in /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3 for inspection.
        Results logged to /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3/ext/sqlite3/gem_make.out

    Typically, this just means you need to install the development libraries and everything is cool. However, I have installed the sqlite-devel packages and still no dice. Since this is the Amazon Linux instance, I'd rather not add more repositories than the ones Amazon provides, if possible. What can I do to get this thing to compile? Thanks for any insight! From a brand-new instance, here's what I've done:

        $ sudo yum install rubygems ruby-devel
        $ sudo gem update --system
        $ sudo gem install rails
        $ rails new app
        $ cd app
        $ rails server
        Could not find gem 'sqlite3 (= 0)' in any of the gem sources listed in your Gemfile.
        $ sudo yum install sqlite-devel
        $ sudo gem install sqlite (or sqlite3 -- same result)
        See breakage above
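
    When the -devel package is installed but mkmf still can't find sqlite3.h, the next step is to tell extconf.rb explicitly where the headers live, using the very options it lists. A sketch, assuming Amazon Linux installs the headers under the /usr prefix (worth confirming with rpm first):

        # confirm where the devel package actually put its files
        rpm -ql sqlite-devel | grep -E 'sqlite3\.h|libsqlite3'
        # then pass the prefix through to extconf.rb (note the bare -- separator
        # between gem's options and the build options)
        sudo gem install sqlite3 -- --with-sqlite3-dir=/usr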

  • MS SQL DTS to SSIS migration error

    - by Manjot
    Hi, I have migrated some DTS packages to SSIS 2005 using the Migration Wizard. When I try to run one, it fails, saying I need a higher version of SSIS, even though the destination SSIS server is on the 9.0.4211 level. I then dug into the package using Business Intelligence Development Studio and saw that one of the package's subtasks is a "Transform Data Task" (the DTS version), and the package fails to run it. The storage location for this DTS task is set to "Embedded in Task"; I didn't touch it. Why didn't the wizard convert this task to an SSIS Data Flow Task? Any help please? Thanks in advance.

  • Running scheduled web scripts in a Windows Server environment

    - by Dan Murfitt
    I'm trying to get a scheduled web script running on a Windows Server, and so far the only way I've managed to automate the process is by using the Task Scheduler to open Internet Explorer with the web address as a parameter. I then need to create a separate task, running just after the first, to close Internet Explorer (otherwise the task never completes). Is there a better way of doing this? I've also managed to run the script by calling the web address through a Telnet connection to the web server (GET /web/address/here), but I haven't found a way of automating that on a scheduled basis. Any ideas appreciated.
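
    A headless HTTP request avoids both the browser and the cleanup task. A sketch using PowerShell's .NET WebClient, which is available on any Windows Server with PowerShell installed (older and more portable than the Invoke-WebRequest cmdlet); the URL is a placeholder:

        # scheduled task action:
        #   powershell.exe -NoProfile -File C:\scripts\hit-url.ps1
        $url = "http://example.com/web/address/here"
        (New-Object System.Net.WebClient).DownloadString($url) | Out-Null

    The process exits as soon as the response arrives, so no second "close it" task is needed, and the task's exit code reflects whether the request threw.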

  • BIOS corrupted? How to proceed? Acer Travelmate 290

    - by dtlussier
    I have an older ACER TravelMate 290 (manufactured in 2002 or 2003), which I recently tried to upgrade to Ubuntu 9.10. After the upgrade it appeared as if I had a problem with my X server configuration: on the first reboot post-install I heard the Ubuntu startup sound but got a black screen. I thought I would reboot again and drop into text mode to troubleshoot the X configuration problem.

    However, something went wrong on that reboot, and since then, when I start the machine I get absolutely nothing except the first hardware check (the HDD light flashes, the CD/DVD drive spins, etc.). Other than that, the screen remains totally black and I have no HDD or processor activity at all. I have tried restarting many times, holding down all kinds of key combinations to try to coax it into the BIOS (if possible), with no luck. I have also tried putting in both a live Linux disc and a Windows install disc, without any luck; with a disc in the drive it will spin for a few seconds and then stop.

    All this has led me to suspect that the BIOS is somehow corrupt (not sure about the right terminology). I have tried putting a new BIOS image and installer program, downloaded from Acer, on a USB key to see whether it will run when I start the machine, but no luck. I'm not sure whether this method of updating/flashing the BIOS works outside of Windows/DOS, as both OSs are mentioned rather ambiguously in the documentation. I have also taken the laptop case apart and inspected the various cards, and cannot find any obviously burned-out components.

    I'm not sure how to proceed at this point, in terms of components to try or how to load a new BIOS image onto the board. Any advice here would be great, especially from those with experience with this particular line of laptops. Thanks!

  • Mac OSX and root login enabled

    - by reza
    All, I am running OSX 10.6.8. I have enabled root login through Directory Utility and assigned a password, but I get an error when I try to ssh root@localhost:

        ssh -v root@localhost
        OpenSSH_5.2p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /Users/rrazavipour-lp/.ssh/config
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to localhost [127.0.0.1] port 22.
        debug1: Connection established.
        debug1: identity file /Users/rrazavipour-lp/.ssh/identity type -1
        debug1: identity file /Users/rrazavipour-lp/.ssh/id_rsa type 1
        debug1: identity file /Users/rrazavipour-lp/.ssh/id_dsa type 2
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.2
        debug1: match: OpenSSH_5.2 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.2
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'localhost' is known and matches the RSA host key.
        debug1: Found key in /Users/rrazavipour-lp/.ssh/known_hosts:47
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Offering public key: /Users/rrazavipour-lp/.ssh/id_dsa
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Trying private key: /Users/rrazavipour-lp/.ssh/identity
        debug1: Offering public key: /Users/rrazavipour-lp/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Next authentication method: keyboard-interactive
        Password:
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: No more authentication methods to try.
        Permission denied (publickey,keyboard-interactive).

    What am I doing wrong? I know I have the password correct.
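
    With the password known to be right, the next place to look is the SSH daemon's own policy: sshd can refuse root logins (or password/keyboard-interactive authentication for root) regardless of what Directory Utility allows. A sketch of what to check on 10.6, assuming the stock config location /etc/sshd_config:

        # both directives must permit what you're attempting
        grep -iE 'permitrootlogin|passwordauthentication|challengeresponse' /etc/sshd_config
        # if needed, set (launchd applies it on the next connection):
        #   PermitRootLogin yes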

  • Linux RAID-0 performance doesn't scale up over 1 GB/s

    - by wazoox
    I have trouble getting the maximum throughput out of my setup. The hardware is as follows:

        dual Quad-Core AMD Opteron(tm) Processor 2376
        16 GB DDR2 ECC RAM
        dual Adaptec 52245 RAID controllers
        48 1-TB SATA drives set up as 2 RAID-6 arrays (256KB stripe) + spares

    Software: plain vanilla 2.6.32.25 kernel, compiled for AMD-64, optimized for NUMA; Debian Lenny userland. Benchmarks run: disktest, bonnie++, dd, etc. All give the same results; no discrepancy here. IO scheduler used: noop. Yeah, no trick here.

    Up until now I basically assumed that striping (RAID 0) several physical devices should increase performance roughly linearly. However, this is not the case here:

    - each RAID array achieves about 780 MB/s write, sustained, and 1 GB/s read, sustained;
    - writing to both RAID arrays simultaneously with two different processes gives 750 + 750 MB/s, and reading from both gives 1 + 1 GB/s;
    - however, when I stripe both arrays together, using either mdadm or lvm, the performance is about 850 MB/s writing and 1.4 GB/s reading -- at least 30% less than expected;
    - running two parallel writer or reader processes against the striped arrays doesn't improve the figures; in fact it degrades performance even further.

    So what's happening here? I basically ruled out bus or memory contention, because when I run dd on both drives simultaneously, aggregate write speed actually reaches 1.5 GB/s and read speed tops 2 GB/s, so it's not the PCIe bus, and I suppose it's not the RAM. It's not the filesystem, because I get exactly the same numbers benchmarking against the raw device or using XFS. And I also get exactly the same performance using either LVM striping or md striping.

    What's wrong? What's preventing a process from reaching the maximum possible throughput? Is Linux striping defective? What other tests could I run?
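
    One variable worth isolating is the chunk size of the outer RAID-0: striping across two arrays that each have a 256KB internal stripe means a small outer chunk will split requests the controllers would rather receive whole. A sketch for sweeping that parameter (device names and the 1024KB figure are examples to test, not a known-good answer):

        # outer RAID-0 across the two controller-backed arrays, with an
        # explicit chunk (in KB) that is a multiple of the inner stripe
        mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=1024 \
            /dev/sda /dev/sdb
        # compare sustained sequential throughput across chunk sizes,
        # bypassing the page cache
        dd if=/dev/zero of=/dev/md0 bs=1M count=100000 oflag=direct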

  • Why do password entries over ssh take so long?

    - by Dean
    When I'm ssh'd into my server, any time I enter my password there's a 40-second delay before the server responds. This occurs when logging in, as well as whenever I run a command via sudo. The delay does not happen when I run su and enter my password, however. Using the -v flag for ssh doesn't show anything during this time, and looking at Wireshark, all traffic between the two machines stops while this is happening. Any idea what's happening, or advice on how to investigate? The server is running Debian squeeze (6.0.4).
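
    A delay that hits login and sudo but not su points at something done per authentication besides the password check itself, since su shares the same PAM password verification but skips the hostname/DNS lookups that sshd and sudo both perform. A sketch of how to narrow it down, assuming root access on the box:

        # watch what sshd blocks on during the pause (look for slow connect()
        # calls, e.g. to port 53, or resolver traffic)
        strace -f -tt -p $(pgrep -o sshd) -e trace=network 2>&1 | tee /tmp/sshd.trace

        # common culprit: sshd reverse-resolving the client; disable it to test
        echo 'UseDNS no' >> /etc/ssh/sshd_config && /etc/init.d/ssh restart

    Since Wireshark shows no traffic between the two machines during the pause, also check /etc/nsswitch.conf and /etc/resolv.conf for an unreachable name source (LDAP, NIS, a dead nameserver) that could be timing out.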

  • How to reset mysql's replication settings completely, without reinstalling it?

    - by user38060
    I set up MySQL replication by adding references to binlogs, relay logs, etc. in my.cnf, restarted mysql, and it worked. Then I wanted to change the setup, so I:

    - deleted all binlog-related files, including log-bin.index;
    - removed the binlog statements from my.cnf and restarted the server (works);
    - ran set master to '', purge master logs since now(), reset slave, stop slave, stop master.

    Now, to set up replication again, I added the binlog statements back. But then I hit this problem when restarting with sudo mysqld (the only way to see mysql's startup errors):

        /usr/sbin/mysqld: File '/etc/mysql/var/log-bin.index' not found (Errcode: 13)

    Because indeed, this file does not exist! (I deleted it while trying to set up the new replication system.) Hmm; if I change the config line to log-bin-index = log-bin.index, I get a different error:

        [ERROR] Can't generate a unique log-filename /etc/mysql/var/bin.(1-999)
        [ERROR] MSYQL_BIN_LOG::open failed to generate new file name.
        [ERROR] Aborting

    The first time I set up replication on this system, I didn't need to create this file. I did the same thing: added references to a previously non-existing file, and mysql created it. Same with the relay logs, etc. I don't know why mysql insists on trying to read the old folder. Should I just reinstall the whole package again? That seems like overkill. My my.cnf:

        [client]
        port   = 3306
        socket = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice   = 0

        [mysqld]
        user    = mysql
        socket  = /var/run/mysqld/mysqld.sock
        port    = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir  = /tmp
        skip-external-locking
        bind-address       = IP
        key_buffer         = 16M
        max_allowed_packet = 16M
        thread_stack       = 192K
        thread_cache_size  = 8
        myisam-recover     = BACKUP
        table_cache        = 64
        sort_buffer        = 64K
        net_buffer_length  = 2K
        query_cache_limit  = 1M
        query_cache_size   = 16M
        slow_query_log_file = /etc/mysql/var/mysql-slow.log
        long_query_time    = 1
        log-queries-not-using-indexes
        expire_logs_days   = 10
        max_binlog_size    = 100M
        server-id          = 3
        log-bin            = /etc/mysql/var/bin.log
        log-slave-updates
        log-bin-index      = /etc/mysql/var/log-bin.index
        log-error          = /etc/mysql/var/error.log
        relay-log          = /etc/mysql/var/relay.log
        relay-log-info-file = /etc/mysql/var/relay-log.info
        relay-log-index    = /etc/mysql/var/relay-log.index
        auto_increment_increment = 10
        auto_increment_offset    = 3
        master-host     = HOST
        master-user     = USER
        master-password = PWD
        replicate-do-db = DBNAME
        collation_server = utf8_unicode_ci
        character_set_server = utf8
        skip-character-set-client-handshake

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]
        #no-auto-rehash

        [myisamchk]
        key_buffer_size  = 16M
        sort_buffer_size = 8M

        [mysqlhotcopy]
        interactive-timeout

        !includedir /etc/mysql/conf.d/

    Update: Changing all the /etc/mysql/var/xxx paths in the binlog & relay log statements to local ones somehow solved the problem. I thought it was AppArmor causing it at first, but when I added /etc/mysql/* rw, to AppArmor's config and restarted it, it still couldn't read the full path.
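
    Errcode 13 is EACCES, permission denied, not "file missing", which is why creating or renaming the file never helped, and on Ubuntu that almost always means the mysqld AppArmor profile. The rule /etc/mysql/* rw, only covers files directly inside /etc/mysql/, not the var/ subdirectory or anything beneath it. A sketch of rules that would cover the whole tree, assuming the stock profile path:

        # add to /etc/apparmor.d/local/usr.sbin.mysqld
        # (or inside the profile in /etc/apparmor.d/usr.sbin.mysqld)
        /etc/mysql/var/ r,
        /etc/mysql/var/** rwk,

        # reload the profile, then restart mysqld
        sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld

    This would also explain the Update: paths under the datadir are already allowed by the stock profile, so moving the logs "local" sidestepped the denial rather than fixing it.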

  • Any reason not to disable the Windows pagefile given enough physical RAM?

    - by Evgeny
    The question of disabling the Windows pagefile has already been discussed quite a bit, for example here and here and here. People continue to upvote answers that say "you should not disable your pagefile even if you have plenty of RAM", but I have yet to see any concrete, verifiable reasons given for this advice. As far as I can see, if you never need to read from the pagefile (because you have enough RAM) then performance could only be worse with it enabled, due to Windows pre-emptively writing to it. At best, performance would be the same. I can't see how it could possibly be improved by writing data you never need to read.

    So my question is: assuming that I have enough physical RAM for everything I do, is there any reason I should not disable the pagefile?

    Let's say the version of Windows is Windows XP x64 SP2 or Windows Server 2003 x64 SP2 (same thing). If it's different for Windows Server 2008 x64, I'd be interested to hear an answer for that as well. I'm looking for specific, objective reasons from good sources, not just opinions. Something like "here are benchmarks done with and without a pagefile and the results were better with a pagefile, even with enough RAM", or "according to this MS KB article, problem X occurs if you disable the pagefile". So far the only reasons I've seen mentioned are:

    - Even if you think you have enough RAM, you might run out. OK, but for the purposes of this question, let's just take it as a given that I have enough. Maybe I only ever read my email and I have 16GB RAM. Or 128GB. Or 1TB. Or whatever -- but it's enough for 100% of what I do, 100% of the time. Another way to think of it is: if I have x MB physical RAM and y MB pagefile and I never run out of RAM in that configuration, would I not be better off, performance-wise, with x+y MB physical RAM and no pagefile?

    - Windows is "used to" having a paging file and might not function as reliably (from "Understanding the Impact of RAM on Overall System Performance"). That's rather vague, and I find it hard to believe, given that MS has provided the option to disable the pagefile.

    - Windows knows what it's doing better than you. No -- it doesn't know that I won't run more programs or load more data, but I do.

  • SSD Fresh Does Not Start

    - by Jim Fell
    I recently installed a new 60GB SSD as my primary hard drive and re-installed Windows 7 Professional 64-bit. I then installed SSD Fresh from Abelssoft to optimize Windows to run on the SSD. It seemed to install okay, but when I try to run the utility, its splash screen appears briefly before it quietly closes. No errors are displayed; the utility just fails to launch. I have run SSD Fresh on another SSD-equipped Windows 7 Pro x64 computer in the past without any problems. Does anyone know what might be preventing the program from running? I tried shutting down the Spybot resident and disabling the firewall and virus scanner, with no luck. I also tried running the tool as administrator; I even tried reinstalling it, running the installer as administrator. No luck. Every time I try to launch the program, the Event Viewer logs this same set of errors:

        Error 4/2/2012 11:35:44 PM Application Error 1000 (100)
        Error 4/2/2012 11:35:43 PM .NET Runtime    1026 None
        Error 4/2/2012 11:35:39 PM SideBySide        59 None
        (the SideBySide line is repeated many times)

    For those who are interested, here is my system configuration:

        ASRock M3A770DE AM3 AMD 770 ATX AMD Motherboard
        AMD Athlon II X3 455 Rana 3.3GHz Socket AM3 95W Triple-Core Desktop Processor ADX455WFGMBOX
        G.SKILL Value Series 8GB (2 x 4GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10600) Desktop Memory Model F3-10600CL9D-8GBNT
        Mushkin Enhanced Chronos Deluxe MKNSSDCR60GB-DX 2.5" 60GB SATA III Synchronous MLC Internal Solid State Drive (SSD) (Primary/Boot HD)
        Western Digital Caviar Blue RFHWD1600AAJS 160GB 7200 RPM SATA 3.0Gb/s 3.5" Internal Hard Drive - Bare Drive (Secondary HD)
        Sony Optiarc CD/DVD Burner Black SATA Model AD-7261S-0B LightScribe Support
        RAIDMAX RX-850AE 850W ATX12V v2.3 / EPS12V SLI Certified CrossFire Ready 80 PLUS GOLD Certified Modular Active PFC Power Supply
        ASUS HD7850-DC2-2GD5 Radeon HD 7850 2GB 256-bit GDDR5 PCI Express 3.0 x16 HDCP Ready CrossFireX Support Video Card
        Asus ML228H 21.5" Full HD LED BackLight LED Monitor Slim Design (x3)
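
    SideBySide event 59 means a side-by-side (WinSxS) assembly activation failed, and followed by a .NET Runtime 1026 crash it usually points at a missing runtime dependency (often a Visual C++ redistributable) rather than anything SSD-related. The event details name the exact assembly, and Windows 7's built-in sxstrace tool will trace the failed activation. A sketch, from an elevated command prompt:

        :: start recording, reproduce the SSD Fresh launch, then press Enter
        sxstrace Trace -logfile:C:\sxs.etl
        :: convert the trace to readable text and look for the failing assembly
        sxstrace Parse -logfile:C:\sxs.etl -outfile:C:\sxs.txt

    If the trace names a Microsoft.VC90 or Microsoft.VC100 runtime, installing the matching Visual C++ Redistributable is the usual fix.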

  • Problem removing registry key with .reg file

    - by TMRW
    Didn't notice this being asked, so here I am. I have a problem with a specific registry value:

        NvCplDaemon"="RUNDLL32.EXE C:\\Windows\\system32\\NvCpl.dll,NvStartup"

    The problem is that I have tried many variations of the reg file, like:

        Windows Registry Editor Version 5.00
        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run
        "NvCplDaemon"=-"RUNDLL32.EXE C:\\Windows\\system32\\NvCpl.dll,NvStartup"

    and

        Windows Registry Editor Version 5.00
        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run]
        "NvCplDaemon"=-"RUNDLL32.EXE C:\\Windows\\system32\\NvCpl.dll,NvStartup"

    They all seemingly complete, but the value remains. It is not locked or anything; I can delete and recreate it manually any time. I'm guessing there is some small spelling error in my file, because I think I have followed the MS instructions: http://support.microsoft.com/kb/310516 This is how it looks in the Registry: (screenshot). Someone?
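
    Per the KB 310516 syntax, a value is deleted by putting a bare hyphen after the equals sign and nothing else; the trailing RUNDLL32 data after the "-" makes the line malformed, which regedit appears to skip silently. A corrected sketch:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run]
        "NvCplDaemon"=-

    Note also that on 64-bit Windows a 32-bit view of this key lives under Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Run, so the value may need deleting there as well.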

  • No dual cpu support for VirtualBox with a CPU that doesn't support multicore?

    - by djangofan
    With VMware it works fine and I can run multiple cores on a VMware image. With Sun VirtualBox I can only run 1 CPU in an image. It's annoying. Why does Sun VirtualBox not work the same as VMware in this respect? My CPU is:

        XEON 3.00GHz Intel 90nm 2MB Cache QUAD CPU x14 Socket 604 mPGA
        Family 15 Model 4(04) Stepping 3 Revision 05
        MMX SSE3 XD

    SIV.exe tells me:

        No virtual machine extensions
        x86 with 64-bit support, NO IA64 support
        MPS but with NO MCP
        2 physical processors, 2 cores, 4 logical processors
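
    "No virtual machine extensions" is the key line: VirtualBox only offers guest SMP when hardware virtualization (Intel VT-x / AMD-V) is present and enabled, whereas VMware can schedule multiple virtual CPUs in software without it. On VT-x-capable hardware you would enable multiple CPUs like this (a sketch; "MyVM" is a placeholder name):

        VBoxManage modifyvm "MyVM" --cpus 2 --ioapic on --hwvirtex on

    On a VT-x-less Xeon like this one, the CPU-count setting stays unavailable and a single virtual CPU is the ceiling.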

  • PSQL 64bit driver error

    - by Alex Holsgrove
    I have an Ubuntu 12.04 64-bit server set up under Hyper-V. I have installed the Pervasive 64-bit SQL drivers so that a stock-updater script can run daily (it updates an external MySQL database from another local server running Exchequer software / a PSQL database). These drivers seem to conflict, as I found out when trying to run any apt-get commands:

        apt-get update
        apt-get: /usr/local/psql/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by apt-get)
        apt-get: /usr/local/psql/lib64/libstdc++.so.6: version `GLIBCXX_3.4.15' not found (required by apt-get)
        apt-get: /usr/local/psql/lib64/libstdc++.so.6: version `GLIBCXX_3.4.11' not found (required by apt-get)
        apt-get: /usr/local/psql/lib64/libstdc++.so.6: version `GLIBCXX_3.4.11' not found (required by /usr/lib/x86_64-linux-gnu/libapt-pkg.so.4.12)
        apt-get: /usr/local/psql/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by /usr/lib/x86_64-linux-gnu/libapt-pkg.so.4.12)
        apt-get: /usr/local/psql/lib64/libstdc++.so.6: version `GLIBCXX_3.4.15' not found (required by /usr/lib/x86_64-linux-gnu/libapt-pkg.so.4.12)

    Any help would be great.
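
    The errors show apt-get being forced to load the Pervasive copy of libstdc++.so.6, which is older than the one the system was built against; the driver install has evidently put /usr/local/psql/lib64 onto the global library search path. A sketch of how to find and undo that, assuming a stock Ubuntu layout (the psql.conf filename is a guess):

        # find where the Pervasive lib dir got onto the loader path
        grep -r 'psql' /etc/ld.so.conf /etc/ld.so.conf.d/ /etc/environment /etc/profile.d/ 2>/dev/null
        # if it's in e.g. /etc/ld.so.conf.d/psql.conf, remove that entry,
        # then rebuild the loader cache
        sudo ldconfig

    A gentler alternative is to leave the loader configuration alone and export LD_LIBRARY_PATH=/usr/local/psql/lib64 only inside the stock-updater script, so the rest of the system never sees Pervasive's libstdc++.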
