Search Results

Search found 55134 results on 2206 pages for 'argument error'.

  • OpenLDAP replication fails, "syncrepl_entry: rid=666 be_modify failed (20)"

    - by Pavel
    I've configured a second host to replicate the main LDAP server via syncrepl in the slapd.conf: syncrepl rid=666 provider=ldaps://my-main-server.com type=refreshAndPersist searchBase="dc=Staff,dc=my-main-server,dc=com" filter="(objectClass=*)" scope=sub schemachecking=off bindmethod=simple binddn="cn=repadmin,dc=my-main-server,dc=com" credentials=mypassword When I restart slapd, it writes to /var/log/debug Jun 11 15:48:33 cluster-mn-04 slapd[29441]: @(#) $OpenLDAP: slapd 2.4.9 (Mar 31 2009 07:18:37) $ ^Ibuildd@yellow:/build/buildd/openldap2.3-2.4.9/debian/build/servers/slapd Jun 11 15:48:34 cluster-mn-04 slapd[29442]: slapd starting Jun 11 15:48:34 cluster-mn-04 slapd[29442]: null_callback : error code 0x14 Jun 11 15:48:34 cluster-mn-04 slapd[29442]: syncrepl_entry: rid=666 be_modify failed (20) Jun 11 15:48:34 cluster-mn-04 slapd[29442]: do_syncrepl: rid=666 quitting I've looked into the sources for the return code and found only #define LDAP_TYPE_OR_VALUE_EXISTS 0x14 in include/ldap.h. Anyway, I don't quite get what the error message means. Can you help me debug this problem and figure out why the LDAP replication doesn't work? I've managed to load a "manual" copy into the database via slapcat and slapadd, but I'd like to sync automatically. UPDATE: "Solved" by removing /var/lib/ldap/* and re-importing the database with slapadd.
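
    Return code 20 (LDAP_TYPE_OR_VALUE_EXISTS) means the consumer tried to write an attribute value that the local entry already has. A minimal diagnostic sketch, assuming the bind DN and base DN from the slapd.conf above, is to pull the same entry from provider and consumer and diff them:

        # Dump the replication base from the provider and from the local consumer,
        # then diff; a duplicated attribute value on the consumer side is what
        # normally triggers "be_modify failed (20)".
        ldapsearch -x -H ldaps://my-main-server.com \
            -D "cn=repadmin,dc=my-main-server,dc=com" -w mypassword \
            -b "dc=Staff,dc=my-main-server,dc=com" -s base > provider.ldif
        ldapsearch -x -H ldap://localhost \
            -D "cn=repadmin,dc=my-main-server,dc=com" -w mypassword \
            -b "dc=Staff,dc=my-main-server,dc=com" -s base > consumer.ldif
        diff provider.ldif consumer.ldif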

    Read the article

  • How to Re-install an App that Shows up in the Appstore as 'Update' Instead of 'Buy App'

    - by Craig Reville
    So long story short: I dropped the wrong app into CleanMyMac and I hit 'cancel', but it was too late by that point. I rebooted and the App Store said it had an update; when I opened the App Store it was showing an update for the app I had just uninstalled. I tried clicking 'update' but it gives me an error saying it's unable to install after 'downloading'. When I try to go into 'purchased apps' it shows the app as uninstalled, so I click 'install' and I get an error saying it's already installed. I'm running OS X Lion, latest version, fully updated, and the MacBook Pro is only a few months old. I tried searching through the entire system to remove all traces of the app; after rebooting, the App Store no longer shows the app and no longer shows the update, but on the apps page it still says 'Update'. I tried reinstalling the app from the desktop, outside of the App Store, and again it says the app is 'already installed'. So after reading more about Lion I found an article that spoke about the 'BundleID' being the thing that tells the App Store what's installed and what needs updating, however I can't find the location of where the BundleID would be. Any thoughts? I've tried CCleaner, AppCleaner etc. and none of them show the app, mainly because it is uninstalled. Update I've spoken to Apple Support who confirmed that there is a file in the system that connects separately to tell the system if there are updates available, however they declined to inform me of any further details. Apple also referred me from technical support to iTunes App Store support as opposed to Mac App Store support, and from there I have been referred to AppleCare, who are currently 'investigating' this issue. Hopefully there will be a fix that's simple to implement for people having similar issues; this appears to be a more common issue than I previously thought.
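
    For reference, the bundle identifier isn't kept in one central list you have to hunt down; each app carries its own in Contents/Info.plist inside the .app bundle. A small sketch (the app name below is a placeholder, not the app from the question):

        # Read the bundle identifier straight out of the app bundle
        # ("SomeApp" is a placeholder for the real application name).
        defaults read "/Applications/SomeApp.app/Contents/Info" CFBundleIdentifier

        # Mac App Store installs also keep a purchase receipt inside the bundle,
        # which is part of what the store checks when deciding install/update:
        ls "/Applications/SomeApp.app/Contents/_MASReceipt"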

    Read the article

  • Why is 32-bit-mode required in IIS7.5 for my app?

    - by Jonas Lincoln
    I have a .net4 web application running in a 64 bits 2008 server. I can only get it to run when I set the app pool to Enable 32-bits application to true. All dlls are compiled for .net4 (verified with corflags.exe). How can I figure out why Enable 32-bit is required? The error message from the event log when starting as a 64-bit app-pool Event code: 3008 Event message: A configuration error has occurred. Event time: 2011-03-16 08:55:46 Event time (UTC): 2011-03-16 07:55:46 Event ID: 3c209480ff1c4495bede2e26924be46a Event sequence: 1 Event occurrence: 1 Event detail code: 0 Application information: Application domain: removed Trust level: Full Application Virtual Path: removed Application Path: removed Machine name: NMLABB-EXT01 Process information: Process ID: 4324 Process name: w3wp.exe Account name: removed Exception information: Exception type: ConfigurationErrorsException Exception message: Could not load file or assembly 'System.Data' or one of its dependencies. An attempt was made to load a program with an incorrect format. at System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) at System.Web.Configuration.CompilationSection.LoadAllAssembliesFromAppDomainBinDirectory() at System.Web.Configuration.AssemblyInfo.get_AssemblyInternal() at System.Web.Compilation.BuildManager.GetReferencedAssemblies(CompilationSection compConfig) at System.Web.Compilation.BuildManager.CallPreStartInitMethods() at System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters, PolicyLevel policyLevel, Exception appDomainCreationException) Could not load file or assembly 'System.Data' or one of its dependencies. An attempt was made to load a program with an incorrect format. 
at System.Reflection.RuntimeAssembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks) at System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection, Boolean suppressSecurityChecks) at System.Reflection.RuntimeAssembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) at System.Reflection.Assembly.Load(String assemblyString) at System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) Request information: Request URL: "our url" Request path: "url" User host address: ip-adddress User: Is authenticated: False Authentication Type: Thread account name: "app-pool" Thread information: Thread ID: 6 Thread account name: "app-pool" Is impersonating: False Stack trace: at System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) at System.Web.Configuration.CompilationSection.LoadAllAssembliesFromAppDomainBinDirectory() at System.Web.Configuration.AssemblyInfo.get_AssemblyInternal() at System.Web.Compilation.BuildManager.GetReferencedAssemblies(CompilationSection compConfig) at System.Web.Compilation.BuildManager.CallPreStartInitMethods() at System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters, PolicyLevel policyLevel, Exception appDomainCreationException) Custom event details:

    Read the article

  • Completely remove and freshly install MySQL on XP?

    - by Corey Ogburn
    I have read this question and it has not provided a solution, and I have even attempted much more. I've uninstalled MySQL 5.5.18 and deleted: C:\Program Files\MySql C:\Documents and Settings\All Users\Application Data\MySql After uninstalling, I restart the computer. When I reinstall, in the MySQL Server Instance Configuration Wizard I leave everything at the defaults except: I add a firewall exception I check Launch MySQL Server Automatically I check Include BIN directory in windows path Enable root access from remote machines (I'll lock that down later, just debugging for now; I have also tried installing without this option to no avail) I've tried Typical and Complete while installing, as well as with and without strict mode. No combination shows a difference. After all this, it cannot Apply Security Settings and I get a 10061 error (it also said error number 2003), and this article didn't help. I've tried everything I can to completely uninstall and successfully reinstall so I can start from scratch. I've uninstalled and reinstalled about a dozen times with minor changes (including turning off the firewall at times), each time deleting the above folders and any relevant registry entries, with no success. Note that by "success" I mean applying security settings and getting a working remote connection. I can connect locally every time, but it's remotely that counts. I have tried to look for exterior problems such as port forwarding in the router and (even though the installer should add it) I do double-check the firewall settings, which have always allowed the default port. I'm out of ideas.
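
    Since local connections work every time, one cheap way to separate a network/firewall problem from a MySQL privilege problem is to attempt the remote connection from another machine with the command-line client (the hostname below is a placeholder):

        # If the TCP connection itself times out or is refused (the 2003/10061
        # territory), the problem is the firewall or port forwarding; if it
        # connects but access is denied, it is a MySQL grant/privilege problem.
        mysql --host=xp-host --port=3306 --user=root -p -e "SELECT 1;"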

    Read the article

  • SSL timeout on some sites, across all browsers, on Mac OS X Snow Leopard

    - by dansays
    For the past several weeks, I've been receiving "Error 7 (net::ERR_TIMED_OUT): The operation timed out" when I attempt to connect to either Twitter or PayPal via SSL. I get this specific error in Google Chrome, but the same problem occurs in both Safari and Firefox. Other sites work fine, and other computers on my network can access these two sites. I have no firewall settings that would prevent me from accessing these sites over port 443. I notice that both Twitter and PayPal have "Verisign Class 3 Extended Validation SSL CA" certificates. It is unclear whether this is related to the problem. In an effort to troubleshoot, I attempted to open the test sites referenced on Verisign's root certificate support page, which worked fine. Just to be sure, I downloaded and installed the root package file and installed all included Verisign certificates. No joy. I feel like I've hit a dead end. Any ideas? Update the first: I also cannot connect to FedEx.com, which also has a Verisign Class 3 Extended Validation cert. Update the second: Aaaaaaand it fixed itself. I did nothing. Or, I did something that worked, but in a delayed fashion. Frustrating, but a win is a win. I'll take it.
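
    A browser-independent way to watch where the connection actually stalls is to drive the handshake from the command line; this is a diagnostic sketch, not a fix:

        # If this hangs before the certificate chain is printed, the problem is
        # below TLS (TCP, MTU, filtering); if the chain prints quickly, the stall
        # is more likely in certificate validation on the client side.
        openssl s_client -connect twitter.com:443

        # Raw TCP reachability and timing for comparison:
        time nc -z twitter.com 443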

    Read the article

  • Why can't I bind to 127.0.0.1 on Mac OS X?

    - by Noah Lavine
    Hello, I'm trying to set up a simple web server on Mac OS X, and I keep getting an error when I run bind. Here's what I'm running (this transcript uses GNU Guile, but just as a convenient interface to posix). (define addr (inet-aton "127.0.0.1")) ; get internal representation of 127.0.0.1 (define sockaddr (make-socket-address AF_INET addr 8080)) ; make a struct sockaddr (define sock (socket PF_INET SOCK_STREAM 0)) ; make a socket (bind sock sockaddr) ; bind the socket to the address That gives me the error In procedure bind: can't assign requested address. So I tried it again allowing any address. (define anyaddr (make-socket-address AF_INET INADDR_ANY 8080)) ; allow any address (bind sock anyaddr) And that works fine. But it's weird, because ifconfig lo0 says lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384 inet6 ::1 prefixlen 128 inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 inet 127.0.0.1 netmask 0xff000000 So the loopback device is assigned to 127.0.0.1. So my question is, why can't I bind to that address? Thanks. Update: the output of route get 127.0.0.1 is route to: localhost destination: localhost interface: lo0 flags: <UP,HOST,DONE,LOCAL> recvpipe sendpipe ssthresh rtt,msec rttvar hopcount mtu expire 49152 49152 0 0 0 0 16384 0
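
    A quick way to rule the OS in or out, independent of Guile: try to listen on the same address and port with netcat. If that works, the loopback interface is fine and the suspect is the address value handed to make-socket-address (byte order is a common trap there; trying the INADDR_LOOPBACK constant instead of the inet-aton result is a cheap experiment). A hedged sketch; netcat syntax varies slightly between variants:

        # Terminal 1: listen on the same address/port outside of Guile.
        nc -l 127.0.0.1 8080

        # Terminal 2: if this connects, binding to 127.0.0.1:8080 is allowed by
        # the OS and the problem is in how the address integer is constructed.
        nc 127.0.0.1 8080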

    Read the article

  • Which events specifically cause Windows 2008 to mark a SAN volume offline?

    - by Jeremy
    I am searching for specific criteria/events that will cause Windows 2008 to mark a SAN volume as offline in disk management, even though it is connected to that SAN volume via FC or iSCSI. Microsoft states that "A dynamic disk may become Offline if it is corrupted or intermittently unavailable. A dynamic disk may also become Offline if you attempt to import a foreign (dynamic) disk and the import fails. An error icon appears on the Offline disk. Only dynamic disks display the Missing or Offline status." I am specifically wondering if, on the SAN, changing the path to the disk (such as the disk being presented to the host via a different iSCSI target IQN or a different LUN #) would cause a volume to be offlined in disk management. Thanks! Edit: I have already found two reasons why a disk might be set offline, disk signature collisions and the SAN disk policy. Bounty would be awarded to someone who can find further documented reasons related to changes in the volume's path. Disk signature collisions: http://blogs.technet.com/b/markrussinovich/archive/2011/11/08/3463572.aspx SAN disk policy: http://jeffwouters.nl/index.php/2011/06/disk-offline-with-error-the-disk-is-offline-because-of-a-policy-set-by-an-administrator/

    Read the article

  • Subdomains not working with virtual hosts on apache2 ubuntu

    - by cy834sh4rk
    I'm trying to set up a subdomain on my ec2 account but can't figure out what's going on. I've looked for a few hours and haven't been able to find an answer :-/ I'm trying to set up a subdomain using virtual hosts but no matter what I try the browser can't find the subdomain :-( I have the following vhosts files set up: apache2/sites-available/mysite (this site currently works) <VirtualHost *:80> ServerName mysite.com ServerAdmin webmaster@localhost DocumentRoot /home/sites/mysite <Directory /home/sites/mysite> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/mysite-error.log LogLevel warn CustomLog ${APACHE_LOG_DIR}/mysite-access.log combined </VirtualHost> apache2/sites-available/red (this is the subdomain I'm trying to set up) <VirtualHost *:80> ServerName red.mysite.com ServerAdmin webmaster@localhost DocumentRoot /var/www/red <Directory /var/www/red> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/red-error.log LogLevel warn CustomLog ${APACHE_LOG_DIR}/red-access.log combined </VirtualHost> Apache mod_rewrite is enabled. I've enabled both sites using a2ensite and I make sure I restart apache every time I make a change. /etc/hosts 127.0.0.1 localhost 127.0.0.1 mysite.com 127.0.0.1 red.mysite.com Any help would be appreciated. Thanks!
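
    Two quick checks that separate a DNS problem from an Apache vhost problem (the /etc/hosts entries above only affect lookups made on the server itself, not on the browser's machine); the domain names are the placeholders from the question:

        # 1) Does the subdomain resolve publicly to the EC2 instance at all?
        dig +short red.mysite.com

        # 2) On the server: does Apache hand back the right vhost content when
        #    the Host header says red.mysite.com?
        curl -s -H "Host: red.mysite.com" http://127.0.0.1/ | head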

    Read the article

  • All virtualhosts serving Apache default files

    - by tj111
    I'm trying to configure Apache as an in-network webserver, and am using the sites-available/sites-enabled feature as opposed to just static vhost files. I set up a couple of VirtualHosts, all with a unique DocumentRoot, however requests for all the VirtualHosts just serve up the "It's Working!" default file. I can't for the life of me figure out why it won't serve the content out of the correct directory. Here's the contents of the virtualhost directive files, let me know if I need to post more. default (note that apache renames this to 000-default in sites-enabled, so it's not an ordering issue) NameVirtualHost *:80 ServerName emp <VirtualHost *:80> ServerAdmin webmaster@localhost ServerName emp DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> billmed <VirtualHost *:80> ServerName billmed.emp ServerRoot /home/empression/Projects/billmed/web/httpdocs <Directory "/home/empression/Projects/billmed/web/httpdocs"> Order Allow,Deny Allow from All </Directory> </VirtualHost> Note that I have DNS zones for both emp and billmed.emp, as well as entries in /etc/hosts. My ultimate goal is to set up this machine as an in-house webserver with a custom tld (emp), but progress has been pretty slow.
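
    One detail worth noting for the billmed vhost: ServerRoot is a server-wide directive (where Apache's own configuration lives), not a per-vhost content path; the content directory for a vhost is set with DocumentRoot. A quick way to see which vhost and document root Apache actually selects for each name:

        # Show the vhosts Apache has loaded, which one is the default for *:80,
        # and the file each definition came from:
        apache2ctl -S

        # Confirm each enabled site actually defines a DocumentRoot:
        grep -R "DocumentRoot\|ServerRoot" /etc/apache2/sites-enabled/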

    Read the article

  • EFS Remote Encryption

    - by Apoulet
    We have been trying to set up EFS across our domain. Unfortunately, reading/writing files over a network share does not work; we get an "Access Denied" error. Another worrying fact is that I managed to get it working for 1 machine but no other would work. The machines are all Windows 2008 R2, running as VMs under an ESXi host. According to http://technet.microsoft.com/en-us/library/bb457116.aspx#EHAA we set up the involved machines to be trusted for delegation. The users are not restricted and can be trusted for delegation. The users have logged in on both sides and can read/write encrypted files without issues locally. I enabled Kerberos logging in the registry and these are the relevant logs that I get on the machine that has the encrypted files. For every certificate that the user possesses (only the Key Name changes): Event ID 5058: Audit Success, "Other System Events" Key file operation. Subject: Security ID: {MyDOMAIN}\{MyID} Account Name: {MyID} Account Domain: {MyDOMAIN} Logon ID: 0xbXXXXXXX Cryptographic Parameters: Provider Name: Microsoft Software Key Storage Provider Algorithm Name: Not Available. Key Name: {CE885431-9B4F-47C2-8415-2D766B999999} Key Type: User key. Key File Operation Information: File Path: C:\Users\{MyID}\AppData\Roaming\Microsoft\Crypto\RSA\S-1-5-21-4585646465656-260371901-2912106767-1207\66099999999991e891f187e791277da03d_dfe9ecd8-31c4-4b0f-9b57-6fd3cab90760 Operation: Read persisted key from file. Return Code: 0x0 Event ID 5061: Audit Failure, "System Integrity" Cryptographic operation. Subject: Security ID: {MyDOMAIN}\{MyID} Account Name: {MyID} Account Domain: {MyDOMAIN} Logon ID: 0xbXXXXXXX Cryptographic Parameters: Provider Name: Microsoft Software Key Storage Provider Algorithm Name: RSA Key Name: {CE885431-9B4F-47C2-8415-2D766B999999} Key Type: User key. Cryptographic Operation: Operation: Open Key. Return Code: 0x8009000b Could this be related to this error from the CryptAcquireContext function? NTE_BAD_KEY_STATE (0x8009000BL): "The user password has changed since the private keys were encrypted." The problem is that the users I am using at the moment cannot change their password.

    Read the article

  • SFTP, Chroot problems on Redhat

    - by Curtis_w
    I'm having problems setting up SFTP with a ChrootDirectory. I've done an equivalent setup on other distros, but for some reason I cannot get it to work on a Red Hat AMI. The changes to my sshd_config file are: Subsystem sftp internal-sftp Match Group ftponly PasswordAuthentication yes X11Forwarding no ChrootDirectory %h ForceCommand internal-sftp AllowTcpForwarding no The homes of the users concerned are at /home/user, owned by root. After connecting with a user in the ftponly group, I'm dropped into / without permissions for anything, and am unable to do anything. sftp bob@localhost Connecting to localhost... bob@localhost's password: sftp> pwd Remote working directory: / I can connect normally with users not in the ftponly group. OpenSSH version 5.3. I've experimented with different permissions, as well as having users own their own home directory (gives a Write failed: Broken pipe error), and so far, nothing has seemed to work. I'm sure it's a permissions error, or something equally as trivial, but at this point my eyes are beginning to glaze over, and any help would be greatly appreciated. EDIT: James and Madhatter, thanks for clarifying. I was confused by chroot dropping me in /... just didn't think through it properly. I've added the appropriate directories and permissions to get read access. One other key part was enabling write access to chrooted homes with setsebool -P ssh_chroot_rw_homedirs on. I think I'm all set now. Thanks for the help.
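
    For reference, the layout sshd expects for ChrootDirectory (which the poster eventually arrived at) is a root-owned, non-group-writable chroot path with a user-owned subdirectory for actual writes. A sketch with a placeholder username:

        # Every path component of the chroot must be owned by root and must not be
        # group- or world-writable, otherwise sshd drops the session
        # ("bob" is a placeholder user in the ftponly group):
        chown root:root /home/bob
        chmod 755 /home/bob
        mkdir -p /home/bob/upload
        chown bob:ftponly /home/bob/upload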

    Read the article

  • Eclipse on Ubuntu: Rectangles instead of Strings and some Java methods and classes

    - by Claus Hausberger
    After upgrading from Ubuntu 9.04 to 11.04 (new installation), I have weird problems with the Eclipse editor. With the Eclipse PyDev plugin, whenever I type single-quoted strings like 'bla', they appear as rectangles (both the quotes as well as the string). First I thought this was a problem with the PyDev plugin, but it also happens with the Java and Scala plugins. With Java, it happens, for example, when typing System.out.println("bla"), and then "out" is shown as rectangles only. What's weird is that for about half a second I see "System.out.println" and then the editor changes it to System.[][][].println (not literally [], which I'm using here as a stand-in; it is shown as rectangles). This is very weird. I've never had this before with any Ubuntu, Java or Eclipse version. Currently, I use: Ubuntu 11.04, Eclipse 3.6, Java 1.6.0_25. The latest plugins for Python (2.1) and Scala (beta 5) were used. Eclipse and the Ubuntu Terminal are set to UTF-8. The problem also happens when using KDE instead of Gnome. I doubt it has anything to do with Java, as I use the same versions on older Ubuntu installations (10.04, 9.10, etc.) at work. It does not happen with NetBeans. But I once saw an error dialog from the Update Manager where there were some rectangles in the error widget. Maybe this is the same problem. Any ideas what could be wrong here and how to fix this? Eclipse is unusable, but I need this for work and also for Scala and Python (the Eclipse plugins for those are very good now). Claus
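
    Rectangles in place of characters usually mean the font being used has no glyph for them, which points at a fontconfig/font installation problem rather than at Eclipse itself. A hedged check (the package name is a guess for Ubuntu 11.04):

        # Which font does fontconfig actually resolve "monospace" to, and are the
        # common programming fonts installed at all?
        fc-match monospace
        fc-list | grep -i "dejavu\|liberation"

        # If the DejaVu family is missing, installing it and rebuilding the font
        # cache is a cheap experiment (package name is an assumption):
        sudo apt-get install ttf-dejavu
        fc-cache -f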

    Read the article

  • SATA Driver for Acer Aspire One D257

    - by Robert Niestroj
    I have an Acer Aspire One D257. The hard disk in this netbook is defective, so I bought a new one. Now I want to reinstall Windows 7. I'm using an external DVD drive plugged into USB. The Windows 7 DVD starts, Win7 setup is starting, and when it comes to the hard drive options it says that no drive was detected and I should try searching for drivers. It shows me this window: [screenshot from the web] Now I can't find the right drivers for this netbook to continue with the installation. The laptop has the newest BIOS (1.15), and it is reset to factory default settings except that I enabled the Boot Menu prompt with F12. From the Acer support website I've downloaded the SATA AHCI driver and the chipset driver. I unpacked both to a USB flash drive in separate folders. When I select the SATA AHCI driver it does not find any drivers. When I uncheck the checkbox "Hide drivers that are not compatible with hardware on this computer" it shows one driver: Acer HWID (path_to\1.inf). When I continue with this driver I get an error message that says something like: No new devices found. Check if the driver files are on the installation disk. When I point it at the chipset driver it sees a lot more drivers. When I uncheck the checkbox "Hide drivers that are not compatible with hardware on this computer" it shows some drivers: Intel N10 Family DMI Bridge Intel N10/ICH7 Family PCI Express Root Port Intel N10/ICH7 SMBUS Controller Intel N10/ICH7 Family USB Universal Host Controller Intel N10/ICH7 Family USB2 Enhanced Host Controller Intel N10/ICH7 Family Interface LPC Controller When I uncheck this checkbox I get a lot more drivers, and some SATA drivers, but they also do not work. I get the same error message as before. Can someone help me find a driver that should work, or am I doing something else wrong?

    Read the article

  • Problems compiling coreutils-8.5 on Solaris 5.10 on Intel platform

    - by PP
    I am having trouble compiling coreutils-8.5 on Solaris 5.10 on the Intel platform using cc. Firstly I had the following error during ./configure: checking whether <wchar.h> uses 'inline' correctly... no configure: error: <wchar.h> cannot be used with this compiler (/tool/sunstudio12.1/bin/cc -xc99=all -g -D_REENTRANT). This seemed similar to the problem in this question. The solution was to edit configure and replace the reference to -xc99=all with -xc99=all,no_lib. This permitted configure to complete. Then I ran /usr/sfw/bin/gmake and it progressed until I received the following message: Making all in src gmake[2]: Entering directory `/home/peterp/src/coreutils-8.5/src' gmake all-am gmake[3]: Entering directory `/home/peterp/src/coreutils-8.5/src' CCLD chroot Undefined first referenced symbol in file eaccess ../lib/libcoreutils.a(euidaccess.o) ld: fatal: Symbol referencing errors. No output written to chroot What could cause this problem? PS I was only compiling coreutils because I wanted colour ls.
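
    The link failure says gnulib's euidaccess.o calls eaccess(), which Solaris 10's libc does not appear to export (it is a glibc/BSD extension), suggesting configure concluded the function was available when it is not. A small check to confirm where the unresolved reference comes from:

        # Confirm which object still references eaccess:
        /usr/ccs/bin/nm ../lib/libcoreutils.a | grep eaccess

        # Check whether the system libc exports eaccess at all:
        /usr/ccs/bin/nm /lib/libc.so.1 | grep eaccess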

    Read the article

  • rsnapshot - not correctly archiving mysql databases

    - by Tiffany Walker
    My rsnapshot configuration: snapshot_root /.snapshots/ backup /home/user localhost/ backup_script /usr/local/backup_mysql.sh localhost/mysql/ Using this file: NOW=$(date +"%m-%d-%Y") # mm-dd-yyyy format FILE="" # used in a loop ### Server Setup ### #* MySQL login user name *# MUSER="root" #* MySQL login PASSWORD name *# MPASS="YOUR-PASSWORD" #* MySQL login HOST name *# MHOST="127.0.0.1" #* MySQL binaries *# MYSQL="$(which mysql)" MYSQLDUMP="$(which mysqldump)" GZIP="$(which gzip)" # get all database listing DBS="$($MYSQL -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')" # start to dump database one by one for db in $DBS do FILE=$BAK/mysql-$db.$NOW-$(date +"%T").gz # gzip compression for each backup file $MYSQLDUMP --single-transaction -u $MUSER -h $MHOST -p$MPASS $db | $GZIP -9 > $FILE done It dumps the databases under / I then tried with the following: http://bash.cyberciti.biz/backup/rsnapshot-remote-mysql-backup-shell-script/ I got: rsnapshot hourly ---------------------------------------------------------------------------- rsnapshot encountered an error! The program was invoked with these options: /usr/bin/rsnapshot hourly ---------------------------------------------------------------------------- ERROR: backup_script /usr/local/backup_mysql.sh returned 1 WARNING: Rolling back "localhost/mysql/" ls -la /.snapshots/hourly.0/localhost/mysql total 8 drwxr-xr-x 2 root root 4096 Nov 23 17:43 ./ drwxr-xr-x 4 root root 4096 Nov 23 18:20 ../ What exactly am I doing wrong? EDIT: # /usr/local/backup_mysql.sh *** Dumping MySQL Database *** Database> information_schema..cphulkd..eximstats..horde..leechprotect..logaholicDB_ns1..modsec..mysql..performance_schema..roundcube..test.. *** Backup done [ files wrote to /.snapshots/tmp/mysql] *** root@ns1 [~]# ls -la /.snapshots/tmp/mysql total 8040 drwxr-xr-x 2 root root 4096 Nov 23 18:41 ./ drwxr-xr-x 3 root root 4096 Nov 23 18:41 ../ -rw-r--r-- 1 root root 1409 Nov 23 18:41 cphulkd.18_41_45pm.gz -rw-r--r-- 1 root root 113522 Nov 23 18:41 eximstats.18_41_45pm.gz -rw-r--r-- 1 root root 4583 Nov 23 18:41 horde.18_41_45pm.gz -rw-r--r-- 1 root root 71757 Nov 23 18:41 information_schema.18_41_45pm.gz -rw-r--r-- 1 root root 692 Nov 23 18:41 leechprotect.18_41_45pm.gz -rw-r--r-- 1 root root 2603 Nov 23 18:41 logaholicDB_ns1.18_41_45pm.gz -rw-r--r-- 1 root root 745 Nov 23 18:41 modsec.18_41_45pm.gz -rw-r--r-- 1 root root 138928 Nov 23 18:41 mysql.18_41_45pm.gz -rw-r--r-- 1 root root 1831 Nov 23 18:41 performance_schema.18_41_45pm.gz -rw-r--r-- 1 root root 3610 Nov 23 18:41 roundcube.18_41_45pm.gz -rw-r--r-- 1 root root 436 Nov 23 18:41 test.18_41_47pm.gz MySQL Backup seems fine.
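
    One thing that explains the dumps landing under /: the script writes to $BAK, which is never defined, while rsnapshot's backup_script convention is to run the script in a scratch directory and copy whatever it leaves in its current working directory into the destination (localhost/mysql/ here). A minimal sketch of a script written with that in mind:

        #!/bin/bash
        # Minimal backup_script sketch for rsnapshot: write the dumps into the
        # current working directory; rsnapshot moves them into localhost/mysql/.
        NOW=$(date +"%m-%d-%Y")
        MUSER="root"
        MPASS="YOUR-PASSWORD"
        MHOST="127.0.0.1"
        for db in $(mysql -u "$MUSER" -h "$MHOST" -p"$MPASS" -Bse 'show databases'); do
            mysqldump --single-transaction -u "$MUSER" -h "$MHOST" -p"$MPASS" "$db" \
                | gzip -9 > "./mysql-$db.$NOW.gz"
        done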

    Read the article

  • What kernel modules are required for wi-fi to work?

    - by Leonid Shevtsov
    My custom-built 2.6.32 kernel cannot connect to any WPA-protected network. The kernel includes (probably?) everything that should be needed for wifi, including IPv4 network support (IPv6 is disabled), the ath5k wireless driver (which is used in the generic Ubuntu 2.6.31 kernel) and all crypto APIs. The card is being detected, however, iwlist scan returns wlan0 Failed to read scan data : Network is down and network-manager log says <info> (wlan0): driver supports SSID scans (scan_capa 0x01). <info> (wlan0): new 802.11 WiFi device (driver: 'ath5k') <info> (wlan0): exported as /org/freedesktop/NetworkManager/Devices/1 <info> (wlan0): now managed <info> (wlan0): device state change: 1 -> 2 (reason 2) <info> (wlan0): bringing up device. <info> (wlan0): preparing device. <info> (wlan0): deactivating device (reason: 2). supplicant_interface_acquire: assertion `mgr_state == NM_SUPPLICANT_MANAGER_STATE_IDLE' failed <info> modem-manager is now available <WARN> default_adapter_cb(): bluez error getting default adapter: The name org.bluez was not provided by any .service files <info> Trying to start the supplicant... <info> (wlan0): supplicant manager state: down -> idle <info> (wlan0): device state change: 2 -> 3 (reason 0) <WARN> nm_supplicant_interface_add_cb(): Unexpected supplicant error getting interface: wpa_supplicant couldn't grab this interface. The exact same configuration works with the generic kernel. Is anything except wifi and crypto api needed for wi-fi to work?
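
    For WPA with ath5k, the pieces that commonly get dropped from a custom config are cfg80211/mac80211, wireless extensions (which this wpa_supplicant build appears to expect, given the "couldn't grab this interface" message), rfkill, and the crypto algorithms used by TKIP/CCMP. A hedged check against the custom kernel's config (adjust the path if the config was not installed under /boot):

        # Any of these missing (neither =y nor =m) is a likely culprit.
        grep -E "CONFIG_CFG80211=|CONFIG_MAC80211=|CONFIG_WIRELESS_EXT=|CONFIG_RFKILL=|CONFIG_CRYPTO_AES=|CONFIG_CRYPTO_ARC4=|CONFIG_CRYPTO_ECB=|CONFIG_CRYPTO_MICHAEL_MIC=" \
            /boot/config-$(uname -r)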

    Read the article

  • Windows 7 restarts while being idle

    - by Ondrej Slinták
    My Windows 7 machine almost always restarts when I keep it idle for ~20-30 minutes. It happened randomly before, but lately, if I leave the computer I can be sure it's going to restart after those 30 minutes. It never happens when I play games or work, though, just when it's idle. It's a fresh install of Windows 7 64-bit. I also had problems while installing it; it always crashed while finalizing the install and I had to reinstall again. Eventually it installed on the 3rd or 4th try after I deleted all of my partitions and added them again. I thought it might have been a hardware problem, but temperatures seem to be okay and I have no idea how to track down what might be causing it. Any ideas? I'm running Windows 7 64-bit on: Gigabyte EX58-UD4P Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz NVIDIA GeForce GTX 260 6GB of DDR3 1066MHz RAM WDC WD1001FALS-00J7B0 1TB SATA II I have a very bad feeling it might be something with the HDD and its compatibility with Windows 7, as I didn't have these problems during the year I ran Vista. Edit: I checked Event Viewer critical errors from this night. The PC restarted for the first time at 11:12pm, then at 3:06am, and since then every ~20 minutes until I came back to it. The error message is: The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly. Source: Kernel-Power

    Read the article

  • URL Redirect Configuration in Virtualhost for a Single Page Web Application

    - by fenderplayer
    I have a web application under development that I am running locally. The home page of the application is fetched with the following URL: http://local.dev/myapp/index.shtml When the app runs, JavaScript on the web page maintains the URL and the app state internally. Some of the other URLs read as: http://local.dev/myapp/results?param1=val1&param2=val2 http://local.dev/myapp/someResource Note that there are no pages named results.html or someResource.html on my web server. They are just made-up URLs to simulate RESTfulness in the single-page app. All the app code (JavaScript, CSS, etc.) is present in the index.shtml file. So, essentially, the question is: how can I redirect all requests to the first URL above? Here's how the vhost configuration looks: <VirtualHost 0.0.0.0:80> ServerAdmin [email protected] DocumentRoot "/Users/Me/mySites" ServerName local.dev RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(myapp|myapp2)\/results\?.+$ $1/index.shtml [R=301,L] <Directory "/Users/Me/mySites/"> Options +Includes Indexes MultiViews FollowSymlinks AllowOverride All Order allow,deny Allow from all </Directory> ErrorLog "/private/var/log/apache2/error.log" CustomLog "/private/var/log/apache2/access.log" common </VirtualHost> But this doesn't seem to work. Requesting the other URLs directly results in a 404 error.
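
    One Apache detail that matters for the rule above: a RewriteRule pattern is matched against the URL path only, never the query string, so the \?param... part of that pattern can never match (query strings are tested with RewriteCond %{QUERY_STRING} instead). A quick way to watch what the rule actually does with the URLs from the question:

        # A 301 pointing at /myapp/index.shtml means the rewrite fired; a 404
        # means the request fell through to the filesystem untouched.
        curl -sI "http://local.dev/myapp/results?param1=val1&param2=val2" | head -5
        curl -sI "http://local.dev/myapp/someResource" | head -5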

    Read the article

  • Windows 2008 server smart card security module problem

    - by chris13work
    Hi, I've got a smart card reader and a server application using it as a security module. If I run it under the DOS prompt, everything is fine. The server is running and clients can connect to it. I tried to install the server as a Windows service and start it. The server starts but always gives back an authentication error because it cannot call the smart card to do encryption. Then I tried to start it with Task Scheduler and set the trigger as "on startup". The server starts also, but still cannot access the smart card reader. Then I tried Remote Desktop to the machine and ran the server application under the DOS prompt. The same error is returned. The situation is that the smart card reader only works under an active console desktop session. In the server application, the WINSCARD API is used to access the smart card reader. Any suggestions on how we can access the smart card reader from a running service? OS: Windows Server 2008 Smart Card Driver: Windows USB smart card Reader Smart Card API: WINSCARD

    Read the article

  • Is it possible to have DisplayLink USB display hotplugging with Xorg 1.13 on kernel 3.4?

    - by lkraav
    keithp seems to be the only one on the interwebs to have written anything about the subject and he worked with 3.5_rc. I don't want to go above 3.4 at the moment for various stability reasons and am trying to see whether I can get this to work. Xorg 1.13 recognizes the display on connection, "udl" module is loaded, xorg-video-modesetting driver also loads, display lights up. So everything seems to be good. I emerged xrandr-9999 (not many changes on top of 1.3.5): $ xrandr --listproviders Providers: number : 2 Provider 0: id: 69 cap: 0x0 crtcs: 2 outputs: 4 associated providers: 0 name:Intel Provider 1: id: 338 cap: 0x0 crtcs: 1 outputs: 1 associated providers: 0 name:modesetting But I can't get any further, just like this guy: $ xrandr --setprovideroutputsource 338 69 X Error of failed request: BadValue (integer parameter out of range for operation) Major opcode of failed request: 139 (RANDR) Minor opcode of failed request: 35 () Value in failed request: 0x152 Serial number of failed request: 11 Current serial number in output stream: 12 $ xrandr --setprovideroutputsource 1 0 X Error of failed request: 148 Major opcode of failed request: 139 (RANDR) Minor opcode of failed request: 35 () Serial number of failed request: 11 Current serial number in output stream: 12 Any thoughts?

    Read the article

  • How can I stop IIS7 (integrated mode) from reporting a 404 before I get a chance to handle it?

    - by Gary McGill
    I have an ASP.NET MVC 2 application running on IIS7 in integrated mode. I'm trying to do my own 404 handling, but IIS7 seems to be intercepting the error and returning its own 404 message to the client before I get a chance to handle it. I'm not having much luck coming at the problem from a programming perspective over on Stack Overflow, so I wondered if maybe it's a configuration problem. Is there something I have to do to tell IIS to let me handle my own errors? (I'm trying to use Application_Error in my global.asax but it's not even getting there). There is a custom error page defined (at the machine level, I think) for 404 but when I tried removing that it didn't really help - it simply showed a bald one-liner message instead. My code still didn't get a look in. Is it perhaps something to do with the routing? Maybe the "mysite.com/nosuchpage" URL isn't being routed to me and that's why I don't get a chance to intercept it? Do I need to do something so that ALL requests get routed through my app?

    Read the article

  • Problem routing between directly connected Subnets w/ ASA-5510

    - by Zephyr Pellerin
    This is an issue I've been struggling with for quite some time, with a seemingly simple answer (aren't all IT problems?). And that is the problem of passing traffic between two directly connected subnets with an ASA. While I'm aware that best practice is to have Internet - Firewall - Router, in many cases this isn't possible. For example, I have an ASA with two interfaces, named OutsideNetwork (10.19.200.3/24) and InternalNetwork (10.19.4.254/24). You'd expect Outside to be able to get to, say, 10.19.4.1, or at LEAST 10.19.4.254, but pinging the interface gives only bad news. Result of the command: "ping OutsideNetwork 10.19.4.254" Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 10.19.4.254, timeout is 2 seconds: ????? Success rate is 0 percent (0/5) Naturally, you'd assume that you could add a static route, to no avail. [ERROR] route Outsidenetwork 10.19.4.0 255.255.255.0 10.19.4.254 1 Cannot add route, connected route exists At this point, you might wonder if it's a NAT or access-list problem. access-list Outsidenetwork_access_in extended permit ip any any access-list Internalnetwork_access_in extended permit ip any any There is no dynamic NAT (or static NAT for that matter), and un-NATted traffic is permitted. When I try pinging the above address (10.19.4.254 from Outsidenetwork), I get this error message from level 0 logging (debugging): Routing failed to locate next hop for icmp from NP Identity Ifc:10.19.200.3/0 to Outsidenetwork:10.19.4.1/0 This led me to set same-security-traffic permit, and to assign the same, lower, and higher security levels to the two interfaces. Am I overlooking something obvious? Is there a command to set static routes that are classified higher than connected routes?

    Read the article

  • Load Balancing Rails on Apache 2.x

    - by revgum
    My situation is that I need to proxy traffic to the root of my web server to port 81 for IIS, and then any traffic to a sub-directory needs to be directed to the rails app. my-server.com/ - needs to proxy to port 81 my-server.com/myapp - needs to point to the rails app This seems to be working alright for the rails application but the images, javascripts, and stylesheets are not actually working (proxied). I've tried to fiddle with the proxypass lines but it still doesn't work for me..can anyone help? Here's my complete VirtualHost portion of the config; LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_http_module modules/mod_proxy_http.so ProxyRequests off <Proxy balancer://myapp_cluster> BalancerMember http://127.0.0.1:3001 BalancerMember http://127.0.0.1:3002 </Proxy> <VirtualHost *:80> DocumentRoot "c:\ruby\apps\myapp\public" <Directory /myapp > Options FollowSymLinks AllowOverride None </Directory> ProxyPass /myapp/images ! ProxyPass /myapp/stylesheets ! ProxyPass /myapp/javascripts ! ProxyPass /myapp/ balancer://myapp_cluster/ ProxyPassReverse /myapp/ balancer://myapp_cluster/ ProxyPreserveHost on ProxyPass / http://localhost:81/ ErrorLog "c:\ruby\apps\myapp\log\error.log" # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog "c:\ruby\apps\myapp\log\access.log" combined </VirtualHost>

    Read the article

  • Local SSL connections are causing redirect loop (after Ubuntu update)

    - by codeinthehole
    Following a recent Ubuntu update, my local websites are no longer serving their pages over SSL. For example, my .htaccess file attempts to ensure /sign-in is always served over HTTPS: RewriteEngine On RewriteCond %{HTTPS} off RewriteCond %{REQUEST_URI} /sign-in RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [L,QSA,R=301] However when I make a request to /sign-in on the domain site2-local , I get the error "The page isn't redirecting properly" with the following in /var/log/apache2/error.log [Tue Jun 08 12:20:57 2010] [info] [client 127.0.1.1] Connection to child 0 established (server site1-local:443) [Tue Jun 08 12:20:57 2010] [info] Seeding PRNG with 656 bytes of entropy [Tue Jun 08 12:20:57 2010] [info] Initial (No.1) HTTPS request received for child 0 (server site2-local:443) [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.2) HTTPS request received for child 0 (server site2-local:443) [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.3) HTTPS request received for child 0 (server site2-local:443) [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.4) HTTPS request received for child 0 (server site2-local:443) [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.5) HTTPS request received for child 0 (server site2-local:443) [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.6) HTTPS request received for child 0 (server site2-local:443) [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.7) HTTPS request received for child 0 (server site2-local:443) [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.8) HTTPS request received for child 0 (server site2-local:443) [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.9) HTTPS request received for child 0 (server site2-local:443) [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.10) HTTPS request received for child 0 (server site2-local:443) [Tue Jun 08 12:21:12 2010] [info] [client 127.0.1.1] (70007)The timeout specified has expired: SSL input filter read failed. [Tue Jun 08 12:21:12 2010] [info] [client 127.0.1.1] Connection closed to child 0 with standard shutdown (server site2-local:443) There is a connection to site1-local (another site on my machine which shares the certificate), which I don't understand. Anyone know what is causing this issue?

    Read the article
