Search Results

Search found 10815 results on 433 pages for 'stored procedure'.


  • Citrix Access Gateway with Citrix Receiver

    - by vm370
    I'm currently using a Citrix Access Gateway (firmware 5.0.4) to provide SSL-VPN access to an isolated environment. Users connect with the Citrix Access Gateway Client that the appliance delivers by default. However, we have run into several problems with it: it somehow cannot be removed from autostart (it is registered neither as a service nor as an autostart entry?!), and it breaks the Cisco VPN Client, which is used throughout the company and unfortunately cannot be replaced. The Cisco client only works again after a tedious procedure of cleaning every remnant of the CAG client out of the registry. Because of that, I'd like to know whether there is an alternative to this client, since it is a real pain... Unfortunately I couldn't find a way to use the Receiver with the CAG yet, but if you have any resources on how to build this workaround, I'd be very happy. Thanks a lot in advance.

    UPDATE: If there are other alternatives I'd be even happier, since using the Receiver might also mean a conflict with the ICA Client, which is also used in our environments. In my experience, the Receiver and the ICA Client are no good friends either...

    Read the article

  • stunnel not working - stunnel.pem: No such file or directory

    - by Marronsuisse
    I am trying to install stunnel on an Amazon Linux machine (I want to configure Postfix so that it sends its email through Amazon SES). I first tried to install from the tar.gz package downloaded from http://www.stunnel.org, using:

      ./configure
      make
      make install

    but then the stunnel command was still not found. So I installed it with yum install stunnel instead. But now when I run it I get:

      sudo stunnel
      2012.06.23 06:51:53 LOG7[20071:3078289200]: Snagged 64 random bytes from /root/.rnd
      2012.06.23 06:51:53 LOG7[20071:3078289200]: Wrote 1024 new random bytes to /root/.rnd
      2012.06.23 06:51:53 LOG7[20071:3078289200]: RAND_status claims sufficient entropy for the PRNG
      2012.06.23 06:51:53 LOG7[20071:3078289200]: PRNG seeded successfully
      2012.06.23 06:51:53 LOG3[20071:3078289200]: stunnel.pem: No such file or directory (2)

    So it seems there is still a problem with the install. When I run locate stunnel, I see files scattered all over the place. How can I get a clean install of stunnel?

    Edit: I was following this procedure: http://docs.amazonwebservices.com/ses/latest/DeveloperGuide/SMTP.MTAs.SecureTunnel.html and got stuck at step 5 with the stunnel.pem: No such file or directory message.
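
    A minimal sketch of one way past the missing-certificate error, assuming the yum-installed stunnel and a client-mode tunnel to SES: generate a self-signed stunnel.pem and point stunnel at an explicit config file instead of its compiled-in defaults. The paths, local port and the us-east-1 endpoint below are assumptions to adjust for your setup:

      # generate the certificate stunnel is complaining about (path is an assumption)
      sudo openssl req -new -x509 -days 365 -nodes \
          -out /etc/stunnel/stunnel.pem -keyout /etc/stunnel/stunnel.pem

      # /etc/stunnel/smtp-ses.conf -- client-mode wrapper for Postfix relaying via SES
      #   client = yes
      #   cert = /etc/stunnel/stunnel.pem
      #   [smtp-ses]
      #   accept  = 127.0.0.1:2525
      #   connect = email-smtp.us-east-1.amazonaws.com:465

      sudo stunnel /etc/stunnel/smtp-ses.conf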

    Read the article

  • "A disk read error occurred" when booting XP disk image in VirtualBox

    - by intuited
    I'm trying to boot an XP installation cloned into VirtualBox from a real drive. I'm getting the message

      A disk read error occurred
      Press Ctrl+Alt+Del to restart

    whenever* I try to boot the machine.

    * This is not strictly true: with AMD-V enabled, the boot process appears to not make it this far and instead hangs at a black screen with a cursor.

    I created the VirtualBox image from the original drive using the following method:

      $ sudo ddrescue -n /dev/sdd sdd.img logfile   # completed without errors
      $ VBoxManage convertfromraw sdd.img disk.vdi

    The original disk (and the image) contain a single NTFS partition with XP installed on it. The owner of the drive indicates that it did boot okay the last time the system made it that far. The (Pentium 4) system has a broken (enormous) heat sink, so at some point it failed to boot because it would quickly overheat and shut down. If I boot the VM from a live CD, I am able to mount its /dev/sda1 without any problems. I ran ntfsfix and didn't have any luck. I've read through the instructions on doing this. I didn't really follow them. For example, I didn't run MergeIDE before imaging because the machine was not bootable. However, the symptom of that problem seems to be quite different. The emitted message is contained in the volume boot record of the XP partition, which leads me to suspect that this is a problem with the core operating system bootstrap procedure, and not related to anything in the registry. I don't have an XP boot CD.
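
    One thing worth ruling out, offered as a hedged sketch rather than a known fix: XP has no in-box AHCI driver, so if the converted VDI ended up attached to the VM's SATA controller, attaching the same disk to an IDE controller instead costs nothing to try. The VM name below is a placeholder:

      VBoxManage storagectl "WinXP" --name "IDE Controller" --add ide
      VBoxManage storageattach "WinXP" --storagectl "IDE Controller" \
          --port 0 --device 0 --type hdd --medium disk.vdi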

    Read the article

  • route propagation using OSPF in a network

    - by liv2hak
    I am using Juniper J-series routers to emulate a small telco and a VPN customer. The internal routing will be configured with OSPF, MPLS including a default and backup path, RSVP for distributing labels within the telco, OSPF for distributing routes from the customer edge (CE) routers to the VRFs in the adjacent PEs, and finally iBGP for distributing customer routes between VRFs in different PEs. The topology of the network is shown below. The addressing scheme for the network is as follows:

      UOW-TAU:  ge-0/0/0 192.168.3.1
      TAU-PE1:  ge-0/0/0 10.0.1.0,   ge-0/0/1 10.0.2.0,   ge-0/0/2 192.168.3.2
      TAU-P1:   ge-0/0/0 172.16.1.0, ge-0/0/1 172.16.3.1, ge-0/0/2 10.0.2.2
      HAM-P1:   ge-0/0/0 172.16.3.2, ge-0/0/1 172.16.2.1, ge-0/0/3 10.0.3.2
      ACK-P1:   ge-0/0/0 172.16.1.2, ge-0/0/2 172.16.2.2, ge-0/0/3 10.0.1.2
      HAM-PE1:  ge-0/0/0 10.0.3.1,   ge-0/0/2 192.168.4.2
      UOW-HAM:  ge-0/0/0 192.168.4.1

    I have also set up a loopback address on each node. I want to set up OSPF so that the path to each internal subnet and each router loopback address is propagated to all PE and P nodes. I also want to select a single area for the PE and P nodes, and on each node I should add each interface that should be propagated. How do I accomplish this? My understanding of the procedure is below; is it correct? I set up OSPF on UOW-TAU interfaces ge-0/0/0 and ge-0/0/1 and on UOW-HAM interfaces ge-0/0/0 and ge-0/0/1, and call this Area 100. Once I have done this I should be able to reach each node from the others using ping and traceroute. Any help is highly appreciated.
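
    As a hedged Junos sketch (not the full design): OSPF on a P or PE node is usually enabled per interface under a single area, with the loopback included as a passive interface so it gets advertised everywhere. The interface units, the loopback address and the choice of area 0.0.0.0 below are assumptions; repeat the equivalent on each node for its own interfaces and verify with show ospf neighbor and show route protocol ospf.

      set interfaces lo0 unit 0 family inet address 10.255.0.1/32
      set protocols ospf area 0.0.0.0 interface ge-0/0/0.0
      set protocols ospf area 0.0.0.0 interface ge-0/0/1.0
      set protocols ospf area 0.0.0.0 interface ge-0/0/2.0
      set protocols ospf area 0.0.0.0 interface lo0.0 passive
      commit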

    Read the article

  • Active Directory FRS problems. 13508 error and other problems

    - by ITPIP
    I have 3 domain controllers. We will call them DC1, DC2 and DC3. DC3 and DC2 show Event ID 13508 in their FRS logs with no follow-up event (13509, I think) to say the error had been fixed. DC1's FRS log, no matter what you do, never shows any events besides FRS service stopped and started. DC1 holds the SYSVOL that needs to be replicated to the other DCs; the other DCs' SYSVOL folders are empty. I have tried the BurFlags method of fixing this but I haven't had any luck. My procedure for that was to stop the FRS service on all DCs, then set the BurFlags value on DC1 to D4 and on the other two DCs to D2. I started FRS on DC1 and the only events I see in DC1's FRS event log are service stopped and service started messages. This leads me to believe that something is wrong with FRS on DC1: I believe events 13553 and 13516 should appear in the FRS event log after an authoritative SYSVOL restore. The other two DCs do not have anything in their SYSVOL, otherwise I would have made one of them the authoritative copy.

      DC1 is MS Server 2003 Enterprise Edition SP2
      DC2 is MS Server 2003 Standard Edition SP1
      DC3 is MS Server 2003 R2 Standard Edition SP2

    I did not set up this domain originally but I am now the administrator of it, so I don't have a lot of background on why certain things may have been done in the past. My main goal is to fix these issues to get myself better prepared to decommission DC1 and add a DC running Server 2008 to my domain. Thanks.
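
    For reference, a hedged sketch of the usual BurFlags sequence (the registry path is the standard NtFrs one; run from an elevated prompt, D4 only on the DC that holds the good SYSVOL, D2 on the others after DC1 has logged events 13553/13516):

      net stop ntfrs
      reg add "HKLM\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup" /v BurFlags /t REG_DWORD /d 0xD4 /f
      net start ntfrs
      rem on DC2 and DC3, run the same reg add with /d 0xD2 before restarting ntfrs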

    Read the article

  • Apache doesn't immediately notice a change in the document root

    - by Tom
    We use Capistrano for website deployments and our Apache document root is a symlink to a particular code release. The deployment procedure switches the symlink from the old release to the new release as the final step of the deployment. We are migrating our webservers from real servers running RHEL 5.6 to Amazon EC2 virtual machines running Ubuntu 11.10, and the new servers are suffering from a problem where Apache doesn't immediately notice the change to its document root when the symlink is switched. It can take a second or so (and I think I've even seen it take a couple of minutes). It's as if Apache has cached the physical path behind the symlink for some time. Does anyone know some Apache settings I could look at to get it to notice changes to its served files more quickly?

    Thoughts:

      - I read that the disks on virtual machines are much slower (since they are network-attached storage). Perhaps the filesystem cache somehow works differently too? If so, is there anything that can be done?
      - The website runs PHP code. Perhaps there is some PHP config difference between RHEL and Ubuntu? I checked realpath_cache_ttl but both servers have it commented out:

          ; Duration of time, in seconds for which to cache realpath information for a given
          ; file or directory. For systems with rarely changing files, consider increasing this
          ; value.
          ; http://www.php.net/manual/en/ini.core.php#ini.realpath-cache-ttl
          ;realpath_cache_ttl = 120

      - We do use the APC opcode cache but don't think it's the issue, based on experimentation. The PHP code is in different file paths for each deployment and we ensure stat=1.
      - Here is a similar question that is very interesting: 294107 - but it doesn't provide an answer for me.

    One solution would be to reload Apache every time we modify the document root symlink. I'll do this if we can't find another solution.
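
    This looks consistent with PHP's per-process realpath cache rather than Apache itself: with mod_php, each long-lived worker keeps resolved symlink targets for realpath_cache_ttl seconds (the default of 120 matches the "couple of minutes" worst case), so old workers keep serving the previous release. Offered as a hedged sketch, not a certainty, here are the two usual mitigations; paths and values are placeholders:

      ; php.ini -- shrink or disable the realpath cache (values are illustrative)
      realpath_cache_size = 0k
      realpath_cache_ttl  = 0

      # deploy step: swap the symlink, then do a graceful reload as a belt-and-braces fallback
      ln -sfn /var/www/releases/20120101120000 /var/www/current
      apachectl -k graceful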

    Read the article

  • Weird Apache Crash (with Dump) zend_hash_find (), libphp5.so

    - by Jacob84
    To be honest I don't have much experience working with Apache. I'm just doing my best to solve this and don't know if I'm going about it the right way, so any help will be greatly appreciated. We have a PHP page which is throwing the following message in the browser:

      Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data.

    The logs in /var/log/httpd don't seem to help, because Apache appears unable to write any information (maybe the error occurs at some stage of the process that makes logging impossible?). I've read about the procedure for taking core dumps of Apache, and here is the content:

      Reading symbols from /lib64/libgpg-error.so.0...(no debugging symbols found)...done.
      Loaded symbols for /lib64/libgpg-error.so.0
      Reading symbols from /usr/lib64/php/modules/zip.so...(no debugging symbols found)...done.
      Loaded symbols for /usr/lib64/php/modules/zip.so
      Core was generated by `/usr/sbin/httpd'.
      Program terminated with signal 11, Segmentation fault.
      #0  0x00007fb828fff712 in zend_hash_find () from /etc/httpd/modules/libphp5.so
      Missing separate debuginfos, use: debuginfo-install httpd-2.2.15-15.el6.centos.1.x86_64

    I've been looking in the PHP files and I haven't found any direct call to zend_hash_find (which seems to be where the crash happens). I've been looking on Google but found nothing related. Can somebody please help? Is there anything else I need to do to learn more? Thanks a lot, as always!
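
    zend_hash_find is an internal Zend engine function, so the crash is happening inside PHP (often an extension or opcode-cache problem) rather than in your own code. A hedged next step is to install the debug symbols the log asks for and pull a full backtrace from the core file; the package names and core path below are assumptions for a CentOS 6 box:

      sudo yum install yum-utils                      # provides debuginfo-install
      sudo debuginfo-install httpd php php-common     # exact package names may differ
      gdb /usr/sbin/httpd /path/to/core.NNNN          # core location depends on CoreDumpDirectory
      (gdb) bt full                                   # the top frames usually name the guilty extension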

    Read the article

  • Error configuring kerberos5 using MacPorts

    - by ario
    While trying to install libmemcached via MacPorts, I hit the following issue:

      libmemcached @0.40 +universal
      --->  Computing dependencies for libmemcached
      --->  Dependencies to be installed: cyrus-sasl2 kerberos5
      --->  Configuring kerberos5
      Error: org.macports.configure for port kerberos5 returned: configure failure: command execution failed
      Error: Failed to install kerberos5

    It tells me to look in the log for details. Here's the last bit of the log file:

      :info:configure checking for setupterm in -lcurses... no
      :info:configure checking for setupterm in -lncurses... no
      :info:configure checking for tgetent... no
      :info:configure configure: error: Could not find tgetent; are you missing a curses/ncurses library?
      :info:configure configure: error: /bin/sh './configure' failed for appl/telnet
      :info:configure Command failed: cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_net_kerberos5/kerberos5/work/krb5-1.7.2/src" && ./configure --prefix=/opt/local --disable-dependency-tracking --mandir=/opt/local/share/man
      :info:configure Exit code: 1
      :error:configure org.macports.configure for port kerberos5 returned: configure failure: command execution failed
      :debug:configure Error code: NONE
      :debug:configure Backtrace: configure failure: command execution failed
          while executing
      "$procedure $targetname"
      :info:configure Warning: targets not executed for kerberos5: org.macports.activate org.macports.configure org.macports.build org.macports.destroot org.macports.install
      :error:configure Failed to install kerberos5
      :debug:configure Registry error: kerberos5 not registered as installed & active.
          invoked from within
      "registry_active ${subport}"
          invoked from within
      "$workername eval registry_active \${subport}"
      :notice:configure Please see the log file for port kerberos5 for details:
          /opt/local/var/macports/logs/_opt_local_var_macports_sources_rsync.macports.org_release_ports_net_kerberos5/kerberos5/main.log

    It seems to say it's missing ncurses, but ncurses looks like it's there, since port installed shows:

      ncurses @5.7_0
      ncurses @5.9_1 (active)
      ncursesw @5.7_0

    Any ideas on how to get around this error?
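
    A couple of low-risk things to try, offered as a sketch rather than a known fix: clean the failed build and retry with debug output, and, since the failure is in kerberos5's bundled telnet configure under the +universal variant, try the install without +universal to see whether the multi-architecture build is what fails to find tgetent. The commands are standard MacPorts usage:

      sudo port clean kerberos5
      sudo port -d install kerberos5      # -d shows the full configure output
      sudo port install libmemcached      # retry without the +universal variant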

    Read the article

  • Ubuntu rm not deleting files

    - by ILMV
    My colleague and I have been struggling with deleting a directory and its contents. We are working on a new version of our website's source code on Ubuntu 8.04 (dir: /var/www/websites). What we want to do is delete the websites directory and recreate it from a .tar backup we created a couple of weeks ago. The purpose of this is so we can run our deployment procedure in a local environment before we do so on our live / public environment. We use this command:

      rm -r websites

    This deletes the directory and the files within it. The problem occurs when we un-tar our backup file and view the website: we are getting files that don't exist in the .tar backup. In fact these files were only created a few days ago and should have been deleted. We delete the directory once more in the manner stated above, then create a new websites directory using the mkdir command. Strangely, at this stage the 'deleted files' do not come back, but if we unpack our .tar file the 'deleted files' appear again. Is there a way to ensure these files are deleted, or at least the pointers that associate them with said directory?

      - Our .tar backup does not include these files
      - We do not want to use the shred command
      - We do not want to use 3rd party applications
      - The solution should work from a terminal (SSH)

    Many thanks!

    EDIT: Er... we fixed it. It turns out the files that were reappearing came from a link we have to another directory (outside /var/www/websites); we were restoring the link but not deleting the files on the other end. D'oh! Many thanks for your help guys... Friday afternoon syndrome :-)
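
    For anyone hitting the same thing, a quick hedged check for stray symlinks before and after restoring the tarball (paths are the ones from the question):

      find /var/www/websites -type l -ls      # symlinks inside the restored tree
      find /var/www -maxdepth 1 -type l -ls   # links one level up that point elsewhere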

    Read the article

  • hdparm - how to secure erase SATA SSD over USB

    - by cc0
    I have been following this guide on how to secure erase an SSD (trying to improve the performance of mine, which currently only writes at about 30 MB/s sequential). However, I'm using a USB-to-SATA docking device to avoid having the drive frozen. Apparently with this setup the SATA device is recognized as a SCSI drive, which is giving me trouble. When I run hdparm -I /dev/sda I get the error:

      HDIO_DRIVE_CMD (identify) failed: Invalid Exchange

    After a lot of googling on the issue I can't seem to find anyone who has actually solved this problem. However, I have not tried to just go ahead and run the secure erase, so I'm not sure whether it would still work. I would love any and all input I can get on this, especially on whether a secure erase will still work with the drive being recognized as a SCSI drive. The drive itself is a Samsung 256 GB SSD (PM800), and I'm sure you can understand my reluctance to go through this procedure without feeling reasonably safe that I won't mess it up beyond repair.
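
    For reference, the usual ATA secure-erase sequence looks like the hedged sketch below (device name and password are placeholders). Note that many USB-SATA bridges simply do not pass the ATA security command set through, which is consistent with the failing identify; if these commands fail the same way, the drive likely has to be attached to a SATA port directly, with hot-plugging the data cable after boot being the common trick for clearing the frozen state.

      hdparm -I /dev/sdX | grep -A10 Security   # confirm "not frozen" and "not locked" first
      hdparm --user-master u --security-set-pass SomePass /dev/sdX
      hdparm --user-master u --security-erase SomePass /dev/sdX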

    Read the article

  • Intel Rapid Storage Technology (pre-OS) driver installation

    - by Nero theZero
    My desktop machine is built on a Gigabyte GA-Z87-UD3H, and Gigabyte provides the latest driver for Intel Rapid Storage Technology (IRST), which I installed after installing the OS. The same goes for my Lenovo ThinkPad T420. On both machines, checking the controller device under the IDE ATA/ATAPI Controllers section in Device Manager shows the driver has been updated to the latest version. I set the SATA controller to AHCI in the BIOS. The desktop machine has one WD 2TB Black and one WD 3TB Green. I don't use RAID, and have no plans to in the near future, but according to Intel, IRST improves performance in single-disk scenarios too. Now I have the following questions:

      1. What is the actual purpose of the IRST (pre-OS install) driver that isn't served by the post-OS driver I installed? There must be some difference, otherwise there wouldn't be a pre-OS version of the driver. Right?
      2. With the pre-OS procedure (loading the driver at OS-installation time), do I still need the post-OS driver after the installation completes successfully? After installing the post-OS one I got a quick-launch icon that runs the IRST configuration application; where do I get that if I only use the pre-OS driver?
      3. As it is "pre-OS", when I load it at OS-installation time, does it update anything at the BIOS level or anywhere other than the HDD? I ask because I'm going to dual-boot Windows 7 with Windows 8.1, and after installing Windows 7, when I install Windows 8.1 and load the IRST driver for it, is there any chance of anything being overwritten or of OS incompatibility? In short, is there anything specific to follow while installing the second OS?

    Read the article

  • MS SQL Server 2005 Express rebuild master DB problem

    - by PaN1C_Showt1Me
    Hi! There has been a power loss on our server and I cannot start the SQL service because the master DB is corrupted (as the log states). I found many articles recommending running setup.exe with optional parameters, so this is what I did: I downloaded SQLEXPR32.EXE from the Microsoft site and ran it. The first problem was that it extracted all the setup files and started the default installation procedure (which was no use to me, as I need those parameters), and if I cancelled it, all the extracted files disappeared. That's why I decided to copy the extracted files somewhere else and then cancel the default installation. Now I'm trying to run the setup.exe from that copy:

      setup.exe /qb INSTANCENAME=MSSQLSERVER REINSTALL=SQL_Engine REBUILDDATABASE=1 SAPWD=xxxxx

    It asks me if I want to rewrite the system DBs, which is what I need, but then while installing I get this error:

      An installation package for the product Microsoft SQL Server 2005 Express Edition cannot be found.
      Try the installation again using a valid copy of the installation package 'SqlRun_SQL.msi'

    Then it tries to install something and states: cannot install because the same instance name already exists. But I don't want to install a new instance... Any idea how to solve this, please? Thank you in advance!
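
    The 'SqlRun_SQL.msi cannot be found' error usually means setup.exe is not being run from a complete extraction. A hedged sketch of the cleaner route, assuming the 2005 Express package supports the /x extraction switch (the target folder is a placeholder): extract everything once, leave the tree intact, and run the rebuild from there so SqlRun_SQL.msi sits next to setup.exe.

      SQLEXPR32.EXE /x:C:\SQLTMP
      cd /d C:\SQLTMP          rem or the subfolder where setup.exe lands in the extracted tree
      setup.exe /qb INSTANCENAME=MSSQLSERVER REINSTALL=SQL_Engine REBUILDDATABASE=1 SAPWD=xxxxx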

    Read the article

  • SSD, AHCI and write performance

    - by Dan
    We've started to deploy SSD drives to our developers' workstations. At the moment we're having the unpleasant surprise that the systems using the new SSDs often freeze, with the HDD activity LED blinking or continuously on. Benchmarks show read speeds around 180 MB/s, but write speeds around 5 MB/s. All developers are using Windows 7 Enterprise, 64-bit, SP1. One of our developers suggested (based on his experience) the following sequence:

      1. back up the workstation
      2. use a tool to completely erase the SSD
      3. make sure AHCI is enabled in the BIOS
      4. install Windows
      5. restore from backup

    So far this procedure seems to work (we're still testing, but write speed seems to be 120 MB/s). There are some questions in this context:

      - Why do we have to completely reinstall Windows? Is it possible to clean the SSD without reinstalling Windows? Is there a reliable tool?
      - If AHCI was disabled when Windows was installed and we enable it afterwards, shouldn't that be enough to correct the write performance issue?
      - If we have to completely erase the SSDs, does this mean the SSDs we received were used before (second-hand)? I'm wondering because the package I got was open (I didn't think about it at the time, assuming one of my coworkers had simply taken a peek inside).
      - Has anyone seen a similar problem before?
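
    On the AHCI point specifically: switching the BIOS to AHCI after Windows 7 was installed in IDE mode normally blue-screens at boot unless the msahci driver is re-enabled first, which may be why the full reinstall appears to be "required". A hedged sketch of the standard registry tweak (the KB922976 approach) that lets you switch without reinstalling:

      reg add HKLM\SYSTEM\CurrentControlSet\Services\msahci /v Start /t REG_DWORD /d 0 /f
      rem reboot, switch the SATA mode to AHCI in the BIOS setup, then boot Windows again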

    Read the article

  • SQL Full-Text indexing not populating

    - by Sam
    Hi, we installed a clustered SQL 2005 instance on Windows 2008 and reattached our SAN drives from another machine, then restored, to migrate to new hardware. There have been a few minor issues, but this one has me stuck: populating full-text indexes is not working. I created a basic table with some simple text in a new database and get the same results as with the old indexes:

      2010-09-27 10:30:46.85 spid19s Informational: Full-text Full population initialized for table or indexed view '[SQL_DBA].[dbo].[CIS_Report_Executions]' (table or indexed view ID '1767677345', database ID '5'). Population sub-tasks: 1.
      2010-09-27 10:31:15.36 spid19s Error '0x80070003' occurred during full-text index population for table or indexed view '[SQL_DBA].[dbo].[CIS_Report_Executions]' (table or indexed view ID '1767677345', database ID '5'), full-text key value 0x000001DF. Attempt will be made to reindex it.
      2010-09-27 10:31:15.37 spid19s The component 'MSFTE.DLL' reported error while indexing. Component path 'D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\MSFTE.DLL'.
      2010-09-27 10:31:15.37 spid19s Error '0x80070003' occurred during full-text index population for table or indexed view '[SQL_DBA].[dbo].[CIS_Report_Executions]' (table or indexed view ID '1767677345', database ID '5'), full-text key value 0x000001E0. Attempt will be made to reindex it.

    The rebuild/repopulate procedure finishes, but I get zero rows in the index. The DLL named in the message is present and the service accounts have access to it. My FTData folder also has data in it, so it doesn't seem to be a permission issue on that folder. The application throws this error:

      PHP Warning: mssql_query() [function.mssql-query]: message: Full-text catalog 'ikm_PageIndex_FText' is in an unusable state. Drop and re-create this full-text catalog. (severity 16) in E:\Inetpub\knowledgebase_insidemesa\lib\database\mssql.php on line 154

    A Microsoft discussion thread is the only post I found that claimed to fix this; it said the problem was registry-related, but never posted the fix.
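
    Error 0x80070003 is the Win32 "path not found" code, which after a hardware migration often points at a full-text catalog whose stored filesystem path no longer exists on the new node. A hedged sketch for checking the catalog paths and forcing a rebuild; the instance and database names are placeholders, and the catalog name is taken from the PHP warning above:

      sqlcmd -S .\INSTANCE -E -d SQL_DBA -Q "SELECT name, path FROM sys.fulltext_catalogs"
      sqlcmd -S .\INSTANCE -E -d SQL_DBA -Q "ALTER FULLTEXT CATALOG ikm_PageIndex_FText REBUILD"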

    Read the article

  • Apache start failing after apache config modifications, showing syntax error, cannot load php5apache2_2.dll into server

    - by Sandeepan Nath
    I am stuck again with an Apache setup, guys. I am working on a Windows 7 system. I copied the working php5 installation directory from teammates and copied the necessary .dll files from inside the php5 installation folder (as they were in the teammates' working setup) into windows/system32/. The Apache server started successfully with the default Apache config file and I was able to access localhost in a browser, but PHP code was not being parsed. I noticed there was no line like the following in the Apache config file:

      # PHP5 module
      LoadModule php5_module D:/php5/php5apache2_2.dll

    If I add this line, the Apache server fails to start. Running the configuration test gives the following error:

      httpd.exe: Syntax error on line 60 of C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/httpd.conf: Cannot load D:/php5/php5apache2_2.dll into server: The specified procedure could not be found.

    But the DLL file is there in the specified location, and I have given the current system user full permissions on the php5 installation directory. The same line also appears in the Apache error log, though I am not sure exactly when entries are written to the log file. I am also confused about whether entries are skipped while I have the log file open for reading... lol... because I could not observe a pattern in when entries are made; I saw some log entries being written and some not. Oh, why is an Apache setup always such a headache?
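
    "The specified procedure could not be found" usually means a dependent DLL (most often php5ts.dll) was loaded from somewhere else on the search path, such as an older copy in windows/system32, or the PHP build doesn't match the Apache build (the official Apache 2.2 binaries need a thread-safe PHP built with the same VC compiler). That is a hedged diagnosis, not a certainty. For reference, the usual minimal hookup in httpd.conf, with the DLLs left in D:/php5 rather than copied into system32, is a sketch like this (paths are from the question):

      # httpd.conf - minimal PHP5 hookup (sketch)
      LoadModule php5_module "D:/php5/php5apache2_2.dll"
      AddHandler application/x-httpd-php .php
      PHPIniDir "D:/php5"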

    Read the article

  • Best practices for settings for Oracle database creation

    - by Gary
    When installing an Oracle database, what non-default settings would you normally apply (or consider applying)? I'm not after hardware-dependent settings (e.g. memory allocation) or file locations, but more general items. Similarly, anything that is a particular requirement for a specific application rather than generally applicable isn't really useful. Do you separate code/API schemas (PL/SQL owners) from data schemas (table owners)? Do you use default or non-default roles, and if the latter, do you password-protect the role? I'm also interested in whether there are any places where you REVOKE a GRANT that is installed by default. That may be version-dependent, as 11g seems more locked down in its default install. These are the ones I used in a recent setup. I'd like to know whether I missed anything or where you disagree (and why).

    Database parameters:

      - Auditing (AUDIT_TRAIL to DB and AUDIT_SYS_OPERATIONS to YES)
      - DB_BLOCK_CHECKSUM and DB_BLOCK_CHECKING (both to FULL)
      - GLOBAL_NAMES to true
      - OPEN_LINKS to 0 (did not expect them to be used in this environment)
      - Character set: AL32UTF8

    Profiles: I created an amended password verify function that used the APEX dictionary table (FLOWS_030000.wwv_flow_dictionary$) as an extra check to prevent simple passwords.

    Developer logins:

      CREATE PROFILE profile_dev LIMIT
        FAILED_LOGIN_ATTEMPTS 8
        PASSWORD_LIFE_TIME 32
        PASSWORD_REUSE_TIME 366
        PASSWORD_REUSE_MAX 12
        PASSWORD_LOCK_TIME 6
        PASSWORD_GRACE_TIME 8
        PASSWORD_VERIFY_FUNCTION verify_function_11g
        SESSIONS_PER_USER unlimited
        CPU_PER_SESSION unlimited
        CPU_PER_CALL unlimited
        PRIVATE_SGA unlimited
        CONNECT_TIME 1080
        IDLE_TIME 180
        LOGICAL_READS_PER_SESSION unlimited
        LOGICAL_READS_PER_CALL unlimited;

    Application login:

      CREATE PROFILE profile_app LIMIT
        FAILED_LOGIN_ATTEMPTS 3
        PASSWORD_LIFE_TIME 999
        PASSWORD_REUSE_TIME 999
        PASSWORD_REUSE_MAX 1
        PASSWORD_LOCK_TIME 999
        PASSWORD_GRACE_TIME 999
        PASSWORD_VERIFY_FUNCTION verify_function_11g
        SESSIONS_PER_USER unlimited
        CPU_PER_SESSION unlimited
        CPU_PER_CALL unlimited
        PRIVATE_SGA unlimited
        CONNECT_TIME unlimited
        IDLE_TIME unlimited
        LOGICAL_READS_PER_SESSION unlimited
        LOGICAL_READS_PER_CALL unlimited;

    Privileges for a standard schema owner account:

      CREATE CLUSTER, CREATE TYPE, CREATE TABLE, CREATE VIEW, CREATE PROCEDURE,
      CREATE JOB, CREATE MATERIALIZED VIEW, CREATE SEQUENCE, CREATE SYNONYM, CREATE TRIGGER

    Read the article

  • Sporadic crash of master-slave MySQL replication process

    - by obarshay
    Hello, I was wondering if someone has experienced this and can perhaps provide some insight into this issue. We have a plain-vanilla MySQL master-slave replication setup. The tables are MyISAM and the master can get quite read/write active. We use the slave instance to perform full daily backups in order to avoid bringing down the master server. The backup process does the following:

      STOP SLAVE SQL_THREAD
      mysqlhotcopy all tables
      START SLAVE SQL_THREAD

    Every once in a while (once a month or so) the replication breaks with varying error messages indicating a corrupt query or log file. Here's one that happened last night:

      mysql> show slave status \G
      *************************** 1. row ***************************
                   Slave_IO_State: Waiting for master to send event
                      Master_Host: server8.propreports.com
                      Master_User: nexus8
                      Master_Port: 3306
                    Connect_Retry: 60
                  Master_Log_File: bin.000045
              Read_Master_Log_Pos: 581644327
                   Relay_Log_File: relay.000086
                    Relay_Log_Pos: 94131
            Relay_Master_Log_File: bin.000045
                 Slave_IO_Running: Yes
                Slave_SQL_Running: No
                  Replicate_Do_DB:
              Replicate_Ignore_DB:
               Replicate_Do_Table:
           Replicate_Ignore_Table:
          Replicate_Wild_Do_Table:
      Replicate_Wild_Ignore_Table:
                       Last_Errno: 1064
                       Last_Error: Error 'You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '138070603'£' at line 1' on query. Default database: 'wtsdb'. Query: 'UPDATE fill SET clearing_fee='0.0E id='138070603'£'
                     Skip_Counter: 0
              Exec_Master_Log_Pos: 4164743
                  Relay_Log_Space: 577574251
                  Until_Condition: None
                   Until_Log_File:
                    Until_Log_Pos: 0
               Master_SSL_Allowed: No
               Master_SSL_CA_File:
               Master_SSL_CA_Path:
                  Master_SSL_Cert:
                Master_SSL_Cipher:
                   Master_SSL_Key:
            Seconds_Behind_Master: NULL

    I follow this procedure to recover from the error above and resume replication:

      stop slave;
      change master to MASTER_LOG_POS = 4164743, MASTER_LOG_FILE = 'bin.000045';
      start slave;

    We have multiple servers set up this way and they all sporadically stop replicating with a similar error. Any advice on how to resolve this would be greatly appreciated.
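
    The garbage character in the failing UPDATE suggests the statement got mangled somewhere between the master's binary log and the slave's relay log. A hedged first diagnostic is to compare the event at the failing position in both logs: if the master's copy is clean, the corruption happened on the wire or in the relay log; if it is already corrupt on the master, the binlog itself took the damage. File names and positions below come from the status output; run from the respective data directories.

      # on the master
      mysqlbinlog --start-position=4164743 bin.000045 | head -n 80
      # on the slave
      mysqlbinlog relay.000086 | grep -n "138070603"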

    Read the article

  • VMware Workstation reboot 32-bit host when starting 64-bit guest

    - by Powerman
    I'm trying to start a 64-bit guest (Mac OS X and Windows 7) on a 32-bit host (Hardened Gentoo Linux, kernel 2.6.28-hardened-r9) using VMware Workstation (6.5.3.185404 and 7.0.1.227600). If VT-x is disabled in the BIOS, VMware refuses to start the 64-bit guest (as expected). If VT-x is enabled in the BIOS, VMware starts the guest without complaining, but then, in about a second (I suppose as soon as the guest tries to switch to 64-bit), my host reboots (actually, it's more like a reset: the normal reboot procedure is skipped and the BIOS POST starts immediately). My hardware is a Core 2 Duo 6600 on an ASUS P5B-Deluxe with the latest stable BIOS, 1101. I power-cycled the system, then enabled Vanderpool in the BIOS. My CPU doesn't support Trusted Execution Technology, and there is no way to disable it in the BIOS. I've rebooted several times after that, sometimes with a power cycle, and made sure Vanderpool is still enabled in the BIOS. I've also run the VMware-guest64check-5.5.0-18463 tool, and it reports "This host is capable of running a 64-bit guest operating system under this VMware product." About a year ago I tried disabling the hardened options in the kernel to make sure this isn't caused by PaX/GrSecurity, but that didn't help. I have not checked 32-bit guests with VT-x enabled yet, but without VT-x they work fine. ASUS provides "beta" BIOS updates, but according to their descriptions these updates don't fix this issue, so I'm not sure it's a good idea to try them. My best guess now is a motherboard/BIOS bug. Any ideas?

    Read the article

  • Unable to get prosody running on Ubuntu 10.04 (lua issues)

    - by user90374
    All of this is on Ubuntu 10.04.4 LTS Server. I installed Lua 5.1.4 following this procedure: http://ubuntuforums.org/showthread.php?t=1874860. I then installed prosody with the following command (after downloading the package):

      sudo dpkg -i prosody_0.8.2-1_i386.deb

    After installation, I get the errors below. I have tried using luarocks and sudo apt-get install to fix them as suggested, but it still keeps showing me these errors.

      Selecting previously deselected package prosody.
      (Reading database ... 59416 files and directories currently installed.)
      Unpacking prosody (from prosody_0.8.2-1_i386.deb) ...
      Setting up prosody (0.8.2-1) ...
       * Starting Prosody XMPP Server prosody
      **************
      Prosody was unable to find luaexpat
      This package can be obtained in the following ways:
        Source: www[dot]keplerproject[dot]org/luaexpat/
        Debian/Ubuntu: sudo apt-get install liblua5.1-expat0
        luarocks: luarocks install luaexpat
      luaexpat is required for Prosody to run, so we will now exit.
      More help can be found on our website, at prosody[dot]im/doc/depends
      ************
      Prosody was unable to find luasocket
      This package can be obtained in the following ways:
        Source: www[dot]tecgraf[dot]puc-rio[dot]br/~diego/professional/luasocket/
        Debian/Ubuntu: sudo apt-get install liblua5.1-socket2
        luarocks: luarocks install luasocket
      luasocket is required for Prosody to run, so we will now exit.
      More help can be found on our website, at prosody[dot]im/doc/depends
      ************
      Prosody was unable to find LuaSec
      This package can be obtained in the following ways:
        Source: www[dot]inf[dot]puc-rio[dot]br/~brunoos/luasec/
        Debian/Ubuntu: prosody[dot]im/download/start#debian_and_ubuntu
        luarocks: luarocks install luasec
      SSL/TLS support will not be available
      More help can be found on our website, at prosody[dot]im/doc/depends
      [fail]
      invoke-rc.d: initscript prosody, action "start" failed.
      dpkg: error processing prosody (--install):
       subprocess installed post-installation script returned error exit status 1
      Processing triggers for man-db ...
      Processing triggers for ureadahead ...
      Errors were encountered while processing:
       prosody

    Thanks a lot for your patience and answers.
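
    A hedged sketch of the usual way out: install the Lua bindings from the Ubuntu archive (the package names come straight from Prosody's own error text), then let dpkg finish the half-configured prosody package. One caveat worth checking, since Lua 5.1.4 was also built from source here: prosody and the liblua5.1-* packages expect the distribution's Lua 5.1, so a source install shadowing it under /usr/local can still confuse things.

      sudo apt-get install liblua5.1-expat0 liblua5.1-socket2
      sudo luarocks install luasec    # optional TLS support; a liblua5.1-sec package may also exist
      sudo dpkg --configure -a        # re-run the failed prosody postinst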

    Read the article

  • multiple ssh aliases select the wrong user when forwarding

    - by Chris Beck
    I'm following the dual identity procedure for Bitbucket: I have two Bitbucket accounts, ccmcbeck and chrisbeck. The former is personal, the latter is work. On my local Mac, I have this in my ~/.ssh/config:

      Host *.work.com
          User chris
          ForwardAgent yes
          IdentityFile ~/.ssh/work_dsa

      Host bitbucket-personal
          HostName bitbucket.org
          User ccmcbeck
          ForwardAgent no
          IdentityFile ~/.ssh/bitbucket_ccmcbeck_rsa

      Host bitbucket-work
          HostName bitbucket.org
          User chrisbeck
          ForwardAgent no
          IdentityFile ~/.ssh/bitbucket_chrisbeck_rsa

    On my local Mac, ssh -T shows all is good:

      $ ssh -T git@bitbucket-personal
      logged in as ccmcbeck.
      $ ssh -T git@bitbucket-work
      logged in as chrisbeck.

    The local Mac's ssh version is OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011. When I ssh foo.work.com to my Linux box, I get:

      $ ssh-add -l
      1024 ... /Users/chris/.ssh/work_dsa (DSA)
      2048 ... /Users/chris/.ssh/bitbucket_ccmcbeck_rsa (RSA)
      2048 ... /Users/chris/.ssh/bitbucket_chrisbeck_rsa (RSA)

    On foo.work.com, I also have this in my ~/.ssh/config:

      Host bitbucket-personal
          HostName bitbucket.org
          User ccmcbeck
          ForwardAgent no
          IdentityFile ~/.ssh/bitbucket_ccmcbeck_rsa

      Host bitbucket-work
          HostName bitbucket.org
          User chrisbeck
          ForwardAgent no
          IdentityFile ~/.ssh/bitbucket_chrisbeck_rsa

    However, on foo.work.com, ssh -T references the wrong user for git@bitbucket-work:

      $ ssh -T git@bitbucket-personal
      logged in as ccmcbeck.
      $ ssh -T git@bitbucket-work
      logged in as ccmcbeck.

    On foo.work.com, the ssh version is OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008. Why is my configuration causing foo.work.com to reference the wrong user?
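
    A hedged explanation and sketch: Bitbucket decides which account you are by the first key it accepts, not by the User line, and on foo.work.com both Bitbucket keys arrive via the forwarded agent, so ssh offers the agent's keys in order and the ccmcbeck key wins for both aliases. The local Mac doesn't hit this because each alias loads its own IdentityFile directly. Adding IdentitiesOnly yes (supported well before OpenSSH 4.3) restricts each alias to its configured IdentityFile; the corresponding key (or at least its .pub file) must exist at that path on foo.work.com for this to work.

      Host bitbucket-work
          HostName bitbucket.org
          User chrisbeck
          ForwardAgent no
          IdentitiesOnly yes
          IdentityFile ~/.ssh/bitbucket_chrisbeck_rsa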

    Read the article

  • Hubs/switches taking out switches?

    - by Bart Silverstrim
    Here's the issue: we have a network with a lot of Cisco switches. Someone plugged a hub into the network, and then we started seeing "weird" behavior: errors in communication between clients and servers, network timeouts, dropped network connections, etc. Somehow that hub (or SOHO switch) was particularly freaking out our Cisco 3700 series switches. Disconnect the hub or NETGEAR-type SOHO switch and things settled down again. We're in the process of setting up a centralized logging server for SNMP and management, etc., to see if we can trap errors or narrow down when someone does this sort of thing without our knowledge, because things seem to work, for the most part, without issue; we just get freaky oddball incidents on particular switches that don't seem to have any explanation until we find out someone decided to take matters into their own hands and expand the available ports in their room. Without getting into procedure changes or locking down ports or "in our organization they'd be fired" answers, can someone explain why adding a small switch or hub, not necessarily a SOHO router (even a dumb hub apparently caused the 3700s to freak out), sending DHCP requests out will cause issues? The boss said it's because the Ciscos are getting confused when that rogue hub/switch bridges multiple MACs/IPs onto one port on the Cisco switches and they just choke on that, but I thought their forwarding tables should be able to handle multiple machines coming in on one port. Has anyone seen that behavior before and have a clearer explanation of what's happening? I'd like to know for future troubleshooting and better understanding than just waving my hand and saying "you just can't".

    Read the article

  • Google-Bot fell in love with my 404-page

    - by 32bitfloat
    Every day my access log looks something like this:

      66.249.78.140 - - [21/Oct/2013:14:37:00 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
      66.249.78.140 - - [21/Oct/2013:14:37:01 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
      66.249.78.140 - - [21/Oct/2013:14:37:01 +0200] "GET /vuqffxiyupdh.html HTTP/1.1" 404 1189 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

    or this:

      66.249.78.140 - - [20/Oct/2013:09:25:29 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
      66.249.75.62 - - [20/Oct/2013:09:25:30 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
      66.249.78.140 - - [20/Oct/2013:09:25:30 +0200] "GET /zjtrtxnsh.html HTTP/1.1" 404 1186 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

    The bot requests robots.txt twice and after that tries to access a file (zjtrtxnsh.html, vuqffxiyupdh.html, ...) which cannot exist and must return a 404 error. The same procedure every day; only the nonexistent HTML filename changes. The content of my robots.txt:

      User-agent: *
      Disallow: /backend
      Sitemap: http://mysitesname.de/sitemap.xml

    The sitemap.xml is readable and valid, so there seems to be no reason why the bot should want to force a 404 error. How should I interpret this behaviour? Does it point to a mistake I've made, or should I ignore it?

    Read the article

  • How to backup a NAS drive to a USB drive?

    - by Tim Murphy
    How would you back up 600+ GB of data on a NAS (network-attached storage) drive to a USB external drive? The NAS drive does not contain mission-critical data; nonetheless I wish to make weekly copies of it just in case. The NAS drive is almost exclusively used as an archive dump and is rarely updated. However, the backup strategy used must have a simple restore procedure so I can confidently say the data now on the NAS drive is exactly how it was at the time of backup. I did try xcopy, but it seemed like it would take many, many hours and eventually crashed with insufficient memory. http://www.ctunion.com/node/114 suggests I would need to use xxcopy instead due to folder/file name lengths. My concern with xcopy/xxcopy is the length of time it takes; I'm hoping something else is faster. The NAS drive is a D-Link DNS-313 with a 1 TB drive installed, connected to the router via Ethernet cable. The USB drive is a Seagate 1 TB, and it can be connected to Windows Vista (preferred) or Windows 7 PCs. Both PCs are usually connected wirelessly; however, an Ethernet cable can be used during backup to speed up the process.
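
    One hedged option that ships with both Vista and Windows 7 is robocopy in mirror mode: it copes with long paths better than xcopy, only copies what changed on subsequent weekly runs, and restoring is the same command with source and destination swapped. The share name, drive letter and log path below are placeholders; /FFT relaxes timestamp comparison (which helps against NAS filesystems) and /MIR deletes files from the USB copy that were removed from the NAS.

      robocopy \\DNS-313\Volume_1 F:\nas-backup /MIR /FFT /R:2 /W:5 /LOG:C:\logs\nas-backup.log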

    Read the article

  • CPU / Affinity mask problem in SQL 2005

    - by Robert Moir
    Hi folks, I'm having a problem with a SQL Server that was virtualised. The CPU affinity mask was set on the physical host for some reason, and now the advanced options are not available. So I need to reconfigure the CPU affinity mask settings - which are advanced options, so this is blocked because of the affinity mask issue. I've tried doing this from the SQL Server in single-user command-line mode; I've googled and found lots of people with similar problems but no real solution. I'm stumped. Any ideas? Sample commands and output from Query Analyzer below:

      sp_configure 'show advanced options', 1
      GO
      RECONFIGURE WITH OVERRIDE
      GO
      sp_configure 'affinity mask', 0x00000000
      GO
      RECONFIGURE
      GO

      Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
      Msg 5832, Level 16, State 1, Line 1
      The affinity mask specified does not match the CPU mask on this system.
      Msg 15123, Level 16, State 1, Procedure sp_configure, Line 51
      The configuration option 'affinity mask' does not exist, or it may be an advanced option.
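
    A hedged sketch of the usual way around the chicken-and-egg: start the instance in minimal configuration mode (-f), which ignores the stored affinity setting and drops into single-user mode, clear the value, then restart normally. The service and instance names below assume a default instance; use MSSQL$NAME and -S .\NAME for a named one.

      net stop MSSQLSERVER
      net start MSSQLSERVER /f
      sqlcmd -E -S . -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE WITH OVERRIDE; EXEC sp_configure 'affinity mask', 0; RECONFIGURE WITH OVERRIDE;"
      net stop MSSQLSERVER
      net start MSSQLSERVER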

    Read the article

  • Swap static public IPs without creating DNS conflicts?

    - by Jakobud
    Our ISP is Comcast and we have 5 static public IPs from them that we use for various services, including customers connecting to our network, VPN, web, DNS, etc... We need more IP addresses from Comcast. Unfortunately, Comcast is telling us that they can't just simply give us 5 more addresses. They only give static IP addresses in blocks of 1, 5 or 13. In order for us to get more static IPs, they have to take away our current 5 static IPs and give us 13 new ones. How do we make this transition without causing all sorts of DNS chaos? We run public DNS servers, so we can make the DNS changes ourselves, but it will take some time obviously for those DNS changes to propagate throughout the internet. Are there any easy ways to make this transition? Like create some type of fallback DNS entry or something? Surely there must be some sort of procedure for this kind of thing. The Comcast support guy was useless.
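
    Since you run the authoritative DNS yourselves, the main lever is record TTLs: drop them well ahead of the cutover so resolvers stop caching the old addresses for long, then raise them again afterwards. A hedged BIND-style fragment, with zone names and addresses as placeholders, lowered a few days before Comcast swaps the block:

      $TTL 300                              ; zone default dropped from e.g. 86400 to 5 minutes
      www   300  IN  A  203.0.113.10        ; old address, edited to the new one at cutover
      vpn   300  IN  A  203.0.113.11
      ns1   300  IN  A  203.0.113.12        ; remember NS/glue records at the registrar too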

    Read the article
