Search Results

Search found 21063 results on 843 pages for 'stochastic process'.

Page 652/843 | < Previous Page | 648 649 650 651 652 653 654 655 656 657 658 659  | Next Page >

  • recursive grep started at / hangs

    - by Martin
    I have used the following grep search pattern on multiple platforms:

        grep -r -I -D skip 'string_to_match' /

    For example on FreeBSD 8.0, FreeBSD 6.4 and Debian 6.0 (squeeze). The command does a recursive search starting from the root directory, assumes that binary files do not contain 'string_to_match', and skips devices, sockets and named pipes. FreeBSD 8.0 and FreeBSD 6.4 use GNU grep version 2.5.1 and Debian 6.0 uses GNU grep version 2.6.3.

    On FreeBSD 6.4, the last information printed to stderr was "grep: /dev/cuad0: Device busy". After this grep just idles, as according to "top -m io -o total" the I/O usage of grep is nonexistent. The same behavior occurs under FreeBSD 8.0, but the last information sent to stderr is "grep: /tmp/.wine-0: Permission denied" on my installation. In the case of Debian, the last output to stderr is "grep: /proc/sysrq-trigger: Input/output error". If I check the I/O usage of the grep process under Debian, it looks like this:

        root@Debian:~# iotop -bp 22439
        Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
          TID  PRIO  USER  DISK READ  DISK WRITE  SWAPIN   IO     COMMAND
        22439  be/4  root  0.00 B/s   0.00 B/s    0.00 %   0.00 % grep -r -I -D skip 10.10.10.99 /
        (the same zero-I/O block repeats until interrupted)
        ^Croot@Debian:~#

    What might cause this? Is there a way to view which file grep is currently processing in case lsof is not present? I'm able to use lsof under Debian, and it looks like the problematic file name there is "0xc6b2c230 file struct, ty=0, op=0xc0d34120"; I'm not sure what this is. I'm not able to use lsof or fstat under FreeBSD.

    PS: I know I could use the find utility, but that is not the question.
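
    On the Debian box there is a way to see what grep is holding open without lsof, via procfs (a sketch, assuming the grep PID 22439 from the iotop output above; this is Linux-specific, so it won't help on the FreeBSD hosts):

        # Every file/device the process currently has open shows up as a symlink here
        ls -l /proc/22439/fd
        # The kernel function the process is sleeping in; a blocking read on a device
        # or /proc pseudo-file would be consistent with the zero I/O seen in iotop
        cat /proc/22439/wchan; echo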

    Read the article

  • Problem with setting up RAID5 on FreeNAS, please help

    - by Benjy23
    Hey guys, I've been running FreeNAS for a while now. Hardware is a 1.8GHz Celeron with 1GB RAM; the SATA card is a VIA (not sure of the model, it has 2 ports) and I have 6 x 1.5TB HDs. All ran OK while the drives were used individually, no RAID. I'm now trying to create a RAID5 with my 6 HDs, as software RAID. Is it normal for it to take roughly up to 2 weeks just to build the array? Sorry, I'm very new to implementing RAID and googling doesn't tell much other than that it takes a long time. Also the RAID building process seems to fail many times, dropping to degraded. I suspect it's because 4 of my HDs are connected to my motherboard and the other 2 are connected to my SATA card - what's your take? I'm considering 2 options now: either get an 8-port SATA card and attach all the HDs to it, or get an 8-port RAID controller card, which is probably going to be more pricey. Also, how do you access hardware RAID through FreeNAS? I like how FreeNAS emails you should your hard drive fail, so can this be done as well with hardware RAID? Thanks in advance guys.

    Read the article

  • Conditionally permitting HTTP-only requests to Tomcat?

    - by Mike
    I have 2 versions of a system: (1) a Tomcat webserver, and (2) an nginx reverse-proxy sitting in front of a Tomcat webserver. In version 2, nginx only ever talks to Tomcat over HTTP. A user can configure the system so that only HTTPS requests are allowed. If the user does this in version 1, the XML configuration files for Tomcat take care of it; in version 2, nginx takes care of it. The problem is this: I cannot force a user to update their Tomcat XML config files when they upgrade from version 1 to version 2 (it will be recommended that they do so) because the upgrade is done as part of a larger process. This means that if they upgrade and don't update the Tomcat config, an HTTPS request will arrive at nginx, which will proxy it over HTTP to Tomcat, which will reject the request because it is not HTTPS. So I can't force an update to the Tomcat XML, and I have to use HTTP between nginx and Tomcat. Any ideas? Is there some way I can affect how Tomcat reads its config in version 2 so that it ignores the HTTPS-only section?
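
    For reference, the usual pattern for this layout (a sketch of the standard approach, not the poster's configuration) is to have nginx forward the original scheme and have Tomcat trust it, rather than enforcing HTTPS in both places:

        # nginx side (hypothetical upstream address)
        location / {
            proxy_pass         http://127.0.0.1:8080;
            proxy_set_header   Host $host;
            proxy_set_header   X-Forwarded-Proto $scheme;
        }

        <!-- Tomcat server.xml: treat forwarded HTTPS requests as secure -->
        <Valve className="org.apache.catalina.valves.RemoteIpValve"
               protocolHeader="X-Forwarded-Proto" />

    This still requires touching the Tomcat configuration once during the upgrade, which is exactly what cannot be forced here, so it is only a reference point for what the end state usually looks like.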

    Read the article

  • Problem: IIS 7.0 locking files during upload

    - by viscious
    I am running Server 2008 with IIS 7.0 and the FTP add-on for IIS 7.0. I have the FTP site configured and mostly working, except that about 70% of the time when transferring a file the upload will hang forever. If I disconnect the FTP client, reconnect and try to upload the same file, I get an error on the client saying the file is locked, and I have to restart the FTP service to clear the lock. I fired up Process Explorer and did a search on the file in question, and sure enough the FTP service has a lock on the file; it takes around 20 minutes (sometimes longer) to release the lock on its own. This lock stays around even after I disconnect the client. Like I said, this only happens about 70% of the time; the other 30% of the time the transfer goes through just fine. Things I have verified:
    - Not a firewall issue. The server is using passive port range 8000-9000, which is allowed on the firewall.
    - Not a NAT issue; the server has a globally routable IP address.
    - All recommended/required updates are installed.
    I have 5 other servers in a very similar configuration and this is the only one I have problems with.

    Read the article

  • Ubuntu 64bit Xen DomU Issues after upgrade from Karmic to Lucid

    - by Shoaibi
    I was upgrading my servers today and it all went fine except the last machine, which has the following issues:

    1. No login prompt on console [resolved using http://www.ndchost.com/wiki/server-administration/upgrade-ubuntu-pre-10.04#post-1004-upgradefinal-steps]. The console stops after:

        Done.
        Begin: Mounting root file system... ...
        Begin: Running /scripts/local-top ... Done.
        [ 0.545705] blkfront: xvda: barriers enabled
        [ 0.546949] xvda: xvda1
        [ 0.549961] blkfront: xvde: barriers enabled
        [ 0.550619] xvde: xvde1 xvde2
        Begin: Running /scripts/local-premount ... Done.
        [ 0.870385] kjournald starting. Commit interval 5 seconds
        [ 0.870449] EXT3-fs: mounted filesystem with ordered data mode.
        Begin: Running /scripts/local-bottom ... Done. Done.
        Begin: Running /scripts/init-bottom ... Done.

    I also tried pressing ENTER and CTRL+C many times, no use.

    2. [Resolved: /tmp was mounted as noexec, changing that fixed it.] I get errors when I try to re-install udev in single user mode:

        Unpacking replacement udev ...
        Processing triggers for ureadahead ...
        ureadahead will be reprofiled on next reboot
        Processing triggers for man-db ...
        Setting up udev (151-12.1) ...
        udev start/running, process 1003
        Removing `local diversion of /sbin/udevadm to /sbin/udevadm.upgrade'
        update-initramfs: deferring update (trigger activated)
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-2.6.32-25-server
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/fixrtc: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/ntfs_3g: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/resume: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/nfs-top/udev: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/panic/console_setup: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/all_generic_ide: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/blacklist: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/udev: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-bottom/udev: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-bottom/ntfs_3g: Permission denied
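
    For anyone hitting the same errors, the resolution noted above (/tmp mounted noexec, so mkinitramfs could not execute its helper scripts) comes down to something like this sketch:

        mount -o remount,exec /tmp    # allow the mkinitramfs helper scripts to execute
        dpkg --configure -a           # finish configuring the half-installed udev package
        update-initramfs -u           # regenerate the initrd that failed above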

    Read the article

  • Sharepoint Central Administration stuck / high CPU usage

    - by johnnyb10
    I'm using WSS 3 and I recently added a new web application to my SharePoint Server. After adding it, I wasn't able to open the Central Administration site. I also noticed that there was a w3wp.exe error (Event ID 1000) in the Event Viewer. The situation now is that the w3wp.exe process is hovering around 50% CPU usage continuously. I installed a program called IIS Peek, and it shows continuous GET requests on the Central Administration site; this happens even if I stop the Central Administration site in IIS. The IP address identified in the GET requests is my workstation, which is what I used to attempt to access Central Administration after I created the new web application. Can someone explain what's going on and how I might fix it? It seems as if my computer tried to access Central Administration and then it hung, but the page requests that were happening at the time are somehow continuing over and over again. So my two problems are the inability to access Central Administration and the CPU usage of w3wp.exe, which I'm assuming are two symptoms of the same problem. I'd like to know if there's anything I can do besides restarting IIS, because we have clients accessing other sites on this server. Thanks.

    Read the article

  • Scheduling Automatic Backups for Virtual Private Web Server running CENTOS 6.3 and WHM

    - by Oliver Farrell
    I'm pretty new to administering my own VPS, but thus far am finding it quite a compelling experience. There's something quite refreshing about having complete control over everything it does. One thing that I would like to look at is a suitable backup solution (a few times a day). My current setup is as follows: I'm running a CentOS 6.3 VPS with a single 25GB hard drive, solely for the purpose of hosting websites, and I'm using WHM & cPanel for administering them. I now plan on adding an additional hard disk and hooking it up to my VPS. What I'm not sure about is how I get the two disks talking and get the backup process going. I'm not a seasoned SSH user, so I don't really know where to start. I'm hosting with Serverlove (one of the best hosting providers I've used) and am provided with a number of unique identifiers for each hard disk, so I imagine these may play a part in linking them together. I appreciate that this is a little vague (I'm clutching at straws) but any assistance is very much appreciated.
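
    As a rough starting point (a minimal sketch; the device name, mount point and paths are assumptions, not taken from the Serverlove setup), the new disk can be formatted, mounted and fed by a cron job:

        lsblk                                        # identify the new disk; /dev/sdb below is only an example
        mkfs.ext4 /dev/sdb                           # format it
        mkdir -p /backup && mount /dev/sdb /backup
        echo '/dev/sdb  /backup  ext4  defaults  0 2' >> /etc/fstab

        # e.g. /etc/cron.d/site-backup - sync the web roots every 6 hours
        # 0 */6 * * * root rsync -a --delete /home/ /backup/home/

    WHM also has its own scheduled backup feature that can be pointed at the new mount once it is in /etc/fstab.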

    Read the article

  • chkconfig creating service symlinks with the wrong order

    - by Robert
    On RHEL 6.3, I have a system service that should be starting after postgresql and httpd (order 64 and 85, respectively), but chkconfig always places it at order 50. I tried an experiment on a CentOS 6.0 virtual machine to make sure I understood the LSB stanza syntax. I created /etc/init.d/foo, owner root, permissions 755, with this text:

        ### BEGIN INIT INFO
        # Provides:          foo
        # Required-Start:    postgresql httpd
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Description:       Foo init script
        ### END INIT INFO

    And then ran chkconfig --add foo. Result: /etc/rc5.d/S86foo is created, as expected. (The other runlevels are also as expected.) I repeated the exact same experiment on the RHEL machine, and it created /etc/rc5.d/S50foo instead. I can't see anything different between the two that would lead to different results. Both machines have postgresql and httpd starting at the same orders and runlevels. Any thoughts? I could just use "# chkconfig: 2345 86 50", or manually rename the service symlinks to the correct order, but I'm trying to document an install process for later users, and I want to know how to do it right and understand why it's not working as expected.
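
    A quick way to see what chkconfig computed, and whether the dependencies expose usable LSB headers (a sketch using the test service above):

        chkconfig --del foo && chkconfig --add foo   # force the symlinks to be recalculated
        ls /etc/rc5.d/ | grep foo                    # shows which S?? order was actually chosen
        grep -E 'Provides|Required-Start' /etc/init.d/postgresql /etc/init.d/httpd

    If postgresql or httpd on the RHEL box lack "Provides:" lines in their LSB blocks, the dependency in Required-Start may not be resolvable, and a fallback to the default order would match the S50 result.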

    Read the article

  • Can I use IIS to do ActiveDirectory single-sign-on for another website?

    - by brofield
    I'm trying to add Active Directory single-sign-on support to an existing SOAP server. The server can be configured to accept a trusted reverse-proxy and use the X-Remote-User HTTP header for the authenticated user. I want to configure IIS to be the trusted proxy for this service, so that it handles all of the Active Directory authentication for the SOAP server. Basically IIS would have to accept HTTP connections on port X and URL Y, do all the authentication, and then proxy the connection to a different server (most likely the same X and Y). Unfortunately, I have no knowledge of IIS or AD (so I am trying my best to learn enough to build this solution), so please be gentle. I would assume that this is not an uncommon scenario, so is there some easy way to do this? Is this sort of functionality built into IIS, or do I need to build some sort of IIS proxy program myself? Is there a better option for getting the authentication done and the X-Remote-User HTTP header set than requiring IIS?

    Update: For example, what I am trying to create is:

        [CLIENT]          [IIS]        [AD]      [SOAP-SERVER]
        1. |---------------->|
        2. |<--------------->|<---------->|
        3. |                 |--------------------------->|
        4. |                 |<---------------------------|
        5. |<----------------|

        1. POST to http://example.com/foo/bar.cgi
        2. Client is not authenticated, so do authentication
        3. Once validated, send request to server (X-Remote-User: {userid})
        4. Process request, send response
        5. Forward response to client

    I need to know how to configure IIS to do the automatic authentication of the user using AD, and then to proxy the request to the actual server, sending the userid in the X-Remote-User HTTP header.

    Read the article

  • nmap installation issue

    - by daasf
    Vanilla CentOS with latest updates, installed gcc, and after ./configure:

        Configuration complete.
        Type make (or gmake on some *BSD machines) to compile.
        [root@winxp nmap-5.51]# make
        Makefile:375: makefile.dep: No such file or directory
        g++ -MM -I./liblua -I./libdnet-stripped/include -I./libpcre -I./libpcap -I./nbase -I./nsock/include -DHAVE_CONFIG_H -DNMAP_NAME=\"Nmap\" -DNMAP_URL=\"http://nmap.org\" -DNMAP_PLATFORM=\"x86_64-unknown-linux-gnu\" -DNMAPDATADIR=\"/usr/local/share/nmap\" -D_FORTIFY_SOURCE=2 main.cc nmap.cc targets.cc tcpip.cc nmap_error.cc utils.cc idle_scan.cc osscan.cc osscan2.cc output.cc payload.cc scan_engine.cc timing.cc charpool.cc services.cc protocols.cc nmap_rpc.cc portlist.cc NmapOps.cc TargetGroup.cc Target.cc FingerPrintResults.cc service_scan.cc NmapOutputTable.cc MACLookup.cc nmap_tty.cc nmap_dns.cc traceroute.cc portreasons.cc xml.cc nse_main.cc nse_utility.cc nse_nsock.cc nse_dnet.cc nse_fs.cc nse_nmaplib.cc nse_debug.cc nse_pcrelib.cc nse_binlib.cc nse_bit.cc > makefile.dep
        /bin/sh: g++: command not found
        make: *** [makefile.dep] Error 127
        [root@winxp nmap-5.51]# yum install g++ -y
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * addons: mirror.ash.fastserv.com
         * base: centos.mirror.choopa.net
         * extras: mirror.trouble-free.net
         * updates: mirror.nexcess.net
        Setting up Install Process
        No package g++ available.
        Nothing to do
        [root@winxp nmap-5.51]#
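
    For what it's worth, on CentOS/RHEL the C++ compiler ships in a package named gcc-c++ rather than g++, so the missing piece is likely just (a sketch, assuming the stock repositories):

        yum install -y gcc-c++   # provides /usr/bin/g++
        make                     # re-run the nmap build once g++ exists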

    Read the article

  • Defrag starting when not scheduled. What is triggering the defrag?

    - by leroyclark
    I have a fileserver that is starting a defrag around 2:00 PM every day. This is killing performance, as it runs for hours because this is a file server and has multiple drives. All scheduled tasks regarding defrag have been disabled. I have verified that it is accessing the data drives (using Sysinternals tools). The reason I might have thought otherwise is that the event log has multiple entries regarding defragging a DB file related to shadow copies. These drives do take shadow copy snapshots multiple times per day, but their times don't coincide with the defrag task. There is nothing in the event logs regarding defrag except the entries noted above in relation to shadow copies. I'm out of ideas looking for what is starting these jobs. One possibility is that the drives are not being defragmented, but are being analyzed to determine if they need to be defragmented. I manually ran an analysis, and the CPU usage (by dfrgntfs.exe) seems to be similar to what I'm seeing every day while the defrag process is running. However, I've found no setting that schedules this analysis.
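
    One way to enumerate everything that could be launching dfrgntfs.exe around 2:00 PM, including hidden system tasks, is the built-in task lister (a sketch; nothing here is specific to this server):

        rem Dump every scheduled task with its last/next run time for review
        schtasks /query /fo LIST /v > C:\tasks.txt

    The shadow-copy snapshots are themselves driven by scheduled tasks (named after the volume GUID), so they will show up in the same listing and their times can be compared against the 2:00 PM defrag.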

    Read the article

  • Why does this preseed for gitolite fail?

    - by troutwine
    I'm installing gitolite on a Debian Squeeze box with the following preseed:

        gitolite gitolite/gituser  string git
        gitolite gitolite/adminkey string ssh-rsa AAAAB3ECT
        gitolite gitolite/gitdir   string /var/lib/git

    On installation:

        # debconf-set-selections /var/cache/debconf/gitolite.preseed
        # apt-get install gitolite
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Suggested packages:
          git-daemon-run gitweb
        The following NEW packages will be installed:
          gitolite
        0 upgraded, 1 newly installed, 0 to remove and 26 not upgraded.
        Need to get 0 B/114 kB of archives.
        After this operation, 348 kB of additional disk space will be used.
        Preconfiguring packages ...
        Selecting previously deselected package gitolite.
        (Reading database ... 24715 files and directories currently installed.)
        Unpacking gitolite (from .../gitolite_1.5.4-2+squeeze1_all.deb) ...
        Setting up gitolite (1.5.4-2+squeeze1) ...
        adduser: The home dir must be an absolute path.
        dpkg: error processing gitolite (--configure):
         subprocess installed post-installation script returned error exit status 1
        configured to not write apport reports
        Errors were encountered while processing:
         gitolite
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Why? The preseed was extracted from a manually configured installation, per here, and exists without issue on another machine.
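
    To confirm what debconf actually recorded before retrying (a small sketch; needs the debconf-utils package):

        apt-get install -y debconf-utils
        debconf-get-selections | grep ^gitolite    # verify gituser/adminkey/gitdir landed as expected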

    Read the article

  • PC won't PXE boot to WDS/MDT with Dell Optiplex 755

    - by Moman10
    I am trying to set up a basic MDT solution. I have set one up in the past at a previous job and it worked flawlessly; however, here I'm running into a problem and am having no luck getting around it. I've installed Windows Server 2012 and MDT 2013, along with adding the WDS role. I haven't configured much outside of the defaults for WDS, basically just set PXE response to respond to all clients (and unchecked admin approval). This machine does not run a DHCP server. I looked at the DHCP scope of our DHCP server; it shows options 66/67 checked, and the server name of the WDS server is in there as well. I didn't add this, but I assume it was put on during the install process (I believe I had to manually make some adjustments at my old job for this). The PC I have is a Dell Optiplex 755. I have enabled the onboard NIC w/PXE boot option in the BIOS and attempted to boot. I get a "TFTP...." error but nothing offering out a DHCP address like I'm used to. In my previous job it pretty much worked right out of the box. I've verified that PortFast is enabled on the port and I've tried a couple of different PCs (but both are the same model, the only model I have to work with). No matter what, I get the same error. The subnet the PC is in is a different subnet than where the WDS server is sitting, but there are IP helper statements on the switch and the PCs can get regular DHCP addresses just fine from the DHCP server; they just don't seem to get offered a PXE boot option. I don't know if the problem is a configuration issue with the server or the PC itself, but after a few days of Googling I'm running out of ideas. Does anyone have a good idea of something it may be?

    Read the article

  • How does it hurt to use Linux (Ubuntu) as a guest OS for all my tasks?

    - by sauparna
    I have a machine running Windows, where the disk has two partitions, C (50 GB) and D (250 GB). I do research in Information Retrieval and need to work with a large corpus (more than 50 GB) and in Linux. So if I want to install Linux on the existing system, keeping the Windows installation intact, will it be fine to run it in a virtual machine (say, QEMU, VMware, etc.)? An alternative is using Wubi; in that case the Linux installation has to be on drive C. Then, if I keep a small Linux installation (say 5GB) on C, and my corpus on D (mounted in Linux), how will it affect the performance of my programs, which would be accessing the mounted Windows drive D? Is it feasible to use Linux this way? Which of the above is better, if at all they are a way out?

    Note: Since my post in July 2010, I have been using and have tried several ways of maintaining a disk image that I can mount in Linux. I had a 100GB qcow2 disk and a 100GB raw disk, both formatted to an EXT3 file system. I was mounting and connecting to the qcow2 disk using qemu-nbd. The problem was that every now and then, the connection to the disk would get lost and the running programs would throw disk I/O errors. The raw disk would mount and work fine as a loop-mounted device, but when writing data to it, the mount.ntfs program would hog the CPU and the process would take an enormous amount of time. I was in fact running make on a piece of software located on this raw disk, and after a point in time make was waiting while mount.ntfs showed 100% CPU usage.
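
    For reference, the qcow2-over-nbd arrangement described in the note usually looks like this (a sketch; the image path, device number and mount point are placeholders):

        modprobe nbd max_part=8                      # load the network block device driver
        qemu-nbd --connect=/dev/nbd0 corpus.qcow2    # expose the qcow2 image as a block device
        mount /dev/nbd0 /mnt/corpus                  # image was formatted ext3 directly, no partition table
        # ...work on the corpus...
        umount /mnt/corpus
        qemu-nbd --disconnect /dev/nbd0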

    Read the article

  • VMWare converter performance

    - by bellocarico
    Hello, I have a question about my test lab. It's more to understand the concept than to apply it in production. I have an ESXi host with a few Linux/Windows VMs configured, and I'd like to use VMware Converter to create backups. To speed up the process I decided to create a Windows VM on the same ESXi host, where I've installed Windows 7 and VMware Converter. The host has a gigabit card but it's currently connected to a 100Mb full-duplex port; Windows 7 sees a 1Gb card connected. When I do the backup using VMware Converter I specify the host IP as source and destination, so I thought the copy would be faster than going across the network from my laptop. Well, to cut a long story short, I get dreadful performance (4Mb/sec). I'm a bit confused by this because, despite the fact that the host uplink is running at 100Mb, communication between VMs and the host shouldn't (correct me if I'm wrong) have any such limitation. I did tweak Windows 7 to optimise network performance but I got just a little improvement; I still need 4 hours to back up a 50GB (thin) VM. Additionally I wanted to ask: would jumbo frames help here? I know that jumbo frames have to be supported end to end, and the network switch where the host is currently connected doesn't support them, but I was wondering:
    1) Does the ESXi host support jumbo frames at all?
    2) Can I enable them somehow?
    3) If I do so, I guess bulk transfers between VMs and the host would improve, but would this affect the communication going through the real switch, as this doesn't do jumbo?
    Thanks for reading
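
    On the ESXi side (questions 1 and 2), jumbo frames are configured per vSwitch from the CLI in the 4.x era, roughly (a sketch; the vSwitch name is a placeholder):

        esxcfg-vswitch -m 9000 vSwitch0   # raise the vSwitch MTU to 9000
        esxcfg-vswitch -l                 # list vSwitches and confirm the MTU

    Traffic between a VM and the host on the same vSwitch never touches the physical switch, so the 100Mb uplink and its lack of jumbo support only matter for frames that actually leave the host.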

    Read the article

  • Ubuntu 9.10 installer doesn't recognize the hard drive

    - by dan
    I downloaded Ubuntu 9.10 x86_64 and am trying to install it on a fairly modern system with a Gigabyte GA-MA770-UD3 motherboard. Ubuntu 9.04 installed fine and still will when I stick that disc in, but 9.10 doesn't see my hard drive (Western Digital 250GB). If I boot from the disc, I can install gparted and it does recognize the drive, but when I try to start the install process from the live disc, Ubuntu again doesn't recognize the hard drive. I checked /var/log/messages and see this:

        Nov 12 17:28:08 ubuntu activate-dmraid: Serial ATA RAID disk(s) detected. If this was bad, boot with 'nodmraid'.
        Nov 12 17:28:08 ubuntu activate-dmraid: Enabling dmraid support
        Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.
        Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.
        Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.
        Nov 12 17:28:08 ubuntu activate-dmraid: no raid sets and with names: "nvidia_ciiajheb-0"
        Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.

    I checked my BIOS: SATA is enabled and is set to IDE mode, so there shouldn't be software RAID, but nonetheless I added nodmraid to the boot line and tried again. It still doesn't recognize the drive. I checked /var/log/messages again and now see this:

        Nov 12 17:49:38 ubuntu activate-dmraid: Serial ATA RAID disk(s) detected. If this was bad, boot with 'nodmraid'.
        Nov 12 17:49:38 ubuntu activate-dmraid: Enabling dmraid support
        Nov 12 17:49:38 ubuntu activate-dmraid: WARNING: dmraid disabled by boot option
        Nov 12 17:49:38 ubuntu activate-dmraid: WARNING: dmraid disabled by boot option

    Any ideas on things to try? I've tried all of the various BIOS settings for SATA: IDE, RAID, etc. Nothing seems to work.
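
    If the controller really is in plain IDE mode, the "nvidia_ciiajheb-0" name suggests stale fakeRAID metadata left on the disk from a previous configuration; it can be inspected and, if confirmed stale, erased from the live CD (a sketch - erasing metadata is destructive if the disk genuinely belongs to a RAID set):

        sudo dmraid -r              # list any RAID metadata blocks found on the disks
        sudo dmraid -rE /dev/sda    # erase the stale metadata from that disk (asks for confirmation)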

    Read the article

  • Virtual Fileserver

    - by Sergei
    Hi, we are planning to move our production servers to the datacenter and virtualize the remaining servers in the process. The datacenter will have HP blades with vSphere on top. Currently we are using a Celerra NS20 as a fileserver. Since the datacenter is using HP kit and an EVA 4400 as SAN, we cannot have the Celerra there, as EMC support for Celerra does not apply to a non-EMC array. I have searched for possible options and one of them was to have an HP NAS blade X3800sb instead of the Celerra. However this seems like overkill to me: we are only using the Celerra for about 100 users and 50 servers, and I think having an X3800sb could be a waste of resources. The other option would be to have a virtual fileserver as part of the VMware environment in the datacenter. We only need CIFS to be provided. The only option I can think of is Windows Storage Server. We had a bad experience with Windows servers used as fileservers (memory leaks for one thing) in the past, and this was one of the reasons we moved to the Celerra. What are the other options? We need something as reliable as the Celerra with as many options as possible. For example, the Celerra has per-folder quotas, deduplication, dynamic volume allocation, automatic failover, VTLU, and replication. Also we would need to replicate NAS data to the failover site. We could use block-level replication, SAN-to-SAN, but this would mean wasted bandwidth, as we need only a subset of folders to be replicated. We used CA XSoft for Windows servers in the past, and the Celerra has an option for Celerra replication. Thank you very much in advance; please ask me if I missed any details!

    Read the article

  • MySQL Master - Master Broken

    - by Recc
    I've inherited a MySQL master-master system. I've noticed the second master (let's call it "slave" from now on, as it's running on a 'slave' machine) stopped getting its DBs updated. I saw that:

        Master:
        Slave_IO_Running: Yes
        Slave_SQL_Running: Yes

        Slave: (error truncated)
        Slave_IO_Running: Yes
        Slave_SQL_Running: No
        Last_Errno: 1062
        Last_Error: Error 'Duplicate entry '3' for key 'PRIMARY'' on [...]

    I don't know what caused this, considering we can't get a duplicate there. What's important is to resume normal operations. Right now I've run STOP SLAVE on the master and STOP SLAVE on the slave, because I saw that if I change records on the slave the changes do get propagated to the master, which is in active use. How do I:
    - Force-sync EVERYTHING from master to slave without affecting data on the master?
    - Then hopefully have the slave pick up replication as usual?

    UPDATE: OK, I tried deleting all tables on the slave; then it complained in that error section that the table doesn't exist. So I made a no-data dump of the master and made sure I have only empty tables on the secondary (slave). I ran START SLAVE on the slave, BUT now it's complaining about ALTER TABLE statements, for instance:

        Last_Errno: 1060
        Last_Error: Error 'Duplicate column name [...] Query: 'ALTER TABLE [...]

    How do I skip the ALTER statements? I just want to replicate the data and be done with it; my tables already have the latest changes, and now it's complaining about changes made after replication ceased weeks ago. How do I reset the log or something?

    OUTSTANDING: Why would this start happening? The "secondary" is propagating to "primary", but "primary" is not propagating to "secondary", and any fixes I tried left it in the same Yes-Yes / Yes-No state with the same Last_Error. I think around that time the server was taken off the network; could that confuse MySQL in some way?
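
    Two standard ways out, sketched below (the statement-skip keeps any existing drift, the dump rebuild removes it; file names are placeholders):

        # Option 1: skip the single failing statement on the broken side and resume
        mysql -e "STOP SLAVE; SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1; START SLAVE;"

        # Option 2: rebuild the secondary from a consistent dump of the primary
        mysqldump --all-databases --master-data=2 --single-transaction > primary.sql   # on the primary
        mysql < primary.sql                                                            # on the secondary
        # then point the secondary at the binlog coordinates recorded near the top of primary.sql:
        # CHANGE MASTER TO MASTER_LOG_FILE='...', MASTER_LOG_POS=...; START SLAVE;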

    Read the article

  • Computer won't go into standby

    - by Robert
    When I select Start -> Turn Off Computer -> Standby, the "Turn off computer" window closes, and then nothing else happens. I can start new applications, and Windows acts as if I never selected standby; I ran it for several hours after that. If I have a TV program scheduled to record when I select standby, I get a window (from the Pinnacle TV software) asking if I'm sure, since there are programs scheduled to record - and the computer just keeps running after I select yes, never going into standby. I added that detail as it shows the standby process is starting. [This problem also happens if a TV program is not scheduled, so the scheduler task is not running/in memory. This problem happens regardless of whether I'm watching TV. This problem happens regardless of whether Media Center is running (it usually isn't; I'm using Pinnacle to watch TV).] I looked at "How to troubleshoot hibernation and standby issues in Windows XP" http://support.microsoft.com/kb/907477 - ACPI is enabled, and "standby" is an option in "Power Options Properties", so it appears to be set up correctly. Windows XP SP3 Media Center Edition, all current updates installed.

    Read the article

  • Oracle 10g for Windows does not start up on system boot

    - by Mike Dimmick
    We have an Oracle 10g Enterprise Edition installation (10.2.0.1.0) on a Windows Server 2003 virtual machine. It was initially created with Virtual Server 2005 R2 SP1 but has now been migrated to Windows Server 2008 Hyper-V. The services start on system boot, but the instance does not start up. This problem was actually occurring on Virtual Server after a migration from one server to another, but I managed to fix it then with:

        oradim -edit -sid ORCL -startmode auto

    However, this now has no effect. oradim.log (in %OracleHome%\database\oradim.log) says:

        Thu Jun 10 14:14:48 2010
        C:\oracle\product\10.2.0\db_3\bin\oradim.exe -startup -sid orcl -usrpwd * -log oradim.log -nocheck 0
        Thu Jun 10 14:14:48 2010
        ORA-12560: TNS:protocol adapter error

    sqlnet.log in the same folder has:

        Fatal NI connect error 12560, connecting to:
        (DESCRIPTION=(ADDRESS=(PROTOCOL=BEQ)(PROGRAM=oracle)(ARGV0=oracleorcl)(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))'))(CONNECT_DATA=(SID=orcl)(CID=(PROGRAM=C:\oracle\product\10.2.0\db_3\bin\oradim.exe)(HOST=ORACLE-VM)(USER=SYSTEM))))

        VERSION INFORMATION:
        TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
        Oracle Bequeath NT Protocol Adapter for 32-bit Windows: Version 10.2.0.1.0 - Production
        Time: 10-JUN-2010 14:14:48
        Tracing not turned on.
        Tns error struct:
        ns main err code: 12560
        TNS-12560: TNS:protocol adapter error
        ns secondary err code: 0
        nt main err code: 530
        TNS-00530: Protocol adapter error
        nt secondary err code: 2
        nt OS err code: 0

    The ORA_ORCL_AUTOSTART registry value is set to TRUE, so it should be auto-starting - and you can see that it's trying to. The problem also occurs when stopping and restarting the OracleServiceORCL service. I've enabled SQL*Net tracing, which shows:

        [10-JUN-2010 15:09:33.919] snlpcss: entry
        [10-JUN-2010 15:09:34.419] snlpcss: Unable to spawn Oracle oracle (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) orcl, error 2.
        [10-JUN-2010 15:09:34.419] snlpcall: exit

    On a hunch that error 2 is Windows error 2 (file not found), I tried restarting the service with Process Monitor watching oradim.exe, but this appears to delay things just enough that it always works. Right now I have a horrible hack where I've created a Scheduled Task to run "oradim -startup -sid ORCL" when the Administrator account logs on, and set the VM to auto-logon. I'd still like to work out why it's not working.

    Read the article

  • Certain Japanese characters aren't displayed properly

    - by Nisto
    On the following site: http://www.nciku.com/search/radical the first 2 characters on the second row of the "Step 2" table aren't displayed properly. All other characters look fine. I tried re-installing the Asian fonts via the checkboxes regarding Asian fonts in the "Regional and Language Options" control panel applet. I have tried removing every single font from the Fonts folder (some were of course not possible to remove) and re-installing them all again. I did this by:
    - Running cmd
    - Closing down the explorer process
    - In cmd, using the command DEL /F /S /Q * in the Fonts folder
    - Putting in my XP SP3 Retail disc
    - In cmd, using expand -r *.tt_ in the I386 folder on the XP disc (and any other font file, in the I386\LANG folder)
    I also tried installing this pack from Microsoft, but this solved nothing either. I even tried running my browser (Firefox) through AppLocale, and changing character encoding - again, this does not help. I've also tried viewing the page in Internet Explorer. What could be wrong? I have checked my Fonts folder to make sure that every single font available on the XP disc is available in WINDOWS\Fonts. What shows in the first square on the second row - I can't really tell what it's supposed to look like (but it's not the proper character); the second square shows a rectangular symbol containing a hex code. I've been in this situation before, and it has been when I've been missing fonts. But how could I possibly be missing a necessary font? Shouldn't it be provided in the Asian "font packages"? I've talked to some other users who have viewed the page, and they had no problems displaying those characters on the second row, even though they're only using the fonts provided on the Windows installation disc.
    Windows XP Professional Service Pack 3 (x86 - with latest updates), Firefox 3.6.15

    Read the article

  • What is causing sudden freezing during running real-time program?

    - by Trevor Boyd Smith
    So I run a highly intensive (CPU/GPU) real-time program. During normal execution, suddenly everything freezes for 1-4 seconds. I opened Process Explorer in the background to help gain insight and maybe identify something. When I align the CPU and GPU graphs in time, there are 4 distinct drops in both; usage goes from some sort of positive CPU/GPU load to almost zero, and these drops align with when the real-time program suddenly freezes. How do I find what is causing these sudden drops? NOTE: When you put your mouse over the graph it tells you the time, accurate to the second, for where your cursor is. Maybe this mouse-over feature could be helpful in some way (e.g. what if you had a log of all processes every 100ms).

    EDIT: The real-time program is a video game, so I can't watch some sort of instrumentation while the game is running. I need a solution that lets you look back in time somehow to see what was happening when the slowdown occurred.

    EDIT: Recording data vs. using a real-time monitor: the Windows Performance Recorder is for some reason not recording what I expect it to record, so I switched to using perfmon and then opening its Resource Monitor. To view the real-time monitor, I set the game to spectate and put it in windowed mode so that I can watch the Resource Monitor display. Now that I can get semi-real-time data (only once per second... how do you get more than once per second?) I started looking at the various real-time readouts. Getting to the cause: I noticed a strong correlation between high disk I/O and low CPU usage (which also shows up as in-game freezing). How do you use Resource Monitor to find out who is doing all this offending disk I/O?

    Read the article

  • Java VM problem in OpenVZ

    - by Ginnun
    Hi, I bought a VPS for hosting my Java needs, but I can't run Java on it. Everything about Java is correctly installed, but when I try to run java ("java -version" for example) I get this error:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    I don't think this is a Java-centered problem; it's out of memory for sure. I contacted the VPS admin, but he says everything is fine: "you have 2GB RAM, expandable to 4GB!" I did a bit of searching on the subject; here is my BEANS file (numbers converted to human-readable form using a script). By the way, do JVM heap memory allocations count against kmemsize or privvmpages? How much RAM does that configuration allow me to allocate with the JVM for a single process?

        resource      held      maxheld   barrier     limit       failcnt
        kmemsize      2.25 mb   2.35 mb   13.71 mb    14.10 mb    0
        lockedpages   0         0         1024.00 kb  1024.00 kb  0
        privvmpages   20.54 mb  21.33 mb  256.00 mb   272.00 mb   156
        shmpages      5.00 mb   5.00 mb   84.00 mb    84.00 mb    0
        numproc       13        14        240         240         0
        physpages     9.36 mb   9.45 mb   0           MAX_ULONG   0
        vmguarpages   0         0         132.00 mb   MAX_ULONG   0
        oomguarpages  9.36 mb   9.45 mb   MAX_ULONG   MAX_ULONG   0
        numtcpsock    3         3         360         360         0
        numflock      3         3         188         206         0
        numpty        2         2         16          16          0
        numsiginfo    0         1         256         256         0
        tcpsndbuf     69.17 kb  69.17 kb  1.64 mb     2.58 mb     0
        tcprcvbuf     48.00 kb  48.00 kb  1.64 mb     2.58 mb     0
        othersockbuf  6.80 kb   6.80 kb   1.07 mb     2.00 mb     0
        dgramrcvbuf   0.00 kb   0.00 kb   256.00 kb   256.00 kb   0
        numothersock  9         10        360         360         0
        dcachesize    0.00 kb   0.00 kb   3.25 mb     3.46 mb     0
        numfile       704       746       9312        9312        0
        numiptent     10        10        128         128         0

    Thanks in advance!
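
    The JVM's up-front address-space reservation is typically what counts against privvmpages (note the 156 failures on that row), so the usual test is to force a small heap and stack so the reservation fits under the 256 MB barrier (a sketch; the sizes are guesses, not tuned values):

        java -Xmx64m -Xms16m -Xss256k -version   # cap heap and per-thread stack; if this prints a version, the limit is the culprit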

    Read the article

  • Using psftp to upload and download files

    - by macha
    Hello, I am trying to upload and download files between my desktop and my server. After some searching I downloaded psftp. I used to use FileZilla, but I cannot install it on my desktop for a few reasons, and psftp (like PuTTY) is just an executable for file transfer. After going through this link http://www.math.tamu.edu/~mpilant/math696/psftp.html I understood that put and get are the two commands I would use to upload and download files. Now when I log on to the server and say "get filename", it throws back an error: "local: unable to open filename". I tried that with other files too, and I end up getting the same error. The psftp.exe file is on my desktop. The process that I am using is:
    - double-click the .exe file
    - open "servername"
    - cd /path/where/files/are
    - get "filename"
    And I get this error: "local: unable to open filename". Am I making a mistake, or is it a problem with this executable?
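
    For reference, a typical psftp session looks like the sketch below (paths are placeholders). The "local: unable to open filename" message means psftp could not open or create the file in its local working directory, so checking and setting that directory with lpwd/lcd is the first thing to try:

        psftp user@servername
        psftp> lpwd                        # show the local directory psftp will read from / write to
        psftp> lcd C:\Users\me\Downloads   # switch to a local directory you can write to
        psftp> cd /path/where/files/are
        psftp> get filename
        psftp> put report.txt              # uploads read from the same local directory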

    Read the article

  • Problems with Windows 7 restore

    - by Chris Lively
    My WD Raptor 150 failed with a nice clicking noise, so I picked up a VelociRaptor 300 and popped it in. I had Windows set to do a full system backup nightly, so I figured the recovery ought to go easily. Well, it isn't. It is currently stuck on a screen that says: "Windows is restoring your computer from the system image. This might take from a few minutes to a few hours". Below that is a rather large progress meter with maybe the first block filled in, and below that is a message that says "Restoring disk (C:)..." It's been that way for over an hour. The first time around, I gave up after 2 hours. I then booted into the system recovery options, went to a command prompt and ran a chkdsk on the new drive. It showed several file inconsistencies and not much else. I ran a chkdsk /f on it and tried again, which is where I'm at now. I can't see why the restore process should take this long. Any ideas?

    UPDATE: After 10 hours, it's still on "Restoring disk (C:)" and the progress meter is at roughly 5%. I'm guessing at the 5%, as there isn't an actual number or anything else I can look at showing what it's actually doing. The backup contains roughly 120GB of data. How slow is Windows restore?

    Read the article
