Search Results

Search found 20313 results on 813 pages for 'batch size'.


  • Long 'pause' after copying large files on Windows Server 2008

    - by Ian
    I have a mystery regarding pauses after file copies on Windows Server 2008 (and other releases). When copying large files, such as VHDs, to locally attached USB disks I often see a long pause after the copy has reached 100%. As an example: robocopying VHD files. The bytes read/written count matches the VHD file size and robocopy shows 100%, but it then pauses for several minutes. If I do nothing it will eventually continue, but I have to wait for quite some time - about as long as it took to get to 100%. The bytes read/bytes written counters for robocopy do not change during the pause. My first thought was that the AV had to scan the file, but I'm looking at a machine right now which doesn't have any AV installed and this still occurs, so that can't be it. No other processes show their read/write byte counts going up. The behaviour is the same if I use the copy command or xcopy. I've seen this on other systems but have never worked out the cause. Anyone got any suggestions as to what might be going on?
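
    A pause like this is often just Windows flushing a large backlog of cached writes out to the slow USB disk after the copy itself reports 100%. One hedged thing to try - assuming a robocopy build new enough to have the /J (unbuffered I/O) switch; check robocopy /? first - is to keep the multi-gigabyte files out of the system cache altogether (the paths below are placeholders):

        robocopy D:\vhds F:\backup *.vhd /J /R:1 /W:1

    If the pause disappears, the write cache was the culprit; switching the USB disk's policy to "Quick removal" in Device Manager is another way to test the same theory.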

    Read the article

  • MSSQL Timing out, a couple of questions...

    - by user29000
    Hi. About once a week, my MSSQL server times out, or rather the machine runs out of RAM. This morning it reached 3.9 GB of the available 4 GB, with MSSQL taking up 2.5 GB. I'm concerned that I've not configured SQL to release memory as it should, so I ran sp_who2 while the timeouts were occurring to see what processes were running. If I could post the CSV data file I would; however, there were 85 processes in total, mostly related to the Full Text service. FT Gatherer - about 35 of these running under the 'sa' account against the master database with a status of either sleeping or background; many were dependent on other processes. Is that normal? MySite database - there were only 5 processes for the one active site/database and all were either sleeping or suspended, but their lastBatch dates were set to 1/12/2020. Is that normal? The database is only about 20 MB in size and the traffic levels are very low, so I'm thinking of limiting the amount of RAM SQL has access to (from unlimited to maybe 2 GB). Any thoughts / advice would be appreciated. Many thanks, Ben
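
    On the last point - capping SQL Server's memory rather than leaving it unlimited - a minimal sketch, assuming a default instance reachable with Windows authentication and using the 2 GB figure mentioned above, is to set 'max server memory' from sqlcmd:

        sqlcmd -S localhost -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 2048; RECONFIGURE;"

    That leaves headroom for the OS and the Full Text service; sleeping FT Gatherer sessions under 'sa' are ordinarily just idle background workers rather than the memory consumer.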

    Read the article

  • SQL 2008 R2 replication error: The process could not connect to Distributor

    - by Lance Lefebure
    I have two servers running SQL 2008 R2 Standard, each with an instance named "MAIN". I have a small test database on my primary server (one table, 13 rows) that I want to replicate to a second server as a proof-of-concept for some larger databases that I want to replicate. I set up the primary server to be a publisher and distributor, and set the database to do transactional replication. I copied the data to the second server via a backup/restore, not via a snapshot (which I'll have to do with the larger databases due to database size and limited bandwidth). I followed the instructions here: http://gnawgnu.blogspot.com/2009/11/sql-2008-transactional-replication-and.html Now, on the subscriber, I go under Replication / Local Subscriptions / right click / Properties on my subscription to the DB. The last synchronization shows a status of: "The process could not connect to Distributor 'PRIMARYSERVER\MAIN'." Data IS replicating from the primary to the secondary; any record I add on the primary shows up on the secondary server within seconds. Is the Distributor part of the snapshot system that I'm not using, or is it part of the transactional replication? Thanks, Lance

    Read the article

  • What is the best cloud technology to use for MongoDB/GridFS database servers

    - by Nerian
    We are going to launch a service that will require between 1 and 2 GB of file storage per paid user. I am going to use GridFS for storing files. GridFS is a module for MongoDB that allows large files to be stored in the database. I am pondering the different options for hosting the database, but since I am inexperienced at deployment and this is my first time with MongoDB, I need your experience. Criteria: I want to spend my time developing my core business, that is, my own application. I am a Ruby on Rails developer and I do not like to mess with server configuration, so I would like a fully managed hosting solution - but I would like to know about any other option if you think it is worth it. It should be able to scale, cloud style, pay as you go. The lower the price, the better. So far I know of these services: https://mongohq.com/pricing https://mongomachine.com/pricing https://mongolab.com/about/pricing/ http://cloudcontrol.com/add-ons/mongodb/ They seem to be OK for common needs, that is, no file storage. But I am going to use GridFS, so size matters, and these services seem to scale quite poorly in price. MongoHQ: the largest plan's max storage is 20 GB, which seems like very little for GridFS. MongoMachine: flat price of $2.5 per GB; I didn't find the limit. Seems like a good price compared to the others. MongoLab: 3.984 GB max, which I don't think I will hit, so perfect; $8 per GB, quite costly. CloudControl: the largest plan is 20 GB; the custom service starts at 250€ plus some unspecified charge per GB. What is your experience with these services? Any downtimes? Other possibilities? Edit: added the meaning of GridFS.

    Read the article

  • ixgbe driver: Limit the max number of cores

    - by Shellex Wai
    I have a Linux workstation with 48 cores that runs the ixgbe driver for a fibre interface, and I have to test a project named Netmap on it. Netmap is a high-performance network framework for high-speed interfaces which has recently been ported to Linux. For various reasons I must try it on this machine, so I compiled it and followed the instructions to run the test programs, but it doesn't work. I checked dmesg and it says:
        [10399.085736] 794.159015 netmap_set_ringid [486] ringid o4o1 set to all 48 HW RINGS
        [10399.085742] 794.282011 netmap_obj_malloc [220] netmap_if request size 816 too large
    I asked the author of netmap for help. He told me that I have too many cores in the machine and it should work if I tell ixgbe to use fewer cores (2 to 4 is OK). I am not familiar with driver development and I don't know how to limit the number of rings by passing arguments to the ixgbe driver, so I checked the spec on Intel's website but found nothing about it. So I've come here for more help. Thank you.
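
    A sketch of the two usual knobs for cutting the ring/queue count down to what netmap wants (the interface name and counts are placeholders; which knob exists depends on the driver build, so verify with ethtool -l and modinfo ixgbe first):

        # at runtime, if the in-tree driver supports channel configuration
        ethtool -L eth1 combined 4

        # or at load time with Intel's out-of-tree ixgbe, which exposes an RSS module parameter
        rmmod ixgbe
        modprobe ixgbe RSS=4,4

    Either way the aim is the one the netmap author described: dmesg should then report 2-4 HW rings instead of 48, keeping the netmap_if allocation within its limit.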

    Read the article

  • Can't start Windows 7 after cloning HDD

    - by Paul
    Brief description: cloned HDD1 -> HDD2.
        HDD1 partition 1 boots
        HDD1 partition 2 boots
        HDD2 partition 1 boots
        HDD2 partition 2 doesn't boot Windows, but is bootable in general
    Now verbosely: in all cases the computer is the same. I have two Windows 7 installations on HDD1 - both boot fine, and I choose between them using the standard Windows 7 boot loader menu. Technically there are 4 partitions: a 100 MB boot loader partition (active), Windows 7 copy 1 (25 GB), Windows 7 copy 2 (150 GB) and a working partition. All are primary. In the past few days I cloned the whole HDD1 to HDD2 of the same size (but 2.5-inch form factor), as-is, using MiniTool Partition Wizard. Everything was copied, all files are accessible, there are no faults in the file system structure, and even the boot loader wasn't damaged, so I didn't have to repair it. But I can only boot the first installation of Windows 7 (it boots without issues). When I choose the second installation, I immediately get a completely black screen without any text, cursor or other data, and the HDD isn't accessed after that. This black screen is sensitive to Ctrl-Alt-Delete, which reboots the computer. I did some experimenting: I installed Windows 7 to that partition - it booted fine. Then I renamed "Windows" to "Windows.old" and copied the Windows directory from HDD1 as it was, using Far Manager, and got the same trouble - a black screen. (Of course I performed the renaming and copying from the other copy of Windows.) So it seems the problem is inside this installation of Windows, somewhere in its files.

    Read the article

  • Could hybrid SSD + HDD be made with fixed internal partitions?

    - by Aaron
    I was pretty close to getting Seagate's Momentus XT but have been scared off by the many problems reported on forums and feedback sites, especially in MacBook Pros, so I'm waiting for mk 2 with some extra flash and better reliability, which I'm assuming will come out this year. What would suit me better, though, is a 32+500 hybrid drive where I have more control over what is on the flash and what is on the disk. So there would be 2 physical partitions within the one 2.5" hard drive enclosure which use different media internally (32 GB for core files and 500 GB for data and multimedia). The partitions would be locked so they can't be changed. - Or even better, the disk driver just makes them appear as two disks to the OS that share the same bus... Perhaps it's OK if the BIOS just sees the first drive until the OS is loaded. Is either of these technically possible? It would obviously be difficult to market outside of the enthusiast market. The SSD memory modules can be pretty small, right, so they could even make them a card that plugs into a secondary connection on the enclosure. That would be good for computer builders as well as for upgrading and recoverability. Then future operating systems could recognise these system SSD drives and automatically install the OS + swap files on them, while placing document libraries on the larger data drive. While in the longer term the HDD will probably disappear, there will always be a trade-off between speed, storage size and expense.

    Read the article

  • Exchange 2010 sends out spam.

    - by Magnus Gladh
    Hi. I have an Exchange Server 2010 that uses a smart host to send out mail. A day ago the owner of the smart host contacted us and told us that we are sending out spam. I have tried different open relay tests on the net and all of them come back saying that this server is secured and cannot be used as a relay server. But I can see in my Exchange Queue Viewer that new messages keep coming in. Here is an example of how one looks:
        Identity: mailserver\3874\13128
        Subject: Olevererbart:: [email protected] Pfizer -75% now
        Internet Message ID: <[email protected]>
        From Address: <>
        Status: Ready
        Size (KB): 6
        Message Source Name: DSN
        Source IP: 255.255.255.255
        SCL: -1
        Date Received: 2010-12-09 21:46:22
        Expiration Time: 2010-12-11 21:46:22
        Last Error:
        Queue ID: mailserver\3874
        Recipients: [email protected]
    How can I secure our Exchange server further, to stop this from happening? Could I have a virus that hooks into our Exchange server and sends mail through it? As I can see, the From Address is always <>, so is there some way I can stop sending mails that don't have a from address? Please help.
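
    Messages with an empty sender (<>) and a Message Source Name of DSN are the server's own non-delivery reports, so this looks like backscatter rather than a compromised mailbox: spammers send mail to non-existent recipients here with forged return addresses, the server accepts it, and the bounce carries the payload out through the smart host. A hedged sketch of the usual mitigations from the Exchange Management Shell (recipient filtering needs the anti-spam agents installed on the Hub Transport role; verify both cmdlets against your setup before relying on them):

        # reject mail for unknown recipients during the SMTP session instead of accepting and bouncing it
        Set-RecipientFilterConfig -RecipientValidationEnabled $true

        # optionally, stop sending NDRs to the internet at all
        Set-RemoteDomain -Identity Default -NDREnabled $false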

    Read the article

  • GUI interfaces to ATI card behave weirdly out of the box and after updates.

    - by jdk
    My Lenovo W500 came with an ATI Mobility FireGL V5700, and both the Catalyst Control Center software and the Vista display manager show four monitors. What's really annoying is the behaviour. My two active displays (laptop display + my external monitor) are always #3 and #4 respectively, which doesn't make sense. This is out of the box. Additionally, dragging & dropping is jumpy, and displays #1 and #2 (always inactive because they don't exist to the software) often prevent me from dragging #3 and #4 to the rightmost side. They also auto-snap to weird positions, and certain sensible positions, such as one directly above the other, are not possible. The exact same annoyances are present when using the Windows display manager too. In other words the interface is crap and I'm looking for a fix that isn't wishing I had gone with nVidia instead. I've updated the drivers and Catalyst Control Centre, and have the latest Windows and AMD/ATI updates. Any thoughts?
        Graphics Software
        Driver Packaging Version: 8.563.2.1-090401a-079160C-Lenovo
        Provider: ATI Technologies Inc.
        2D Driver Version: 7.01.01.849
        2D Driver File Path: /REGISTRY/MACHINE/SYSTEM/ControlSet001/Control/Class/{4D36E968-E325-11CE-BFC1-08002BE10318}/0001
        Direct3D Version: 7.14.10.0630
        OpenGL Version: 6.14.10.8306
        Catalyst® Control Center Version: 2009.0401.1328.22301
        Graphics Hardware (Primary Adapter)
        Graphics Card Manufacturer: Powered by ATI
        Graphics Chipset: ATI Mobility FireGL V5700
        Device ID: 9591
        Vendor: 1002
        Subsystem ID: 2126
        Subsystem Vendor ID: 17AA
        Graphics Bus Capability: PCI Express 2.0
        Maximum Bus Setting: PCI Express 2.0 x16
        BIOS Version: 010.088.000.021
        BIOS Part Number: BK-ATI VER010.088.000.021.034663
        BIOS Date: 2009/09/30
        Memory Size: 512 MB
        Memory Type: DDR3
        Core Clock: 600 MHz
        Memory Clock: 700 MHz

    Read the article

  • Winamp has slow/skipping video playback on Windows 7

    - by Roy Rico
    Hello, I have Windows 7 x64 (7600, 90-day trial version) and Winamp 5.6 installed. When I play a video in Windows Media Player, the video plays smoothly; however, when I play a video in Winamp, the video is mostly OK when played back at the original size (but not completely), and if I play it back in fullscreen, the playback gets really slow. The video's audio track plays just fine. I have a Dell XPS 420 computer (8 GB of RAM) with an Nvidia GeForce 8800 GTS 512 video card. I've updated to the latest drivers. I have the default Windows 7 codecs, and the CCCP codec pack, which used to be all I needed under Windows XP to play all types of videos. Are the codecs needed for Windows 7 the same? What's going on? UPDATE: As suggested, I turned off Aero and Winamp ran just fine again. So I just have to wait for Winamp to be rewritten to work with the way Vista/Windows 7 runs? UPDATE 2: Winamp has updated their player, and it works great with Windows 7 now.

    Read the article

  • Completed downloads freeze Windows

    - by Ben Hooper
    The Issue
    Shortly after a file download via Google Chrome for Windows completes, the download will get stuck on "0 seconds left" and all other programs (except Google Chrome, for some reason, though browsing will not work) completely freeze into Windows' infamous "Not Responding" state, affecting Explorer particularly badly. Eventually the programs recover themselves, but they recover significantly faster if you cancel the file download, relative to how quickly you react. Performing the exact same operation immediately after cancelling the download usually works without issue. This issue occurs with any file type (.ZIP, .MSI, .MSG, .PNG, .URL, etc.) of any size from any source (Dropbox, SourceForge, Imgur, even tiny and locally-generated BLObs created by my own Chrome extension, etc.) to any location.
    Potential Causes
    As this issue is so inconsistent, I haven't been able to prove whether it is Chrome-specific or caused by my system or my Chrome configuration, but it's happening on both my work and home PCs. I originally suspected that it was being caused by security software scanning completed downloads for threats, but I'm not as confident in that theory anymore, as the issue persisted even after changing my security software from ESET NOD32 and Malwarebytes Anti-Malware Pro to ESET Endpoint to Microsoft Security Essentials.
    System Information (of both PCs)
    Windows version: 7 Service Pack 1 64-bit
    Google Chrome version: 30.0.1599.101 (but this has been happening for a long time)
    Screenshots

    Read the article

  • Spreadsheet application that can handle big data on OS X

    - by Peter
    I've been working with Excel for quite a while for some statistical analysis that I do regularly. The size of the data that I'm working with has gotten much larger as of late, however. The layout of the databases in question is quite simple: usually just three columns, holding a UNIX timestamp, an EST value and a proprietary numeric value, plus finally an average of the rows whose timestamps are within +/- 1000 of that row's timestamp (a little AVERAGEIFS() formula). That formula and the EST conversion are the only formulas in the sheet. I'm beginning to work with files of 500,000+ rows, and running the average formula down the entire column takes forever. The end result is the production of print-worthy graphs. I'm looking for either a UNIX command-line utility or a separate spreadsheet/database application that can handle this amount of data without melting my CPU or making me wait an hour. Is there anything out there? TL;DR: A simple Excel sheet with over half a million rows is getting too slow to work with. OS X alternatives?
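
    If the sheet can be exported to CSV sorted by timestamp, that +/- 1000-second average is a one-pass job for awk and will chew through half a million rows in seconds. A sketch under those assumptions (column 1 = UNIX timestamp, column 3 = the value being averaged; adjust both to the real layout):

        awk -F',' '
          { n++; ts[n] = $1; val[n] = $3; line[n] = $0 }
          END {
            lo = 1; hi = 1; sum = 0; cnt = 0
            for (i = 1; i <= n; i++) {
              # grow the window on the right: rows within +1000s of row i
              while (hi <= n && ts[hi] <= ts[i] + 1000) { sum += val[hi]; cnt++; hi++ }
              # shrink it on the left: drop rows more than 1000s older than row i
              while (ts[lo] < ts[i] - 1000) { sum -= val[lo]; cnt--; lo++ }
              printf "%s,%.6f\n", line[i], sum / cnt
            }
          }' data.csv > data_with_avg.csv

    The sliding window only ever moves forward, so the whole run is O(n) and the memory cost is just the three arrays.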

    Read the article

  • Can applications use all of the memory in Windows 8?

    - by Barleyman
    Windows 7 (and Windows Vista) have a built-in limit of not being able to use the last 25% of RAM. You will get a low memory warning when you get close to the limit. Even if you disable that warning, applications will run out of memory and crash since the OS will refuse to allocate memory from that last 25%. That was fine when Vista was designed, when machines had 1 GB of total memory, but is pretty daft for today's 8 GB machines. Yes, the system will run cache, etc. on that extra 2 GB, but running out of memory when you have "merely" 2 GB left.... NB: this has nothing to do with the page file. If you limit the page file to a sensible size like 2 GB, you will still see this behavior. The system will cram the page file to the last byte while refusing to touch that 1/4th of the RAM. Does Windows 8 change this behavior? Is there now some fixed minimum free RAM requirement, like 512 MB, or is it still 25%? Can you actually adjust the low memory limit?

    Read the article

  • Nginx Ubuntu Postfix Config - Can't connect to incoming IMAP server 'server not responding' but can send mail via outgoing using same details?

    - by daveaspinall
    I'm pretty new to server admin, and especially to nginx, but I seem to be getting on fine apart from accessing my mail on my iPhone. I've changed my domain to 'domain.com' here. The thing is, I can send mail via my outgoing server but can't connect to the incoming IMAP one - I just get the message "the mail server at mail.domain.com is not responding".
    /etc/postfix/main.cf:
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        home_mailbox = Maildir/
        inet_interfaces = all
        inet_protocols = all
        mailbox_command =
        mailbox_size_limit = 0
        mydestination = domain.com, mail.domain.com, localhost.com, , localhost, localhost.localdomain
        mydomain = domain.com
        myhostname = mail.domain.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        recipient_delimiter = +
        relayhost =
        smtp_tls_note_starttls_offer = yes
        smtp_tls_security_level = may
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_local_domain =
        smtpd_sasl_security_options = noanonymous
        smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem
        smtpd_tls_auth_only = no
        smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt
        smtpd_tls_key_file = /etc/ssl/private/smtpd.key
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_security_level = may
        smtpd_tls_session_cache_timeout = 3600s
        tls_random_source = dev:/dev/urandom
    telnet localhost 25 / ehlo locahost gives:
        250-mail.domain.com
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-AUTH LOGIN PLAIN
        250-AUTH=LOGIN PLAIN
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN
    I'm using the following details to connect: username, password, hostname: mail.domain.com, port: 25.
    iptables --list:
        Chain INPUT (policy ACCEPT)
        target prot opt source destination
        Chain FORWARD (policy ACCEPT)
        target prot opt source destination
        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination
    I also sent mail to the server as a test and got this message, if it helps:
        Technical details of temporary failure: [mail.domain.com. (10): Connection refused]
    I also looked in /var/log/mail.log and it has multiple entries of:
        postfix/smtpd[12239]: connect from 5acefc9a.bb.sky.com[90.206.252.xxx]
        Mar 23 06:47:09 new-domain postfix/smtpd[12239]: lost connection after CONNECT from 5acefc9a.bb.sky.com[90.206.252.154]
    Notice "new-domain", which is incorrect, although the server hostname and the hostname in the configs are correct. I recently moved servers and the host has set the primary domain on the service as new-domain.com, so this may be the issue? Like I said, sending via the outgoing server works, but the incoming one gets the "not responding" error. Any ideas would be much appreciated!
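
    One thing worth stating plainly: Postfix only speaks SMTP, so pointing the phone's incoming server at port 25 can never work for IMAP - a separate IMAP daemon (Dovecot, Courier, etc.) has to be listening on 143/993 and reading the Maildir that home_mailbox points at. A quick hedged check, assuming Dovecot is the intended IMAP server on this Ubuntu box:

        # is anything listening on the IMAP ports at all?
        sudo netstat -tlnp | grep -E ':(143|993) '

        # if not, install and start an IMAP server, then point the phone at port 993 (SSL) or 143
        sudo apt-get install dovecot-imapd
        sudo service dovecot restart

    The "new-domain" label in the log lines only reflects the syslog hostname; it wouldn't stop a daemon from answering.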

    Read the article

  • Creating a partitioned raid1 array for booting a debian squeeze system

    - by gucki
    I'd like to have the following raid1 (mirror) setup: /dev/md0 consists of /dev/sda and /dev/sdb. I created this raid1 device using
        mdadm --create --verbose /dev/md0 --auto=yes --level=1 --raid-devices=2 /dev/sda /dev/sdb
    This gave a warning about the metadata being 1.2 and my system possibly not booting. I cannot use 0.9 because it restricts the size of the raid to 2 TB, and I assume the grub shipped with the latest Debian (squeeze) should be able to handle metadata 1.2. So then I created the needed partitions like this:
        # creating new label (partition table)
        parted -s /dev/md0 mklabel 'msdos'
        # creating partitions
        sfdisk -uM /dev/md0 << EOF
        0,4096
        ,1024,S
        ;
        EOF
        # making root filesystem
        mkfs -t ext4 -L boot -m 0 /dev/md0p1
        # making swap filesystem
        mkswap /dev/md0p2
        # making data filesystem
        mkfs -t ext4 -L data /dev/md0p3
    Then I mounted the root partition, copied a minimal Debian install inside and temporarily mounted /dev, /proc and /sys. After this I chrooted into the new root folder and executed:
        grub-install --no-floppy --recheck /dev/md0
    However this fails badly with:
        /usr/sbin/grub-probe: error: unknown filesystem.
        Auto-detection of a filesystem of /dev/md0p1 failed.
        Please report this together with the output of "/usr/sbin/grub-probe --device-map=/boot/grub/device.map --target=fs -v /boot/grub" to
    I don't think it's a bug in grub (so I didn't report it yet) but a fault of mine. So I really wonder how to properly set up my raid1; everything I have tried so far has failed.
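
    For what it's worth, grub tends to cope much better when the disks are partitioned first and each partition gets its own mirror, instead of partitioning a whole-disk /dev/md0 - grub-probe failures like the one above are commonly reported against md0p1-style partitioned arrays. A hedged sketch of that layout (sizes copied from above; metadata 1.0 on the first array keeps the superblock at the end, where grub does not trip over it):

        # identical msdos labels on both disks, then one array per partition
        sfdisk -uM /dev/sda << EOF
        0,4096,L,*
        ,1024,S
        ,,L
        EOF
        sfdisk -d /dev/sda | sfdisk /dev/sdb

        mdadm --create /dev/md0 --metadata=1.0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # root (and /boot)
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2                  # swap
        mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3                  # data

        # after installing the system into /dev/md0, put grub on both disks' MBRs so either one can boot
        grub-install --no-floppy --recheck /dev/sda
        grub-install --no-floppy --recheck /dev/sdb

    The 2 TB restriction that ruled out 0.9 metadata applies per component device, so with per-partition arrays only the data array would ever come close to it.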

    Read the article

  • Network topology for both direct and routed traffic between two nodes

    - by IndigoFire
    Despite its small size, this is the most difficult network design problem I've faced. There are three nodes in this network:
        PC running Windows XP with an internal WiFi adapter
        Base station with both WiFi and a wireless modem (WiModem)
        Mobile device with both WiFi and WiModem
    The modem is a low-bandwidth but high-reliability connection. We'd like to use WiFi for high-bandwidth stuff like file transfers when the mobile is nearby, and the modem for control information. Here's the tricky part: we'd like the WiFi traffic to go directly from the mobile to the PC, as rebroadcasting packets on the same WiFi channel takes up double the bandwidth. We can do that with a manual configuration by giving both the PC and the base station two IP addresses for their WiFi interfaces: one on a subnet shared with the mobile, and one on their own subnet. The routes on the PC are set up so that any traffic going to the mobile via WiModem goes through the secondary IP address, so that return traffic from the mobile also goes through the WiModem. Here's what that looks like:
        PC
        WiFi 1: 192.168.2.10/24
        WiFi 2: 192.168.3.10/24
        Default route: 192.168.2.1
        Base Station
        WiFi 1: 192.168.2.1/24
        WiFi 2: 192.168.3.1/24
        WiModem: 192.168.4.1/24
        Mobile
        WiFi: 192.168.3.20/24
        WiModem: 192.168.4.20/24
    We'd like to move to having the base station automatically configure the mobile and the PC, as the manual setup is problematic once you start having multiple mobiles and PCs. This means that the PC can only have 1 IP address and needs to be treated as being pretty simple. Is it possible to have a setup driven by DHCP on the base station that is efficient with bandwidth?

    Read the article

  • Imagemagick convert with resample option

    - by coneybeare
    I am creating thumbnails from much larger images and have been using this command successfully for some time:
        convert FILE -resize "64x" -crop "64x64+0+16" +repage -strip OUTFILE
    I also do some other processing that is not relevant to the question. I realized that this does not adjust the resolution at all, so if I use a 300dpi image, it ends up displaying really small on some devices. I want to resample it to 72x72, so I have been trying this command:
        convert FILE -resize "64x" -crop "64x64+0+16" +repage -strip -resample 72x72 OUTFILE
    I expected the 64x64 image at 300dpi to be resampled to a 64x64 image at 72dpi, but instead I am getting a very funny size and density. Here is the "identify" output for the original and the post-processed file WITHOUT the resample:
        coneybeare $ convert "aa.jpg" -crop "64x64+0+16" +repage -strip "aa.png"
        coneybeare $ for image in `find . -type f`; do identify $image; identify -verbose $image | egrep "^ Resolution"; done
        ./aa.jpg JPEG 1130x1695 1130x1695+0+0 8-bit DirectClass 1.492MiB 0.000u 0:00.000
        Resolution: 300x300
        ./aa.png PNG 64x64 64x64+0+0 8-bit DirectClass 7.46KiB 0.000u 0:00.000
        Resolution: 118.11x118.11
    And here is the "identify" output for the command WITH the resample:
        coneybeare $ convert "aa.jpg" -crop "64x64+0+16" +repage -strip -resample 72x72 "aa.png"
        coneybeare $ for image in `find . -type f`; do identify $image; identify -verbose $image | egrep "^ Resolution"; done
        ./aa.jpg JPEG 1130x1695 1130x1695+0+0 8-bit DirectClass 1.492MiB 0.000u 0:00.000
        Resolution: 300x300
        ./aa.png PNG 15x15 15x15+0+0 8-bit DirectClass 901b 0.000u 0:00.000
        Resolution: 28.34x28.34
    So, the question is: what am I doing wrong, and how can I fix it so the end result is a 64x64 cropped thumbnail image at 72dpi?
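
    The numbers actually line up with what -resample is defined to do: it rescales the pixel dimensions by target density / current density, so a 64-px image tagged 300dpi becomes roughly 64 x 72/300 ≈ 15 px (and identify is reporting the PNG densities in pixels per centimetre, hence 118.11 ≈ 300/2.54). If the goal is the same 64x64 pixels merely tagged as 72dpi, a hedged alternative is to rewrite the density metadata instead of resampling:

        convert FILE -resize "64x" -crop "64x64+0+16" +repage -strip -units PixelsPerInch -density 72 OUTFILE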

    Read the article

  • Samba share will not connect (was working yesterday)

    - by David Gard
    I have a CentOS webserver with a Samba share set up (\\webserver\websites). I was connected to this share just yesterday without issue, but today my Windows 8 PC will not connect to it. I've also tried making a connection from Windows 7 and Windows XP, all without success. I initially tried restarting my computer, but that did not work. I then tried restarting the Samba service on the webserver (service smb restart), and when that failed I restarted the webserver. All of that was to no avail, and I still cannot connect to the share. The webserver is contactable from my PC (and the others I tried), as the websites it hosts work fine and I'm able to Putty to the server. When connected to the webserver, I can see that Samba is running by using service smb status:
        smbd (pid 4685) is running...
        nmbd (pid 4688) is running...
    Can anyone please help me to get this share working? Here is my full Samba config (/etc/samba/smb.conf):
        [global]
        workgroup = MYGROUP
        server string = Samba Server %v
        log file = /var/log/samba/log.%m
        max log size = 50
        security = user
        encrypt passwords = yes
        socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
        local master = no
        [websites]
        comment = Websites
        browseable = yes
        writable = yes
        path = /var/www/html/
        valid users = dgard
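
    A few hedged checks that usually narrow a sudden failure like this down (share and user names are taken from the config above; the log path follows the log.%m pattern, so substitute the connecting machine's name):

        testparm -s                             # re-parse smb.conf and report any syntax problems
        smbclient -L localhost -U dgard         # list shares locally, taking the network out of the picture
        iptables -L -n                          # confirm TCP 139/445 aren't being dropped by a firewall change
        tail -f /var/log/samba/log.CLIENTNAME   # watch the per-client log while Windows retries the connection

    If smbclient works locally but Windows still fails, the problem is more likely between the hosts (firewall, SELinux booleans on the /var/www path, or cached Windows credentials) than in Samba itself.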

    Read the article

  • Looking for an application that scrolls or pans netbook screens running Windows

    - by therobyouknow
    I'm looking for a Windows 7 and XP compatible desktop panning/scrolling tool. This is to solve a problem where some applications, for example MSN, have settings/preferences windows that are not resizeable. I have a netbook with a small maximum screen resolution, e.g. 1024x600. The fixed, non-resizeable windows are too large for this screen size, so I cannot see all of the items on them, particularly the OK button to save settings. What I would like is a desktop scrolling/panning tool where, if I move my mouse pointer to any edge of the display, it pans to show the region of the too-large fixed window that I could not see. I use Samsung N110 and Toshiba NB100 netbooks.
    I'm looking for:
        A general program that provides desktop panning/scrolling/expanded resolution to allow all regions of a non-resizeable fixed window to be reached
        Preferably a non-graphics-hardware-specific program, but I will accept a solution that works with both the above machines
    I'm NOT looking for (i.e. unsatisfactory answers others have asked about that I've already searched and found):
        Advice on what programs to use that DON'T have the problem of fixed windows
        Alternative operating system solutions
        Plugging in an external monitor with a larger resolution - I use this option but I need a solution for when one is not available, e.g. while travelling
        Advice about not using small-screen netbooks - I enjoy the compact convenience of them
        Advice about changing the dpi settings in the Control Panel display settings
        Advice about guesswork with the tab key to move the focus to the off-screen item I cannot see
    Thank you in advance.

    Read the article

  • Mysqld shutting down by itself

    - by AJ Naidas
    I'm running a WordPress blog that gets medium-high traffic. It is hosted on an Ubuntu server (2 GB memory, 2-core processor, 40 GB SSD disk, 3 TB transfer). The problem is that MySQL shuts down by itself after an hour or two, and I have to restart it each and every time this happens. I checked the logs and this is what I found:
        140612 6:48:14 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
        140612 6:48:14 [Note] Plugin 'FEDERATED' is disabled.
        140612 6:48:14 InnoDB: The InnoDB memory heap is disabled
        140612 6:48:14 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        140612 6:48:14 InnoDB: Compressed tables use zlib 1.2.3.4
        140612 6:48:14 InnoDB: Initializing buffer pool, size = 1.4G
        InnoDB: mmap(1502412800 bytes) failed; errno 12
        140612 6:48:14 InnoDB: Completed initialization of buffer pool
        140612 6:48:14 InnoDB: Fatal error: cannot allocate memory for the buffer pool
        140612 6:48:14 [ERROR] Plugin 'InnoDB' init function returned error.
        140612 6:48:14 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        140612 6:48:14 [ERROR] Unknown/unsupported storage engine: InnoDB
        140612 6:48:14 [ERROR] Aborting
        140612 6:48:14 [Note] /usr/sbin/mysqld: Shutdown complete
    Judging by this line:
        140612 6:48:14 InnoDB: Fatal error: cannot allocate memory for the buffer pool
    I suspect that this is a memory problem, but I would like to hear from the experts here before I conclude. Is this a lack-of-memory problem? Do you think the value of max_connections in my.cnf (currently 100) is a potential cause and needs increasing? TIA.
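
    errno 12 on that mmap call is ENOMEM, so yes: by the time mysqld tries to come back up there isn't enough free memory left to re-create the 1.4 GB buffer pool alongside the web stack, and it aborts. A hedged starting point for a 2 GB server running the web server and database together (the path and figures are assumptions to tune, not prescriptions - and max_connections usually wants lowering rather than raising, since every connection also costs memory):

        # /etc/mysql/my.cnf
        [mysqld]
        innodb_buffer_pool_size = 512M   # was 1.4G; leave room for the web server, PHP and the OS
        max_connections         = 50     # fewer, cheaper connections on a small VPS

    Adding a modest swap file as a safety net is the other common half of the fix on small cloud instances.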

    Read the article

  • Exceptional slowdown of robocopy copying from VM to DFS array

    - by user1588867
    I've got an old Windows 2003 VM (VMware) on a blade cluster of VMs, from which I'm moving a considerable number of files to our new DFS array. There are two main folders with about 1.7 million and half a million smaller files (letters, memos, and other smaller files) respectively. Total size is ~420 GB and ~100 GB. We're using the GUI version of robocopy on the server to copy the files. We initiated a file copy about a month ago to test the process and found that it took around 4 hours for the larger folder. Now that I'm in the process of actually switching the files over, it has been taking 18-20 hours. Nothing has changed on the server side and nothing has changed in the settings of the copy (no logs, 1 retry with a wait of 1 second). Our intent is to shut off the share and force the copy over again to get all the files that have been left out of the copy due to being locked by users, but I can't take a 20-hour outage to do that. Does anyone have any theories about what could be causing such a delay for robocopy compared to the previously shorter runs?

    Read the article

  • Unable to set Xmx beyond 4 GB on a system with 8 GB RAM

    - by Arun
    I need to set:
        ANT_OPTS=-Xms1024m -Xmx6144m -XX:PermSize=1024m -XX:MaxPermSize=1024m
        JAVA_OPTS=-Xms1024m -Xmx6144m -XX:PermSize=1024m -XX:MaxPermSize=1024m
    I have a system with 8 GB of RAM (recently upgraded from 4 GB), but once I set ANT_OPTS to the above value I am not able to run any of my ant targets and I get the following error:
        [ERROR] Argument error: -Xmx6144m
        [ERROR] Specified maximum heap size (6144 MB) is larger than the address space on this platform (4 GB).
        [WARN ] -XX:PermSize=1024m is not a valid VM option. Ignoring
        [WARN ] -XX:MaxPermSize=1024m is not a valid VM option. Ignoring
        Could not create the Java virtual machine.
    This is the Java that I have on my system:
        java version "1.6.0_20"
        Java(TM) SE Runtime Environment (build 1.6.0_20-b02)
        Oracle JRockit(R) (build R28.1.0-123-138454-1.6.0_20-20101014-1351-windows-x86_64, compiled mode)
    and I am running Windows 7 on an Intel Core 2 Duo 3 GHz processor with 8 GB of RAM. Any pointers to solving this issue would be of great help. PS: I did google for the error, and this was one of the first such occurrences where I did not get any links pointing to a specific solution. Maybe no one has encountered such a scenario.
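
    The java -version output above is a 64-bit JRockit, but "larger than the address space on this platform (4 GB)" is what a 32-bit VM reports, so the build is most likely launching a different, 32-bit java.exe than the one your PATH prints (the PermSize warnings are expected either way: JRockit has no permanent generation, so it ignores those HotSpot-only flags). A hedged way to confirm and fix from a command prompt - the JDK path is a placeholder:

        REM list every java.exe on the PATH; a 32-bit one earlier in the list wins
        where java
        java -version

        REM point ant explicitly at the 64-bit JDK before running the build
        set JAVA_HOME=C:\Program Files\Java\jrockit-R28.1.0-x64
        set PATH=%JAVA_HOME%\bin;%PATH%
        ant -diagnostics

    ant -diagnostics prints the system properties (including java.vm.*) of the VM ant actually uses, which settles the question either way.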

    Read the article

  • Issue with the InnoDB engine when re-enabling it after [ skip-innodb ]

    - by Ahn
    How do I enable InnoDB when it was previously disabled with the skip-innodb option?
    Case 1: InnoDB disabled with the skip-innodb option; SHOW ENGINES gives:
        Engine | Support
        ...
        InnoDB | NO
        ...
    Case 2: As I want to enable InnoDB, I commented out the skip-innodb option and restarted. But now SHOW ENGINES does not even show the InnoDB engine in the list. Why?
    MySQL version: 5.1.57-community-log
    OS: CentOS release 5.7 (Final)
    Log:
        120622 13:06:36 InnoDB: Initializing buffer pool, size = 8.0M
        120622 13:06:36 InnoDB: Completed initialization of buffer pool
        InnoDB: No valid checkpoint found.
        InnoDB: If this error appears when you are creating an InnoDB database,
        InnoDB: the problem may be that during an earlier attempt you managed
        InnoDB: to create the InnoDB data files, but log file creation failed.
        InnoDB: If that is the case, please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.1/en/error-creating-innodb.html
        120622 13:06:36 [ERROR] Plugin 'InnoDB' init function returned error.
        120622 13:06:36 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        120622 13:06:36 [Note] Event Scheduler: Loaded 0 events
        120622 13:06:36 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.1.57-community-log' socket: '/data/mysqlsnd/mysql.sock1' port: 3307 MySQL Community Server (GPL)
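
    The "No valid checkpoint found" lines are the important part: a leftover ibdata/ib_logfile set from an earlier, failed initialisation is present, and the plugin will keep refusing to register until those files are moved aside so InnoDB can recreate them. A hedged sequence, assuming the datadir implied by the socket path above and that no real InnoDB data has ever lived on this instance (move the files rather than deleting them, so the step is reversible):

        service mysqld stop
        mv /data/mysqlsnd/ib_logfile0 /data/mysqlsnd/ib_logfile0.bak 2>/dev/null
        mv /data/mysqlsnd/ib_logfile1 /data/mysqlsnd/ib_logfile1.bak 2>/dev/null
        mv /data/mysqlsnd/ibdata1     /data/mysqlsnd/ibdata1.bak     2>/dev/null
        service mysqld start
        mysql --socket=/data/mysqlsnd/mysql.sock1 -e "SHOW ENGINES" | grep -i innodb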

    Read the article

  • Extract sender activity from postfix logs for auditing user

    - by Aseques
    We have a mail user on our Postfix server who was using the company mail to send compromising information to the competition. I've been asked to produce a report of that user's activity over the recent period. There are tools like pflogsumm and others that can extract statistical data, but so far I haven't found anything useful for getting all the info for one user, because the data is spread over multiple lines. I'd like to get something like this:
    For the sent mail:
        11/11/11 00:00:00 [email protected] -> [email protected]
        11/11/11 00:00:01 [email protected] -> [email protected]
    For the received mail:
        10/10/11 00:00:00 [email protected] -> [email protected]
        10/10/11 00:00:01 [email protected] -> [email protected]
    I know I could write a script myself, but matching the Postfix queue ID for every mail is not something that can be done with a simple grep, and I have a lot of mail history to go back through, distributed among different files and so on. The source log is the standard Postfix format, for example:
        Sep 13 16:15:57 server postfix/qmgr[18142]: B35CB5ED3D: from=<[email protected], size=10755, nrcpt=1 (queue active)
        Sep 13 16:15:57 server postfix/smtpd[32099]: disconnect from localhost[127.0.0.1]
        Sep 13 16:15:57 server postfix/smtp[32420]: 58C3E5EC9C: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10024, delay=1.4, delays=0.01/0/0/1.4, dsn=2.0.0, status=sent (250 2.0.0 Ok, id=32697-04, from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as B35CB5ED3D)
        Sep 13 16:15:57 server postfix/qmgr[18142]: 58C3E5EC9C: removed
        Sep 13 16:15:57 server postfix/smtp[32379]: B35CB5ED3D: to=<[email protected]>, relay=mail.anothercompany.com[123.123.123.163]:25, delay=0.06, delays=0.03/0/0.01/0.02, dsn=2.0.0, status=sent (250 2.0.0 Ok: queued as 77D0EB6C025)
        Sep 13 16:15:57 server postfix/qmgr[18142]: B35CB5ED3D: removed
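
    Correlating on the queue ID is exactly what awk is good at: one rule remembers the sender per queue ID from the from= lines, another prints one line per recipient from the to= lines. A hedged sketch, assuming the standard syslog field positions shown in the sample above, logs fed in chronological order, and a placeholder address for the audited user:

        awk -v user="user@company.example" '
          /: from=</ { qid=$6; sub(/:$/, "", qid)
                       match($0, /from=<[^>]*>/)
                       sender[qid] = substr($0, RSTART+6, RLENGTH-7) }
          /: to=</   { qid=$6; sub(/:$/, "", qid)
                       if (sender[qid] != user) next
                       match($0, /to=<[^>]*>/)
                       print $1, $2, $3, sender[qid], "->", substr($0, RSTART+4, RLENGTH-5) }
        ' /var/log/mail.log.1 /var/log/mail.log > sent-by-user.txt

    The received-mail report is the mirror image: filter on the to= address instead and print the remembered sender for the matching queue IDs.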

    Read the article

  • MAMP Pro mysqld won't start on OS X Lion

    - by Mike
    I'm getting a "Start MySQL Failed" error in the GUI. When I attempt to start mysqld from the CLI I get the following:
        /Applications/MAMP/Library/bin/mysqld
        120623 23:12:47 [Warning] Setting lower_case_table_names=2 because file system for /Applications/MAMP/db/mysql/ is case insensitive
        120623 23:12:47 [Note] Plugin 'FEDERATED' is disabled.
        120623 23:12:47 InnoDB: The InnoDB memory heap is disabled
        120623 23:12:47 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        120623 23:12:47 InnoDB: Compressed tables use zlib 1.2.3
        120623 23:12:47 InnoDB: Initializing buffer pool, size = 128.0M
        120623 23:12:47 InnoDB: Completed initialization of buffer pool
        120623 23:12:47 InnoDB: highest supported file format is Barracuda.
        120623 23:12:47 InnoDB: Waiting for the background threads to start
        120623 23:12:48 InnoDB: 1.1.5 started; log sequence number 1595675
        120623 23:12:48 [ERROR] /Applications/MAMP/Library/bin/mysqld: unknown option '--skip-locking'
        120623 23:12:48 [ERROR] Aborting
        120623 23:12:48 InnoDB: Starting shutdown...
        120623 23:12:49 InnoDB: Shutdown completed; log sequence number 1595675
        120623 23:12:49 [Note] /Applications/MAMP/Library/bin/mysqld: Shutdown complete
    I have deleted the mysql.pid file located at /Applications/MAMP/tmp/mysql/mysql.pid and I still get the error above. I can't find where MAMP has --skip-locking set; my.cnf doesn't have it anywhere. Activity Monitor shows a mysqld process running as me, and every time I kill the process, both via Activity Monitor and via kill -9 pid, it starts right back up. Sampling the process points back to the MAMP mysqld. wtf?! About to throw MAMP out the window and boot up a VM of CentOS =)
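
    skip-locking was removed in MySQL 5.5 (the InnoDB 1.1.5 line above shows this is a 5.5 build), so some config file MAMP is still reading - or a stray per-user one - is passing the old option; the modern spelling is skip-external-locking. A hedged way to hunt it down, with the paths being the usual suspects rather than guaranteed locations:

        # find every config file that still mentions the removed option
        grep -R "skip-locking" /Applications/MAMP ~/.my.cnf /etc/my.cnf 2>/dev/null

        # then edit the matching my.cnf (or MAMP's my.cnf template) and rename the option:
        #   skip-locking  ->  skip-external-locking

    The mysqld that keeps reappearing after kill -9 is presumably being relaunched by whatever is supervising it (MAMP's own start script), so stop the servers from the MAMP application first so the edit sticks.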

    Read the article
