Search Results


  • EMC VNX iSCSI setup - unsure about SP/port assignment

    - by pauska
    We have a new VNX5300 waiting to be configured, and I need to plan out the network infrastructure before the EMC tech arrives. It has 4x 1Gbit iSCSI ports per SP (8 ports in total), and I'd like to get the most performance out of them until we jump over to 10Gbit iSCSI.

    From what I can read in the docs, the recommendation is to use only two ports per SP, with 1 active and 1 passive. Why is this? It seems kind of pointless to have quad-port I/O modules and then recommend using no more than two of them.

    Also, I'm a bit unsure about the zoning. The best practices guide states that you should separate each port on each SP from the others on different logical networks. Does this mean that I have to create 4 logical networks to be able to use all 8 ports? It also gives the following example: does this mean that A0 and B0 should sit on the same physical switch as well? Won't this make all traffic go through one switch (if both A1 and B1 are passive)?

    Edit: Another brain puzzle I don't get: each host (as in server) should not have more iSCSI bandwidth available than the storage processor. Why on earth does this matter? If server A has 1Gbit and server B has 100Mbit, then the resulting bandwidth between them is 100Mbit. How can this result in some kind of oversubscription?

    Edit4: Wait, what - active and passive ports? The VNX runs in an ALUA configuration with asymmetric active/active; there shouldn't be any passive ports, only preferred ones.
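    (On the zoning question, a sketch of one common reading of the layout; the addresses and the four-subnet split are assumptions for illustration, not EMC's wording:)

        # One logical (layer-3) network per port pair, alternating switches:
        #   Subnet 1 (switch 1): SP A0 10.10.1.10, SP B0 10.10.1.11
        #   Subnet 2 (switch 2): SP A1 10.10.2.10, SP B1 10.10.2.11
        #   Subnet 3 (switch 1): SP A2 10.10.3.10, SP B2 10.10.3.11
        #   Subnet 4 (switch 2): SP A3 10.10.4.10, SP B3 10.10.4.11
        # Each host carries one initiator per subnet it joins, and MPIO/PowerPath
        # balances paths across the SP A and SP B ports, so no single switch
        # carries all the traffic.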

    Read the article

  • Excel workbook intermittently takes 30 seconds to load

    - by Julio Nobre
    I am trying to figure out why a simple .XLS Excel workbook randomly takes 30 seconds to open. Before answering, please bear in mind the following.

    Problem symptoms:
    - The hang is intermittent and takes exactly 30 seconds.
    - During the hang there is no CPU or disk activity.
    - It only happens during workbook load; everything runs smoothly after that.
    - Explorer.exe hangs on the workbook's folder, but all other folders, the system, and applications stay responsive.
    - There are no consecutive hangs; I have to wait a while to reproduce the behaviour.
    - All workbooks are located on a local drive (C:\BPI).
    - The workbook has no macros and no add-ins.
    - Office 2003 has been in use for several years; the computer runs Windows XP.
    - The computer has several network-mapped drives, all pointing to the main file server.
    - Recently, the main file server was replaced by a Windows SBS 2011 Standard server.

    What I have done so far:
    - Traced the machine's Explorer.exe using Process Monitor, added the Duration column, and filtered by Duration > 1; that is how I found that the hang takes exactly 30 seconds. For further information, please refer to Oliver Salzburg's tutorial.
    - Using Process Monitor, I also found that five operations accounted for most of the sampled duration; one single operation was taking 29 seconds.
    - Tried different workbooks (all of them smaller than 30 KB).
    - Temporarily removed all shortcuts in the user's Documents folder that pointed to network drives or shares.
    - Ran CCleaner to fix registry issues.
    - Made sure there were no external links in the tested workbooks.
    - Reproduced this behaviour for hours and researched extensively on the web.

    Read the article

  • Auto-rotate rotated images with mogrify

    - by Frank Presencia Fandos
    Some of my images were taken rotated, but the camera kept the rotation in the file's metadata. The problem is that, when using mogrify to convert them from JPG to png, that data seems to disappear. To show the problem, I think the best is to show the script and a screenshot.

    Script with the code. Put it in a text file, give it execute permission, double-click it, run it (from a terminal if you wish) and wait a while. All the JPGs in that folder will be converted to png:

        #! /bin/bash
        echo "Converting JPG to png. Please don't close this window."
        mogrify -alpha on -format png *.JPG
        mogrify -alpha on -format png *.jpg

    It works great and adds an alpha channel. This is personally useful so that I don't have to add the channel individually when I edit the images later. Now the screenshot that illustrates the problem: as you can see, the original JPGs' preview is right, the converted preview is wrong, the Shotwell rendering is right, and the GIMP edit is wrong and didn't even say the image was rotated, as it usually does with other images. How can I edit my script to preserve the orientation?
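    (A minimal sketch of a fix, assuming the installed ImageMagick build supports the -auto-orient option: it physically rotates the pixels to match the EXIF Orientation tag before the conversion to png discards that metadata.)

        #! /bin/bash
        echo "Converting JPG to png. Please don't close this window."
        # -auto-orient bakes the EXIF rotation into the pixel data, so the
        # resulting png no longer needs the orientation tag at all.
        mogrify -auto-orient -alpha on -format png *.JPG
        mogrify -auto-orient -alpha on -format png *.jpg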

    Read the article

  • Linux - preventing an application from failing due to lack of disk space [migrated]

    - by Jernej
    Due to an unpredicted scenario I currently need to find a solution to the fact that an application (which I do not wish to kill) is slowly hogging the entire disk space. To give more context: I have a Python application that uses multiprocessing.Pool to start 5 workers. Each worker writes some data to its own file. The program is running on Linux and I do not have root access to the machine. The program is CPU intensive and has been running for months; it still has a few days to go to write all the data. 40% of the data in the files is redundant and can be removed after a quick test. The system on which the program is running only has 30GB of remaining disk space, and at the current rate of work it will surely be exhausted before the program finishes.

    Given the above points I see the following solutions, each with its own problems:

    1. Given that worker number i is writing to file_i, is it safe to move file_i to an external location? Will the OS simply create a new instance of file_i and write to it? I assume moving the file would remove it and the process would end up writing to a "dead" file.
    2. Is there a "command line" way to stop 4 of the 5 spawned workers, wait until one of them finishes, and then resume their work? (I am sure one single worker would avoid hogging the disk.) See the sketch after this list.
    3. Suppose I use CTRL+Z to freeze the main process. Will this stop all the other processes spawned by multiprocessing.Pool? If yes, can I then safely edit the files to remove the redundant lines?

    Given the three options that I see, would any of them work in this context? If not, is there a better way to handle this problem? I would really like to avoid the scenario in which the program crashes just a few days before it finishes.
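    (For option 2, a plain-shell sketch using job-control signals; the PIDs are hypothetical and would come from pgrep against the main process. SIGSTOP freezes a process without killing it and SIGCONT resumes it; no root access is needed for your own processes.)

        # List the worker PIDs spawned by the main Python process (PID 12345 here).
        pgrep -P 12345

        # Pause four of the five workers:
        kill -STOP 12346 12347 12348 12349

        # ...wait for the remaining worker to finish its file, then resume the rest:
        kill -CONT 12346 12347 12348 12349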

    Read the article

  • How to organize deployment process in Chef-controlled environment?

    - by Alex
    I have a Linux-based web infrastructure which consists of 15 virtual machines and over 50 various services. It is fully controlled by Chef, and most of the services are developed internally.

    Basically, the current deployment process is triggered by a shell script. A build system (a mix of Python and shell scripts) packages the services as .deb files and puts these packages into a repo. It then runs apt-get update on all 15 nodes, because the standard Chef apt cookbook only runs apt-get once per day and we definitely do not want to run apt-get update unconditionally on every chef-client wake. Finally, the build system restarts the chef-client daemons on all 15 nodes (we need this step because of Chef's pull nature).

    The current process has a number of drawbacks we want to address. First off, it is asynchronous: the deployment script does not check the chef-client logs after the restart, so we don't even know if the deployment was successful, and it does not even wait for the Chef clients to complete their run. Second, we definitely do not want to force chef-client restarts on all nodes, because we usually deploy only a small number of packages. And third, I am not quite sure that using chef-client for deployment is legitimate at all; perhaps we are just doing it wrong from the start. Please share your thoughts/experience.
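    (On the restart point, a common alternative is to trigger an on-demand run instead of bouncing the daemons; a sketch, assuming knife is configured against the Chef server and 'myservice' stands in for the affected recipe:)

        # Run chef-client only on the nodes that actually carry the updated
        # service, and keep the output so success/failure is visible at once.
        knife ssh 'recipes:myservice' 'sudo chef-client' | tee deploy.log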

    Read the article

  • Computer sending data while turned off

    - by Nicklas Ansman
    I have a somewhat strange problem (which could have an easy and obvious solution for all I know). My problem is that when I've booted Ubuntu (now 10.04, but the same problem occurred with 9.10) and then turn the machine off, it starts sending a HUGE amount of data via the ethernet cable - so much, in fact, that my router can't handle it and stops responding. As far as I can tell the computer is completely turned off, with no fans spinning. I can add that if I boot Windows I do not have this problem; it happens just when exiting Ubuntu.

    There are two "fixes" for my problem:

    1. Pull the ethernet cable until the next boot.
    2. Turn off power to the PSU and wait for the capacitors to discharge.

    Is there anyone who knows what could be going on? I'd be happy to post some logs or conf files. Currently I'm using the ethernet port on my motherboard, which is an Asus P6T Deluxe V2 with an updated version of the BIOS (maybe not the latest, but since this only happens when I've been in Ubuntu I don't want to mess with the BIOS too much). Regards, Nicklas

    ---------Update 1----------
    The router is a D-Link DIR 655 with the latest firmware.

    ---------Update 2----------
    I've now reinstalled Ubuntu (with 10.04) and I still experience the same problem.
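    (Traffic from a machine that is "off" is often tied to Wake-on-LAN or the NIC's power-down state; a quick check from within Ubuntu, assuming the interface is eth0, would be:)

        # Show the NIC's Wake-on-LAN setting (run as root).
        ethtool eth0 | grep -i wake-on

        # Turn Wake-on-LAN off for this session; 'd' means disabled.
        ethtool -s eth0 wol d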

    Read the article

  • Suddenly getting lock timeouts with MySQL

    - by Marc Hughes
    We've got a web app hosted on Amazon Web Services. Our database is a Multi-AZ RDS MySQL server running 5.1.57, and 3-4 app servers talk to it. Today we started seeing a lot of errors along the lines of "Lock wait timeout exceeded; try restarting transaction" - almost 1% of POST requests are seeing this.

    There have been no modifications to the code running on the site, no schema changes, and no big spike in traffic. I've been looking at the processes running, and none seem out of control. I tried scaling our RDS instance from a small to a large, with no effect.

    Two days ago, Amazon had some outages. As part of the recovery, our RDS server and our app servers ended up in different availability zones, but all within the same region. But yesterday everything was fine, so I'm not convinced that's related. The lock timeouts occur in different types of requests and in different InnoDB tables. I have noticed that the number of open connections jumped when we started seeing problems, but that may be a symptom and not a cause. What are my next steps in debugging this?
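    (A first diagnostic step would be to capture InnoDB's own view of the locks while the timeouts are happening; a sketch, with hostname and credentials as placeholders:)

        # Dump current transactions, the locks they hold, and the statements
        # blocked behind them (available on MySQL 5.1's built-in InnoDB).
        mysql -h mydb.rds.amazonaws.com -u admin -p \
              -e 'SHOW ENGINE INNODB STATUS\G' > innodb-status.txt

        # List every live connection and what it is executing right now.
        mysql -h mydb.rds.amazonaws.com -u admin -p -e 'SHOW FULL PROCESSLIST'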

    Read the article

  • How to manage multiple email addresses on multiple domains in Exchange

    - by CAD bloke
    I'm using Hosted Exchange Server, mostly because I use an iPhone, webmail, and Outlook on 2 laptops. I want to keep everything consistent and unfragmented, and I want push notifications.

    I have 2 domains, a professional one and a personal one. Each domain has about 5 (give or take) email addresses I use for various purposes. Each domain also has a few parked domains (.net, .org, .info) aliased to the .com domain. I would like to keep emails from the 2 domains separated. Do I need an extra mailbox, meaning extra expense, or can I create another Exchange user on the same mailbox and create an extra account in Outlook? In either case I will have to wait for iOS 4 on the iPhone to manage 2 Exchange accounts. Or am I better off just using a set of rules and folders?

    The aliased domains are another joy to behold entirely. It looks like I will have to add each email address variant individually. Alternatively, I reckon I may just leave the aliased domains at the POP3 host and let Outlook gather those as edge cases. Surely I can't be the only one making my life this difficult. Anyone out there done this? And from left field: is this (much) easier in Gmail? I'm not committed to Exchange (yet). Previously I used Outlook as a POP3 client with a set of filters to direct incoming traffic to folders; this worked with the aliased domains because my host directed all the aliased TLDs to the same mailbox.

    Read the article

  • Production deployment to EC2 with minimal downtime

    - by jensendarren
    I have a simple web application deployed on a large EC2 instance. I now want to deploy the latest code to this server in a way that minimizes downtime and is as smooth as possible for the end user. Here is my plan:

    1. Fire up another large instance.
    2. Install all the software layers on that instance.
    3. Restore and attach an EBS volume to the instance.
    4. Deploy our latest production-ready code on the new instance.
    5. Run all tests (including manual testing of the application).
    6. (If tests pass) Put a "Site Under Maintenance" notice on the live site.
    7. Back up the EBS volume on the live site.
    8. Detach the EBS volume from the new server and replace it with the latest backup.
    9. Use ec2-associate-address to move the IP address to the new instance (see the sketch below).
    10. Sit back and wait for traffic to start flowing through the new instance.
    11. Terminate the old instance.

    Does this seem like a good strategy? Are there any tutorials or books that might cover this topic? I have already read Cloud Application Architectures by George Reese, which is an excellent book, but it does not cover deployment. Additionally, I know there are tools that can help with this, like RightScale or enStratus, which I will use when I start using more than one instance.
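    (Step 9 is a single call with the classic EC2 API tools; a sketch, where the Elastic IP and instance ID are placeholders:)

        # Point the Elastic IP at the new instance; it detaches from the old
        # instance automatically, so cutover is one DNS-free switch.
        ec2-associate-address 203.0.113.10 -i i-0fedcba9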

    Read the article

  • How to increase the speed between two external hard drives on my laptop?

    - by Roman
    I own a Sony Vaio Z laptop with two external USB ports. It's quite new and has USB 2.0 support; I'm running Vista x64 on it. I also have two external USB hard drives, an Iomega 500GB and a WD 1TB, each with USB 2.0 support.

    I connect both devices to my laptop and try to copy data from one hard drive to the other, but it takes a lot of time! The speed is about 15 megabytes per second, so I have to wait far too long to copy everything across. When I copy from my internal (SSD) drive instead, it works fine for both external drives: the speed is very high, around 100 megabytes per second, which makes me think USB 2.0 is OK on both drives. But when I copy from one external drive to the other, I still get a very low speed. I checked Device Manager and here are the settings I have (sorry, I can't upload an image because of my rating; see this URL: http://picbite.com/image/122073daljo/ ).

    I think it's because my two external drives use the same USB 2.0 controller. Is there any way to make it work faster? Is it possible to move one of my USB ports to the other USB 2.0 controller? Or is there any software that can help me automate copying all the files through my internal drive? I only have about 3 gigabytes of free space on the internal drive, so it's quite difficult to manually move every file from one external drive to the internal one and then on to the other external drive.

    Read the article

  • Under what conditions will sendmail try to immediately resend a message instead of waiting for the standard requeue interval?

    - by Mike B
    CentOS 5.8 | Sendmail 8.14.4

    I used to think that if Sendmail experienced a temporary (400-class) error during delivery, it would place the message in the deferred queue (e.g. /var/spool/mqueue) and retry an hour later. For the most part, that appears to be the case. But every now and then I'll notice log entries like this (emails/domains renamed to protect the innocent :-) ):

        Dec 5 01:43:03 foobox-out sendmail[11078]: qBE3l7js123022: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=124588, relay=exbox.foo.com. [10.10.10.10], dsn=4.0.0, stat=Deferred: 421 4.3.2 The maximum number of concurrent connections has exceeded a limit, closing transmission channel
        Dec 5 01:53:34 foobox-out sendmail[12763]: qBE3l7js123022: to=<[email protected]>, delay=00:10:31, xdelay=00:00:00, mailer=relay, pri=214588, relay=exbox.foo.com., dsn=4.0.0, stat=Deferred: 452 4.3.1 Insufficient system resources
        Dec 5 02:53:35 foobox-out sendmail[23255]: qBE3l7js123022: to=<[email protected]>, delay=01:10:32, xdelay=00:00:01, mailer=relay, pri=304588, relay=exbox.foo.com. [10.10.10.10], dsn=2.0.0, stat=Sent (<[email protected]> Queued mail for delivery)

    Why did Sendmail try again just 10 minutes after the first attempt and then wait another hour before trying again? If this is expected behavior, what scenarios will cause this faster requeue interval to occur?
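    (The hourly cadence normally comes from the queue-runner interval the daemon was started with; on CentOS that is worth confirming directly. A sketch:)

        # The stock init script reads the interval from here; QUEUE=1h means a
        # queue runner sweeps the deferred queue every hour.
        grep -i queue /etc/sysconfig/sendmail

        # Show what is currently sitting in the deferred queue and why.
        mailq

    An intermediate attempt between sweeps could also be triggered by some other event touching the same relay host; that is an assumption worth checking against the surrounding log lines rather than a confirmed mechanism.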

    Read the article

  • Mailgun Is Not Detecting My New MX Records

    - by Tyler Crompton
    When I issue a dig command to verify my MX records, I get the following output:

        $ dig example.com MX

        ; <<>> DiG 9.9.5-3-Ubuntu <<>> example.com MX
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47700
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 5, ADDITIONAL: 5

        ;; OPT PSEUDOSECTION:
        ; EDNS: version: 0, flags:; udp: 4096

        ;; QUESTION SECTION:
        ;example.com.                   IN      MX

        ;; ANSWER SECTION:
        example.com.            85468   IN      MX      10 mxa.mailgun.org.
        example.com.            85468   IN      MX      10 mxb.mailgun.org.

        ;; REMAINDER OF OUTPUT REMOVED FOR BREVITY

    However, when I click "Check DNS Records Now" on Mailgun, it verifies the changes to the TXT and CNAME records but says that my MX records have not been changed:

        Type | Priority | Enter This Value | Current Value
        -----+----------+------------------+--------------------
        MX   | 10       | mxa.mailgun.org  | 10 mail.example.com
        MX   | 10       | mxb.mailgun.org  | 10 mail.example.com

    I updated these records three to four hours ago. I know it said to wait twenty-four to forty-eight hours, but I feel that if it detected the other DNS changes, it should detect the MX record changes too. Am I being impatient, or is this a legitimate concern? What do you suggest I do?

    Note: I'd create a Mailgun tag for this - I feel it'd be appropriate - but I don't have enough reputation to do so.
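    (One way to separate propagation from caching is to compare what the zone's authoritative nameserver returns against a public resolver; a sketch, with ns1.example.com standing in for the real NS host:)

        # Ask the authoritative nameserver directly - no caches involved.
        dig @ns1.example.com example.com MX +short

        # Ask a public resolver to see what third parties (like Mailgun) see.
        dig @8.8.8.8 example.com MX +short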

    Read the article

  • How to send from my Z88 to my PC

    - by Bevan
    I've got a Cambridge Z88 that I want to get working with my PC. Around 6 years ago - in 2004 - I made heavy use of my Z88 to do a whole bunch of writing on the train while commuting to and from work. The Z88 is solid state, lightweight, and has a full-size silent keyboard, so it works very well as a writing instrument. I still have the serial cable I soldered up back then and used successfully in 2004. It has these connections:

        Z88                9 pin
        -----              -------
        2 TxD ------>      RxD 2
        3 RxD <------      TxD 3
        7 GND <----->      GND 5
        4 RTS ------>      CTS 8
        5 CTS <-+          RTS 7
        8 DCD <-+----      DTR 4
        9 DTR ----+->      DCD 1
                  +->      DSR 6

    Unfortunately, I haven't been able to find my notes from 2004 that describe how I got it to work back then. I've spent several hours trying to Google a result, but to no avail. I'm pretty sure the cable is fine - after all, it's what I used successfully six years ago, and I've checked it out with a multimeter - so I'm focusing on the PC end of things, which is where I'd like some assistance.

    Q1: In my recent attempts I've been using both HyperTerminal (as built into Windows XP) and the command line (copy com2: con:), but with no success. What's a good (better!) serial communications application to use? Is there one that lets me see as deep as the signalling that's occurring on the wire?

    Q2: If you have a Z88 that works correctly with your PC, what software do you use on the PC end, and what's the pinout of your cable?

    I'm pretty sure that the Z88 itself is working properly: when using the built-in Import/Export tool to send a file, I see different behaviour with my serial cable connected compared to disconnected. When disconnected, the transmission appears to work, with a progress meter counting up and then finishing; when connected, nothing happens except a timeout if I wait long enough.
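    (For the command-line route it is worth pinning the COM port's parameters explicitly before capturing; a sketch for Windows XP, where the 9600 8-N-1 setting is a guess at the Z88's Import/Export defaults:)

        REM Configure COM2 to match the Z88 transfer settings; octs=on enables
        REM CTS output handshaking, which the cable above wires up.
        mode com2: baud=9600 parity=n data=8 stop=1 octs=on

        REM Capture whatever arrives on the port into a file instead of the console.
        copy com2: received.txt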

    Read the article

  • How can I make WSUS less invasive for our users?

    - by Cypher
    We have WSUS pushing updates out to our users' workstations, and things are going relatively well, with one annoying caveat: a pop-up is displayed in front of some users informing them that their machine will be rebooted in 15 minutes, and they have no say about it. This may be because they did not log out the prior night. Nevertheless, this is a bit too much and is very counter-productive for our users.

    Here is a bit about our environment: our users run Windows XP Pro and are part of an Active Directory domain, and WSUS is applied via Group Policy. Here is a snapshot of the GPO that is enforcing the WSUS rules.

    Here is how I want WSUS to work (ideally - I'll take whatever can get me close):

    - Updates automatically download and install every night.
    - If no user is logged in, the machine reboots.
    - If a user is logged in, their machine does not reboot, but instead waits for the next "installation period", where it can perform any other needed installations and reboot then (provided a user account is not still logged in).
    - If a user is prompted to reboot, it should only happen once per day (if possible), but every time they are prompted, they must have a way to postpone the reboot.

    I do not want users forced to restart their computer whenever the computer thinks it should happen (unless it's after an update installation and no users are logged in); it isn't productive to force a system restart in the midst of a person's workday. Is there something I can do with the GPO that would make WSUS less intrusive? Even giving the user an option to Restart Later would be better than what is happening now.
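    (The forced reboot maps to the "No auto-restart with logged on users for scheduled automatic updates installations" policy; as a sketch, these are the equivalent registry values that GPO setting writes, applied here with reg.exe:)

        REM Never auto-restart while someone is logged on; Windows Update
        REM prompts instead and lets the user postpone.
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" ^
            /v NoAutoRebootWithLoggedOnUsers /t REG_DWORD /d 1 /f

        REM Re-prompt for a pending restart every 60 minutes rather than once.
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" ^
            /v RebootRelaunchTimeoutEnabled /t REG_DWORD /d 1 /f
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" ^
            /v RebootRelaunchTimeout /t REG_DWORD /d 60 /f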

    Read the article

  • Winamp has slow/skipping video playback on Windows 7

    - by Roy Rico
    I have Windows 7 x64 (7600, 90-day trial version) and Winamp 5.6 installed. When I play a video in Windows Media Player, the video plays smoothly; however, when I play a video in Winamp, it is mostly OK at its original size (but not completely), and if I play it back fullscreen, the playback gets really slow. The video's audio track plays just fine. I have a Dell XPS 420 (8GB of RAM) with an Nvidia GeForce 8800 GTS 512 video card, updated to the latest drivers. I have the default Windows 7 codecs plus the CCCP codec pack, which used to be all I needed under Windows XP to play all types of videos. Are the codecs needed for Windows 7 the same? What's going on?

    UPDATE: As suggested, I turned off Aero and Winamp ran just fine again. So I just have to wait for Winamp to be rewritten to work with the way Vista/Windows 7 renders?

    UPDATE 2: Winamp has updated their player, and it works great with Windows 7 now.

    Read the article

  • Sun Power Button Won't Shut Down System

    - by user36680
    Background: we are running NIS and have NFS mounts from a Solaris 10 workstation to a Solaris 8 server. If the workstation loses its network connection for some reason, when I look at the workstation's console I see repeated messages of the form:

        <date> <time> <hostname> ypbind[<pid>]: NIS server not responding for domain "<domain>"; still trying.

    If I try to log in at the console as a user, it won't work because it can't authenticate my account through NIS. It also won't return to a login prompt again, so I can't log in as root. If I press the power button on the workstation (without holding it in), I see:

        <date> <time> <hostname> power: WARNING: Power off requested from power button or SC, powering down the system!
        Shutdown started.    <date> <time>
        Changing to init state 5 - please wait.
        <date> <time+2 minutes> <hostname> power: WARNING: Failed to shut down the system!

    And it continues with messages of the form:

        <date> <time> <hostname> ypbind[<pid>]: NIS server not responding for domain "<domain>"; still trying.

    So, the questions are:

    1. How do I make NIS stop trying (because I know it will fail)?
    2. Why won't the machine shut down?
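    (For the first question, a sketch for Solaris 10, assuming a root shell can still be reached somewhere, e.g. an already-open session or single-user mode: NIS client lookups are driven by an SMF service that can be stopped temporarily.)

        # Stop the NIS client so ypbind quits retrying; -t means temporary,
        # i.e. the change does not persist across reboots.
        svcadm disable -t svc:/network/nis/client:default

        # Bring it back once the network or NIS server has recovered.
        svcadm enable svc:/network/nis/client:default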

    Read the article

  • Spreadsheet application that can handle big data on OS X

    - by Peter
    I've been working with Excel for quite a while for some statistical analysis that I do regularly, but the size of the data I'm working with has gotten much larger of late. The layout of the databases in question is quite simple: usually just three columns, a UNIX timestamp, an EST value, and a proprietary numeric value, plus a fourth column holding an average of the rows whose timestamps fall within ±1000 of that row's timestamp (a little AVERAGEIFS() formula). That formula and the EST conversion are the only formulas in the sheet.

    I'm beginning to work with files of 500,000+ rows, and running the average formula down the entire column takes forever. The end result is the production of print-worthy graphs. I'm looking for either a UNIX command-line utility or a separate spreadsheet/database application that can handle this amount of data without melting my CPU or making me wait an hour. Is there anything out there?

    TL;DR: a simple Excel sheet with over half a million rows is getting too slow to work with. OS X alternatives?
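    (On the command-line side, the ±1000-second window average is a natural fit for a two-pointer sliding window; a sketch in awk, assuming whitespace-separated columns sorted by timestamp, with the timestamp in column 1 and the value to average in column 3:)

        awk '
        # Load each row, then sweep a window over rows whose timestamps sit
        # within +/-1000 seconds of the current row.
        { ts[NR] = $1; v[NR] = $3 }
        END {
          lo = 1; hi = 0; sum = 0; cnt = 0
          for (i = 1; i <= NR; i++) {
            while (hi < NR && ts[hi+1] <= ts[i] + 1000) { hi++; sum += v[hi]; cnt++ }
            while (ts[lo] < ts[i] - 1000) { sum -= v[lo]; cnt--; lo++ }
            print ts[i], v[i], sum / cnt
          }
        }' data.txt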

    Read the article

  • Why is Mac OS X Lion losing login/network credentials?

    - by Larry Kyrala
    (Moved from Stack Overflow.)

    Symptoms: at work we have OS X 10.7.3 installed, and every once in a while I see the following behaviors:

    1. If the screen is locked, multiple tries of the same user/pass are not accepted.
    2. If the screen is unlocked, opening a new bash terminal may yield prompts such as "I have no name$" instead of "lkyrala$", and ssh refuses outright:

        lkyrala$ ssh lkyrala@ah-lkyrala2u
        You don't exist, go away!

    Even when our Macs are working normally, everyone here has to log in twice: the first attempt after boot always fails, but the second (same password, not changing anything, just pressing Enter again) succeeds. Weird?

    Workarounds: there are some workarounds that resolve the immediate problem but don't prevent it from happening again:

    a) Wait (maybe an hour or two); the problems sometimes go away by themselves.
    b) Kill opendirectoryd and let it restart (from https://discussions.apple.com/thread/3663559).
    c) Hold the power button to reset the computer.

    Discussion: the evidence above points me to something screwy with Open Directory and login credentials. Some other people report having these login problems, but it's hard to determine where the actual problem is (the Mac, or the network environment?). I should add that most of the network is Windows machines, but we have quite a few Macs and Linux machines as well. I'm not sure of the details of how the network auth is mapped from various domains to others; all I know is that our network credentials work for Windows domains as well as Mac and Linux logins, so something is connecting the separate systems, or they use the same global auth system.
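    (When the "You don't exist, go away!" state hits, it can help to ask Open Directory directly whether the user record still resolves; a sketch, with lkyrala as the affected account:)

        # Query the directory search path for the user record; a failure here
        # implicates opendirectoryd rather than ssh or loginwindow.
        dscl /Search -read /Users/lkyrala

        # Workaround (b) as a one-liner: force opendirectoryd to restart.
        sudo killall opendirectoryd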

    Read the article

  • Internet Troubles - PPPoE vs PPPoA?

    - by AkkA
    I have been having some internet troubles at home (an ADSL2+ connection in Australia). We get random drop-outs of the authenticated session: the modem keeps its DSL sync, but we lose authentication and either have to restart the router/modem (a combined Belkin unit; I'm not sure of the model number) or unplug the phone cable, wait about 30 seconds, and plug it in again.

    I've called the ISP (Telstra) a few times, but they only offer limited support when we don't use their supported hardware. Apparently something had happened on their side; they checked the box again (at least it sounded that simple) and told me it would be fine. It wasn't. I've replaced all the filters around the house, but that didn't help either. We do live a little way from the exchange (we get a sync speed of about 3000/900), so I thought it could be due to line noise, but that hasn't helped.

    Telstra allows both PPPoE and PPPoA connections (which I'm configuring through my router; I don't have software on the PC side). I've been running PPPoA the whole time; would it make any difference to change it to PPPoE? If not, are there any other theories as to why we would be experiencing these drop-outs? It had been fine for at least 12 months, then this suddenly started about 2 months ago.

    Read the article

  • Server not accepting uploads

    - by Tatu Ulmanen
    I'm having a strange problem with my VPS: I can download files from it, and I can use PuTTY to connect to it, and all behaves normally. But sometimes, when I try to upload a file to the server or save a file via SFTP, the connection inexplicably fails.

    I am using jEdit to edit files remotely via SFTP. When it works, it works fine. When it doesn't, I get an error message:

        Cannot save: java.io.IOException: inputstream is closed
        Cannot save: java.io.IOException: 4:

    I can see that a temporary save file (#file.php#save#) is created on the server with a file size of 0, so the connection works, but something fails when it comes to sending the actual data. The same thing happens with WinSCP, though the error is different:

        Copying file fatally failed.
        Copying files to remote side failed.

    And I can always browse the server with PuTTY without a problem. I see nothing abnormal in any log files. auth.log shows this when I try to save:

        sshd[32638]: Accepted password for - from - port 62272 ssh2
        sshd[32638]: pam_unix(sshd:session): session opened for user - by (uid=0)
        sshd[32640]: subsystem request for sftp
        sshd[32638]: pam_unix(sshd:session): session closed for user -

    When I wait for a while (say, an hour), everything works fine again. It can't be a temporary ban, as I am still allowed to connect to the server, right? I know this may not be enough info to solve the problem, but I am grateful for any clues or bits of information that might help. What are the possible causes of this kind of behaviour, what log files can I check for clues, etc.? I'm running out of ideas!
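    (To catch where the transfer dies at the protocol level, it may be worth reproducing a save with OpenSSH's verbose client; a sketch, with the host and file as placeholders:)

        # -vvv prints the full SSH/SFTP exchange; the last lines of the debug
        # log show exactly which stage of the upload stalls.
        echo 'put file.php' | sftp -vvv user@myvps.example.com 2> sftp-debug.log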

    Read the article

  • Transcoding media server streaming to the iPhone

    - by pilif
    I have a huge collection of videos in different formats, but with one thing in common: they are not playable on an iPhone (or iPod Touch). Instead of complaining about Apple's IMHO broken world view ("there are no video formats but QuickTime and MP4"), I wonder if there's a solution out there that allows streaming these different videos to the iPhone. This means the source media needs to be transcoded on the fly.

    I have already tried a few solutions, with varying success: PS3 Media Server kind of worked, but only once and only for one single file. TVersity is said to work, but it requires UAC to be disabled, and I see no need for that. The solution I'm looking for should run on Windows 2008 Server or Linux. I just can't believe there's nothing out there that would let me stream my huge video collection to my iPhone (we're talking Wi-Fi here, not 3G).

    Update: after looking at the answers provided and retrying TVersity without much success, I gave Orb another try, and while the web interface failed to work for me, the iPhone application (I tried the free one first) worked flawlessly. Not only that, it also converts the streams on the fly, so you don't have to wait for the transcoding to finish before playback starts. On my 2.26 GHz Mac mini Server this worked even with 1080p material. For Windows 2008 Server users out there: remember to install the Desktop Experience feature in Server Manager if you want this to work. Of all the things I looked at, this provided instant success - even though I'm now probably sending the contents of my hard drive to Orb's central server (sigh).

    Read the article

  • SQL Server log backups "stalling"

    - by MattK
    I have inherited a box running SQL Server 2008 on Windows 2003 and have had a few events where largish (35GB) log backups "stall", both before and after the installation of SQL 2008 SP1. The server log-ships to a standby, so regular log backups are taken at 15-minute intervals. However, after an index reorg causes the log to grow to about 35GB (on a DB with about 17GB of data), the next log backup runs to ~95% completion, then seems to stop. The process shows as suspended, with a wait state of BACKUPIO. CPU, read, and write activity on the SPID also does not change, and the process stays in this state for hours, when normally a backup of this size completes in about 20 minutes.

    This server has a single RAID-1 volume, so the source database files and destination backup files are on the same volume. However, I cannot determine whether another process is blocking the backup. The backup SPID cannot be killed, and the only way to terminate the log backup and clear the lock on the backup file is to cycle the SQL Server service. There was one event where the backup terminated completely, with an error that another process had locked the backup file, but no details about what that process was. Can anyone suggest a cause or a diagnostic process for this situation?
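    (While a backup is stalled, a snapshot of its request state can narrow things down; a sketch via sqlcmd, with the server name as a placeholder:)

        sqlcmd -S MYSERVER -E -Q "SELECT session_id, command, status, wait_type, wait_time, percent_complete FROM sys.dm_exec_requests WHERE command LIKE 'BACKUP%'"

    Comparing wait_time and percent_complete across a few minutes shows whether the backup is crawling or truly frozen.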

    Read the article

  • Will wear induced by turning computers off in the evening be offset by energy savings?

    - by sharptooth
    I'm asking this here because this is primarily a huge-office scenario, and administrators will more likely have the answer I'm looking for. Employees' desktop computers can either be left turned on for the whole night or switched off in the evening and turned back on in the morning. The latter will surely save energy. At the same time, power cycling is hard on the equipment - hardware often breaks specifically at power-on.

    Both energy and hardware replacements cost money. With energy it's quite obvious: you pay every month according to what your power meter shows. With hardware replacements it's worse: you need qualified staff to quickly diagnose the problems, and once something breaks, the affected employee has to wait while their computer is fixed or replaced and the data is recovered. So the company has to choose between saving money on energy and saving money on computer maintenance and lost hours. Such decisions must be well thought out. Is there any detailed study of how turning computers off each evening affects their lifetime?
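    (For scale, a rough sketch of the energy side only; the 90 W idle draw, ~5,500 extra powered-on hours per year, and $0.12/kWh tariff are all assumptions:)

        # 90 W * 5500 h = 495 kWh per machine per year
        # 495 kWh * $0.12/kWh = ~$59 per machine per year
        echo "scale=2; 90 * 5500 / 1000 * 0.12" | bc    # prints 59.40

    Against that per-machine figure one would weigh the expected failure cost: replacement parts plus technician and employee hours, times the (unknown) increase in failure probability - which is exactly the number a detailed study would have to supply.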

    Read the article

  • MacBook Pro with OSX 10.6.3 (Snow Leopard) Wi-Fi network connection breaks after few minutes

    - by Yanick Landry
    I have a MacBook Pro with OS X 10.6.3 (Snow Leopard). After connecting to a Wi-Fi network, the connection "breaks" after a few minutes. What I mean by "breaking" is that no requests get through (they time out), whether loading a web page, connecting to a shared folder, connecting to my local router at 192.168.0.1, or pinging anything. When in a "broken" state, I can see in the Network Settings panel that I still have an active IP, which I can successfully ping.

    I have this problem at home with a D-Link DI-624 router and at work with a D-Link WBR-2310, both with updated firmware. I thought DHCP was the issue, so I tried assigning a fixed IP address (192.168.0.166). It successfully connects, but after a few minutes the connection still breaks. The workaround I'm currently using is to disable the AirPort (in the Network icon menu in the top bar), wait a few seconds, and re-enable it. It then quickly works again, but the connection still breaks after a few minutes. I tried Googling my problem, but I think I can't find any good keywords! It's my first question here, so sorry if I don't respect some rules.

    Read the article
