Search Results

Search found 7731 results on 310 pages for 'exit failure'.

Page 4 of 310

  • Programmatically removing Exit Popup from Page? [closed]

    - by Jose Garcia
    I have a page A that shows an exit popup, and I display page A inside page B using an iframe. I don't want the popup to appear on page B, but it's annoying, and I can't edit page A's code. Can I add some code to page B to remove page A's exit popup? Please provide sample code. I'd prefer something that runs on my shared LAMP hosting, and I can use something other than an iframe if need be. Thanks.

    Read the article

  • MySQL backup and restore from ib_logfile failure

    - by i need help
    Here's the case: after the computer was hacked, we were in a rush to back all data up to another computer. As a result, the MySQL databases were not dumped as SQL statements; what we have is a copy of all the physical files/folders from the C drive, e.g. C:\Program Files\MySQL\MySQL Server 4.1\data, so all the MySQL data sits in unreadable files. The data folder contains files like ib_logfile0 and ib_logfile1, but not ibdata1. Each database's table structures are inside its respective folder (some folders have .frm and .opt; others have .frm, .myd, .myi). How can I retrieve the data from the databases on a new computer? I tried installing the same MySQL version (4.1) on the new computer, replacing everything inside its data folder with the backed-up files, and restarting the MySQL service. The restart fails: "Could not start mysql service on local computer. error 1067: process terminated unexpectedly." The error log shows:

        InnoDB: The first specified data file .\ibdata1 did not exist:
        InnoDB: a new database to be created!
        090930 10:24:49 InnoDB: Setting file .\ibdata1 size to 10 MB
        InnoDB: Database physically writes the file full: wait...
        InnoDB: Error: log file .\ib_logfile0 is of different size 0 87031808 bytes
        InnoDB: than specified in the .cnf file 0 25165824 bytes!
        090930 10:24:49 [ERROR] Can't init databases
        090930 10:24:49 [ERROR] Aborting
        090930 10:24:49 [Note] C:\Program Files\MySQL\MySQL Server 4.1\bin\mysqld-nt: Shutdown complete
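
    A hedged sketch of one way forward, under two assumptions: the original ibdata1 really was never copied, and the 87031808-byte figure in the log is accurate. The folders holding .frm/.myd/.myi files are MyISAM tables and should come back once the files are in place; InnoDB tables, however, live in ibdata1 (in 4.1's default single-tablespace layout), so without that file they are not recoverable from ib_logfile0/1 alone. To at least get the server started, stop the service, put the backed-up folders in the data directory, and make the configured log size match the copied files:

        # my.ini on the new machine -- match the size of the copied
        # ib_logfile0/1 (87031808 bytes = 83 MB):
        [mysqld]
        innodb_log_file_size=83M

    Alternatively, move ib_logfile0 and ib_logfile1 out of the data directory before starting the service; since the InnoDB data they describe is missing anyway, letting MySQL create fresh ones loses nothing further.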

    Read the article

  • Dealing with personal failure

    - by codeelegance
    A while ago I was given the task of updating and extending the functionality of a software project. I was given a year to make the needed changes, working solo. A month into development I came to the conclusion that it would take longer to change the existing product than to rewrite it from the ground up. I'd never attempted a complete rewrite, so I talked with my boss about it and he was thrilled with the idea. I'm a fan of agile development but had never had the opportunity to take advantage of all of the prescribed practices, so when I set to work I tried to incorporate as many as I could. I didn't have direct access to the customer, and my coworkers (non-programmers) knew the business domain but were already so busy they didn't really have time to participate in design meetings, so I resigned myself to working in the dark and occasionally calling one of them over to my desk to get feedback on my progress. I used TDD and refactored mercilessly, and even tried taking a domain-driven design approach. Things went well for a while. As the deadline came closer and the complexity of the project grew, my productivity started slipping. I found myself cutting corners and ignoring the practices I had established as the pressure to meet the deadline increased. I also started working late nights and weekends to keep up with the load. In the end it made little difference how hard I worked. The project missed its deadline, and what was completed wasn't enough to give to the customer. I had failed. Not only had I not finished on time, but the previous version had sat untouched for almost a year, so it wouldn't be of any help. Luckily we had another product that offered some of the same functionality. My boss decided to cancel the project entirely and moved all our orphaned customers to the other product. I spent weeks (along with everyone else at the company) manning the phones, providing technical support for those customers. After it was all over, my boss was gracious enough not to fire me for nearly ruining the company. I was moved to the other product and have been trying to redeem myself ever since. Where did I go wrong? Has anyone else had to deal with this kind of defeat? How did you recover?

    Read the article

  • New SSD, is the MBR broken? DISK BOOT FAILURE

    - by Shevek
    I've been running Windows 7 on a WD 500GB SATA drive in a single-drive, single-partition setup for some time with no issues. I've just installed a new Kingston V Series 64GB SSD and performed a clean install of Win7 to it, deleting the partitions on the 500GB and using that as a data drive. All was well for a few reboots, but then I started to get "DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER" messages. If I put the Win7 install DVD back in the drive it boots fine. I tried a clean install again, after replacing SATA cables and swapping SATA ports, with a complete partition wipe of both drives. Again, it rebooted fine a few times, then came back with the "DISK BOOT FAILURE" error. I looked on the web and found some discussions about it, so I then started from scratch again. This time I wiped the MBR on both drives using MBRWork, disconnected the 500GB, and reinstalled to the SSD. I removed the install DVD and installed all the drivers, which involved many reboots, all with no problem. To make sure, I also did a few cold boots. I then reconnected the 500GB, initialised, partitioned, and formatted it, copied data to it, and did some more reboots and shutdowns. All was OK. Then out of the blue comes another "DISK BOOT FAILURE", and again, if the Win7 install DVD is in the drive it boots fine. So, is the SSD a bad'un? TIA UPDATE: It was a BIOS issue! I found a hidden-away option for HDD boot order, separate from the usual HDD/CDRom/FDD boot order option. The WD was set to boot before the SSD... Swapped them round and all is well. Still don't understand how it worked at first, though... Thanks Solaris

    Read the article

  • dpkg error : old pre-removal script returned error exit status 102

    - by Siva Prasad Varma
    I am unable to install or remove a package on my Ubuntu 10.04 due to the following error:

        $ sudo apt-get autoremove
        Password:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          busybox
        0 upgraded, 0 newly installed, 1 to remove and 9 not upgraded.
        1 not fully installed or removed.
        Need to get 0B/212kB of archives.
        After this operation, 627kB disk space will be freed.
        Do you want to continue [Y/n]? y
        Selecting previously deselected package nscd.
        (Reading database ... 235651 files and directories currently installed.)
        Preparing to replace nscd 2.11.1-0ubuntu7.8 (using .../nscd_2.11.1-0ubuntu7.8_amd64.deb) ...
        invoke-rc.d: not a symlink: /etc/rc2.d/S76nscd
        dpkg: warning: old pre-removal script returned error exit status 102
        dpkg - trying script from the new package instead ...
        invoke-rc.d: not a symlink: /etc/rc2.d/S76nscd
        dpkg: error processing /var/cache/apt/archives/nscd_2.11.1-0ubuntu7.8_amd64.deb (--unpack):
         subprocess new pre-removal script returned error exit status 102
        update-rc.d: warning: /etc/rc2.d/S76nscd is not a symbolic link
        invoke-rc.d: not a symlink: /etc/rc2.d/S76nscd
        dpkg: error while cleaning up:
         subprocess installed post-installation script returned error exit status 102
        Errors were encountered while processing:
         /var/cache/apt/archives/nscd_2.11.1-0ubuntu7.8_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    What should I do to resolve this error? I have tried sudo dpkg --remove --force-remove-reinstreq nscd but it did not work.
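
    A hedged sketch of the usual fix, assuming /etc/rc2.d/S76nscd really is a plain file rather than a symlink, which is exactly what invoke-rc.d is complaining about: replace it with the symlink into /etc/init.d that the rc system expects, then let apt finish the half-done operation.

        # confirm it is a regular file, not a symlink
        ls -l /etc/rc2.d/S76nscd
        # replace it with the expected symlink
        sudo rm /etc/rc2.d/S76nscd
        sudo ln -s ../init.d/nscd /etc/rc2.d/S76nscd
        # retry the pending removal/upgrade
        sudo apt-get -f install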

    Read the article

  • Video games, content strategy, and failure - oh my.

    - by Roger Hart
    Last night was the CS London group's event Content Strategy, Manhattan Style. Yes, it's a terrible title, feeling like a self-conscious grasp for chic, sadly commensurate with the venue. Fortunately, this was not commensurate with the event itself, which was lively, relevant, and engaging. Although mostly if you're a consultant. This is a strong strain in current content strategy discourse, and I think we're going to see it remedied quite soon. Not least in Paris on Friday. A lot of the bloggers, speakers, and commentators in the sphere are consultants, or part of agencies and other consulting organisations. A lot of the talk is about how you sell content strategy to your clients. This is completely acceptable. Of course it is. And it's actually useful if that's something you regularly have to do. To an extent, it's even portable to those of us who have to sell content strategy within an organisation. We're still competing for credibility and resource. What we're doing less is living in the beginning of a project. This was touched on by Jeffrey MacIntyre (albeit in a your-clients kind of a way) who described "the day two problem". Companies, he suggested, build websites for launch day, and forget about the need for them to be ongoing entities. Consultants, agencies, or even internal folks on short projects will live through Day Two quite often: the trainwreck moment where somebody realises that even if the content is right (which it often isn't), and on time (which it often isn't), it'll be redundant, outdated, or inaccurate by the end of the week/month/fickle social media attention cycle. The thing about living through a lot of Day Two is that you see a lot of failure. Nothing succeeds like failure? Failure is good. When it's structured right, it's an awesome tool for learning - that's kind of how video games work. I'm chewing over a whole blog post about this, but basically in game-like learning, you try, fail, go round the loop again. Success eventually yields joy. It's a relatively well-known phenomenon. It works best when that failing step is acutely felt, but extremely inexpensive. Dying in Portal is highly frustrating and surprisingly characterful, but the save-points are well designed and the reload unintrusive. The barrier to re-entry into the loop is very low, as is the cost of your failure out in meatspace. So it's easy (and fun) to learn. Yeah, spot the difference with business failure. As an external content strategist, you get to rock up with a big old folder full of other companies' Day Two (and ongoing day two hundred) failures. You can't send the client round the learning loop - although you may well be there because they've been round it once - but you can show other people's round trip. It's not as compelling, but it's not bad. What about internal content strategists? We can still point to things that are wrong, and there are some very compelling tools at our disposal - content inventories, user testing, and analytics, for instance. But if we're picking up big organically sprawling legacy content, Day Two may well be a distant memory, and the felt experience of web content failure is unlikely to be immediate to many people in the organisation. What to do? My hunch here is that the first task is to create something immediate and felt, but that it probably needs to be a success. Something quickly doable and visible - a content problem solved with a measurable business result. Now, that's a tall order; but scrape off the "quickly" and it's the whole reason we're here.
At Red Gate, I've started with the text book fear and passion introduction to content strategy. In fact, I just typo'd that as "contempt strategy", and it isn't a bad description. Yelling "look at this, our website is rubbish!" gets you the initial attention, but it doesn't make you many friends. And if you don't produce something pretty sharp-ish, it's easy to lose the momentum you built up for change. The first thing I've done - after the visual content inventory - is to delete a bunch of stuff. About 70% of the SQL Compare web content has gone, in fact. This is a really, really cheap operation. It's visible, and it's powerful. It's cheap because you don't have to create any new content. It's not free, however, because you do have to validate your deletions. This means analytics, actually reading that content, and talking to people whose business purposes that content has to serve. If nobody outside the company uses it, and nobody inside the company thinks they ought to, that's a no-brainer for the delete list. The payoff here is twofold. There's the nebulous hard-to-illustrate "bad content does user experience and brand damage" argument; and there's the "nobody has to spend time (money) maintaining this now" argument. One or both are easily felt, and the second at least should be measurable. But that's just one approach, and I'd be interested to hear from any other internal content strategy folks about how they get buy-in, maintain momentum, and generally get things done.

    Read the article

  • Looking for an actual experience of a two-drive RAID 5 failure?

    - by Brian
    I'm wondering if anyone has personal experience of a two-drive RAID 5 failure with large drives? As I understand it, the theory is that with large 1-2TB drives, if one drive in the RAID set fails, the rebuild has to read everything and thus hits all the other drives very hard, and the chance of a second failure goes up, especially if the drives were from the same manufacturing batch. And if you lose another drive, you lose all the data. This is usually explained after the statement "RAID is not backup", which I agree with. The theory makes sense, and I understand it, but does it really happen?

    Read the article

  • Multithreaded game fails on SwapBuffers in render thread at exit

    - by user782220
    The render loop and the Windows message loop run on separate threads. The way the program exits is that after PostQuitMessage is called in WM_DESTROY, the message-loop thread signals the render-loop thread to exit. As far as I can tell, before the render-loop thread can even process the signal, it tries SwapBuffers, and that fails. My question: is there something about how Windows processes WM_DESTROY and WM_QUIT, maybe in DefWindowProc, that causes various objects associated with rendering to go away even though I haven't explicitly deleted anything? And would that explain why the rendering thread is making bad calls at exit?

    Read the article

  • Android: How to properly exit an application when an inconsistent condition is unavoidable?

    - by Bevor
    First of all, I have already read all the discussion about how it isn't a good idea to exit an Android application manually, but in my case it seems to be needed. I have an AsyncTask which does a lot of work in the background: downloading data, saving it to local storage, and preparing it for use in the application. It can happen that there is no internet connection, or something else goes wrong. For all those cases I have exception handling which returns the result, and if there is an exception the application is unusable, so I need to exit it. My question is: when I exit the application from code, do I have to do any unregistering, unloading, or unbinding tasks, or is System.exit(0) OK? I do all this in an AsyncTask; see my example:

        public class InitializationTask extends AsyncTask<Void, Void, InitializationResult> {

            private ProcessController processController = new ProcessController();
            private ProgressDialog progressDialog;
            private Activity mainActivity;

            public InitializationTask(Activity mainActivity) {
                this.mainActivity = mainActivity;
            }

            @Override
            protected void onPreExecute() {
                super.onPreExecute();
                progressDialog = new ProgressDialog(mainActivity);
                // "The data is being prepared.\nPlease wait..."
                progressDialog.setMessage("Die Daten werden aufbereitet.\nBitte warten...");
                progressDialog.setIndeterminate(true); // the "loading amount" is not measured
                progressDialog.setCancelable(false);
                progressDialog.show();
            }

            @Override
            protected InitializationResult doInBackground(Void... params) {
                return processController.initializeData();
            }

            @Override
            protected void onPostExecute(InitializationResult result) {
                super.onPostExecute(result);
                progressDialog.dismiss();
                if (!result.isValid()) {
                    AlertDialog.Builder dialog = new AlertDialog.Builder(mainActivity);
                    dialog.setTitle("Initialisierungsfehler"); // "Initialization error"
                    dialog.setMessage(result.getReason());
                    dialog.setPositiveButton("Ok", new DialogInterface.OnClickListener() {
                        @Override
                        public void onClick(DialogInterface dialog, int which) {
                            dialog.cancel();
                            // TODO cancel application
                            System.exit(0);
                        }
                    });
                    dialog.show();
                }
            }
        }

    Read the article

  • Apache error: could not make child process 25105 exit, attempting to continue anyway

    - by Temnovit
    Hello! I have a web server based on Ubuntu Server 9.10 with this software: Apache 2, PHP 5.3, MySQL 5, Python 2.5. A few of my websites are PHP based; a few use Python/Django through mod_wsgi. For a month or so, every day my Apache server stops responding until I manually restart it. The error logs show:

        [Fri Mar 05 17:06:47 2010] [error] could not make child process 25059 exit, attempting to continue anyway
        [Fri Mar 05 17:06:47 2010] [error] could not make child process 25061 exit, attempting to continue anyway
        [Fri Mar 05 17:06:47 2010] [error] could not make child process 24930 exit, attempting to continue anyway
        [Fri Mar 05 17:06:47 2010] [error] could not make child process 25084 exit, attempting to continue anyway
        [Fri Mar 05 17:06:47 2010] [error] could not make child process 25105 exit, attempting to continue anyway

    and so on. I tried to google this problem but it seems that I can't find a solution. How can I determine the cause of this error, and how do I fix it? Thank you for your help.
    UPDATE: Updating mod_wsgi to version 3.1 didn't solve the problem; updating PHP to 5.3 also didn't solve it. Here is a list of all installed modules:

        core mod_log_config mod_logio prefork http_core mod_so mod_alias mod_auth_basic
        mod_authn_file mod_authz_default mod_authz_groupfile mod_authz_host mod_authz_user
        mod_autoindex mod_cgi mod_deflate mod_dir mod_env mod_mime mod_negotiation mod_php5
        mod_rewrite mod_setenvif mod_status mod_wsgi

    Here's how my virtual host with WSGI looks:

        <VirtualHost *:80>
            ServerName example.net
            DocumentRoot /var/www/example.net

            # WSGI script that serves the whole thing
            WSGIScriptAlias / /var/www/example.net/index.wsgi
            WSGIDaemonProcess example user=wsgideamonuser group=root processes=1 threads=10
            WSGIProcessGroup example

            Alias /static /var/www/example.net/static
            # serving admin files
            Alias /media/ /usr/local/lib/python2.6/dist-packages/django/contrib/admin/media/

            <Location "/static">
                SetHandler None
            </Location>
            <Location "/media">
                SetHandler None
            </Location>

            ErrorLog /var/www/example.net/error.log
        </VirtualHost>

    The error log now contains two types of errors, one following the other:

        [error] child process 9486 still did not exit, sending a SIGKILL
        [error] could not make child process 9106 exit, attempting to continue anyway
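
    A hedged diagnostic sketch, assuming the stuck children are still visible when the server hangs (25105 below stands in for whichever PID the error log names; strace and gdb may need installing first):

        # list the Apache children that survive a restart attempt
        ps -ef | grep apache2
        # attach to one stuck child and see what system call it is blocked in
        sudo strace -p 25105
        # or dump a backtrace of all its threads
        sudo gdb -p 25105 -batch -ex 'thread apply all bt'

    In mod_wsgi setups this message often points at application threads that never exit on shutdown; a backtrace usually shows where they are stuck.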

    Read the article

  • In terms of loss of volume or corruption, is failure probability of an Amazon EBS volume 'x' independent of the failure of another volume 'y'?

    - by Tony Morgan
    In terms of loss of volume or corruption, is the failure probability of an Amazon EBS* volume 'x' independent of the failure of another volume 'y'? Amazon states[1] an AFR** of between 0.1% and 0.5%; let's take 0.5%, i.e. 0.005. To restate the question: is the AFR of two mirrored EBS volumes actually 0.005 * 0.005 = 0.000025? To be clear, I'm not interested in high availability here, just very high durability.
    *EBS = Elastic Block Store (Amazon's persistent disks)
    **AFR = annual failure rate
    [1] http://aws.amazon.com/ebs/

    Read the article

  • No exit option if something is running

    - by max
    How can I catch what is going on when the user chooses the exit option from a menu? I'd like to be able to handle the event of a user who is about to close the application while some activity is being performed, so that he/she shouldn't be able to exit. Here's the code I wrote. In a nutshell: recording is being performed -- user clicks Exit -- WARNING "Process is running, you can't do it" (the process goes on); nothing is running -- user clicks Exit -- application closes. Is it possible to solve the problem by just adding a few lines of code, without having to rewrite the entire program? Thanks very much in advance. MAX

        exitAction = new AbstractAction("Exit") {
            public void actionPerformed(ActionEvent e) {
                // a check on an "activity in progress" flag would have to
                // happen here, before the exit
                System.exit(0);
            }
        };
        exitAction.putValue(Action.NAME, "Exit"); // description
        exitAction.putValue(Action.SHORT_DESCRIPTION, "Exit application");

    Read the article

  • What can cause Java to keep running after System.exit()?

    - by uckelman
    I have a Java program which is being started via ProcessBuilder from another Java program. System.exit(0) is called from the child program, but for some of our users (on Windows) the java.exe process associated with the child doesn't terminate. The child program has no shutdown hooks, nor does it have a SecurityManager which might stop System.exit() from terminating the VM. I can't reproduce the problem myself on Linux or Windows Vista. So far, the only reports of the problem come from two Windows XP users and one Vista user, using two different JREs (1.6.0_15 and 1.6.0_18), but they're able to reproduce the problem every time. Can anyone suggest reasons why the JVM would fail to terminate after System.exit(), and then only on some machines?
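
    A hedged way to narrow this down on one of the affected machines, assuming a JDK (not just a JRE) can be put on it: System.exit() still runs shutdown hooks and any enabled finalizers before the VM halts, and even though the program registers no hooks itself, libraries it uses can. A thread dump of the stuck child shows whether one of those, or some other non-daemon thread, is blocking the exit.

        # find the child VM's pid (jps ships with the JDK)
        jps -l
        # dump all of its thread stacks; 12345 stands in for the real pid
        jstack 12345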

    Read the article

  • System won't boot unless I type "exit" at initramfs prompt

    - by Ozzah
    I installed mdadm for my RAID, and ever since then, when I boot up, the system just sits at the purple screen forever. After pulling my hair out for a week, I finally discovered, purely by accident, that it's sitting at an initramfs prompt in the background, and I have to blindly type "exit"; then Ubuntu resumes a normal boot. How do I fix this? I am unable to reboot my machine if I'm not sitting in front of it, because it will never boot!
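
    A hedged sketch of a common fix, assuming the pause is the initramfs waiting for arrays it knows nothing about (easy to end up with if mdadm was installed after the arrays were created): record the arrays in mdadm.conf and rebuild the initramfs so the next boot can assemble them without dropping to the prompt.

        # append the currently running arrays to mdadm's config
        sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
        # rebuild the initramfs so the boot-time copy sees the same config
        sudo update-initramfs -u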

    Read the article

  • Lessons from a SAN Failure

    - by Bill Graziano
    At 1:10AM Sunday morning the main SAN at one of my clients suffered a “partial” failure.  Partial means that the SAN was still online and functioning but the LUNs attached to our two main SQL Servers “failed”.  Failed means that SQL Server wouldn’t start and the MDF and LDF files mostly showed a zero file size.  But they were online and responding and most other LUNs were available.  I’m not sure how SANs know to fail at 1AM on a Saturday night but they seem to.  From a personal standpoint this worked out poorly: I was out with friends and after more than a few drinks.  From a work standpoint this was about the best time to fail you could imagine.  Everything was running well before Monday morning.  But it was a long, long Sunday.  I started tipsy, got tired and ended up hung over later in the day. Note to self: Try not to go out drinking right before the SAN fails. This caught us at an interesting time.  We’re in the process of migrating to an entirely new set of servers so some things were partially moved.  This made it difficult to follow our procedures as cleanly as we’d like.  The benefit was that we had much better documentation of everything on the server.  I would encourage everyone to really think through the process of implementing your DR plan and document as much as possible.  Following a checklist is much easier than trying to remember at night under pressure in a hurry after a few drinks. I had a series of estimates on how long things would take.  They were accurate for any single server failure.  They weren’t accurate for a SAN failure that took two servers down.  This wasn’t bad but we should have communicated better. Don’t forget how many things are outside the database.  Logins, linked servers, DTS packages (yikes!), jobs, service broker, DTC (especially DTC), database triggers and any objects in the master database are all things you need backed up.  We’d done a decent job on this and didn’t find significant problems here.  That said this still took a lot of time.  There were many annoyances as a result of this.  Small settings like a login’s default database had a big impact on whether an application could run.  This is probably the single biggest area of concern when looking to recreate a server.  I’d encourage everyone to go through every single node of SSMS and look for user created objects or settings outside the database. Script out your logins with the proper SID and already encrypted passwords and keep it updated.  This makes life so much easier.  I used an approach based on KB246133 that worked well.  I’ll get my scripts posted over the next few days. The disaster can cause your DR process to fail in unexpected ways.  We have a job that scripts out all logins and role memberships and writes it to a file.  This runs on the DR server and pulls from the production server.  Upon opening the file I found that the contents were a “server not found” error.  Fortunately we had other copies and didn’t need to try and restore the master database.  This now runs on the production server and pushes the script to the DR site.  Soon we’ll get it pushed to our version control software. One of the biggest challenges is keeping your DR resources up to date.  Any server change (new linked server, new SQL Server Agent job, etc.) means that your DR plan (and scripts) is out of date.  It helps to automate the generation of these resources if possible. Take time now to test your database restore process.  We test ours quarterly.  
If you have a large database I’d also encourage you to invest in a compressed backup solution.  Restoring backups was the single largest consumer of time during our recovery. And yes, there’s a database mirroring solution planned in our new architecture. I didn’t have much involvement in things outside SQL Server but this caused many, many things to change in our environment.  Many applications today aren’t just executables or web sites.  They are a combination of those plus network infrastructure, reports, network ports, IP addresses, DTS and SSIS packages, batch systems and many other things.  These all needed a little bit of attention to make sure they were functioning properly. Profiler turned out to be a handy tool.  I started a trace for failed logins and kept that running.  That let me fix a number of problems before people were able to report them.  I also ran traces to capture exceptions.  This helped identify problems with linked servers. Overall the thing that gave me the most trouble was linked servers.  In order for a linked server to function properly you need to be pointed to the right server, have the proper login information, have the network routes available and have MSDTC configured properly.  We have a lot of linked servers and this created many failure points.  Some of the older linked servers used IP addresses and not DNS names.  This meant we had to go in and touch all those linked servers when the servers moved.

    Read the article

  • Post-RAID 5 setup, reboot shows single hard drive failure on Ubuntu 12.10?

    - by junkie
    I just set up RAID 5 on Linux using three HDDs, as per a guide. It all went fine until I rebooted and got the following text: http://i.stack.imgur.com/Zsfjk.jpg. Does this mean one of my HDDs has failed? How do I check whether any of them are failing? I tried using smartctl and didn't see any issues. Or is it nothing to do with failure and something else altogether? I would like to get the RAID 5 array working again, but I'm not sure where to go from here. I'm using Ubuntu 12.10, and the three RAID disks each have a GPT partition table with a single full-size partition of filesystem type ext4. Note I only got an error on reboot, not while I was creating the RAID array, which went fine. Thanks.
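
    Since the screenshot isn't reproduced here, a hedged first step is to ask md itself what state the array is in; /dev/md0, sdb1, sdc1, and sdd1 below are assumptions, so substitute your own array and member names.

        # overall state: [UUU] means all three members up, [_UU] one missing
        cat /proc/mdstat
        # per-array detail, including any member marked faulty or removed
        sudo mdadm --detail /dev/md0
        # per-disk superblocks, useful if the array did not assemble at all
        sudo mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1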

    Read the article

  • SSD becomes hot, disk failure warning

    - by Aegluin
    I have a two-week-old SSD (Kingston SSDNow 64GB). Yesterday the computer shut down twice, and after rebooting I was bombarded with disk failure warnings. I usually take such warnings seriously (and I backed up), but I'm skeptical. After cooling down, the laptop boots again, and the only red SMART value was the temperature (Ubuntu did not show the temperature at the time of failure, only the 29°C it read afterwards). After refreshing the SMART status and running a self-test, everything is green. Before contacting Kingston support, I would like to know whether it could be a software issue: is it possible that it is a false alarm, and how can I check? I installed Ubuntu 12.04 32-bit and took care of alignment. I assumed Ubuntu was set up with optimal settings for SSDs; how can I check that there was no mistake? The current temperature is around 40-56°C. Is such a temperature abnormal for SSDs? Output of sudo smartctl --all /dev/sda: http://pastebin.ubuntu.com/1175940/
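
    A hedged set of checks, assuming the drive really is /dev/sda as in the pastebin: the drive's own error and self-test logs show whether it recorded anything at the time of the two shutdowns, which helps separate a real hardware fault from an over-eager warning.

        # current temperature among the other SMART attributes
        sudo smartctl -A /dev/sda
        # errors the drive itself has logged, if any
        sudo smartctl -l error /dev/sda
        # run a short self-test, then read the result a few minutes later
        sudo smartctl -t short /dev/sda
        sudo smartctl -l selftest /dev/sda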

    Read the article

  • SH/BASH - Scan a log file until some text occurs, then exit. How??

    - by James
    My current working environment is OS X 10.4.11. My current script:

        #!/bin/sh
        tail -f log.txt | while read line
        do
            if echo $line | grep -q 'LOL CANDY'; then
                echo 'LOL MATCH FOUND'
                exit 0
            fi
        done

    It works properly the first time, but on the second run and beyond it takes two occurrences of 'LOL CANDY' to appear before the script will exit, for whatever reason. And although I'm not sure it is specifically related, there is the problem of the "tail -f" staying open forever. Can someone please give me an example that will work without using tail -f? You can give me a bash script if you want, as OS X can handle sh, bash, and some other shells I think.
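
    Two things are likely going on, hedged since OS X 10.4's tools vary a little: the exit 0 runs in the subshell on the right-hand side of the pipe, so the script itself keeps waiting on tail -f, which only dies the next time it writes into the broken pipe (which may be why a second occurrence seems to be needed); and tail -f starts by printing the last ten lines of the file, so text from a previous run can be seen again. A sketch that avoids tail -f entirely, polling instead and only ever looking at bytes appended after the script starts (log name and poll interval are assumptions):

        #!/bin/sh
        # watch log.txt for 'LOL CANDY' in newly appended data only
        offset=$(wc -c < log.txt)
        while :; do
            # tail -c +N prints from byte N onwards (1-indexed)
            if tail -c "+$((offset + 1))" log.txt | grep -q 'LOL CANDY'; then
                echo 'LOL MATCH FOUND'
                exit 0
            fi
            sleep 1
        done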

    Read the article

  • What is an appropriate way to programmatically exit an application?

    - by denchr
    I am evaluating user input as commands for my application. If the user presses Q or q and then hits Enter, the application quits and execution terminates. Is there a proper context, or best practice, for how to do that? I do not have any resources to release or anything like that. Should I just use System.exit(0);? Is there a recommended way to do it? As my first approach I do something like this:

        while (true) {
            try {
                BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
                // Other logic goes here...
                if (br.readLine().equalsIgnoreCase("Q")) {
                    System.exit(0);
                }
            } catch (IOException ioe) {
                System.out.println("IO error trying to read your selection");
            }
        }

    Read the article

  • Unable to run Tor even in terminal, Vidalia exit code 127

    - by Ubuntu Newb
    I'm using Ubuntu 12.10 (Quantal) and I recently installed Tor (32-bit) following the instructions on the Tor Project's page. After extracting the bundle, I started the launcher script from the console and got this:

        master@ubuntu:~/tor-browser_en-US$ ./start-tor-browser
        Launching Tor Browser Bundle for Linux in /home/master/tor-browser_en-US
        ./start-tor-browser: 225: ./start-tor-browser: ./App/vidalia: not found
        Vidalia exited abnormally. Exit code: 127

    Then I ran Vidalia from the console:

        master@ubuntu:~/tor-browser_en-US$ vidalia
        (<unknown>:11354): IBUS-WARNING **: Unable to load /var/lib/dbus/machine-id: Failed to open file '/var/lib/dbus/machine-id': Permission denied
        master@ubuntu:~/tor-browser_en-US$ vidalia
        (<unknown>:11358): IBUS-WARNING **: Unable to load /var/lib/dbus/machine-id: Failed to open file '/var/lib/dbus/machine-id': Permission denied

    And after Vidalia's GUI opens, I get the error prompt about starting Tor: "Vidalia was unable to start Tor. Check your settings to ensure the correct name and location of your Tor executable is specified." How can I start Tor?
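
    Exit code 127 is the shell's "command not found", so the launcher could not execute ./App/vidalia at all (the bare vidalia run afterwards is a separately installed system copy, not the bundle's). A hedged check, assuming the usual cause of a 32-bit bundle on a 64-bit Ubuntu:

        cd ~/tor-browser_en-US
        ls -l App/vidalia    # is the binary actually there and executable?
        file App/vidalia     # 32-bit or 64-bit ELF?
        uname -m             # x86_64 means a 64-bit OS
        # a 32-bit binary on 64-bit Ubuntu 12.10 needs the 32-bit libraries
        sudo apt-get install ia32-libs

    If uname -m reports x86_64, the simpler route is to download the 64-bit Tor Browser Bundle instead.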

    Read the article

  • Exit link tracking with timestamped logs on 3rd party content

    - by dandv
    I want to track clicks on exit links placed in 3rd-party content, for example on Twitter. I also need the timestamps of the clicks. Google Analytics can't be embedded in 3rd-party content. Another option is a URL shortener like bit.ly; however, bit.ly and goo.gl don't log the time of a click at any better granularity than a full day. su.pr shows the time for the past day in its analytics graph, but its analytics download only includes the day, not the time. cli.gs was touted as having the most detailed analytics, yet it doesn't show the time either, and it forces the user through a preview page. Any ideas?
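
    One hedged alternative, since none of the shorteners listed keep sub-day timestamps: run the redirect on a host you control and let the web server's access log, which records a full timestamp on every hit, be the analytics. The /out path and log location below are assumptions; the redirect itself can be a few lines of whatever your LAMP host runs.

        # every click on a link like http://example.com/out?u=TARGET lands in
        # the access log with a full timestamp; pull time and requested URL:
        grep 'GET /out' /var/log/apache2/access.log | awk '{print $4, $7}'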

    Read the article
