Search Results

Search found 59084 results on 2364 pages for 'local system'.

  • Mount of File System Failed. -- After upgrading from 9.04 to 9.10

    - by javanoob
    After upgrading from Ubuntu 9.04 to 9.10 I am getting this message on boot up:

        Mount of File System Failed. A maintenance shell will now be started.
        CONTROL-D will terminate this shell and retry.
        myusername@root:~$

    After searching in Google, I found out that I have to run fsck on the OS partition from an Ubuntu live CD. Here are my questions:

    1) I don't know which partition my Ubuntu OS is installed on (those are not user-friendly drive names to remember, right? :)) -- Is there any command to find out which partition Ubuntu is installed on?
    2) Can I do this from an Ubuntu 10.04 live CD?

    Thanks in advance.

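    One way to answer question 1 from the live CD session is to list the partitions and their filesystem types; the ext3/ext4 partition is normally the Ubuntu root. A minimal sketch (device names such as /dev/sda1 are examples only and will differ on your machine):

        # List partitions and their filesystem types from the live CD
        sudo fdisk -l
        sudo blkid

        # Assuming the root filesystem turned out to be /dev/sda1 (example only),
        # check it while it is NOT mounted
        sudo fsck -f /dev/sda1
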
  • OpenBSD init script for an SSH VPN tunnel

    - by manthis
    I have a server hosting SSH tunnels and OpenBSD 4.5 clients connecting to it. Things work just fine, but I need to automate the connection from the client to the server, so that if the client is accidentally rebooted the connection is re-established unattended. It should be as straightforward as including the ssh connection in an init script, but I have miserably failed to do so by adding it to /etc/rc.local, which is the file where I usually do this sort of thing. Right now I am using autossh to restart the connection if necessary, and the script that I call from /etc/rc.local follows:

        #!/bin/sh
        #
        # Example script to start up tunnel with autossh.
        #
        # This script will tunnel 2200 from the remote host
        # to 22 on the local host. On remote host do:
        # ssh -p 2200 localhost
        #
        # $Id: autossh.host,v 1.6 2004/01/24 05:53:09 harding Exp $
        #
        ID=root
        HOST=example.com
        #AUTOSSH_POLL=600
        #AUTOSSH_PORT=20000
        #AUTOSSH_GATETIME=30
        #AUTOSSH_LOGFILE=$HOST.log
        #AUTOSSH_DEBUG=yes
        #AUTOSSH_PATH=/usr/local/bin/ssh
        export AUTOSSH_POLL AUTOSSH_LOGFILE AUTOSSH_DEBUG AUTOSSH_PATH AUTOSSH_GATETIME AUTOSSH_PORT
        autossh -2 -f -M 20000 ${ID}@${HOST}

    The script detaches just fine when run manually, so I include it in /etc/rc.local as:

        echo -n 'starting local daemons:'
        if [ -x /usr/local/sbin/autossh.sh ]; then
                echo -n 'ssh tunnel'
                /usr/local/sbin/autossh.sh
        fi
        echo '.'

    I have also tried calling it from /etc/hostname.tun0, in case /etc/rc.local is not run at the right time relative to the network coming up, so I would use:

        inet 10.254.254.2 255.255.255.252 10.254.254.1
        !/usr/local/sbin/autossh.sh

    Your input is highly appreciated.

  • How can I copy the output from a remote command into the local clipboard?

    - by cwd
    I use iTerm2 as my terminal client in Mac OS X. On the local system I can use pbcopy and pbpaste to transfer data between the system clipboard and the terminal, but of course this doesn't work when you're ssh'ed into another machine. Is there some way I can take the result of a command and copy it to the clipboard automatically? Perhaps an AppleScript to grab the text in the iTerm window and then get the next-to-last line? For instance, if I wanted to copy the current working directory: I run pwd, then use the mouse to select the text, and then press command + c. Is there any better / faster / automatic way of doing this? I'm not looking for a bulletproof solution that would work for every command (e.g. it might not work when there is a huge scrollback) - I'm just looking for something to make this task that I do quite often a little less tedious.

    Update: I'm looking into using screen to do this, but I'm still not sure if it is possible.

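    One low-tech approach that avoids touching the remote side at all is to run the command from the local machine and pipe its output straight into pbcopy. A minimal sketch (host and command are placeholders):

        # Run the command over ssh and put its output on the local clipboard
        ssh user@remotehost 'pwd' | pbcopy

        # Strip the trailing newline first, if that matters when pasting
        ssh user@remotehost 'pwd' | tr -d '\n' | pbcopy
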
  • fsck: FILE SYSTEM WAS MODIFIED after each check with -c, why?

    - by Chris
    I use a script to partition and format CF cards (connected with a USB card writer) in an automated way. After the main process I check the card again with fsck. To check bad blocks I also tried the '-c' switch, but I always get a return value != 0 and the message "FILE SYSTEM WAS MODIFIED" (see below). I get the same result when checking the very same drive several times... Does anyone know why a) the file system is modified at all, and b) why this seems to happen every time I check and not only in case of an error (like bad blocks)? Here's the output:

        linux-box# fsck.ext3 -c /dev/sdx1
        e2fsck 1.40.2 (12-Jul-2007)
        Checking for bad blocks (read-only test): done
        Pass 1: Checking inodes, blocks, and sizes
        Pass 2: Checking directory structure
        Pass 3: Checking directory connectivity
        Pass 4: Checking reference counts
        Pass 5: Checking group summary information

        Volume (/dev/sdx1): ***** FILE SYSTEM WAS MODIFIED *****
        Volume (/dev/sdx1): 5132/245760 files (1.2% non-contiguous), 178910/1959896 blocks

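    A couple of hedged ways to narrow this down, assuming an ext3 filesystem on /dev/sdx1 as in the output above: run the check strictly read-only so nothing can be written, and look at the bad-blocks inode that the -c switch updates.

        # Pure read-only check: -n answers "no" to every question,
        # so e2fsck cannot modify the filesystem
        fsck.ext3 -n /dev/sdx1

        # Run the bad-block scan separately, without recording the result
        badblocks -sv /dev/sdx1

        # Show the blocks currently recorded in the bad-blocks inode
        dumpe2fs -b /dev/sdx1
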
  • A very useful custom component

    - by Kevin Smith
    Whenever I am debugging a problem in WebCenter Content (WCC) I often find it useful to see the contents of the internal data binder used by WCC when executing a service. I want to know the value of all parameters passed in by the caller, either a user in the web GUI or an application calling the service via RIDC or web services. I also want to know the value of binder variables calculated by WCC as it processes a service. What defaults has it applied based on configuration settings or profile rules? What values has it derived based on the user input?

    To help with this I created a component that uses a Java filter to dump the contents of the internal data binder to the WCC trace file. It dumps the binder contents using the toString() method. You can register this filter code using many different filter hooks to see how the binder is updated as WCC processes the service. By default, it uses the validateStandard filter hook, which is useful during a CHECKIN service. It uses the system trace section, so make sure that trace section is enabled before looking for the output from this component. Here is some sample output:

        system/6    10.09 09:57:40.648    IdcServer-1    filter: postParseDataForServiceRequest, binder start --
        system/6    10.09 09:57:40.698    IdcServer-1    *** LocalData ***
        system/6    10.09 09:57:40.698    IdcServer-1    (10 keys + 0 defaults)
        system/6    10.09 09:57:40.698    IdcServer-1    ClientEncoding=UTF-8
        system/6    10.09 09:57:40.698    IdcServer-1    IdcService=CHECKIN_UNIVERSAL
        system/6    10.09 09:57:40.698    IdcServer-1    NoHttpHeaders=0
        system/6    10.09 09:57:40.698    IdcServer-1    UserDateFormat=iso8601
        system/6    10.09 09:57:40.698    IdcServer-1    UserTimeZone=UTC
        system/6    10.09 09:57:40.698    IdcServer-1    dDocTitle=Check in from RIDC using Framework Folder
        system/6    10.09 09:57:40.698    IdcServer-1    dDocType=Document
        system/6    10.09 09:57:40.698    IdcServer-1    dSecurityGroup=Public
        system/6    10.09 09:57:40.698    IdcServer-1    parentFolderPath=/folder1/folder2
        system/6    10.09 09:57:40.698    IdcServer-1    primaryFile=testfile5.bin
        system/6    10.09 09:57:40.698    IdcServer-1    ***  RESULT SETS  ***
        system/6    10.09 09:57:40.698    IdcServer-1    binder end --------------------------------------------

    See the readme included in the component for more details. You can download the component from here.

  • Why does grub selection no longer appear on my dual-boot system?

    - by ksinkar
    I had set up an Ubuntu 12.04 and Fedora 17 dual boot on my system. During the installation I installed Ubuntu first and Fedora later; Fedora recognized Ubuntu and added it to the GRUB OS selection list. Afterwards I installed some routine updates on Ubuntu, and since then I am just not able to see the GRUB OS selection anymore when I boot. I am unable to understand what happened, since both Fedora and Ubuntu use GRUB 2. Also, it seems Ubuntu is not able to recognize other existing Linux operating systems: in the beginning I had installed Fedora first and Ubuntu later, and Ubuntu did not recognize Fedora at all, while Fedora recognized Ubuntu when I installed them the other way around.

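    If the machine still boots straight into one of the two systems, one thing worth trying from that system is regenerating the GRUB menu so that os-prober can rescan the other installation. A sketch, assuming you can boot into Ubuntu:

        # Re-detect other operating systems and rebuild the GRUB menu
        sudo os-prober
        sudo update-grub

        # If the menu is generated but simply hidden, holding Shift during
        # boot usually forces GRUB 2 to display it
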
  • Simple rendering produces minor stutter

    - by Ben
    For some reason, this game loop renders the movement of a simple rectangle with no stuttering:

        double currTime;
        double prevTime = System.nanoTime() / NANO_TO_SEC;
        double FPSTIMER = System.nanoTime();
        double maxTimeDiff = 100.0 / 1000.0;
        double delta = 1.0 / 60.0;
        int processes = 0, frames = 0;
        while(true){
            currTime = System.nanoTime() / NANO_TO_SEC;
            if(currTime - prevTime > maxTimeDiff)
                prevTime = currTime;
            if(currTime >= prevTime){
                process();
                processes++;
                prevTime += delta;
                if(currTime < prevTime){
                    render();
                    frames++;
                }
            }
            else{
                try{
                    Thread.sleep((long) (1000 * (prevTime - currTime)));
                } catch(Exception e){}
            }
            if(System.nanoTime() - FPSTIMER > 1000000000.0){
                System.out.println("Process: " + (1000 / processes) + "ms FPS: " + (1000 / frames) + "ms");
                processes = frames = 0;
                FPSTIMER += 1000000000.0;
            }
        }

    But for this game loop, I get really minor stuttering where the movement does not look smooth:

        long prevTime = System.currentTimeMillis();
        long prevRenderTime = 0;
        long currRenderTime = 0;
        long delta = 0;
        long msPerTick = 1000 / 60;
        int frames = 0;
        int ticks = 0;
        double FPSTIMER = System.currentTimeMillis();
        while (true){
            long currTime = System.currentTimeMillis();
            delta += (currTime - prevTime) / msPerTick;
            prevTime = currTime;
            while (delta >= 1){
                ticks++;
                process();
                delta -= 1;
            }
            prevRenderTime = System.currentTimeMillis();
            render();
            frames++;
            currRenderTime = System.currentTimeMillis();
            try{
                Thread.sleep((long) ((1000 / FPS) - (currRenderTime - prevRenderTime)));
            } catch(Exception e){}
            if(System.currentTimeMillis() - FPSTIMER > 1000.0){
                System.out.println("Process: " + (1000.0 / ticks) + "ms FPS: " + (1000.0 / frames) + "ms");
                ticks = frames = 0;
                FPSTIMER += 1000.0;
            }

    Is there any critical difference that I'm missing here? The one thing I noticed is that if I uncap the FPS for the second game loop, the stuttering goes away. It doesn't make sense to me. Also, the second game loop came from Notch's Minicraft code with just my thread sleeping code added in.

  • Service Layer - how broad should it be, and should it also be on the local application?

    - by BornToCode
    Background: I need to build a desktop application with some operations (CRUD and more) (= winforms), and I need to make another application which will re-use some of the functions of the main application (= webforms). I understood that using a service layer is the best approach here. If I understood correctly, the service should be calling the functions on the BL layer (correct me if I'm wrong).

    The dilemma: In my main winform UI, should I call the functions from the BL or from the service? (Please explain why.) Should I create a service for every single function in the BL even if I need some of the functions in only one UI? For example, should I create services for all the CRUD operations, even though I only need to re-use the update operation in the webform? YOUR HELP IS MUCH APPRECIATED

  • How to install Edubuntu on a system with low memory (256 Mb)?

    - by int_ua
    I'm preparing an old system with 256 MB of RAM to send to some children. It doesn't have an Ethernet controller and there is no Internet access at the destination. I've chosen Edubuntu for obvious reasons and modified it with UCK, trying to minimize memory usage just to get it installed, let alone use it yet. But Ubiquity won't start even in Openbox (edited /etc/lightdm/lightdm.conf) because there is no space left on /cow right after booting. I've already deleted things like ibus, zeitgeist, update-manager (no network access after all), twisted-core, and the Plymouth logos. I'm thinking about creating a swap partition on the HDD - can it later be added to expand this /cow? Is there a package for the text-mode installer which is used on Alternate CDs? I don't want to re-create Edubuntu from an Alternate CD. This behavior is reproducible in a VM limited to 256 MB of RAM.

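    On a machine this small, one thing that sometimes helps the live session itself is enabling swap on the target hard disk before launching the installer. A sketch, assuming a spare partition such as /dev/sda2 has already been set aside for swap (the device name is only an example):

        # Create and enable a swap partition from the live session
        sudo mkswap /dev/sda2
        sudo swapon /dev/sda2

        # Verify it is in use
        cat /proc/swaps
        free -m
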
  • How do I configure 2 LAN cards in a Windows 7/8 PC, one to connect to the Internet and the other to the local network?

    - by Maharshi Raval
    I am about to install a dedicated VoIP server in our office. It is a 3CX PBX system on a Windows 7/8 machine. The environment currently is Windows SBS 2011 with 8 client machines. I want to use a dedicated broadband connection for the PBX (3CX) box, but the box also needs to be accessible on the local network, as we will be using IP phones and software IP phones. How do I configure two network cards on the PBX box, so that one is always used to connect to our SIP host over the Internet and the other is connected to the local network, accessible from the other client PCs that connect to the PBX system? It must be noted that currently the Windows SBS 2011 acts as the primary domain controller and gateway for all the client machines. I cannot use a load balancer, as it will conflict and cause issues within the current setup of our SBS 2011, which is also our Exchange server. Any input is much appreciated. Thanks in advance.

  • How do I access and enable more icons to be in the system tray?

    - by Jon
    So I'm messing around with Natty a little, and I noticed that all the apps that would normally use the system tray (or "notification area"?) aren't displaying there. Is that a bug, or is that the way it's going to be? I heard something about Ubuntu getting rid of that feature entirely. Is there a way to add it back? I mean, I didn't really like it, either, especially when there were apps that used it unnecessarily, but I can't use CryptKeeper at all now, or easycrypt, and I don't know whether Dropbox has synced without opening Nautilus.

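    For Unity in this era the panel kept a whitelist of applications allowed to use the notification area. One commonly suggested tweak (a sketch; it assumes the systray whitelist setting is still present in your release) was to open it up to everything and then log out and back in:

        # Allow all applications to use the notification area in Unity
        gsettings set com.canonical.Unity.Panel systray-whitelist "['all']"

        # Check the current value
        gsettings get com.canonical.Unity.Panel systray-whitelist
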
  • What is wrong with those crontabs?

    - by Guillaume Boudreau
    I want my projectors to Power On before the mall opens, and Power Off when the mall closes. So I created crontab entries (that I placed in /etc/cron.d/mall), but today (Thu Nov 22 18:58:29 EST 2012 is the current date on that server), the power-off.sh script got executed at 17:20 (see the syslog excerpt below). Being Thu, Nov. 22, I would have expected the power-off.sh script to be executed at 21:20, per the 4th crontab line below. Why did power-off.sh execute at 17:20? What is wrong with my crontab entries?

    Content of /etc/cron.d/mall:

        40 9 22-30 Nov Mon-Sat myuser /usr/local/projectors/power-on.sh
        40 10 22-30 Nov Sun myuser /usr/local/projectors/power-on.sh
        20 18 22-30 Nov Mon-Wed myuser /usr/local/projectors/power-off.sh
        20 21 22-30 Nov Thu-Fri myuser /usr/local/projectors/power-off.sh
        20 17 22-30 Nov Sat-Sun myuser /usr/local/projectors/power-off.sh
        40 9 1-22 Dec Mon-Sat myuser /usr/local/projectors/power-on.sh
        40 10 1-22 Dec Sun myuser /usr/local/projectors/power-on.sh
        20 21 1-22 Dec Mon-Fri myuser /usr/local/projectors/power-off.sh
        20 17 1-22 Dec Sat-Sun myuser /usr/local/projectors/power-off.sh

    syslog excerpt:

        $ grep power-off.sh /var/log/syslog
        Nov 22 17:20:01 lanner-ubu-c2d CRON[23007]: (myuser) CMD (/usr/local/projectors/power-off.sh)

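    For reference, the field layout that /etc/cron.d entries use is minute, hour, day of month, month, day of week, user, command; the 4th line annotated that way looks like this:

        # m  h  dom   mon dow     user   command
          20 21 22-30 Nov Thu-Fri myuser /usr/local/projectors/power-off.sh
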
  • Are there similarities between operating system kernels and programming language kernels?

    - by rahmu
    I know very little about Smalltalk, but I noticed that there's frequent mention of the "kernel". Dan Ingalls, prime maintainer of several implementations of Smalltalk, also worked on a JavaScript environment called "Lively Kernel", and in Peter Seibel's book he kept mentioning the "kernel". I cannot help but think that it is no coincidence that the creators of Smalltalk used the name of a (central) part of operating systems to refer to a particular component of their language. Was it because Smalltalk was intended to act as an operating system? Was it because the theory behind programming languages and operating systems has a lot in common? What is the reason behind the common appellation of the two components?

  • How do you setup Postfix/Dovecot/MySQL to not look for local accounts?

    - by thiesdiggity
    I am having an issue with one of my Postfix/Dovecot mail servers and I'm unsure how to fix the problem. I will try to explain it in detail; here it goes:

    I have an Ubuntu server set up for virtual hosting with Postfix, Dovecot and MySQL. We have one domain set up as a virtual domain - for this example I am going to use mail.example.com. Under that domain we have one email address. I have another server (MS Exchange) set up using another one of my sub-domains, ex.example.com.

    The problem is that when I SMTP into the account on mail.example.com and try to send an email to an account on ex.example.com, the email gets returned to us with an "unknown host" error. Now, I know that the mail.example.com server can resolve the ex.example.com domain, because I can ping/dig it while SSH'd into the server. I can also log into Postfix via telnet and send an email to an ex.example.com mailbox.

    I'm guessing that it has something to do with Postfix/Dovecot looking locally for the domain in the virtual domain list because of the parent domain (example.com)? If that's the case, how do I get Postfix/Dovecot to only look locally for the full domain (mail.example.com) and, if it doesn't find it, send the mail to the correct server by looking up the MX/A records (which I know exist and are set up correctly)? I have been working on this all day and any guidance would be GREATLY appreciated! Thanks for your time!

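    One way to see which domains this Postfix instance claims as its own, versus which ones it should relay out via an MX lookup, is to compare the local and virtual domain lists (a sketch; the map file path in the last command is hypothetical and should match whatever your MySQL setup actually uses):

        # Domains Postfix delivers locally
        postconf mydestination

        # Domains handled as virtual mailbox / virtual alias domains
        postconf virtual_mailbox_domains virtual_alias_domains

        # Ask the lookup table directly how it classifies a recipient domain
        # (path is hypothetical - adjust to your own MySQL map file)
        postmap -q ex.example.com mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf
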
  • Does ZFS cache Compressed or Uncompressed data in a ZFS file-system with compression turned on?

    - by George Bailey
    ZFS supports file-system compression and it also caches frequently or recently accessed data. If a system has lots of CPU but the underlying data storage is slow, it is possible that ZFS would perform better with compression turned on. This can easily be tested when writing files by measuring CPU and disk usage and throughput (of course latency may be affected, but this would not be an issue for large files). But what about the cache? If data has to be decompressed every time it is read, then compression is probably less of a good idea. Is the cached data compressed? Does anybody have some information on this?

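    A sketch of how one might measure this empirically rather than guess, assuming a test dataset named tank/test (the names are placeholders); comparing cached-read throughput with compression on and off gives a rough answer for a given workload:

        # Check what compression is in effect and how well it is doing
        zfs get compression,compressratio tank/test

        # Read the same large file twice; the second read should be served
        # largely from the ARC rather than from disk
        time cat /tank/test/bigfile > /dev/null
        time cat /tank/test/bigfile > /dev/null

        # ARC hit/miss statistics (ZFS-on-Linux location; other platforms
        # expose the same counters through kstat)
        grep -E 'hits|misses|size' /proc/spl/kstat/zfs/arcstats
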
  • How can data-oriented programming be applied to a GUI system?

    - by Miro
    I've just learned the basics of data-oriented design, but I'm not very familiar with it yet. I've also read "Pitfalls of Object Oriented Programming" (GCAP 09). It seems that data-oriented programming is a much better idea for games than OOP. I'm currently creating my own GUI system and it's completely OOP. I'm wondering whether data-oriented design is applicable to structured things like a GUI. The main problem I see is that every type of widget has different data, so I can hardly group them into arrays. Also, every type of widget renders differently, so I still need to call virtual functions.

  • How should a JEE application store credentials for logging in to an external system?

    - by FGreg
    I am in a situation where I have a Web Application (WAR) that is accessing a REST service provided by another application. The REST service uses Basic HTTP Authentication. So that means the application calling the REST service needs to store user credentials somehow. To further complicate things, this is an enterprise, so there are different 'regions' the application moves through which will have different credentials for the same service (think local development, development region, integration region, user test region, production, etc...) My first instinct is that the credentials should be stored by the JEE container and the application should ask the container for the credentials (probably via JNDI?). I'm beginning to read about Java Authentication and Authorization Service (JAAS) but I'm not sure if that is the appropriate solution to this problem. How should a JEE application store credentials for logging in to an external system? A few more details about my WAR. It is a Spring-Integration project that has no front-end. The container I am working with is Websphere. I am using JEE 5 and Spring 4.0.1. To this point I have not needed to consider spring-security... does this situation mean I should re-evaluate that decision?

  • Tool for creating system image from DOS (should be USB bootable) ..?

    - by Ahmad
    I want to know if there's any decent tool which can be used to create a system image from DOS. What I specifically want to do is put the program on my FAT32-formatted USB stick, boot the target computer from the USB so that the tool runs, and then have it create a complete image of the entire system and store it on the USB itself. Please note that I can ONLY do this at boot time because of other limitations - I cannot go into any OS to do it from there. So I need a tool which can do this at boot time, from DOS.

  • HTG Explains: Live File System vs. Mastered Disc Formats in Windows

    - by Chris Hoffman
    When burning a CD or DVD with Windows, you’ll be asked whether you want to use a Live File System or a Mastered disc format. Each has its own advantages and disadvantages. Windows 7 refers to this as “Like a USB flash drive” or “With a CD/DVD player.” But how exactly can a non-rewritable disc function like a USB flash drive?

  • Software Architecture Analysis Method (SAAM)

    Software Architecture Analysis Method (SAAM) is a methodology used to determine how specific application quality attributes were achieved and how possible future changes will affect those quality attributes, based on hypothetical case studies. Common quality attributes that can be evaluated with this methodology include modifiability, robustness, portability, and extensibility.

    Quality Attribute: Application Modifiability
    The modifiability quality attribute refers to how easy changing the system in the future will be. This to me is a very open-ended attribute, because a business could decide to transform a Point of Sale (POS) system into a lead tracking system overnight. (Yes, this did actually happen to me.) For SAAM to be properly applied to this attribute, specific hypothetical case studies need to be created and reviewed for modifiability, because different scenarios would return different results depending on the amount of change. In the case of the POS, swapping out a payment gateway or adding an additional payment method would have scored very high in comparison to changing the system over to a lead management system. I personally would evaluate this quality attribute based on the S.O.L.I.D. principles of software design. I have found from my experience that the use of S.O.L.I.D. in software design allows for the adoption of changes within a system.

    Quality Attribute: Application Robustness
    The robustness quality attribute refers to how an application handles the unexpected. The unexpected can be defined as, but is not limited to, anything not anticipated in the original design of the system: for example bad data, limited or no network connectivity, invalid permissions, or any unexpected application exceptions. I would personally evaluate this quality attribute based on how the system handles exceptions. Robustness considerations:
    - Did the system stop, or did it handle the unexpected error?
    - Did the system log the unexpected error for future debugging?
    - What message did the user receive about the error?

    Quality Attribute: Application Portability
    The portability quality attribute refers to the ease of porting an application to run on a new operating system or device. For example, it is much easier to alter an ASP.NET website to be accessible from a PC, Mac, iPhone, Android phone, mini PC, or tablet than a desktop application written in VB.NET, because a lot more work would be involved to get the desktop app to the point where porting it over to the various environments and devices would be viable. I would personally evaluate this quality attribute against each new environment the hypothetical case study identifies. Portability considerations:
    - Hardware dependencies
    - Operating system dependencies
    - Data source dependencies
    - Network dependencies and availability

    Quality Attribute: Application Extensibility
    The extensibility quality attribute refers to the ease of adding new features to an existing application without impacting existing functionality. I would personally evaluate this quality attribute for each new environment based on the following extensibility considerations:
    - Hard-coded variables versus configurable variables
    - Application documentation (external documents and codebase documentation)
    - The use of SOLID design principles

  • Service Layer - how broad should it be, and should it also be used from the local application?

    - by BornToCode
    The background: I need to build a desktop application with some operations (CRUD and more) (= winforms), and I need to make another application which will re-use some of the functions of the main application (= webforms). I'm using a service layer for reusing my functions. The service calls the functions on the BL layer (correct me if I'm doing this wrong), so my desktop solution has 4 projects - DAL, BL, UI, WEBSERVICES.

    The dilemma (simple, but I still need some more experienced opinions): In my main winform UI, should I call the functions from the BL - bl.getcustomers() - or do it similar to how I call it in the webform and call the functions from the service - webservices.getcustomers? Should I create a service for every single function in the BL even if I need some of the functions in only one UI? For example, should I create services for all the CRUD operations, even though I only need to re-use the update operation in the webform? YOUR HELP IS MUCH APPRECIATED

  • Best strategy to discover a web service in a local network?

    - by Ucodia
    I am currently doing some research for a project. The setup is simple: I have a computer running a service on my home network, and any device connected to that same network should be able to discover the service automatically and use it. I have no specific technology requirement on either the server or the client side. The client knows about the service definition. Other than that, I have no idea what strategy to use, what technology to look at, or whether I should go for a SOAP or an HTTP based service. I think going HTTP with a REST API is best for targeting all devices, but I am open to any suggestions. Thanks.

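    One widely used zero-configuration strategy for exactly this scenario is mDNS/DNS-SD (Bonjour/Avahi): the server announces the service and clients browse for it by type. A rough sketch using the avahi-utils command-line tools (the service name, type, port and TXT record are placeholders; most platforms also offer library bindings for doing the same programmatically):

        # On the server: announce an HTTP service called "myservice" on port 8080
        avahi-publish-service myservice _http._tcp 8080 "path=/api"

        # On a client: browse and resolve services of that type on the LAN
        avahi-browse --resolve --terminate _http._tcp
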
  • What does "Windows is not a real-time operating system" mean?

    - by hydroparadise
    I came across an application called LatencyMon that apparently does latency monitoring. I have always understood that the more load you put on the processor, the less responsive, or more latent, the system becomes. However, in the second section of the LatencyMon page, the first sentence says, "Windows is not a real-time operating system". That got me thinking. I mean, is this any different from any other operating system like Linux, Unix, or OS X? Are there any "real-time" operating systems? Or is this merely a marketing scheme to get you to buy their product? EDIT: Also, are there any examples of RTOSes out there?

  • Why is using an external USB drive or USB printer causing my system to hang?

    - by thepd
    I am having some trouble using various external hard drives and my printer, all of which I connect using USB. The majority of the time, when I connect either of these devices, my system freezes up completely after about 10 minutes; they work just fine prior to that moment. I've not noticed any problems using a USB mouse. I'm running Ubuntu 10.10 and have tried using a newer kernel (a 2.6.36 Maverick kernel from the kernel PPA, as opposed to the default 2.6.35 one) to no avail. I'm using a Dell Studio XPS 16 (M1647), which is a fairly new laptop, so my guess is this is probably some kind of driver bug. Is there any way to debug these sorts of issues? I've looked through some of my logs (/var/log/messages seemed the most useful) but haven't been able to find any USB-related logging or anything interesting happening prior to the hang.

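    A few hedged starting points for collecting USB and kernel information before and during one of these freezes (all standard tools on Ubuntu 10.10):

        # Identify the devices and which drivers they bind to
        lsusb
        lsusb -t

        # Watch kernel messages live while plugging the drive or printer in
        tail -f /var/log/kern.log

        # After a forced reboot, look at what was logged just before the hang
        less /var/log/kern.log.1
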