Search Results

Search found 11396 results on 456 pages for 'simply denis'.

  • No boot device found. Press any key to continue

    - by Andrew Banks
    I took out the hard drive from my Dell Latitude E5420 notebook, put in an ADATA S599 solid state drive, and installed Ubuntu 11.10. When I boot, the Dell BIOS splash screen appears with a progress bar, which quickly fills up, and the screen goes black. All of this is like it was before. At this point, the OS splash screen should fade in. Instead, I was dismayed to see simply the following, in white text on a black screen:

        No boot device found. Press any key to continue

    After looking around for the Any key (just kidding), I press a key, and the Dell BIOS splash screen appears again with a progress bar, which quickly fills up, and the screen goes black. This time, however, the Ubuntu splash screen shows up, Ubuntu opens up, and all is normal. Every time I shut down, however, this happens again. It's like a game the computer and I play together. The computer has never started up without first saying "No boot device found. Press any key to continue", and it has always started up after I press any key to continue. It also starts up fine if I click Restart instead of Shut Down. Thoughts?
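
    A hedged first check, on the assumption that the BIOS is not finding a bootloader on the SSD on cold starts: verify the ADATA drive is first in the BIOS boot order, then rewrite GRUB to the drive's boot record from the running system. The device name /dev/sda is an assumption; confirm it with sudo fdisk -l.

        # reinstall GRUB 2 to the SSD's master boot record (device name is an assumption)
        sudo grub-install /dev/sda
        sudo update-grub      # regenerate /boot/grub/grub.cfg

    If the message still appears only on cold boots, the firmware may simply be probing the SSD before it is ready, which an SSD or BIOS firmware update sometimes cures.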

  • Linux will not activate wireless after device has been re-enabled

    - by XHR
    Using an Eee 900A netbook by Asus. By pressing Fn + F2, I can disable or enable the wireless chip on the netbook; a blue LED indicates the status. I've been able to connect to wireless networks just fine with this netbook. However, if the wireless chip ever becomes disabled, I have to reboot to get my network connection back. This generally happens when suspending. For some reason the LED will be off and I have to hit Fn + F2 for it to light up again. However, after doing so, Linux will not reconnect to the network. It simply changes the wireless status from "wireless is disabled" to "device not ready". Even worse, I've recently had issues with the chip being enabled at boot, thus making it nearly impossible to get connected. I've searched around online but haven't found much of anything useful on this. This happens on all kinds of different distros, including Ubuntu 9.10 Netbook, EeeBuntu 4 beta, Jolicloud and Ubuntu 10.04 Netbook.

    Edit: I noticed this question is getting a lot of views. To give a quick update, I never did resolve this issue with the given distros. However, I'm currently running Ubuntu 10.10 netbook edition and this issue has gone away.
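
    A sketch of the usual no-reboot recovery, assuming the "device not ready" state is a leftover rfkill block plus a driver that needs a kick; the module name rt2860sta is an assumption, so check lsmod for the one your chip actually uses:

        rfkill list                    # show hard/soft block state of every radio
        sudo rfkill unblock wifi       # clear a soft block left set after suspend
        # reload the wireless driver (module name is an assumption; verify with lsmod)
        sudo modprobe -r rt2860sta && sudo modprobe rt2860sta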

  • Windows 2008 R2 file share - any way to "lock it down" outside of a 3rd party app?

    - by TheCleaner
    I have a 3rd party app that "makes a call" to write files to a file share on our network using the currently logged in credentials of the Windows domain user. Meaning the 3rd party app doesn't pass the app's credentials but simply issues a behind-the-scenes copy command to take a specified source file and copy/move it to the destination "repository" on the file share. The basic premise is that it keeps revisions/approvals for Document Control (think svn/git I guess, similar to this question: Lock down Windows folder to only be updatable by SVN). This all works fine... but here's my issue: I need a way to lock down the file share from being accessed/modified outside of the 3rd party app (meaning prevent Explorer/Word/Excel/etc. from getting to that share). I know I can do the following:

    - Make the share a hidden share ($) - this definitely helps. Most users would have zero clue on how to get to such a share. Solves probably 95% of my issue.
    - Go one step further and set the "Hidden" attribute on the folders in the hidden share - even if a user knows the path to the hidden share, like \\server\hidden$, they still won't see folders in that share without changing their Explorer options to "show hidden files/folders".

    Any other ideas on how I can lock this down? The users still need Modify rights to this share/folders, since the 3rd party app relies on their Windows permissions to that location when copying the files into it. I can't really use 3rd party tools to password-protect the folder/share without causing the 3rd party app's functions to fail.

  • How to access XAMPP virtual hosts from iPad on local network?

    - by martin's
    Using XAMPP on one machine. Multiple virtual hosts defined, one per project. The format is [project].local, for example:

        apple.local
        microsoft.local
        client-site.local
        our-own-internal-site.local

    All works perfectly from that one machine. I now want to have other systems within the network access the various sites. The main reason for wanting to do this is to be able to test site functionality and layout from mobile devices without having to upload partial work to public servers. I can access the main XAMPP default site by simply entering the IP address of the XAMPP machine in, say, Safari on an iPad. However, there is no way to reach [project].local that I can see. Would this entail setting up a DNS server within the network? We have a mixture of Windows and Mac machines, no Linux. The XAMPP machine is Vista 64. I don't want real external internet access to be affected in any way, just ".local" pointed to the XAMPP machine, if that makes sense.
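
    Yes, a small DNS server on the LAN is the usual route, since the iPad has no editable hosts file. A minimal sketch with dnsmasq running on any always-on box (it installs on a Mac via Homebrew); the XAMPP machine's address 192.168.1.10 is an assumption:

        # /etc/dnsmasq.conf -- answer every *.local query with the XAMPP machine
        address=/.local/192.168.1.10

    Point the router's DHCP DNS option at the dnsmasq host so devices pick it up automatically. One caveat: Apple devices reserve .local for Bonjour/mDNS, so a suffix such as .test avoids odd resolution behaviour on the iPad.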

  • Turn computer into DAS (Direct Attached Storage)

    - by Damon
    Can we build a direct attached storage (DAS) unit by taking a computer/server, adding an HBA, and installing some appropriate software? We would use Debian as the host OS for both the DAS and the server. If so, what software do we use? And do we simply need an HBA for the DAS and the server, or do we need more hardware? The goal is to use an older server that does not have enough room for drives but does have ECC memory, server processors, redundant power supplies, dual NICs, etc., then find any boxes, server or not - the key being enough room for 8-12 drives, fans, etc. - and turn them into DAS units; build two of these and have them connected to the server. Eventually we want to have two servers using DRBD and associated services like Heartbeat and Pacemaker to create an HA setup for our server(s), but that will take a long time to configure since I have no experience with anything related to DRBD (yet) and have a learning curve to get past, not to mention the additional cost of more hardware (two servers vs. one).
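
    An HBA alone will not turn a PC into a SAS target; target-mode support is rare. The commonly suggested software route on Debian is to export the disk boxes over iSCSI instead, which strictly makes them SAN nodes rather than true DAS but looks the same to DRBD. A sketch using the tgt package; the IQN and device name are assumptions:

        sudo apt-get install tgt
        # export one disk under a made-up IQN (put the stanza in targets.conf
        # directly if your tgt version lacks the conf.d include)
        sudo tee /etc/tgt/conf.d/das.conf <<'EOF'
        <target iqn.2012-06.local.example:das.disk1>
            backing-store /dev/sdb
        </target>
        EOF
        sudo service tgt reload

    The server then attaches the exported disks with open-iscsi and sees ordinary block devices, which is exactly what DRBD wants underneath it.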

  • How to distribute multiple executions of an app across many machines

    - by Salec
    I've got a simulation app (64-bit Windows) that runs without any user interaction. This app gathers information and pushes it to a remote MS SQL Server. What I'd like to do is execute this simulation as many times as I can on multiple machines after our nightly build has finished and has passed the test suite. If possible, I'd love to have the ability to configure it to stop after x total runs, or if the entire batch has taken over y hours. I've tried using Visual Studio's built-in test framework, since we already have a test lab set up with multiple agents. I created a single unit test that simply runs the simulation, then I created an ordered test and added that single test multiple times (from what I gather, this is the only way to execute the same unit test more than once). I found that ordered tests are only run on a single agent and not distributed, which is very limiting. We use TeamCity to perform our nightly builds, and I suspect it's possible to implement this on top of that, but I'm fairly new to TeamCity. We also have Jenkins and Bamboo available, and I'm open to any other software that would get the job done, presuming it runs on a 64-bit Windows OS. Any suggestions?

  • How to verify TRIM/discard on encrypted swap?

    - by svarni
    I am using an encrypted swap partition via ecryptfs-setup-swap on my Ubuntu 13.04 computer with an SSD. I have manually set up TRIM for my ext4 root partition (simply by adding the "discard" option in /etc/fstab). I also manually ran fstrim on the root partition prior to booting, and using dstat I saw that for a few seconds several GB/s of data were written to the disk. That was presumably the effect of the trim command. These high write rates are reproducible by deleting huge files and did not occur before setting up TRIM, so I take them as evidence of working trim/discard. Manually enabling TRIM on my root partition has stopped the wear-out of my precious new disk, from 365 used reserved blocks (out of 6176 total) within three months down to 0 additional used reserved blocks within three additional months (data from SMART attributes). Because I want to minimize the wear-out of my SSD, I now would like to know whether my swap partition (which is encrypted using ecryptfs-setup-swap) also makes use of the trim/discard option. I tried

        sudo swapon -d -v /dev/mapper/cryptswap1

    but did not receive particular information ("-v") about whether trim/discard ("-d") was applied. If unsupported, I would expect a message. Then I tried

        sudo dd if=/dev/sda6 count=1 bs=1M | xxd | less

    directly after booting, when no swap space was used, but I saw not only zeroes. I assume that, when looking at freshly trimmed regions, the disk would return zeroes instead of random sectors (and according to some forums, (unencrypted) swap space is trimmed once upon boot). Long story short: are there any ideas on how to test whether TRIM is effectively used for my encrypted swap? And if not, any ideas on how to - at least manually, for once - trim the whole swap space? I wouldn't want to tinker with the partition itself, because I don't know if it needs to be reinitialized as (encrypted) swap - I don't want to be left with an unbootable system :)
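
    One direct check, assuming the mapping is named cryptswap1 as above: dm-crypt only passes discards down to the SSD when its table carries the allow_discards flag (supported since kernel 3.1):

        # the table line ends in '... 1 allow_discards' when TRIM passes through
        sudo dmsetup table cryptswap1

    If the flag is absent, the discards requested by swapon -d are silently dropped at the crypt layer, which would explain why "-v" reports nothing either way.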

  • Autounmounting USB keys with FAT filesystem on Linux (RHEL5)

    - by niXar
    For security reasons, I have two workstations in front of me, and I can only transfer data between them through a USB key. As you can imagine, it can quickly get tiresome, but the most annoying part is having to unmount the things before removing them. Not unmounting them results in missing files most of the time, even if I remove them a while after having last written to them. Now, since they're only used for transferring smallish files, and each is basically written once and read once, I don't need the fancy-pants caching infrastructure that makes clean unmounting a necessary step. And since the data is always a copy of something I have at hand, I don't care if the filesystem croaks from time to time. But anyway, the system doesn't need to force that on me; it could simply make sure everything is committed within a second, and work synchronously. Then when I remove the key, nothing is lost. Is there a way to do this? I would appreciate any other tips on handling this situation.

    Edit: it appears the situation has changed between RHEL5 and Fedora up to F11 on one hand, and F12 on the other. The latter uses DeviceKit-disks, and I haven't quite figured out how to do this there. The method provided below in gconf does not work anymore.
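
    For manually mounted keys, the FAT driver has an option aimed at exactly this use case; a sketch, where the device and mount point are assumptions:

        # 'flush' makes the vfat driver push dirty data out promptly after writes
        sudo mount -t vfat -o flush /dev/sdb1 /mnt/usb
        # the blunter alternative: fully synchronous writes (slower, more flash wear)
        sudo mount -t vfat -o sync /dev/sdb1 /mnt/usb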

  • Contacts in Outlook 2003/2007, some questions

    - by Ernst
    1. If I create a distribution list and then select members, can I see different fields than the default ones? In 2007 there are radio buttons for 'name only' and 'more columns', but the latter seems only to return no results at all, regardless of which address book I choose. In 2003 there is no such thing.

    2. Is there a plug-in that will break up the recipients (whether they be To, CC, or BCC) into groups of X, and send as many mails as required? Our host allows only 50 recipients per mail and only 300 total recipients per 5 minutes. I know the email client Blat has exactly this functionality, but it does not seem to be able to connect to the Exchange server to get the contacts needed. Could I maybe set Outlook to send to Blat, which then does the breaking up as necessary?

    3. Can I (or is there a plug-in for this) export only part of the contacts instead of all of them?

    Note that we send mail outside our organisation via our web host, where we've got a few mailboxes, and we use our Exchange (2000) server only internally; the few people that can send email to the outside world have an external mailbox as well as their Exchange account defined. I might be able to convince our general boss that we can simply give (some) people the ability to send outside via Exchange, but I might just as well not succeed. Alternatively, is there another program that can connect to Exchange to get the contacts (selected based on categories) and then send via SMTP in groups, with delays between the mails?

  • Munin server monitoring problem: graphs are not being generated

    - by geerlingguy
    When I run munin-cron (munin-cron --debug), I get the following error:

        2010/05/10 13:39:01 [WARNING] Call to accept timed out. Remaining workers: archstl.org;archstl.archstl.org
        2010/05/10 13:39:01 [DEBUG] Active workers: 1/8

    These errors simply keep repeating until I quit munin-cron. I've followed the directions for debugging munin on the 'Debugging Munin plugins' wiki page, but I get the following results when going through them: after telnetting to localhost 4949, I can see a list of plugins and a node at archstl.archstl.org, but I can't fetch anything. The output is as follows:

        > fetch cpu
        .

    However, on the same machine (which is both the node and the master munin server), I can run munin-run cpu, and it prints the results correctly to the command line, like so:

        user.value 100829130
        nice.value 3479880
        system.value 13969362
        idle.value 664312639
        iowait.value 12180168
        irq.value 14242
        softirq.value 199526
        steal.value 0

    Looking at the wiki page mentioned above, it looks like it might be a plugin environment problem, but I can't figure out how to fix/change this: "If the plugin does run with munin-run but not through telnet, you probably have a PATH problem. Tip: Set env.PATH for the plugin in the plugin's environment file."
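
    A sketch of what that tip looks like on disk; the PATH value is an assumption, so mirror the environment in which munin-run succeeds:

        # /etc/munin/plugin-conf.d/munin-node
        [cpu]
        env.PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

    Restart munin-node afterwards and repeat the telnet fetch; if cpu then returns values, apply the same stanza (or a [*] wildcard section) for the other plugins.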

  • Vim: How to Install Airline?

    - by MunkyCheez
    I'm just getting started with Vim. It's a fun experience, but I've found it to be kind of overwhelming. I'm trying to get this plugin installed, vim-airline, but I'm having a lot of trouble. The Installation section on the GitHub page simply states: "copy all of the files into your ~/.vim directory". Presumably, this means download the .zip, extract it, and copy all of those files into ~/.vim/. I did this, but Vim just starts up like normal, and running :help airline just gives:

        Sorry, no help for airline

    I assume that this means it isn't getting installed. Also, the status bar remains the same. I'm new to Vim and would really like to get this working. Thanks in advance!

    Edit: I also tried putting the files into /usr/share/vim/vim73/. No dice.

    Edit 2: I ran :helptags ~/.vim/doc and now the help page displays when I type :help airline, but I'm still not getting the plugin itself (the status bar). Vim looks the same, but it can now display the help page.
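
    Once the files are in place, one more detail is worth checking: airline draws into Vim's statusline, and with a single window Vim hides the statusline by default. Forcing it always on is the usual fix; a sketch:

        # append to ~/.vimrc: always show the statusline, even with one window
        echo 'set laststatus=2' >> ~/.vimrc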

  • Manual NAT on Checkpoint (Redirect all http requests to a local web server)

    - by kulakli
    Hi, we have a proxy server on the internal network, and I want to redirect all internet HTTP requests to a web server on the local network. It'll be like a network billboard that says "No direct connection is available. Set up your proxy, etc." For example:

    1. A user starts the computer
    2. Opens the browser
    3. Tries to open www.google.com
    4. Should see the web server output on the local network
    5. Tries another web site on the internet
    6. Should see the web server output on the local network
    7. Sets up the proxy
    8. Tries to connect to a web site
    9. Web site should be loaded

    I have added a simple manual NAT rule to address translation in the Checkpoint firewall, but it simply does not work. Here is my address translation rule:

        Source   Destination   Service   T.Source   T.Destination   T.Service
        MY_PC    A_GOOGLE_IP   ALL       ORIGINAL   INT_WEB_SRV     ORIGINAL

    When I ping A_GOOGLE_IP, replies come from INT_WEB_SRV, as I expected. However, when I try to connect to A_GOOGLE_IP from the browser (http://A_GOOGLE_IP), the connection stays in SYN_SENT and falls into timeout. When I look at the firewall log of INT_WEB_SRV, I can see the incoming connection requests from MY_PC are accepted, with no denies. By the way, there is no problem viewing INT_WEB_SRV (http://INT_WEB_SRV) from the browser. My understanding is that my NAT rule on Checkpoint NGX R60 does not handle the return packets. I definitely need some help. Regards, Burak

  • nginx logrotate config

    - by TomOP
    What's the best way to rotate nginx logfiles? In my opinion, I should create a file "nginx" in /etc/logrotate.d/, fill it with the following code, and do a /etc/init.d/syslog restart after that. This would be my config (I haven't tested it yet):

        /usr/local/nginx/logs/*.log {
            # rotate the logfile(s) daily
            daily
            # adds an extension like YYYYMMDD instead of simply adding a number
            dateext
            # if a log file is missing, go on to the next one without issuing an error msg
            missingok
            # save logfiles for the last 49 days
            rotate 49
            # old versions of log files are compressed with gzip
            compress
            # postpone compression of the previous log file to the next rotation cycle
            delaycompress
            # do not rotate the log if it is empty
            notifempty
            # create mode owner group
            create 644 nginx nginx
            # after the logfile is rotated and nginx.pid exists, send the USR1 signal
            postrotate
                [ ! -f /usr/local/nginx/logs/nginx.pid ] || kill -USR1 `cat /usr/local/nginx/logs/nginx.pid`
            endscript
        }

    I have both the access.log and error.log files in /usr/local/nginx/logs/ and want to rotate both daily. Can anyone please tell me if "dateext" is correct? I want the log filename to be something like "access.log-2010-12-04". One more thing: can I do the log rotation every day at a specific time (e.g. 11 pm)? If so, how? Thanks.
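
    Two hedged notes on the open questions. First, dateext alone produces access.log-20101204; the dashed form needs the dateformat directive (logrotate 3.7.7 and later). Second, logrotate has no scheduler of its own: it runs when cron calls it, normally from /etc/cron.daily, so an 11 pm rotation means invoking it from a crontab entry at that hour (and skipping the cron.daily run to avoid rotating twice). A syslog restart is not needed; nginx is signalled by the postrotate script. Paths below are typical but are assumptions for your distro:

        # inside the stanza above: switch the suffix to -YYYY-MM-DD
        dateformat -%Y-%m-%d

        # /etc/crontab -- run logrotate at 23:00
        0 23 * * * root /usr/sbin/logrotate /etc/logrotate.conf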

  • CentOS 5.8 - Can't login to tty1 as root after updates?

    - by slashp
    I've run a yum update on my CentOS 5.8 box and now I am unable to log into the console as root. Basically what happens is: I receive the login prompt, enter the correct username and password, and am immediately spat back to the login prompt. If I enter an incorrect password, I am told the password is incorrect, so I know that I am using the proper credentials. The only log I can seem to find of what's going on is /var/log/secure, which simply contains:

        15:33:41 centosbox login: pam_unix(login:session): session opened for user root by (uid=0)
        15:33:41 centosbox login: ROOT LOGIN ON tty1
        15:33:42 centosbox login: pam_unix(login:session): session closed for user root

    The shell is never spawned. I've checked my inittab, which looks like so:

        1:2345:respawn:/sbin/mingetty tty1
        2:2345:respawn:/sbin/mingetty tty2
        3:2345:respawn:/sbin/mingetty tty3
        4:2345:respawn:/sbin/mingetty tty4
        5:2345:respawn:/sbin/mingetty tty5
        6:2345:respawn:/sbin/mingetty tty6

    And my /etc/passwd, which properly has bash listed for my root user:

        root:x:0:0:root:/root:/bin/bash

    As well as permissions on /tmp (1777) and /root (750). I've attempted re-installing bash, pam, and mingetty to no avail, and confirmed /bin/login exists. Any thoughts would be greatly appreciated. Thanks!! -slashp
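
    Given that PAM opens and closes the session cleanly within a second, the shell's own startup files are a likely suspect. A hedged check from a root shell that still works (ssh or single-user mode):

        # run a traced login shell; an early 'exit' (or crash) in /etc/profile,
        # /etc/profile.d/*.sh or ~/.bash_profile would explain the instant logout
        bash -xl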

  • How to Access User Directory shared by Apache on OS X Mountain Lion?

    - by schluchc
    When trying to access the local user web page at localhost/~username, I get a "403 Forbidden". The system web page in /Library/WebServer/Documents is accessible at localhost/, though, so I assume Apache is working fine. I know that this problem has been discussed several times, also on Super User. I implemented and checked all I could find, but I still couldn't solve the problem and would be glad if someone had a suggestion for this particular case:

    sudo apachectl -t returns Syntax OK. I have a username.conf file in /etc/apache2/users/:

        <Directory "/Users/username/Sites/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride AuthConfig Limit
            Order allow,deny
            Allow from all
        </Directory>

    as proposed here [SuperUser] and in several other tutorials. The permissions of the username.conf file are -rw-r--r-- root wheel, as they should be. The httpd.conf is unchanged and therefore contains the line Include /private/etc/apache2/extra/httpd-userdir.conf. That file in turn contains:

        UserDir Sites
        Include /private/etc/apache2/users/*.conf
        <IfModule bonjour_module>
            RegisterUserSite customized-users
        </IfModule>

    So the httpd*.conf files should be OK. The permissions of /Users/username/Sites are drwxr-xr-x 10 username staff, and -rw-r--r--@ 1 username staff for the index.html. In the error log I simply get:

        [Sun Nov 25 22:14:32 2012] [error] [client 127.0.0.1] (13)Permission denied: access to /~username/ denied

    And yes, after each change I did the sudo apachectl restart. Any help on how to solve the problem or how to further analyze it would be highly appreciated!
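
    One thing the checks above do not cover is the home directory itself: Apache's _www user needs execute (traversal) permission on every directory in the path, and /Users/username is often 700 by default. A hedged check and fix:

        ls -ld /Users/username       # look for o+x on the home directory itself
        chmod o+x /Users/username    # grants traversal only, not a directory listing
        sudo apachectl restart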

  • How to connect AD Explorer from Sysinternals to Global Catalog

    - by Oliver
    I'm using the Sysinternals AD Explorer quite frequently to search and inspect an Active Directory without any big problems. But now I'd like to connect not only to a single AD server; instead, I'd like to inspect the global catalog. If I enter in the AD Explorer connect dialog only the DNS name of the machine (e.g. dns.to.domain.controller) that is serving the global catalog, I only receive the concrete domain for which it is responsible, not the whole forest (that's normal behaviour and expected by me). If I add the default port number (3268) for the global catalog, in the form dns.to.domain.controller:3268, AD Explorer simply crashes without any further message. The global catalog itself works as expected under the given name and port number, because our Apache server uses exactly this address and port number to authenticate some users. So, any hints or tips on accessing the global catalog from AD Explorer? Or are there any other nice tools like AD Explorer out there that don't have problems accessing the global catalog?
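
    As a stopgap while AD Explorer crashes, any generic LDAP client can query the global catalog on port 3268; a sketch with OpenLDAP's ldapsearch, where the bind DN, search base and filter are assumptions:

        # search the whole forest through the global catalog port
        ldapsearch -H ldap://dns.to.domain.controller:3268 \
            -D 'user@your.domain' -W \
            -b 'dc=your,dc=domain' '(sAMAccountName=jdoe)'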

  • Remote access to phpMyAdmin from a computer on the same LAN

    - by Charles
    OK... I solved it. It was because I had not configured httpd.conf to make the CentOS box listen on ports 80 and 8080:

        Listen 80
        Listen 8080

    I set up phpMyAdmin on my CentOS 6.4 box recently. I can access and log in to phpMyAdmin on localhost. However, when I type http://[hostipaddr]/phpmyadmin on my other computer in the same LAN as the CentOS box, the browser simply cannot access the page. Below is some of the current configuration. Can anyone help, please?

    config.inc.php:

        $i++;
        /* Authentication type */
        $cfg['Servers'][$i]['auth_type'] = 'http';
        /* Server parameters */
        $cfg['Servers'][$i]['host'] = 'localhost';
        $cfg['Servers'][$i]['connect_type'] = 'tcp';
        $cfg['Servers'][$i]['compress'] = false;
        /* Select mysql if your server does not have mysqli */
        $cfg['Servers'][$i]['extension'] = 'mysql';
        $cfg['Servers'][$i]['AllowNoPassword'] = false;

    phpmyadmin.conf:

        <Directory /var/www/html/phpmyadmin/>
            order allow,deny
            allow from all
        </Directory>

    Furthermore, I can access the web pages stored on the CentOS box from my other computer without problems. After using wireshark and tcpdump, I found that the server keeps resetting the connection (192.168.1.106 is my other computer, 192.168.1.101 is the CentOS box):

        23:29:42.281473 IP 192.168.1.106.55999 > 192.168.1.101.webcache: Flags [S], seq 2559409090, win 65535, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
        23:29:42.281504 IP 192.168.1.101.webcache > 192.168.1.106.55999: Flags [R.], seq 0, ack 2559409091, win 0, length 0

    I have already disabled the iptables service on the CentOS box.

  • Password-protected traffic meter

    - by UncleBob
    Hello. I have a small problem for which I have not yet found a solution. I live in Bosnia and share the internet connection with my landlady, and as is usual in Bosnia we have no flat rate but a 15 GB traffic limit. That would actually be more than enough, if the landlady's son didn't keep exceeding it, so the bills always turn out quite expensive. I have already installed a metering program for him, but he apparently switches it off as soon as he gets close to his limit, and then claims not to have exceeded it. So I need at minimum a metering program that is password-protected and/or records in its log the times during which it was not running. Even better would be a program that simply cuts off his network access when he exceeds his share, i.e. a mixture of traffic meter and parental guard. Can someone help me with this?

  • Are there any tools to migrate your files, applications, and settings to a new Windows computer?

    - by calbar
    I've decided to upgrade my laptop on a regular basis and one of my main concerns is recreating my entire Windows 7 environment every time I do this. I'm talking toolbar positions, login settings, start menu items, applications and all their customizations... everything but my drivers. It literally takes weeks to fully recreate my working environment, not to mention the risk of user error or just simply forgetting "how I liked it." I'm assuming I won't find something as painless as Apple's Migration Assistant for Windows, but maybe there's something out there that can at least package up your apps and their settings? Bonus points if you can point it to your personal files, too - whatever's the quickest way to get from one machine to the next. I intend to install Windows fresh to remove bloatware on every machine that I buy, then selectively install the drivers I need. Something that accommodates loading my old apps into this newly prepared environment would be ideal. One random point of concern is in regard to application settings that refer to old hardware. I'm not sure if there's anything that can be done about this. If you have any thoughts, feel free to share. Thanks for your help!

  • How to set up dual wireless routers (G and N) with a single broadband connection

    - by user15973
    A local cafe has an 802.11g wireless router attached to an 8 or 12 Mbps broadband connection. Although many customers still have -g devices, an increasing number are showing up with 802.11n devices (e.g. Mac laptops, iPads). The cafe owner is content with his router's area coverage, but he would like his customers to be able to take advantage of the higher -n download speeds. He could simply replace the -g router with an -n router, but reportedly -n routers slow down considerably when servicing both -g and -n connections. Another option would be to buy the -n router, run an Ethernet cable between the two routers, and (of course) hook up the broadband connection to one of the two routers. Still another option would be to attach an ordinary wired switch to the broadband connection and then attach both routers to the switch. Aside from the cost issue, what are the tradeoffs of those approaches? Which would you recommend, and why? Is there a better option than the ones I mentioned? Thank you in advance for your advice.

  • How can I use my cell phone to establish a dial-up networking connection?

    - by gWiz
    I am using Windows 7 and have a BlackBerry with T-Mobile (U.S.). I have paired the phone with my computer over Bluetooth, which automatically creates a serial port for it. I am able to open the port in PuTTY and successfully issue AT commands to the modem, including dialing. However, while using Windows to create and establish a dial-up networking connection, I get an error dialog stating "Error 678. The remote computer did not respond." In my testing, I also tried setting up a connection to dial a number connected to a phone. When attempting to connect over this connection, the phone does ring, but the very moment I answer the call, my computer displays the above error dialog. What must be done to successfully establish such a PPP connection? Some special AT initialization string, perhaps? To clarify, I'm not referring to the well-described and popular technique known as "tethering", in which the remote host of the data link is the mobile service provider. I am interested specifically in establishing direct data links with remote hosts other than my mobile service provider. Think old-school landline connection to your friend's computer or BBS.

    Edit 1: As grawity pointed out in comments, the missing piece of the puzzle is an actual modulator compatible with V-series protocols, which I expected to be built into the cellphone. So far the best software alternative I could find is this experimental project.

    Edit 2: Found this forum discussion today. The participants state that there is no old-school modem in the BlackBerry.

    Edit 3: When I place a call in PuTTY with ATD, immediately after the call is answered (and the callee is initiating the handshake) the cellphone returns OK. This is not the expected behavior for establishing a data connection. The phone should reciprocate the handshake and, upon success, return CONNECT. (Alternatively it should return BUSY or NO CARRIER, but never simply OK.) Windows DUN must be interpreting this as the "Error 678" I was seeing.

  • Recovering ZFS pool with errors on import.

    - by Sqeaky
    I have a machine that had some trouble with bad RAM. After I diagnosed it and removed the offending stick of RAM, the ZFS pool in the machine was trying to access drives using incorrect device names. I simply exported the pool and re-imported it to correct this. However, I am now getting this error, and the pool Storage no longer automatically mounts:

        sqeaky@sqeaky-media-server:/$ sudo zpool status
        no pools available

    A regular import says it's corrupt:

        sqeaky@sqeaky-media-server:/$ sudo zpool import
          pool: Storage
            id: 13247750448079582452
         state: UNAVAIL
        status: The pool is formatted using an older on-disk version.
        action: The pool cannot be imported due to damaged devices or data.
        config:

                Storage                 UNAVAIL  insufficient replicas
                  raidz1                UNAVAIL  corrupted data
                    805066522130738790  ONLINE
                    sdd3                ONLINE
                    sda3                ONLINE
                    sdc                 ONLINE

    A specific import says the vdev configuration is invalid:

        sqeaky@sqeaky-media-server:/$ sudo zpool import Storage
        cannot import 'Storage': invalid vdev configuration

    I should have 4 devices in my ZFS pool: /dev/sda3, /dev/sdd3, /dev/sdc and /dev/sdb. I have no clue what 805066522130738790 is, but I plan on investigating further. I am also trying to figure out how to use zdb to get more information about what the pool thinks is going on.

    For reference, this was set up this way because at the time this machine/pool was built it needed certain Linux features, and booting from ZFS wasn't yet supported on Linux. The partitions sda1 and sdd1 are in a RAID 1 for the operating system, and sdd2 and sda2 are in a RAID 1 for the swap.

    Any clue on how to recover this ZFS pool?
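
    A sketch of the zdb angle, plus the import variant usually suggested when device names have shifted; the by-id directory is an assumption, use whatever stable-name path your distro provides:

        # dump the on-disk label of a member; the bare number in the status output
        # is a vdev GUID, most likely standing in for the missing /dev/sdb
        sudo zdb -l /dev/sda3
        # retry the import scanning stable device names instead of sdX
        sudo zpool import -d /dev/disk/by-id Storage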

  • URL autocomplete no longer working in Chrome

    - by Yuji Tomita
    The browser URL autocomplete started behaving differently yesterday. I used to access my top URLs by typing the first one or two letters of a URL and then pressing Enter. Now, I have to visually fish for the right one and push the down arrow to select the URL. Big difference. Does anybody know if I can get the old functionality back somehow? Have I messed up a setting? Example of how my browser used to work:

        Gmail.com:         CMD + L, type G, Enter
        Stackoverflow.com: CMD + L, type S, Enter

    Normally, the browser bar would already be highlighted with gmail.com after typing the first g. It would narrow the matches depending on what characters were typed next, or simply go there if I pressed Enter.

    UPDATE: I just realized my history tab looks suspicious: no entries. But clearly Chrome is pulling some data from my history, as I get very personalized recommendations when typing in a letter.

    UPDATE: Fixed! I saved my bookmarks, removed my ~/Library/Application Support/Google/Default directory (careful, it looks like absolutely everything is stored here), and restarted Chrome; within one visit to Gmail.com, my autocomplete was filling in my URLs like before. Beautiful.
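
    For anyone repeating the fix, a hedged sketch of the shell steps; note that the profile normally lives one level deeper than quoted above, under Google/Chrome/Default:

        # quit Chrome first, then back up and remove the default profile
        cp -R ~/Library/Application\ Support/Google/Chrome/Default ~/chrome-profile-backup
        rm -rf ~/Library/Application\ Support/Google/Chrome/Default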

  • Incredibly low disk performance on HP DL385 G7

    - by 3molo
    Hi, as a test of the Opteron processor family, I bought an HP DL385 G7 6128 with the HP Smart Array P410i controller - no cache memory. The machine has 20 GB RAM and 2x146 GB 15k rpm SAS + 2x250 GB SATA2 drives, both pairs in RAID 1 configurations. I run VMware ESXi 4.1. Problem: even with only one virtual machine (I tried Linux 2.6, Windows Server 2008 and Windows 7), the VMs feel really sluggish. With Windows 7, the VMware Converter installation even timed out. I tried both the SATA and SAS disks; the SATA disks are nearly unusable, while the SAS disks feel extremely slow. I can't see a lot of disk activity in the Infrastructure Client, but I haven't been looking for causes or even tried diagnostics, because I have a feeling that it's either the cheap RAID controller or simply the lack of memory for it. Despite the problems, I continued and installed a virtual machine that serves a key function, so it's not easy to take it down and run diagnostics. I would very much like to know what you guys have to say: is it more likely a problem with the controller/disks, or is it low performance because of budget components? Thanks in advance.
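
    The "no memory" detail is the prime suspect: without a cache module the P410i runs write-through, which is notoriously slow under ESXi. A hedged check with HP's hpacucli tool (from HP's ESXi bundle or the offline SmartStart CD); the slot number is an assumption:

        # show controller cache status; no cache board present confirms write-through
        hpacucli ctrl slot=0 show config detail | grep -i cache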

  • My SSD stopped working: Windows 7 blue-screens right after it launches. How can I reset it for a new Windows install?

    - by HattoriHanzo
    As I mentioned in the title, my SSD suddenly started to fail: after launching, Windows 7 goes straight to completely idle, and after a few minutes it blue-screens and restarts. I have another HD with Windows XP on it that works fine, and from it I can also see the SSD and access everything on it. Windows 7 on the SSD does work in safe mode, though I haven't yet managed to find out what causes the problem. Since I can still access the files and save them to another HD, I'm looking for the best way to wipe the drive and reinstall Windows 7. I have so far failed to find an easy-to-follow (or even easy-to-understand) guide; different sources on the internet recommend different ways of doing this, and some guides are simply full of terms I have no clue about. It's an OCZ Vertex 2 120GB. I'd very much appreciate it if someone could give me advice on what I should do, preferably in a way that also regains the best possible performance. It doesn't matter if I don't understand the science behind it, as long as I can follow the steps. Thanks!
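
    For "wipe it and get the best performance back" on a Vertex 2 era drive, the standard answer is an ATA Secure Erase before reinstalling, which resets every flash cell. A hedged sketch from a Linux live USB; /dev/sdX is an assumption, and the commands destroy everything on the drive:

        hdparm -I /dev/sdX | grep -i frozen          # must report 'not frozen'
        hdparm --user-master u --security-set-pass p /dev/sdX
        hdparm --user-master u --security-erase p /dev/sdX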
