Search Results

Search found 19586 results on 784 pages for 'machine instruction'.


  • Re-set up kernel for VirtualBox and now Ubuntu is in initramfs state

    - by UbuntuMan
    My 10.04 LTS guest's filesystem became read-only, so I wanted to reboot this virtual machine; then I couldn't get it back up and running. VirtualBox threw a Result Code: NS_ERROR_FAILURE (0x80004005) error. I re-did the kernel module setup:

        sudo /etc/init.d/vboxdrv setup
         * Stopping VirtualBox kernel modules                          [ OK ]
         * Uninstalling old VirtualBox DKMS kernel modules             [ OK ]
         * Trying to register the VirtualBox kernel modules using DKMS [ OK ]
         * Starting VirtualBox kernel modules                          [ OK ]

    Now the guest drops into a BusyBox (initramfs) shell, where I can't even use sudo:

        (initramfs) /dev/sda1
        /bin/sh: /dev/sda1: Permission denied
        (initramfs) /dev/sda2
        /bin/sh: /dev/sda2: Permission denied
        (initramfs) /dev/sda3
        /bin/sh: /dev/sda3: not found

    What can I do? I have a similar image, so I can check the disk settings if needed. Please help me; thanks. I'm on an older VirtualBox, 4.0.8 or thereabouts, but other VMs on the same VirtualBox install work fine.

    UPDATE: If I hold down the SHIFT key I can see the GRUB menu. Only one kernel exists: Ubuntu (...), Ubuntu (recovery), plus two "master test" entries.
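
    A root filesystem that goes read-only usually means the guest's disk has errors, so one thing worth trying from the (initramfs) prompt is a filesystem check before anything else - a sketch, assuming the root filesystem really is /dev/sda1. Note that typing a bare device name, as above, asks the shell to execute the device file, which is why it answers "Permission denied"; fsck (or e2fsck) is normally included in Ubuntu's initramfs:

        (initramfs) fsck -y /dev/sda1
        (initramfs) exit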

    Read the article

  • IIS7.5 Windows Authentication missing providers menu item (ntlm)

    - by Alex Bilbie
    I'm trying to enable NTLM authentication on a Windows Server 2008 R2 machine with IIS 7.5 for a specific file in my web root, following these instructions: http://docs.moodle.org/en/NTLM_authentication#IIS_Configuration. In IIS Manager I open the Authentication module, disable anonymous authentication, and enable Windows Authentication. However, according to every post I can find on the matter, a 'Providers' option should then appear - but it doesn't. I've double-checked in Server Manager that the 'Windows Authentication' security feature is enabled for IIS. Any help anyone can offer would be great. Thank you!
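
    If the Providers pane simply isn't rendering in the UI, the provider list can still be inspected and edited from the command line with appcmd - a sketch, assuming the default site name and IIS install path:

        %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" /section:system.webServer/security/authentication/windowsAuthentication
        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" /section:system.webServer/security/authentication/windowsAuthentication /+"providers.[value='NTLM']" /commit:apphost

    The first command shows which providers are currently configured; the second adds NTLM to the list.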

    Read the article

  • Windows - Delayed Write Failed error on USB hard drive

    - by ndngrd
    I've got a new Verbatim 1.5 TB USB hard drive (Samsung HD154UI) and I'm finding myself completely unable to fill it. I'm using Windows XP. Whenever I try to copy a load of files over, it works for some time (copying between 20 and 90 GB) but eventually stops with an error saying "The specified path is too deep" - the path is not actually too deep; nothing I'm copying is more than two directories deep. A balloon pops up at the bottom saying "Windows - Delayed Write Failed", telling me the data could not be copied. This wouldn't be too bad if I could just restart the transfer, but after this error has happened I can't write anything else to the disk - even if I eject it and connect it to another machine. It just seems completely locked. The only way I can unlock it is to delete everything that I was copying to it. I've tried various USB cables and copying from different machines, and the same thing keeps happening.
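
    Once the drive is in its locked state, it's worth checking the volume for a dirty filesystem before resorting to deleting the copied files - a sketch, assuming the drive mounts as X::

        chkdsk X: /f

    Disabling write caching for the drive (Device Manager -> drive Properties -> Policies -> "Optimize for quick removal") is also a commonly suggested mitigation for delayed-write failures on XP, at the cost of slower transfers.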

    Read the article

  • What ports does Advantage Database Server need?

    - by asherber
    I have an application which uses ADS, and I am attempting to deploy it in a Windows network environment with a rather restrictive firewall. I am having trouble configuring the firewall ports appropriately. ADS lives on \\server and is listening on port 1234. When \\client tries to connect to \\server\tables, I get error 6420 (discovery process failed). When \\client tries to connect to \\server:1234\tables, I get error 6097, bad IP address specified in the connection path. \\server is pingable from \\client, and I can telnet to server on port 1234. If I try to connect from a client machine inside the firewall, either connection path works fine. It seems there must be something else I need to open in the firewall. Any ideas? Thanks, Aaron. Edit: I should have specified that the firewall is already open to \\server:1234 for TCP traffic specifically. Is UDP involved here in some way?
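
    The Advantage discovery step (and, in some configurations, regular client traffic) typically works over UDP, which would explain why both paths work inside the firewall but fail through it. If the restrictive firewall is Windows Firewall on the server, a sketch of opening the matching UDP port - assuming discovery uses the same port number:

        netsh advfirewall firewall add rule name="ADS (UDP)" dir=in action=allow protocol=UDP localport=1234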

    Read the article

  • juju -v status ERROR Invalid SSH key

    - by Captain T
    root@cloudcontrol:/storage# juju -v status
    2012-06-07 11:19:47,602 DEBUG Initializing juju status runtime
    2012-06-07 11:19:47,621 INFO Connecting to environment...
    2012-06-07 11:19:47,905 DEBUG Connecting to environment using node-386077143930...
    2012-06-07 11:19:47,906 DEBUG Spawning SSH process with remote_user="ubuntu" remote_host="node-386077143930" remote_port="2181" local_port="57004".
    The authenticity of host 'node-386077143930 (10.5.5.113)' can't be established.
    ECDSA key fingerprint is 31:94:89:62:69:83:24:23:5f:02:70:53:93:54:b1:c5.
    Are you sure you want to continue connecting (yes/no)? yes
    2012-06-07 11:19:52,102 ERROR Invalid SSH key
    2012-06-07 11:19:52,426:18541(0x7feb13b58700):ZOO_INFO@log_env@658: Client environment:zookeeper.version=zookeeper C client 3.3.5
    2012-06-07 11:19:52,426:18541(0x7feb13b58700):ZOO_INFO@log_env@662: Client environment:host.name=cloudcontrol
    2012-06-07 11:19:52,426:18541(0x7feb13b58700):ZOO_INFO@log_env@669: Client environment:os.name=Linux
    2012-06-07 11:19:52,426:18541(0x7feb13b58700):ZOO_INFO@log_env@670: Client environment:os.arch=3.2.0-23-generic
    2012-06-07 11:19:52,426:18541(0x7feb13b58700):ZOO_INFO@log_env@671: Client environment:os.version=#36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012
    2012-06-07 11:19:52,428:18541(0x7feb13b58700):ZOO_INFO@log_env@679: Client environment:user.name=sysadmin
    2012-06-07 11:19:52,428:18541(0x7feb13b58700):ZOO_INFO@log_env@687: Client environment:user.home=/root
    2012-06-07 11:19:52,428:18541(0x7feb13b58700):ZOO_INFO@log_env@699: Client environment:user.dir=/storage
    2012-06-07 11:19:52,428:18541(0x7feb13b58700):ZOO_INFO@zookeeper_init@727: Initiating client connection, host=localhost:57004 sessionTimeout=10000 watcher=0x7feb11afc6b0 sessionId=0 sessionPasswd= context=0x2dc7d20 flags=0
    2012-06-07 11:19:52,429:18541(0x7feb0e856700):ZOO_ERROR@handle_socket_error_msg@1579: Socket [127.0.0.1:57004] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
    2012-06-07 11:19:55,765:18541(0x7feb0e856700):ZOO_ERROR@handle_socket_error_msg@1579: Socket [127.0.0.1:57004] zk retcode=-4, errno=111(Connection refused): server refused to accept the client

    This is from a clean install with two nodes, all running 12.04 Precise. juju bootstrap finishes with no errors and allocates the machine to the user, but there's still no joy after juju environment-destroy and a rebuild with different users and different nodes. Anyone got any ideas?
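
    With juju of this vintage, "ERROR Invalid SSH key" usually means the client can't find a usable key pair, so that's worth ruling out first - a sketch, assuming the default key location:

        ssh-keygen -t rsa -b 2048     # create a key pair if ~/.ssh/id_rsa doesn't exist
        eval `ssh-agent`              # make sure an agent is running
        ssh-add ~/.ssh/id_rsa         # and that it holds the key
        juju bootstrap

    Note also that the log shows user.name=sysadmin alongside user.home=/root, which suggests the command is running in a mixed sudo environment - the key lookup may be happening in the wrong home directory.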

    Read the article

  • MySQL: "UPDATE command denied to user ''@'localhost'"

    - by Uncle Nerdicus
    For some reason, when I installed MySQL on my machine (a Mac running OS X 10.9), the 'root' MySQL account got messed up and I don't have access to it, but I do have access to the standard MySQL account 'sean'@'localhost', which I use to log into phpMyAdmin. I am trying to reset the 'root' password by starting the mysqld daemon with mysqld --skip-grant-tables and then running the following in the mysql shell:

        mysql> UPDATE mysql.user SET Password=PASSWORD('MyNewPass')
            -> WHERE User='root';
        mysql> FLUSH PRIVILEGES;

    The problem is that when I run that UPDATE, the server spits back:

        ERROR 1142 (42000): UPDATE command denied to user ''@'localhost' for table 'user'

    as if I hadn't used the -u argument when I started the mysql shell, even though I did. Any help is much appreciated, as I am lost at this point. :/
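
    The ''@'localhost' in the error is itself a clue: with --skip-grant-tables, privilege checks are disabled entirely and the UPDATE would succeed, so a denial usually means the client actually connected to a different, already-running mysqld that was started normally. A sketch of the usual sequence - the stop command's path is an assumption for a standard OS X package install:

        sudo /usr/local/mysql/support-files/mysql.server stop
        sudo mysqld_safe --skip-grant-tables --skip-networking &
        mysql -u root mysql
        mysql> UPDATE mysql.user SET Password=PASSWORD('MyNewPass') WHERE User='root';
        mysql> FLUSH PRIVILEGES;

    --skip-networking keeps the wide-open server from accepting remote connections while the grant tables are disabled.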

    Read the article

  • How is my password sent across when I check Gmail or access a bank site [closed]

    - by learnerforever
    What encryption is used when my password is sent across as I check Gmail or do online banking? RSA? DSA? Public/private-key encryption? In key encryption, which entity is assigned a public/private key? Does each unique machine with a unique MAC address have a unique public/private key pair? Does each instance of a browser have a unique key? Does each user have a unique private/public key? How does the session key come into the picture? How do machines receive their keys?
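
    In practice the password travels inside a TLS (HTTPS) session: the browser and the server negotiate a fresh random symmetric session key for the connection, and the server's public key - vouched for by a certificate chain, not tied to any MAC address or per-machine key - is used only during that handshake. Your machine and browser don't need keys of their own for ordinary logins. You can watch the negotiation yourself with openssl, which most systems ship with:

        openssl s_client -connect mail.google.com:443

    This prints the server's certificate chain and the negotiated protocol version and cipher from the handshake.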

    Read the article

  • Stopping local drive mappings from transferring to an RDP session

    - by Chad
    We have a SQL server with about six physical drives mapped locally. Say G: is a mount point to the SAN: if I connect from my local machine, where I have a personal folder mapped locally as G:\userdata, that mapping carries over into the Remote Desktop session on the server and overwrites the displayed name of the server's share. Here's the kicker: G: on the server still holds the right data, but it shows the wrong label, taken from the share on my PC. Does anyone know how to prevent this from happening? The local resources tick box is already unchecked in my Microsoft RDP client.
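
    Beyond the checkbox, drive redirection can be pinned off in the saved connection file itself, which survives client upgrades and profile resets - a sketch of the relevant line in the .rdp file (an empty value means "redirect nothing"):

        drivestoredirect:s: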

    Read the article

  • How to burn a CD-ROM from a Mac for Solaris

    - by cope360
    I would like to burn a CD using a Mac (10.5) which I can then access from a Solaris 10 x86 machine. This partially works:

        1. Insert a blank CD and let the Finder open it, so it creates a "Recordable CD" window for it.
        2. Drag the files to be burned into the "Recordable CD" window.
        3. Burn (there are no options except for speed).

    Then, to mount it in Solaris:

        mount -F hsfs -o ro /dev/dsk/foo /mnt/bar

    The problem is that Solaris sees all the filenames as lower case, and it only allows each file name to contain one period (these are HSFS limitations). My guess is that the Mac is not burning the disc with the Rock Ridge extensions that would allow the full file names to be preserved. Is there some combination of burning tools/options and mount options that will make this work?
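
    The Finder's burn does write plain ISO 9660 without Rock Ridge. Building the image with mkisofs (available on OS X via MacPorts or Fink - an assumption about your setup) and burning it with the stock hdiutil should preserve the full names:

        mkisofs -R -J -o image.iso /path/to/files    # -R adds Rock Ridge, -J adds Joliet
        hdiutil burn image.iso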

    Read the article

  • Internet very slow after upgrading to Ubuntu 9.10

    - by roojoo
    I was running Ubuntu 8.x on my desktop and everything worked fine. I'm using wired internet and it worked perfectly; pages loaded pretty fast. However, when I decided to upgrade to 9.10, the upgrade failed at some point, though I was left with what appeared to be Ubuntu 9.10. Since then the internet has been weird: when I go to a website it takes at least 10 seconds for the page to display, but if I'm already on a site and navigate to its other pages, it loads quickly. This never happened prior to the upgrade. I thought it might be due to the upgrade not installing correctly, so I did a fresh install of Xubuntu 9.10, but the problems are still the same. I'm writing this on a Vista machine over the wireless network and the internet is fine. Does anyone have any ideas about the issue? Thanks.
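
    A long delay on the first page of a site, with fast navigation afterwards, is the classic signature of slow DNS lookups (the result is cached after the first hit). A quick way to test that theory before touching anything else:

        time host www.google.com    # several seconds here points at DNS, not bandwidth

    If DNS is the culprit, a common complaint in the 9.10 era was IPv6 lookups timing out before the IPv4 query was tried, so disabling IPv6 is worth an experiment.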

    Read the article

  • How do I work around sudo 'segmentation fault' on basic bash commands?

    - by sage
    I am sure the answers are out there, but alas, there are too many answers (here and elsewhere) to other questions stopping me from finding them. I just encountered something substantially similar to what is described in the closed SO question "sudo: 'segmentation fault' Ubuntu maverick". My team is using Ubuntu 11.04 on VMware Workstation 8.0.4. We are doing development using C++, Xenomai, Qt, and Qt Creator. When we simulate our application on the virtual machine, we currently need to launch Qt Creator with sudo. My colleague mentioned today that he has been having issues where his workstation locks up and he needs to restart, and that occasionally all sudo bash commands return "Segmentation fault". I just ran our application in simulation mode, running Qt Creator under sudo, and Qt Creator received the abort signal (if I recall). Afterward, every command executed with sudo, from sudo qtcreator to sudo ls, resulted in the message "Segmentation fault". I clicked on the power widget to see if I could log out, but the system shut down straight away without prompting. My understanding is that we run sudo because of a permissions issue with Xenomai and the VM as currently configured, but my colleague has a workaround for this. I expect that not running Qt Creator under sudo -- something that has always made me nervous -- will help contain this issue, but I find it troubling that this could happen and manifest as it does. Does anyone know what is happening? Any recommendations on how to work around it? This is happening often enough that I am trying to lobby for VM changes so we can run the process without sudo.
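
    One diagnostic worth running the next time it happens, before rebooting: the kernel logs every segfault, so the log may show whether it's sudo itself, a shared library, or something corrupted underneath - a sketch:

        dmesg | grep -i segfault      # recent kernel-logged segfaults, with faulting address and library
        tail -n 50 /var/log/syslog    # anything else the system logged around the crash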

    Read the article

  • SVN check out to samba directory

    - by Jon H
    I'm trying to svn co to a directory on Ubuntu, shared via Samba to OS X, but I get the following error (in OS X):

        svn: In directory 'site/product/tests'
        svn: Can't open file 'site/product/tests/.svn/tmp/text-base/._base.py.svn-base': No such file or directory

    My smb.conf file includes the following changes:

        unix extensions = no
        browseable = yes
        public = yes
        writable = yes
        delete readonly = yes
        create mask = 0775
        directory mask = 0775
        valid users = %S
        read only = no

    The checkout works fine locally (on the Ubuntu machine). What am I missing?

    More detail: later inspection showed that the svn error names a file with three leading underscores (.___init__.py.svn-base), whereas listing the directory in OS X shows two (__init__.py.svn-base), and listing the same directory in a successful checkout on Ubuntu shows nothing (because it's a temporary directory?). I've tried the mangled = no setting in the share settings, to no effect.
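
    The "._" prefix is OS X creating AppleDouble companion files to carry resource forks and extended attributes on a non-native filesystem, and svn is tripping over them. One common workaround is to keep Samba from exposing them at all - a sketch for the share's section of smb.conf (whether hiding them interacts badly with svn's temp files is something to test):

        veto files = /._*/.DS_Store/
        delete veto files = yes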

    Read the article

  • Recovering a lost website with no backup?

    - by Jeff Atwood
    Unfortunately, our hosting provider experienced 100% data loss, so I've lost all content for two hosted blog websites:

        http://blog.stackoverflow.com
        http://www.codinghorror.com

    (Yes, yes, I absolutely should have done complete offsite backups. Unfortunately, all my backups were on the server itself. So save the lecture; you're 100% absolutely right, but that doesn't help me at the moment. Let's stay focused on the question here!)

    I am beginning the slow, painful process of recovering the websites from web crawler caches. There are a few automated tools for recovering a website from internet web spider (Yahoo, Bing, Google, etc.) caches, like Warrick, but I had some bad results using it:

        - My IP address was quickly banned from Google for using it.
        - I got lots of 500 and 503 errors and "waiting 5 minutes…" messages.
        - Ultimately, I can recover the text content faster by hand.

    I've had much better luck using a list of all blog posts, clicking through to the Google cache, and saving each individual file as HTML. While there are a lot of blog posts, there aren't that many, and I figure I deserve some self-flagellation for not having a better backup strategy. Anyway, the important thing is that I've had good luck getting the blog post text this way, and I am definitely able to get the text of the web pages out of the Internet caches. Based on what I've done so far, I am confident I can recover all the lost blog post text and comments. However, the images that go with each blog post are proving… more difficult.

    Any general tips for recovering website pages from Internet caches, and in particular, places to recover archived images from website pages?

    (And, again, please, no backup lectures. You're totally, completely, utterly right! But being right isn't solving my immediate problem… unless you have a time machine…)
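
    For the images specifically, the Wayback Machine (web.archive.org) often kept them alongside the pages it archived, and Google's cache can be fetched directly if you throttle hard enough to avoid another ban - a sketch, assuming a posts.txt with one original URL per line:

        while read url; do
          wget -O "cache-$(echo "$url" | tr '/:' '__').html" \
            "http://webcache.googleusercontent.com/search?q=cache:$url"
          sleep 60    # be gentle; aggressive fetching is what got the IP banned
        done < posts.txt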

    Read the article

  • For large files, compress first then transfer, or rsync -z? Which would be fastest?

    I have a ton of relatively small data files, but together they take up about 50 GB, and I need them transferred to a different machine. I was trying to think of the most efficient way to do this. The thoughts I had were to gzip the whole thing and then rsync it and decompress it, to rely on rsync -z for the compression, or to gzip it and then also use rsync -z. I am not sure which would be most efficient, since I am not sure how exactly rsync -z is implemented. Any ideas on which option would be the fastest?
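
    One detail that changes the math: rsync -z compresses only on the wire (files land uncompressed on the far side), while pre-compressing into one archive defeats both -z and rsync's delta transfer on any later re-run. The honest answer is to benchmark a sample - a sketch, with host and paths as placeholders:

        time rsync -a  /data/sample/ host:/dest/a/     # baseline, no compression
        time rsync -az /data/sample/ host:/dest/z/     # compression on the wire only
        time tar czf - /data/sample | ssh host 'tar xzf - -C /dest/t'    # compress first

    Gzipping and then also using rsync -z is the one combination to avoid: already-compressed data doesn't compress again, so the second pass is pure CPU overhead.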

    Read the article

  • Remote Desktop over wireless

    - by tbischel
    So I'm trying to run Remote Desktop on my laptop to connect to my home desktop. It works fine if I connect my laptop with an Ethernet cable, but fails when I try to use wireless internet access (which works fine for normal internet surfing). I've experienced this problem at home with my wireless router and at work with the wireless network there, so I'm inclined to believe it's a setting local to my machine rather than the routers blocking the requests... but I'm not sure where to look. Any suggestions?
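
    A connection that works over Ethernet but stalls over every Wi-Fi network points at something like an MTU problem on the wireless path - RDP's large packets getting fragmented or silently dropped. A quick test from the laptop (Windows ping syntax; 1472 is 1500 minus 28 bytes of IP/ICMP headers):

        ping -f -l 1472 your-desktop    # -f forbids fragmentation; if it fails, lower -l until it succeeds

    If only smaller sizes get through on wireless, lowering the Wi-Fi adapter's MTU to match is the usual fix.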

    Read the article

  • Transferring existing files from ext3 to ZFS (on FreeBSD)

    - by peppergrower
    I use an old machine as a file server, for backups, and as a testbed for development. I currently have Debian installed, but I'm very interested in FreeBSD because of ZFS: I really, really like its file integrity features. Before I switch, however, I wanted to ask: what's the best way to migrate my ~400GB of files from the current filesystem (ext3) to ZFS? My number-one requirement is that the migration be absolutely reliable: I don't want to lose any data. (I'll be backing it up before doing this anyway, but still.) My secondary goal is speed: I'd rather not have this take overnight if it doesn't have to. Recommendations? Is FUSE for FreeBSD stable enough to use? What about FreeBSD's native read support for ext3? NFS, maybe? How have you done this?
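
    Given that reliability outranks speed here, a two-pass rsync is hard to beat: one pass to copy, one checksummed pass to verify that every byte arrived. A sketch, with the mount point and dataset names as placeholders (FreeBSD can mount the ext3 disk read-only with its ext2fs driver, or the copy can go over the network from the Debian install):

        rsync -aHv /mnt/ext3data/ /tank/data/               # copy, preserving hard links, perms, times
        rsync -aHvc --dry-run /mnt/ext3data/ /tank/data/    # verify: re-reads both sides, compares checksums

    A clean second pass (no files listed) means the copy is byte-identical; the -c flag forces full-content checksums rather than size/mtime checks.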

    Read the article

  • Only two monitor ports are detected on nVidia Quadro NVS 450

    - by Erick Robertson
    I have an nVidia Quadro NVS 450 installed in a Dell OptiPlex 380. Only DisplayPort #1 and DisplayPort #4 are detected by Windows. The machine has a BIOS setting to automatically choose the primary video card, or to disable the primary when a PCI-e card is installed (which it is). Windows cannot see DisplayPort #2 and #3 no matter what I do. I have tried the Windows drivers and the latest nVidia drivers - no dice. I am assured that this video card cannot break in this way, so I'm plumb out of ideas. I've tried reseating the video card. I've contacted Dell, and they've remoted in and looked around - and threw their arms up after two hours. Any ideas? I'm running Windows 7 64-bit. All four monitors are 1600x1200 Dell monitors.

    Read the article

  • Is there a rule of thumb for RAM upgrades?

    - by Retrosaur
    I'm having a hard time figuring out whether a given laptop or desktop's RAM can be upgraded, and by how much. Is there a rule of thumb for determining the maximum RAM one could add to a system without looking it up on external websites? A little background: I work in computer sales at a computer electronics store, so it is virtually impossible for me to install any software that would detect computer specs, and I get a lot of customers asking what the usual laptop/desktop RAM upgrade options are. Is there a general rule for how much RAM can be added? Does it make a difference whether it's a 32-bit or 64-bit machine? The OS?
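
    On any Windows demo machine you can read the motherboard's own answer from a command prompt, with nothing to install - a sketch:

        wmic memphysical get MaxCapacity,MemoryDevices

    MaxCapacity is reported in kilobytes and MemoryDevices is the number of DIMM slots. The OS matters too: 32-bit Windows tops out around 4 GB (roughly 3.x GB usable) no matter what the board supports, while 64-bit editions are limited by the chipset and slot count instead.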

    Read the article

  • Distributed transactions and queues, ruby, erlang

    - by chrispanda
    I have a problem that involves several machines, message queues, and transactions. For example: a user clicks on a web page, and the click sends a message to another machine which adds a payment to the user's account. There may be many thousands of clicks per second. All aspects of the transaction should be fault tolerant. I've never had to deal with anything like this before, but a bit of reading suggests this is a well-known problem. So to my questions. Am I correct in assuming that the safe way of doing this is with a two-phase commit, but that the protocol is blocking, so I won't get the required performance? It appears that DBs like Redis and message queuing systems like Resque, RabbitMQ, etc. don't really help me a lot here - even if I implement some sort of two-phase commit, the data will be lost if Redis crashes, because it is essentially memory-only. All of this has led me to look at Erlang - but before I wade in and start learning a new language, I would really like to understand better whether it is worth the effort. Specifically, am I right in thinking that, because of its parallel processing capabilities, Erlang is a better choice for implementing a blocking protocol like two-phase commit, or am I confused?

    Read the article

  • Ghosting context menu clicks in WinXP

    - by Swish
    Let me preface by saying I have a lot of windows open most of the time, although not resource-intensive ones: just browsers, SSH sessions, a music player, an FTP client, Notepad++, IM clients, etc. Anyway, I get a lot of weird visual "ghosting"-type effects. For example, when right-clicking and then selecting an option from a context menu, the selected item will remain in view until I right-click somewhere on the desktop. The same thing happens when selecting items from the File, Edit, etc. menus in various programs. I'm assuming this is the result of a less-than-high-quality video card (NVIDIA GeForce FX 5200); all the other hardware in the machine is newer and higher quality - that specific video card was added after the fact for multiple monitors. I have looked all over the web for solutions and have increased the number of GDI handles for Windows, reduced the hardware acceleration on the card, etc. Any suggestions other than replacing the card?

    Read the article

  • How do I deploy a charm from a local repository?

    - by Matt McClean
    I am trying to run the charm tutorial from the juju documentation by creating a new charm in a local repository. I started by installing the charms from bzr on my local Ubuntu 12.04 desktop running in a virtual machine. The new file structure is the following:

        ubuntu@ubuntu-VirtualBox:~$ find charms/precise/drupal/
        charms/precise/drupal/
        charms/precise/drupal/hooks
        charms/precise/drupal/hooks/db-relation-changed
        charms/precise/drupal/hooks/install
        charms/precise/drupal/hooks/start
        charms/precise/drupal/hooks/stop
        charms/precise/drupal/metadata.yml
        charms/precise/drupal/README

    When I install the mysql charm, which was downloaded from the remote charm repository, it works fine. However, when I run the following command to deploy the new charm, it fails with this error message:

        ubuntu@ubuntu-VirtualBox:~$ juju deploy --repository=charms local:precise/drupal
        2012-05-09 10:01:05,671 INFO Searching for charm local:precise/drupal in local charm repository: /home/ubuntu/charms
        2012-05-09 10:01:05,845 WARNING Charm '.mrconfig' has an error: CharmError() Error processing '/home/ubuntu/charms/precise/.mrconfig': unable to process /home/ubuntu/charms/precise/.mrconfig into a charm
        Charm 'local:precise/drupal' not found in repository /home/ubuntu/charms
        2012-05-09 10:01:06,217 ERROR Charm 'local:precise/drupal' not found in repository /home/ubuntu/charms

    Is there some file missing from the drupal charm directory that juju needs to consider the charm valid? Also, I get the .mrconfig processing error when deploying the mysql charm too, so is there something I need to change there as well?
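
    One likely culprit is visible in the find output: juju looks for metadata.yaml (full extension), so a file named metadata.yml won't be picked up and the directory won't register as a charm - which matches the "not found" message. A minimal sketch of what metadata.yaml contains (the field values here are assumptions):

        name: drupal
        summary: Drupal content management system
        description: Deploys a minimal Drupal instance.

    Charms of this era also expect a plain-text revision file containing a number next to metadata.yaml.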

    Read the article

  • Squid traffic tunneled through VPN

    - by NerdyNick
    So what I'm trying to do is have a Squid proxy run on one machine alongside a VPN connection. What I want is for all traffic running through the Squid proxy to go through the VPN for its outbound leg, i.e. Desktop -> (Squid proxy -> VPN). The goal is to allow my desktop selective tunneling through the VPN, so that instant messaging and other things that don't need the VPN can use my normal connection. Typically I would go through an SSH proxy, but currently I am forced to use the VPN to gain entry into the office, and a Squid proxy seemed like the easiest fit for what I need. EDIT: I realize I forgot to actually state the problem I'm running into. I have Squid set up and verified that it works, but once I connect to the VPN, all requests to Squid get accepted yet Squid is unable to make the request over the VPN, so the client just ends up sitting there.
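
    If the VPN client doesn't replace the default route, Squid keeps accepting requests but reaches out through the normal interface. Squid can be told which source address - and therefore which interface - to use for outbound connections with tcp_outgoing_address; a sketch for squid.conf, with the tunnel-side address as a placeholder:

        # bind all forwarded requests to the VPN interface's address
        tcp_outgoing_address 10.8.0.6

    Depending on how the VPN sets up routing, a matching policy route (ip rule / ip route) may also be needed so replies for that source address go back through the tunnel.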

    Read the article

  • Visual Studio 2010 on a MacBook Air

    - by Kyle B.
    Does anyone here run Visual Studio 2010 (or the VS2012 RC) on a MacBook Air? I have the current model with 4 GB of RAM, a 13" screen, and a 256 GB SSD. Before I go through the effort of configuring this, I'd like to know if anyone from the community has done it, and: Was the performance acceptable? If it is, I plan to get a larger Cinema Display as a second monitor and do all my coding on this machine, ditching my desktop. Did you use Boot Camp, Parallels, or VMware? I suspect Boot Camp would be necessary to make the most of the memory, but I'm not sure that's the case; I'd prefer to use a VM, but wasn't sure whether that was practical and would value your input before buying a license. Did you also run anything else on the Windows installation, such as SQL Server Express, IIS Express, etc.? Did performance lag after a certain point? Note: I would have asked this on superuser.com, but felt it applied more directly to the programming community.

    Read the article

  • How to manage iowait over cifs?

    - by Silvia
    For backup purposes we have a CIFS file server running that contains encrypted containers for backing up the more sensitive data. A container is mounted with cryptsetup and loop as a local filesystem, and rsync is used for the backups. Because the CIFS server is not the fastest machine ever built, running the rsync process results in high iowait on the servers running the backup, which in turn drives Nagios into an email frenzy. The question is: how do I reduce the iowait on those servers? Configuring Nagios not to report it seems more like a workaround than a solution. Stretching the backups over different time intervals has already been done, with little effect, and spending money is also not an option because, apparently, we are talking about a "non-critical system".
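
    Since upgrading the box is off the table, the practical lever is making the backup I/O polite - a sketch combining the idle I/O scheduling class with a transfer cap (the bandwidth number is an assumption to tune; ionice needs the CFQ scheduler to have any effect):

        ionice -c3 rsync -a --bwlimit=5000 /mnt/secure-container/ /backup/target/

    With -c3 the rsync only gets disk time when nothing else wants it, and --bwlimit (in KB/s) keeps it from saturating the link in bursts.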

    Read the article

  • Unable to access Windows 7 shared folder with Windows 98

    - by PabloG
    I'm unable to access a Windows 7 Pro 64-bit shared folder from an old Windows 98 box. I tried:

        - turning on file and printer sharing
        - turning on public folder sharing
        - turning off password-protected sharing
        - sharing the folder with read permissions for Everyone
        - lowering the encryption to 40/56-bit

    The shared folder works fine from Windows XP, and even from Linux with CIFS/Samba, but when I try to use it from Windows 98 with

        NET USE X: \\SERVER\SHARE

    a user/password dialog pops up. I entered the administrator's user/password from my Windows 7 box, but it doesn't work (incorrect password). The same Windows 98 machine accesses a Windows XP shared folder fine, so it looks like a Windows 7 networking issue. Any ideas?
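
    Windows 98 can only speak LM/NTLMv1, and Windows 7 refuses both by default, which produces exactly this "incorrect password" loop. The usual fix is relaxing the LAN Manager authentication level on the Windows 7 machine - a sketch of the registry change (secpol.msc -> "Network security: LAN Manager authentication level" reaches the same setting):

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f

    Level 1 sends LM and NTLM while still using NTLMv2 session security when negotiated; it weakens authentication machine-wide, so it's best confined to a trusted LAN.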

    Read the article
