Search Results

Search found 22701 results on 909 pages for 'missing features'.


  • Windows 7 ICS client web failure

    - by n8wrl
    I have several Windows 7 PCs connected on a LAN via a hub. One has a Verizon 3G connection and works great. I have Internet Connection Sharing enabled on it, which automagically set the LAN connection to 192.168.137.1 and enabled DHCP. I am trying to get the client PCs working one at a time; the others are off. The client is able to: get an IP via DHCP with correct settings; ping any web address I can throw at it, so DNS and routing are working; and run Windows Update. But web sites hang in IE - all but google.com! I type www.msn.com, microsoft.com, amazon.com, etc., and they all ping from a cmd window, but IE just hangs: it says the web site was found, but the green progress bar just slowly creeps and no content displays. www.google.com comes up even after clearing the browser and DNS caches. I am pulling my hair out - what am I missing? EDIT: After some more gyrations with a router I'm back to ICS. Same symptoms, only now I have an answer to Andrew's question: YES, I can do Google searches, but clicking on any of the result links hangs! I let one sit for half an hour with no timeout or error.
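
    These symptoms (small responses such as pings and the Google homepage work, fuller pages stall) are classic for an MTU/path-MTU blackhole on the 3G uplink - an assumption on my part, not something confirmed in the post. A quick test from the client, with a hedged workaround (the interface name is a placeholder):

        rem find the largest payload that passes unfragmented; 1472 fits a 1500-byte MTU
        ping -f -l 1472 www.msn.com
        rem if only smaller sizes pass, lower the client MTU to match
        netsh interface ipv4 set subinterface "Local Area Connection" mtu=1400 store=persistent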


  • VMWare web UI intermittent access on CentOS

    - by PeteWilliams
    I've got a CentOS 5.2 server that I'm trying to set up as a development environment. As part of this, I planned to install VMware Server 2 and set up several virtual development servers. I've got as far as installing VMware Server 2, but access to the remote control panel is only working intermittently. If I access it through Firefox at https://127.0.0.1:8333/ui/# it usually says either: "Connection interrupted: connection was reset before the page loaded" or "Firefox can't establish a connection to the server at 127.0.0.1". But every now and then it lets me in, and I'll manage a few clicks in the web UI before it kicks me out with the following error: "The server could not complete a request (HTTP 0). The server encountered an unexpected condition that prevented it from fulfilling the request. If this problem persists, please contact your system administrator." I've done all the updates available in CentOS except one OpenOffice one that is causing a conflict, and I re-ran vmware-config.pl after updating the kernel, though I went with all the defaults as I don't really know what I'm doing! I've since rebooted and nothing changed. I've also tried accessing the control panel remotely from another machine on the network and the results are the same. Does anyone have any ideas what might be causing this and how I can resolve it? I'm afraid I'm a developer playing at sys-admin, so I may be missing something obvious! Many thanks, Pete. Update: I have now reinstalled both the operating system and VMware and I'm still getting the same issue. I wonder if it's a result of the settings I'm putting in on the config.pl script..?
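
    If the UI keeps resetting connections, a first step worth trying (the service and log paths are my assumption for VMware Server 2 on CentOS; adjust if yours differ) is to restart the management services and watch the host agent log while reproducing the error:

        sudo /etc/init.d/vmware restart
        sudo tail -f /var/log/vmware/hostd.log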


  • While using an NTFS SMB share for Mac users, do symbolic links and extended attributes work?

    - by scape
    We have a majority of Mac users, but we'd rather support their file sharing using a Windows server with an NTFS drive, or at least a Linux server with ext3. We've had trouble, much trouble, with the OS X Server software, and after years of it we're now looking to abandon it. What's mostly holding us back is the fact that the Mac users very often use symbolic links and other special features that exist on an HFS+ partition. The shared locations are mostly primary storage, not just archive storage. While there is an option to create symbolic links under NTFS, I'm curious whether there is anything I need to look out for if I were to move the files from the HFS+ partition over to a new partition hosted on a Windows server, and also how well creating a symbolic link from a Mac might work. I am also worried that Windows backup software might ruin these special symlinks, and about how placing permissions on sub-folders will work. Alternatively I could remotely back up the files using a Mac and Bru; nonetheless, I still want to get away from Mac server for hosting the shares.
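
    A cheap way to answer the symlink and xattr questions empirically, before migrating anything (file names below are placeholders): mount a test share on a Mac client and try to create and read back a link and an extended attribute:

        ln -s target.txt link-test            # does the server accept the link at all?
        ls -l@ link-test                      # -l@ lists extended attributes on OS X
        xattr -w com.example.testkey value target.txt

    Over SMB, OS X typically stashes extended attributes in ._ AppleDouble sidecar files, so the backup software question becomes whether those sidecars survive a backup/restore cycle - worth testing the same way.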


  • Why is .htaccess not allowed in a directory but is allowed in another?

    - by John Isaacks
    I have Apache 2 installed on Ubuntu 10.04. Inside my /var/www/ directory [among others] I have cakephp and dvdcatalog directories, each of which has CakePHP 1.3 installed. I can access them both via localhost/cakephp and localhost/dvdcatalog, but dvdcatalog shows up with no CSS styling. They both have these files:

        /var/www/cakephp/app/webroot/css/cake.generic.css
        /var/www/dvdcatalog/app/webroot/css/cake.generic.css

    When I go to http://localhost/cakephp/css/cake.generic.css it sees the file, but it does not see the file when I go to http://localhost/dvdcatalog/css/cake.generic.css. I think this means the cakephp folder is able to use .htaccess and dvdcatalog is not. I set up the cakephp directory last month when I was following the blog tutorial. I am setting up the dvdcatalog directory now for a different tutorial, so I am not sure if I am missing a step. In my /etc/apache2/apache2.conf file I have this:

        <Directory "/var/www/*">
            Order allow,deny
            Allow from all
            AllowOverride All
        </Directory>

    which I thought gave .htaccess to all. Does anyone have any ideas what the problem is?
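
    Since the styling works in one directory and not the other under the same AllowOverride rule, the usual suspects (assumptions, not confirmed in the post) are a missing .htaccess in the copied app and mod_rewrite not being enabled. Both are quick to check:

        ls -la /var/www/dvdcatalog/app/webroot/.htaccess   # CakePHP's dotfiles are easy to lose when copying
        sudo a2enmod rewrite
        sudo /etc/init.d/apache2 restart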


  • Port knocking via SSH tunnels

    - by j0ker
    I have a server running on my university's internal network. There is only one SSH daemon running, secured by port knocking with knockd. It works fine if I connect from within the internal network. But since the server has no external IP, I have to tunnel into the internal network every time I want to access the server from outside. And since a tunnel only carries a single port, I cannot do the port knocking as easily as from an internal client. In fact, I can't get it to work at all. What I'm trying is opening tunnels for all the different ports that have to be knocked, then sending TCP SYN packets into the tunnels. But that doesn't work even for a single port: if I establish the tunnel on the first port in the knock sequence and send a packet through it, it doesn't reach the server. There is no entry in the log file of knockd, while there should be something like 123.45.67.89: openSSH: Stage 1 (as shown with internal knocks). So I guess the problem isn't in my knocking script but is a more general one. Are there any known problems with what I'm trying to do? Is it even possible, or am I missing something? Thanks in advance!
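
    One likely explanation (my reading of the setup, not confirmed): raw SYNs written into a local tunnel never arrive at the server as knocks from outside; anything sent through the tunnel re-emerges from the SSH gateway as an ordinary forwarded connection. A sketch that sidesteps this by running the knock on the gateway itself (hostnames and the 7000/8000/9000 sequence are placeholders for your real knock sequence):

        ssh user@gateway 'for p in 7000 8000 9000; do nc -z -w 1 server.internal $p; done'
        ssh -L 2222:server.internal:22 user@gateway    # then, once knocked open: ssh -p 2222 localhost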


  • Removing extended partition without deleting logical in it

    - by HisDudeness
    I'm running a Linux-based laptop, and in order to multi-boot several distros on it, I created an extended partition containing a bunch of logical ones with GParted. Now, after quite a long time with this setup, I've changed my mind, because it leaves too little storage space for my data partition. I want to keep one distro alone, as is normal, and eventually store the other operating systems on external media to plug in and use when I want. Naturally, the partition I want to keep (and to enlarge a little, too) is just a logical inside the extended partition I want to remove. The partition count is fine: I currently have the big distro-dedicated extended partition, plus the swap and data partitions, so there's room for another primary before I delete the extended. But I don't know how to delete the extended without touching the logical inside it. I don't want to reinstall the system and lose all my changes and settings, and I don't want to keep an extended partition around for a single logical. How can I do it? Do I have to create a new primary, copy the logical's contents into it, and then delete everything? Will the system still boot and keep exactly the features it has now? Or is there a way to convert an extended into a primary once it contains just one logical? Or can I directly move a logical out of an extended, turning it into a primary? Or, again, am I screwed?
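
    There is a way to relabel a logical as a primary without moving data, but it is hands-on and risky; this is only a sketch, under the assumptions that the partition's start/end sectors stay exactly the same and that a full backup comes first. sfdisk can dump the table, and a hand-edited copy (the logical's entry moved into a free primary slot, the extended entry dropped) can be written back:

        sudo sfdisk -d /dev/sda > table.bak      # keep this somewhere safe, off the disk
        cp table.bak table.new                   # edit: promote the logical, remove the extended
        sudo sfdisk --no-reread /dev/sda < table.new

    The partition number will change (e.g. sda5 becoming sda3, both hypothetical here), so /etc/fstab and the GRUB configuration have to be updated to match before rebooting.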


  • Will these instructions work when turning off journaling on an ext4 SSD?

    - by snowlord
    I have an Acer Aspire One with an SSD for storage. I recently installed Ubuntu on it and chose ext4 for my filesystem. Then I read that journaling on an SSD isn't the best idea, so I will try to disable journaling. I found these instructions (from http://fenidik.blogspot.com/2010/03/ext4-disable-journal.html):

        # Create ext4 fs on /dev/sda10 disk
        mkfs.ext4 /dev/sda10
        # Enable writeback mode. This mode will typically provide the best ext4 performance.
        tune2fs -o journal_data_writeback /dev/sda10
        # Delete has_journal option
        tune2fs -O ^has_journal /dev/sda10
        # Required fsck
        e2fsck -f /dev/sda10
        # Check fs options
        dumpe2fs /dev/sda10 | more

    For more performance, add these fstab options: data=writeback,noatime,nodiratime, i.e.:

        /dev/sda10 /opt ext4 defaults,data=writeback,noatime,nodiratime 0 0

    I will use them on my boot partition. Are there any particularly bad parts here, or any missing steps? Will my boot partition be fit for being on an SSD after this? Or should I consider switching to ext2, or even reinstalling it all and choosing ext2 at partitioning time (I'd rather not, though, since I've configured quite some stuff already)?
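
    One caveat the recipe above doesn't spell out (my addition, not from the linked post): tune2fs cannot remove the journal from a mounted filesystem, and this is the boot partition, so the tune2fs/e2fsck steps would have to run from a live CD/USB against the actual device, whatever it is on this machine (/dev/sda1 below is a placeholder). Verifying the result afterwards is cheap:

        sudo dumpe2fs -h /dev/sda1 | grep -i features    # has_journal should no longer be listed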


  • How to correctly deploy Adobe Reader 9.1

    - by Ben Gillam
    Hi, I have recently tried to deploy Adobe Reader 9.1 onto our network here (SBS 2003 server and XP workstations). I followed the instructions for extracting the installer and .msi, and then created a .mst transform file to set custom options (suppress EULA, don't create desktop icon, etc.). I then added the package to my deployment GPO, applied the relevant .mst file, and proceeded to deploy across the network. The software package is computer-assigned, to be installed prior to logon, to avoid user permission issues. The package deploys correctly to computers and runs perfectly fine from a shortcut, but when trying to view a PDF from within a web browser it fails with the following message: "The Adobe Acrobat/Reader that is running cannot be used to view PDF files in a web browser. Adobe Acrobat/Reader version 8 or 9 is required. Please exit and try again." I have found many pages on Google referring to this problem, but none appear to relate to the problem I have found: http://kb2.adobe.com/cps/405/kb405461.html. These fixes recommend correcting a registry entry (which, I should mention, is missing after the deployed installation); however, this does not work. Switching off display in a browser seems to defeat the object of fixing the problem. Removing old versions - there aren't any. Trying with a different user - this affects all users of all privilege levels on all computers. On my workstation I uninstalled Acrobat Reader 9.1, then reinstalled manually using the same installation source files, and it works fine. Has anyone successfully deployed AR 9.1 on their domain, and if so, how? For the time being I have downloaded the older 8.1.3 release and deployed it in the same way, which works fine, but I would like to be using the up-to-date version. Thanks
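
    Since the manual install works and the deployed one leaves a registry entry missing, one hedged workaround is to follow the GPO deployment with an MSI repair pass, which rewrites the registration that browser integration depends on. The product code below is a placeholder - the real GUID is in the MSI's Property table or under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall:

        msiexec /i AcroRead.msi TRANSFORMS=AcroRead.mst /qn
        rem "repair all" pass after deployment, e.g. from a startup script
        msiexec /fa {PRODUCT-CODE-GUID} /qn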


  • Installing 64-bit Ubuntu Server 12.04 LTS, on a VM with VMWare Player, on a 64-bit Windows 7 PC

    - by WannaBeAGeek
    I'm trying to create a VM, using VMware Player, with an ISO image of Ubuntu Server 12.04 (LTS). The machine I'm doing the installation on has an Intel(R) Core(TM) i5 CPU and runs 64-bit Windows 7. I managed to create the VM (gave username and password, configured the network, etc.), but I can't install Ubuntu Server. First I get this alert: "Binary translation is incompatible with long mode on this platform. Disabling long mode. Without long mode support, the virtual machine will not be able to run 64-bit code. For more details see http://vmware.com/info?id=152." When I click OK, I get another alert: "This virtual machine is configured for 64-bit guest operating systems. However, 64-bit operation is not possible. This host supports Intel VT-x, but Intel VT-x is disabled. Intel VT-x might be disabled if it has been disabled in the BIOS/firmware settings or the host has not been power-cycled since changing this setting. (1) Verify that the BIOS/firmware settings enable Intel VT-x and disable 'trusted execution.' (2) Power-cycle the host if either of these BIOS/firmware settings have been changed. (3) Power-cycle the host if you have not done so since installing VMware Player. (4) Update the host's BIOS/firmware to the latest version. For more detailed information, see http://vmware.com/info?id=152." Then, when I click OK, my VM exits, and I get back to the VMware Player home screen. I don't know much about hardware and virtualization, so there might be some necessary info I'm not giving. Please don't hesitate to let me know what is missing from my post. Thanks :)
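
    The second alert is the real clue: the CPU supports VT-x but it is switched off, which is fixed in the BIOS/firmware setup (often under Security or Advanced CPU settings), followed by a full power-off rather than a warm reboot. One way to confirm the state from Windows before and after, assuming Sysinternals Coreinfo is available:

        coreinfo -v
        rem VMX marked with * means VT-x is enabled; - means disabled or hidden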


  • Problem connecting to remote network using demand-dial VPN interface with Windows Server 2003

    - by Mike Forman
    I have a Windows 2003 server (SP2) that I'm trying to set up to route traffic from my local network over a VPN. My local network has the following components: a broadband router (192.168.0.1), a Windows server with a single NIC running RRAS (192.168.0.2, default gateway 192.168.0.1), and a client machine (192.168.0.3, default gateway 192.168.0.1). Using a VPN connection, I am trying to access a remote machine (10.0.0.1 for example). I configured RRAS with a demand-dial interface for the VPN and set it to be a persistent connection. As part of that setup, a static route to 10.0.0.0 (255.255.0.0) was created. When at the console of the server, I can ping 10.0.0.1 with no problems. I added a route on the client machine using the following command: ROUTE ADD 10.0.0.0 MASK 255.255.0.0 192.168.0.2. If I run tracert 10.0.0.1 from the client, the first hop is to 192.168.0.2, which tells me that route is working. However, I cannot ping 10.0.0.1 from the client machine. What am I missing? Hopefully something simple.
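
    Since the server itself can reach 10.0.0.1 but forwarded client traffic dies, the missing piece is plausibly one of two things (both assumptions, not confirmed in the post): IP forwarding/NAT on the RRAS box, or the remote network having no route back to 192.168.0.0/24. Enabling NAT on the demand-dial interface in RRAS hides the LAN behind the server's VPN address, so no return route is needed; failing that, the far end needs something like:

        route add 192.168.0.0 mask 255.255.255.0 <this server's VPN address>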


  • Grub error 18, gparted not showing anything

    - by Montecristo
    Some weeks ago I started having problems with my PC: sometimes it just froze, not allowing me to do anything. I had to turn it off and on, and sometimes do that a couple of times, even at startup. Now it does not start at all; GRUB is giving me error 18. I have found that a solution is to create a bootable partition in the first sectors of the disk. But GParted does not recognize any partitions - the window that should list them is empty - and sudo fdisk -l does not output anything. If I type sudo mount /dev/sda and then hit tab tab to autocomplete, these devices come out: sda sda1 sda2 sda5. If I launch sudo mount -t ext3 /dev/sda1 disk I get the following error: mount: wrong fs type, bad option, bad superblock on /dev/sda1, missing codepage or helper program, or other error. In some cases useful info is found in syslog - try dmesg | tail or so. dmesg outputs: [ 1831.974847] EXT3-fs: unable to read superblock. Do you know how to solve this issue? I'm not completely sure this is a software problem; should I try a new hard disk?
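
    With fdisk -l silent and the kernel unable to read the superblock, the disk itself is the prime suspect, so checking its SMART health before any repair attempt seems prudent (this assumes smartmontools is installed and the drive really is /dev/sda):

        sudo smartctl -H /dev/sda     # quick pass/fail verdict
        sudo smartctl -a /dev/sda     # full attributes; watch reallocated and pending sector counts

    Only if SMART looks clean would trying e2fsck with a backup superblock (locations depend on mkfs defaults; 32768 is a common one) be a reasonable next step: sudo e2fsck -b 32768 /dev/sda1.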


  • Windows Vista/7 dropping Mac Server share points

    - by Hooligancat
    My Windows Vista and Windows 7 clients are having problems maintaining access to SMB shares on a Mac server. The initial connection to the server appears to be OK, as the Windows clients can see all of the server share points. However, a client randomly drops a couple of the share points, although it can still see the server. For example, if I have the following share points on the Mac server: Share A, Share B, Share C, Share D, Share E. The Windows client can see and access these shares most of the time, but randomly a couple of the shares just get dropped from the client's view, so I end up with something like: Share B, Share D, Share E. All the share points are established in the same way with the same permission settings. My Mac OS X Server is set up with the following for SMB: SMB sharing enabled; standalone server; workgroup of CORPORATE; Allow Guest Access = YES; client connections limit = 100; authentication: NTLMv2 & Kerberos and NTLM; code page is Latin US (437); this is a workgroup master browser; WINS registration is set to Enable WINS server (tried with the setting off); enable virtual share points for homes: YES. I noticed in my SMB file service log that the clients appear to connect OK, but I get the following error, which implies a reset by either the server or the client: /SourceCache/samba/samba-187.9/samba/source/lib/util_sock.c:read_data(534) read_data: read failure for 4 bytes to client 192.168.0.99. = Connection reset by peer. I am a bit stumped as to which direction to turn to try and get this resolved. Continued attempts to access the server from the client will reconnect to the share points, but they inevitably get dropped again in the near future. Any and all help much appreciated.
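
    One hedged diagnostic step, since the read_data errors only show that a reset happened, not why: raise Samba's log level on the Mac server while a client drops a share, so the negotiation around the reset is captured. The config file path is an assumption for OS X Server - some versions keep it at /etc/smb.conf, others at /var/db/smb.conf:

        [global]
            log level = 3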


  • Can a website company that builds 4-5 websites a year afford dedicated hosting?

    - by Petras
    We manage about 30 websites that use shared ASP.NET SQL Server web hosting. These are typical small/medium business websites and they perform fine in this environment. Recently I was looking at VPS hosting in this thread: http://serverfault.com/questions/128329/how-do-you-host-multiple-public-facing-websites-on-a-vps. After contacting a provider in one of the replies, I was told that VPS hosting is not recommended for 30 sites, even if they are small - the resource requirements might be too great even for a VPS - so I should turn to dedicated hosting. The lowest-cost dedicated hosting is $219 per month (see http://www.serverintellect.com/dedicated/pentiumdservers.aspx), but this is only for a single processor, which seems too light for a machine running both IIS and SQL. In our office all the developers work on quad cores, so I assume I'd really need the quad processor; however, that starts at $599 monthly. Now, I won't be able to transfer all of our 30 sites to this machine; I'd only be able to transfer say 5 or 6. However, moving forward, I'd be able to host all future sites on this machine, which amounts to 4-5 per year. Let's look at the economics. Shared hosting typically costs $16.95 monthly (see http://www.crystaltech.com/dotnet.aspx). So here's the dilemma:

        First month costs: $599
        First month revenue: 6 x $16.95 = $101.70
        Loss in first month: $497.30

        First year costs: $599 x 12 = $7,188
        First year revenue: 6 x $16.95 x 12 + 5 x $16.95 x 6 (averaged) = $1,728.90
        Loss in first year: $5,459.10

    At $16.95 per site, covering the $599 monthly fee alone would take about 36 sites ($599 / $16.95 ≈ 35.3), so clearly it is going to take years for this server to pay for itself. It just doesn't seem economical! Am I missing something here, or is dedicated not the way to go with the number of sites we build?


  • DHCP Relay setup in ubuntu server

    - by jerichorivera
    I have a network appliance (QNO) that works as a traffic load balancer and DHCP server. I would like to add a Linux server in between the network appliance and the client computers; it will be used to monitor bandwidth usage. My problem is that I still want DHCP to be served by the network appliance so that load balancing will still work efficiently. We are afraid that if we set up the Linux server as the DHCP server, the network appliance will not be able to load-balance the traffic, since it would only see the Linux server as a single client connecting to it. I've been searching all over for a tutorial on how to set up DHCP relay but have not found any. How do I set up DHCP relay on my Linux server, given there are two NICs attached to it: one connects the Linux server to the network appliance, and the other connects it to the client computers? EDIT:

        Router (DHCP) ---- [eth0] Linux Server (Relay agent) [eth1] ----- PC (network)

    Router IP is 192.168.0.100; eth0 is on DHCP; eth1 is static 192.168.2.11 (if I need to change this I can). I tried dhcrelay -i eth1 192.168.0.100, but the PC was not getting any DHCP lease from the router. I might be missing something here.
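
    A hedged sketch of what usually has to be true for ISC dhcrelay in this topology (both points are assumptions worth verifying, not confirmed facts about the QNO): the relay should listen on both interfaces, and the appliance must have a scope for the client-side subnet, because the relay stamps requests with eth1's address (192.168.2.11) as the gateway address:

        echo 1 > /proc/sys/net/ipv4/ip_forward    # forwarding on, or the relayed replies go nowhere
        dhcrelay -i eth0 -i eth1 192.168.0.100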


  • Forcing programs to be installed to another drive

    - by zyboxenterprises
    I have an SSD as my main Windows drive, with a 640GB 2.5" HDD partitioned to store programs and user settings, and also to act as backup (it's the only thing I had lying around at the time of building my PC). The goal was to make the PC as fast as possible while having extra storage capacity available for normal user data and for my small data recovery business. The problem is that whenever I install a program, it installs to C:\Program Files\ (or C:\Program Files (x86)\ for 32-bit programs), although I have changed the environment variables. This wouldn't normally be an issue, but every installation program points its shortcut at my 640GB HDD. To clarify: program files get installed to C:\, while program shortcuts always point to Z:\, my 640GB HDD. Modifying the relevant environment variables doesn't do anything. I looked at this, but it only talks about modifying the registry and environment variables, which I have already done. I install to the Z:\ drive if the installation program lets me change the installation path, but sometimes it doesn't. Is there a way that I can force every program to install to the relevant location on Z:\? Perhaps I'm missing something here? Edit: Found this program; would it be appropriate to use in my case? I would be able to move the entire Program Files (and its x86 version) to Z:\ without impacting performance.
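
    The ProgramFiles environment variables only affect the current process tree; most installers instead read the ProgramFilesDir values from the registry, so changing those is the usual lever here - with the caveat (my hedge, not a guarantee) that plenty of installers and some Windows components still hard-code C:\:

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" /v ProgramFilesDir /t REG_SZ /d "Z:\Program Files" /f
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" /v "ProgramFilesDir (x86)" /t REG_SZ /d "Z:\Program Files (x86)" /f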


  • How to make an "import image" button/field in a PDF form?

    - by Joe
    How do you make an "import image" button/field in a PDF form? I am designing a "Lost Pet" poster for a local animal shelter. The idea is to make a PDF file that users of the shelter's website can download, insert their pet's information into, and then print out if their pet goes missing. I will be designing the poster in InDesign CS3 and then exporting it to a PDF file, to which I will add form fields for the user to fill out. I am OK with making text-entry form fields; that is a simple matter. What's not so simple is figuring out a way to allow the user to insert a photo of their lost pet into the PDF, save the file, and then print it out with the photo in it. I am running Adobe Acrobat Professional 8 on a Mac. All of the searches I've done have told me that if I were running Acrobat on Windows, I'd have access to another program called LiveCycle Designer, which has pre-built form item libraries, including an Image Field. But I cannot find any similar option in the Mac version. Has anyone had any experience doing something like this? If so, I could certainly use some tips on making this work. Just a quick clarification: this is a volunteer design project that I am doing for the shelter, so I don't want to spend money on any extra software/add-ons at all. I am hoping this can be done somehow with the software I already have.
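
    One approach that doesn't need LiveCycle (a sketch; the field name is made up, and users may need Acrobat rather than the free Reader unless the form is Reader-enabled): add a button form field, set its layout to "Icon only" and behavior to "Push", then attach a one-line Mouse Up JavaScript action:

        // Mouse Up action on a button field named "petPhoto"
        event.target.buttonImportIcon();

    Clicking the button then prompts the user to pick an image file, which becomes the button's icon and prints as part of the poster.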


  • Hadoop streaming job on EC2 stays in "pending" state

    - by liamf
    I'm trying to experiment with Hadoop and Streaming using the Cloudera distribution CDH3 on Ubuntu. I have valid data in hdfs:// ready for processing, and I wrote a little streaming mapper in Python. When I launch a mapper-only job using:

        hadoop jar /usr/lib/hadoop/contrib/streaming/hadoop-streaming*.jar -file /usr/src/mystuff/mapper.py -mapper /usr/src/mystuff/mapper.py -input /incoming/STBFlow/* -output testOP

    Hadoop duly decides it will use 66 mappers on the cluster to process the data. The testOP directory is created on HDFS and a job_conf.xml file is created. But the job tracker UI at port 50030 never shows the job moving out of the "pending" state, and nothing else happens; CPU usage stays at zero (the job is created, though). If I give it a single file (instead of the entire directory) as input, I get the same result, except Hadoop decides it needs 2 mappers instead of 66. I also tried launching jobs with the "dumbo" Python utility: same result, permanently pending. So I am missing something basic: could someone help me out with what I should look for? The cluster is on Amazon EC2. Firewall issues, maybe: ports are enabled explicitly, case by case, in the cluster security group.
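
    Permanently pending with zero CPU usually means no TaskTracker ever offers a slot (an inference from the symptoms, not a confirmed diagnosis). Two cheap checks - whether the slaves are registered at all, and what the JobTracker/TaskTracker logs say (log paths are typical for CDH3 but may differ on your AMI):

        hadoop dfsadmin -report                        # how many live nodes does the cluster report?
        tail -n 100 /var/log/hadoop/*jobtracker*.log
        tail -n 100 /var/log/hadoop/*tasktracker*.log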


  • ssh many users to one home

    - by filippo
    Hiya, I want to allow some trusted users to scp files into my server (to a specific user), but I do not want to give these users a home directory or an SSH login. I'm having problems understanding the correct settings of users/groups I have to create to allow this to happen. I'll give an example. Given: MyUser@MyServer, where MyUser belongs to the group MyGroup and MyUser's home is, let's say, /home/MyUser; plus SFTPGuy1@OtherBox1 and SFTPGuy2@OtherBox2. They give me their id_dsa.pub files and I add them to my authorized_keys. I reckon, then, I'd do something like useradd -d /home/MyUser -s /bin/false SFTPGuy1 on my server (and the same for the other), and for the last step, useradd -G MyGroup SFTPGuy1 (then again for the other guy). I'd then expect the SFTP guys to be able to sftp -o IdentityFile=id_dsa MyServer and to be taken to MyUser's home... Well, this is not the case; SFTP just keeps asking me for a password. Could someone point out what I'm missing? Thanks a mil, f. [EDIT: Messa on Stack Overflow asked me whether the authorized_keys file was readable to the other users (members of MyGroup). It's an interesting point; this was my answer: it wasn't (it was 700), but I changed the permissions of the .ssh dir and the auth file to 750, though still with no effect. I guess it's worth mentioning that my home dir (/home/MyUser) is also readable for the group, most dirs being 750, and the specific folder where they'd drop files is 770. Nevertheless, about the auth file, I reckon the authentication would be performed as the local user on MyServer, wouldn't it? If so, I don't understand the need for other users to read it... well, just wondering.]
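
    Two hedged observations that may explain the password prompt (inferences from the description, not confirmed): scp runs through the account's login shell, so -s /bin/false kills the session even when the key is accepted; and each guy logs in as his own user, so his public key must be in that user's own authorized_keys, not MyUser's. A common alternative that needs no shell at all is SFTP-only access enforced in sshd_config (requires a reasonably recent OpenSSH; the group name reuses the example above):

        Match Group MyGroup
            ForceCommand internal-sftp
            ChrootDirectory /home/MyUser    # must be root-owned and not group/world-writable
            AllowTcpForwarding no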


  • a brand new FS based on a database without using fuse

    - by Devrim
    Hi all, to serve millions of files out of a single directory, to be able to connect to a drive from hundreds of endpoints, and for some other reasons (to avoid GlusterFS/NFS/all FS-based networking solutions), I want to evaluate the possibility of making a filesystem that's based on MongoDB (or any other database). Basically, it would work like FUSE filesystems do: every single file is kept in Mongo GridFS. In theory, I'd run mount mongodbfs /mountPoint mongodb://localhost, and then when I say touch /mountPoint/test.txt this file is inserted into MongoDB. This FS would also store uid/gid and permissions with the file; we could throw hundreds of servers at it, and no useradd would be necessary. I'm not thinking of including all the features of a FS, just the ones we need. My question is: how do I start my quest for resources, books, links, people, and developers who'd help me implement this - at least a proof of concept? Is it feasible? What should I expect as a timeline for such an undertaking? Please only think about a gazillion small files and folders.
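
    As a feasibility warm-up, the touch example maps cleanly onto a GridFS insert with ownership and mode carried as metadata. This is only a user-space proof of the storage model, not the kernel/non-FUSE filesystem being asked about, and it assumes a recent pymongo:

        import gridfs
        from pymongo import MongoClient

        db = MongoClient("mongodb://localhost").mongodbfs
        fs = gridfs.GridFS(db)
        # extra keyword arguments are stored on the file document alongside the data,
        # so uid/gid/mode ride along with each file
        fs.put(b"", filename="/mountPoint/test.txt", uid=0, gid=0, mode=0o644)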


  • Cannot open root device xvda1 or unknown-block(0,0)

    - by svoop
    I'm putting together a Dom0 and three DomU (all Gentoo) with kernel 3.5.7 and Xen 4.1.1. Each Dom has it's own md (md0 for Dom0, md1 for Dom1 etc). Dom0 works fine so far, however, I'm stuck trying to create DomUs. It appears the xvda1 device on DomU is not created or accessible: Parsing config file dom1 domainbuilder: detail: xc_dom_allocate: cmdline="root=/dev/xvda1 console=hvc0 root=/dev/xvda1 ro 3", features="(null)" domainbuilder: detail: xc_dom_kernel_mem: called domainbuilder: detail: xc_dom_boot_xen_init: ver 4.1, caps xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 domainbuilder: detail: xc_dom_parse_image: called domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ... domainbuilder: detail: loader probe failed domainbuilder: detail: xc_dom_find_loader: trying Linux bzImage loader ... domainbuilder: detail: xc_dom_malloc : 10530 kB domainbuilder: detail: xc_dom_do_gunzip: unzip ok, 0x2f7a4f -> 0xa48888 domainbuilder: detail: loader probe OK xc: detail: elf_parse_binary: phdr: paddr=0x1000000 memsz=0x558000 xc: detail: elf_parse_binary: phdr: paddr=0x1558000 memsz=0x690e8 xc: detail: elf_parse_binary: phdr: paddr=0x15c2000 memsz=0x127c0 xc: detail: elf_parse_binary: phdr: paddr=0x15d5000 memsz=0x533000 xc: detail: elf_parse_binary: memory: 0x1000000 -> 0x1b08000 xc: detail: elf_xen_parse_note: GUEST_OS = "linux" xc: detail: elf_xen_parse_note: GUEST_VERSION = "2.6" xc: detail: elf_xen_parse_note: XEN_VERSION = "xen-3.0" xc: detail: elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000 xc: detail: elf_xen_parse_note: ENTRY = 0xffffffff815d5210 xc: detail: elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000 xc: detail: elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb" xc: detail: elf_xen_parse_note: PAE_MODE = "yes" xc: detail: elf_xen_parse_note: LOADER = "generic" xc: detail: elf_xen_parse_note: unknown xen elf note (0xd) xc: detail: elf_xen_parse_note: SUSPEND_CANCEL = 0x1 xc: detail: elf_xen_parse_note: HV_START_LOW = 0xffff800000000000 xc: detail: elf_xen_parse_note: PADDR_OFFSET = 0x0 xc: detail: elf_xen_addr_calc_check: addresses: xc: detail: virt_base = 0xffffffff80000000 xc: detail: elf_paddr_offset = 0x0 xc: detail: virt_offset = 0xffffffff80000000 xc: detail: virt_kstart = 0xffffffff81000000 xc: detail: virt_kend = 0xffffffff81b08000 xc: detail: virt_entry = 0xffffffff815d5210 xc: detail: p2m_base = 0xffffffffffffffff domainbuilder: detail: xc_dom_parse_elf_kernel: xen-3.0-x86_64: 0xffffffff81000000 -> 0xffffffff81b08000 domainbuilder: detail: xc_dom_mem_init: mem 5000 MB, pages 0x138800 pages, 4k each domainbuilder: detail: xc_dom_mem_init: 0x138800 pages domainbuilder: detail: xc_dom_boot_mem_init: called domainbuilder: detail: x86_compat: guest xen-3.0-x86_64, address size 64 domainbuilder: detail: xc_dom_malloc : 10000 kB domainbuilder: detail: xc_dom_build_image: called domainbuilder: detail: xc_dom_alloc_segment: kernel : 0xffffffff81000000 -> 0xffffffff81b08000 (pfn 0x1000 + 0xb08 pages) domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x1000+0xb08 at 0x7fdec9b85000 xc: detail: elf_load_binary: phdr 0 at 0x0x7fdec9b85000 -> 0x0x7fdeca0dd000 xc: detail: elf_load_binary: phdr 1 at 0x0x7fdeca0dd000 -> 0x0x7fdeca1460e8 xc: detail: elf_load_binary: phdr 2 at 0x0x7fdeca147000 -> 0x0x7fdeca1597c0 xc: detail: elf_load_binary: phdr 3 at 0x0x7fdeca15a000 -> 0x0x7fdeca1cd000 domainbuilder: detail: xc_dom_alloc_segment: phys2mach : 0xffffffff81b08000 -> 0xffffffff824cc000 (pfn 0x1b08 + 
0x9c4 pages) domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x1b08+0x9c4 at 0x7fdec91c1000 domainbuilder: detail: xc_dom_alloc_page : start info : 0xffffffff824cc000 (pfn 0x24cc) domainbuilder: detail: xc_dom_alloc_page : xenstore : 0xffffffff824cd000 (pfn 0x24cd) domainbuilder: detail: xc_dom_alloc_page : console : 0xffffffff824ce000 (pfn 0x24ce) domainbuilder: detail: nr_page_tables: 0x0000ffffffffffff/48: 0xffff000000000000 -> 0xffffffffffffffff, 1 table(s) domainbuilder: detail: nr_page_tables: 0x0000007fffffffff/39: 0xffffff8000000000 -> 0xffffffffffffffff, 1 table(s) domainbuilder: detail: nr_page_tables: 0x000000003fffffff/30: 0xffffffff80000000 -> 0xffffffffbfffffff, 1 table(s) domainbuilder: detail: nr_page_tables: 0x00000000001fffff/21: 0xffffffff80000000 -> 0xffffffff827fffff, 20 table(s) domainbuilder: detail: xc_dom_alloc_segment: page tables : 0xffffffff824cf000 -> 0xffffffff824e6000 (pfn 0x24cf + 0x17 pages) domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x24cf+0x17 at 0x7fdece676000 domainbuilder: detail: xc_dom_alloc_page : boot stack : 0xffffffff824e6000 (pfn 0x24e6) domainbuilder: detail: xc_dom_build_image : virt_alloc_end : 0xffffffff824e7000 domainbuilder: detail: xc_dom_build_image : virt_pgtab_end : 0xffffffff82800000 domainbuilder: detail: xc_dom_boot_image: called domainbuilder: detail: arch_setup_bootearly: doing nothing domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_64 <= matches domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_32p domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32 domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32p domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_64 domainbuilder: detail: xc_dom_update_guest_p2m: dst 64bit, pages 0x138800 domainbuilder: detail: clear_page: pfn 0x24ce, mfn 0x37ddee domainbuilder: detail: clear_page: pfn 0x24cd, mfn 0x37ddef domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x24cc+0x1 at 0x7fdece675000 domainbuilder: detail: start_info_x86_64: called domainbuilder: detail: setup_hypercall_page: vaddr=0xffffffff81001000 pfn=0x1001 domainbuilder: detail: domain builder memory footprint domainbuilder: detail: allocated domainbuilder: detail: malloc : 20658 kB domainbuilder: detail: anon mmap : 0 bytes domainbuilder: detail: mapped domainbuilder: detail: file mmap : 0 bytes domainbuilder: detail: domU mmap : 21392 kB domainbuilder: detail: arch_setup_bootlate: shared_info: pfn 0x0, mfn 0xbaa6f domainbuilder: detail: shared_info_x86_64: called domainbuilder: detail: vcpu_x86_64: called domainbuilder: detail: vcpu_x86_64: cr3: pfn 0x24cf mfn 0x37dded domainbuilder: detail: launch_vm: called, ctxt=0x7fff224e4ea0 domainbuilder: detail: xc_dom_release: called Daemon running with PID 4639 [ 0.000000] Initializing cgroup subsys cpuset [ 0.000000] Initializing cgroup subsys cpu [ 0.000000] Linux version 3.5.7-gentoo (root@majordomo) (gcc version 4.5.4 (Gentoo 4.5.4 p1.0, pie-0.4.7) ) #1 SMP Tue Nov 20 10:49:51 CET 2012 [ 0.000000] Command line: root=/dev/xvda1 console=hvc0 root=/dev/xvda1 ro 3 [ 0.000000] ACPI in unprivileged domain disabled [ 0.000000] e820: BIOS-provided physical RAM map: [ 0.000000] Xen: [mem 0x0000000000000000-0x000000000009ffff] usable [ 0.000000] Xen: [mem 0x00000000000a0000-0x00000000000fffff] reserved [ 0.000000] Xen: [mem 0x0000000000100000-0x0000000138ffffff] usable [ 0.000000] NX (Execute Disable) protection: active 
[ 0.000000] MPS support code is not built-in. [ 0.000000] Using acpi=off or acpi=noirq or pci=noacpi may have problem [ 0.000000] DMI not present or invalid. [ 0.000000] No AGP bridge found [ 0.000000] e820: last_pfn = 0x139000 max_arch_pfn = 0x400000000 [ 0.000000] e820: last_pfn = 0x100000 max_arch_pfn = 0x400000000 [ 0.000000] init_memory_mapping: [mem 0x00000000-0xffffffff] [ 0.000000] init_memory_mapping: [mem 0x100000000-0x138ffffff] [ 0.000000] NUMA turned off [ 0.000000] Faking a node at [mem 0x0000000000000000-0x0000000138ffffff] [ 0.000000] Initmem setup node 0 [mem 0x00000000-0x138ffffff] [ 0.000000] NODE_DATA [mem 0x1387fc000-0x1387fffff] [ 0.000000] Zone ranges: [ 0.000000] DMA [mem 0x00010000-0x00ffffff] [ 0.000000] DMA32 [mem 0x01000000-0xffffffff] [ 0.000000] Normal [mem 0x100000000-0x138ffffff] [ 0.000000] Movable zone start for each node [ 0.000000] Early memory node ranges [ 0.000000] node 0: [mem 0x00010000-0x0009ffff] [ 0.000000] node 0: [mem 0x00100000-0x138ffffff] [ 0.000000] SMP: Allowing 1 CPUs, 0 hotplug CPUs [ 0.000000] No local APIC present [ 0.000000] APIC: disable apic facility [ 0.000000] APIC: switched to apic NOOP [ 0.000000] e820: cannot find a gap in the 32bit address range [ 0.000000] e820: PCI devices with unassigned 32bit BARs may break! [ 0.000000] e820: [mem 0x139100000-0x1394fffff] available for PCI devices [ 0.000000] Booting paravirtualized kernel on Xen [ 0.000000] Xen version: 4.1.1 (preserve-AD) [ 0.000000] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:1 nr_node_ids:1 [ 0.000000] PERCPU: Embedded 26 pages/cpu @ffff880138400000 s75712 r8192 d22592 u2097152 [ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 1259871 [ 0.000000] Policy zone: Normal [ 0.000000] Kernel command line: root=/dev/xvda1 console=hvc0 root=/dev/xvda1 ro 3 [ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes) [ 0.000000] __ex_table already sorted, skipping sort [ 0.000000] Checking aperture... [ 0.000000] No AGP bridge found [ 0.000000] Memory: 4943980k/5128192k available (3937k kernel code, 448k absent, 183764k reserved, 1951k data, 524k init) [ 0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1 [ 0.000000] Hierarchical RCU implementation. [ 0.000000] NR_IRQS:4352 nr_irqs:256 16 [ 0.000000] Console: colour dummy device 80x25 [ 0.000000] console [tty0] enabled [ 0.000000] console [hvc0] enabled [ 0.000000] installing Xen timer for CPU 0 [ 0.000000] Detected 3411.602 MHz processor. [ 0.000999] Calibrating delay loop (skipped), value calculated using timer frequency.. 6823.20 BogoMIPS (lpj=3411602) [ 0.000999] pid_max: default: 32768 minimum: 301 [ 0.000999] Security Framework initialized [ 0.001355] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes) [ 0.002974] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes) [ 0.003441] Mount-cache hash table entries: 256 [ 0.003595] Initializing cgroup subsys cpuacct [ 0.003599] Initializing cgroup subsys freezer [ 0.003637] ENERGY_PERF_BIAS: Set to 'normal', was 'performance' [ 0.003637] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8) [ 0.003643] CPU: Physical Processor ID: 0 [ 0.003645] CPU: Processor Core ID: 0 [ 0.003702] SMP alternatives: switching to UP code [ 0.011791] Freeing SMP alternatives: 12k freed [ 0.011835] Performance Events: unsupported p6 CPU model 42 no PMU driver, software events only. [ 0.011886] Brought up 1 CPUs [ 0.011998] Grant tables using version 2 layout. 
[ 0.012009] Grant table initialized [ 0.012034] NET: Registered protocol family 16 [ 0.012328] PCI: setting up Xen PCI frontend stub [ 0.015089] bio: create slab <bio-0> at 0 [ 0.015158] ACPI: Interpreter disabled. [ 0.015180] xen/balloon: Initialising balloon driver. [ 0.015180] xen-balloon: Initialising balloon driver. [ 0.015180] vgaarb: loaded [ 0.016126] SCSI subsystem initialized [ 0.016314] PCI: System does not support PCI [ 0.016320] PCI: System does not support PCI [ 0.016435] NetLabel: Initializing [ 0.016438] NetLabel: domain hash size = 128 [ 0.016440] NetLabel: protocols = UNLABELED CIPSOv4 [ 0.016447] NetLabel: unlabeled traffic allowed by default [ 0.016475] Switching to clocksource xen [ 0.017434] pnp: PnP ACPI: disabled [ 0.017501] NET: Registered protocol family 2 [ 0.017864] IP route cache hash table entries: 262144 (order: 9, 2097152 bytes) [ 0.019322] TCP established hash table entries: 524288 (order: 11, 8388608 bytes) [ 0.020376] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes) [ 0.020497] TCP: Hash tables configured (established 524288 bind 65536) [ 0.020500] TCP: reno registered [ 0.020525] UDP hash table entries: 4096 (order: 5, 131072 bytes) [ 0.020564] UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes) [ 0.020624] NET: Registered protocol family 1 [ 0.020658] PCI-DMA: Using software bounce buffering for IO (SWIOTLB) [ 0.020662] software IO TLB [mem 0xfb632000-0xff631fff] (64MB) mapped at [ffff8800fb632000-ffff8800ff631fff] [ 0.020750] platform rtc_cmos: registered platform RTC device (no PNP device found) [ 0.021378] HugeTLB registered 2 MB page size, pre-allocated 0 pages [ 0.023378] msgmni has been set to 9656 [ 0.023544] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253) [ 0.023549] io scheduler noop registered [ 0.023551] io scheduler deadline registered [ 0.023580] io scheduler cfq registered (default) [ 0.023650] pci_hotplug: PCI Hot Plug PCI Core version: 0.5 [ 0.023845] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled [ 0.024082] Non-volatile memory driver v1.3 [ 0.024085] Linux agpgart interface v0.103 [ 0.024207] Event-channel device installed. [ 0.024265] [drm] Initialized drm 1.1.0 20060810 [ 0.024268] [drm:i915_init] *ERROR* drm/i915 can't work without intel_agp module! [ 0.025145] brd: module loaded [ 0.025565] loop: module loaded [ 0.045646] Initialising Xen virtual ethernet driver. [ 0.198264] i8042: PNP: No PS/2 controller found. Probing ports directly. 
[ 0.199096] i8042: No controller found [ 0.199139] mousedev: PS/2 mouse device common for all mice [ 0.259303] rtc_cmos rtc_cmos: rtc core: registered rtc_cmos as rtc0 [ 0.259353] rtc_cmos: probe of rtc_cmos failed with error -38 [ 0.259440] md: raid1 personality registered for level 1 [ 0.259542] nf_conntrack version 0.5.0 (16384 buckets, 65536 max) [ 0.259732] ip_tables: (C) 2000-2006 Netfilter Core Team [ 0.259747] TCP: cubic registered [ 0.259886] NET: Registered protocol family 10 [ 0.260031] ip6_tables: (C) 2000-2006 Netfilter Core Team [ 0.260070] sit: IPv6 over IPv4 tunneling driver [ 0.260194] NET: Registered protocol family 17 [ 0.260213] Bridge firewalling registered [ 5.360075] XENBUS: Waiting for devices to initialise: 25s...20s...15s...10s...5s...0s...235s...230s...225s...220s...215s...210s...205s...200s...195s...190s...185s...180s...175s...170s...165s...160s...155s...150s...145s...140s...135s...130s...125s...120s...115s...110s...105s...100s...95s...90s...85s...80s...75s...70s...65s...60s...55s...50s...45s...40s...35s...30s...25s...20s...15s...10s...5s...0s... [ 270.360180] XENBUS: Timeout connecting to device: device/vbd/51713 (local state 3, remote state 1) [ 270.360273] md: Waiting for all devices to be available before autodetect [ 270.360277] md: If you don't use raid, use raid=noautodetect [ 270.360388] md: Autodetecting RAID arrays. [ 270.360392] md: Scanned 0 and added 0 devices. [ 270.360394] md: autorun ... [ 270.360395] md: ... autorun DONE. [ 270.360431] VFS: Cannot open root device "xvda1" or unknown-block(0,0): error -6 [ 270.360435] Please append a correct "root=" boot option; here are the available partitions: [ 270.360440] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) [ 270.360444] Pid: 1, comm: swapper/0 Not tainted 3.5.7-gentoo #1 [ 270.360446] Call Trace: [ 270.360454] [<ffffffff813d2205>] ? panic+0xbe/0x1c5 [ 270.360459] [<ffffffff813d2358>] ? printk+0x4c/0x51 [ 270.360464] [<ffffffff815d5fb7>] ? mount_block_root+0x24f/0x26d [ 270.360469] [<ffffffff815d62b6>] ? prepare_namespace+0x168/0x192 [ 270.360474] [<ffffffff815d5ca7>] ? kernel_init+0x1b0/0x1c2 [ 270.360477] [<ffffffff815d5500>] ? loglevel+0x34/0x34 [ 270.360482] [<ffffffff813d5a64>] ? kernel_thread_helper+0x4/0x10 [ 270.360486] [<ffffffff813d4038>] ? retint_restore_args+0x5/0x6 [ 270.360490] [<ffffffff813d5a60>] ? 
gs_change+0x13/0x13

The config:

    name = "dom1"
    bootloader = "/usr/bin/pygrub"
    root = "/dev/xvda1 ro"
    extra = "3"  # runlevel
    memory = 5000
    disk = [ 'phy:/dev/md1,xvda1,w' ]
    # vif = [ 'ip=..., vifname=veth1' ]  # none for now

Here are some details on the Dom0 kernel (grepping for "xen"):

    CONFIG_XEN=y
    CONFIG_XEN_DOM0=y
    CONFIG_XEN_PRIVILEGED_GUEST=y
    CONFIG_XEN_PVHVM=y
    CONFIG_XEN_MAX_DOMAIN_MEMORY=500
    CONFIG_XEN_SAVE_RESTORE=y
    CONFIG_PCI_XEN=y
    CONFIG_XEN_PCIDEV_FRONTEND=y
    # CONFIG_XEN_BLKDEV_FRONTEND is not set
    CONFIG_XEN_BLKDEV_BACKEND=y
    # CONFIG_XEN_NETDEV_FRONTEND is not set
    CONFIG_XEN_NETDEV_BACKEND=y
    CONFIG_INPUT_XEN_KBDDEV_FRONTEND=y
    CONFIG_HVC_XEN=y
    CONFIG_HVC_XEN_FRONTEND=y
    # CONFIG_XEN_WDT is not set
    # CONFIG_XEN_FBDEV_FRONTEND is not set
    # Xen driver support
    CONFIG_XEN_BALLOON=y
    # CONFIG_XEN_SELFBALLOONING is not set
    CONFIG_XEN_SCRUB_PAGES=y
    CONFIG_XEN_DEV_EVTCHN=y
    CONFIG_XEN_BACKEND=y
    CONFIG_XENFS=y
    CONFIG_XEN_COMPAT_XENFS=y
    CONFIG_XEN_SYS_HYPERVISOR=y
    CONFIG_XEN_XENBUS_FRONTEND=y
    CONFIG_XEN_GNTDEV=m
    CONFIG_XEN_GRANT_DEV_ALLOC=m
    CONFIG_SWIOTLB_XEN=y
    CONFIG_XEN_TMEM=y
    CONFIG_XEN_PCIDEV_BACKEND=m
    CONFIG_XEN_PRIVCMD=y
    CONFIG_XEN_ACPI_PROCESSOR=m

And the DomU kernel (grepping for "xen"):

    CONFIG_XEN=y
    CONFIG_XEN_DOM0=y
    CONFIG_XEN_PRIVILEGED_GUEST=y
    CONFIG_XEN_PVHVM=y
    CONFIG_XEN_MAX_DOMAIN_MEMORY=500
    CONFIG_XEN_SAVE_RESTORE=y
    CONFIG_PCI_XEN=y
    CONFIG_XEN_PCIDEV_FRONTEND=y
    CONFIG_XEN_BLKDEV_FRONTEND=y
    CONFIG_XEN_NETDEV_FRONTEND=y
    CONFIG_INPUT_XEN_KBDDEV_FRONTEND=y
    CONFIG_HVC_XEN=y
    CONFIG_HVC_XEN_FRONTEND=y
    # CONFIG_XEN_WDT is not set
    # CONFIG_XEN_FBDEV_FRONTEND is not set
    # Xen driver support
    CONFIG_XEN_BALLOON=y
    # CONFIG_XEN_SELFBALLOONING is not set
    CONFIG_XEN_SCRUB_PAGES=y
    CONFIG_XEN_DEV_EVTCHN=y
    # CONFIG_XEN_BACKEND is not set
    CONFIG_XENFS=y
    CONFIG_XEN_COMPAT_XENFS=y
    CONFIG_XEN_SYS_HYPERVISOR=y
    CONFIG_XEN_XENBUS_FRONTEND=y
    CONFIG_XEN_GNTDEV=m
    CONFIG_XEN_GRANT_DEV_ALLOC=m
    CONFIG_SWIOTLB_XEN=y
    CONFIG_XEN_TMEM=y
    CONFIG_XEN_PRIVCMD=y
    CONFIG_XEN_ACPI_PROCESSOR=m

Any ideas what I'm doing wrong here? Thanks a lot!
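
The XENBUS: Timeout connecting to device: device/vbd/51713 line means the DomU's block frontend waited for a Dom0 backend that never answered (51713 is the xvd device number corresponding to xvda1). Two hedged things to check in Dom0 while the guest is stuck - whether the block backend is actually alive, and what state xenstore shows for the vbd (the exact xenstore path varies by setup):

    lsmod | grep -i blk                          # xen-blkback loaded? (it is built-in on some kernels)
    xenstore-ls /local/domain/0/backend/vbd      # look for the stuck device's state entry

It may also be worth trying the disk exported as a whole device, e.g. disk = [ 'phy:/dev/md1,xvda,w' ] - a guess, applicable only if md1 carries a partition table - since some pygrub setups expect that layout.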


  • What's the best / easiest way to combine two mailboxes on Exchange 2007?

    - by jmassey
    I've found this and this(2) (sorry, the maximum hyperlink limit for new users is 1, apparently), but they both seem targeted toward much more complex cases than what I'm trying to do, and I just want to make sure I'm not missing some better approach. Here's the scenario: 'Alice' has retired. 'Bob' has taken over Alice's position. Bob was already with the organization in a different but related position, and so he already has his own Exchange account with mail, calendars, etc. that he needs to keep. I need to get all of Alice's old mail, calendar entries, etc. merged into Bob's existing stuff. Ideally, I don't want all of Alice's stuff in a separate 'recovery' folder that Bob would have to switch back and forth from to look at older items; I want it all merged into Bob's current Inbox/Calendar. I'm assuming (read: hoping) that there's a better way to do this than fiddling with permissions and exporting to and then importing from a .pst. The Office version is 2007 for everybody that uses Exchange, if that helps; Exchange is version 8.1. What (preferably step by step - I'm new to Exchange) is the best way to do this? I can't imagine this is an uncommon scenario, but my Google-fu has failed me; there seems to be nothing on this subject that isn't geared toward far more complex scenarios. (2): http://technet.microsoft.com/en-us/library/bb201751%28EXCHG.80%29.aspx
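
    For mailbox-to-mailbox moves, Exchange 2007 does ship a one-cmdlet answer, though with a catch relevant here (both points hedged from memory of the 2007 tooling): Export-Mailbox copies Alice's items into Bob's mailbox, but it requires a -TargetFolder, so the items arrive under a subfolder rather than merged straight into Bob's Inbox/Calendar; from Outlook they can then be dragged into place:

        Export-Mailbox -Identity "Alice" -TargetMailbox "Bob" -TargetFolder "Alice (archive)"

    Run it from the Exchange Management Shell with an account granted the necessary mailbox permissions.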


  • virtual machines, dual booting and data disks on SSD

    - by stevemarvell
    This is in the planning stage, so if I've got the strategy wrong, please let me know. There are multiple questions here, but I think they all degenerate to the same answers. The hardware is a laptop with a single SSD, and I'm trying not to lose the SSD's performance. I plan a natively dual-booting Windows (plus Cygwin) and Linux machine, which is my BYOD and represents the development environment. I keep the codebase on a shared partition (though sometimes this is an external Thunderbolt SSD) which can be natively "mounted" by whichever OS is in operation. I boot into one or the other environment depending on the task at hand. Sometimes I have to develop with Windows tools, but generally Linux is my preferred development environment. It would be ideal if I could VM the other OS and run either in either. I'm going to assume, because I've not found a sensible VM-based solution, that I have to get Samba involved to share the code partition between VMs. Is this going to blow my SSD performance in the VM? The client also supplies me with a VM for the target environment, usually Linux. This is not often suited to development and is used for testing only. I normally keep two copies of it: one as a sandbox and one which I deploy to using the client's preferred method. I keep these VM snapshots on the shared partition. The latter is interacted with over the network and so has no disk-sharing requirements. However, it would be useful for the sandbox to be able to "mount" the codebase from the natively running OS. Is this Samba or NFS again, depending on the native OS? Am I missing a trick which allows this all to work smoothly with all four environments running at once, without losing the SSD performance?
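
    For the "is Samba the right tool" question, a minimal share definition is small enough to test the SSD-performance worry directly (the share name, path, and user are placeholders):

        [codebase]
            path = /mnt/code
            read only = no
            valid users = steve

    The cost is protocol overhead rather than the SSD itself: host-to-guest traffic never leaves the machine, so on a local VM it tends to be CPU-bound rather than disk-bound - an expectation to benchmark, not a promise.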


  • Booting Ubuntu as VM with KVM on Ubuntu 12.04

    - by CrazycodeMonkey
    I am trying to boot my very first VM using KVM. I have Ubuntu 12.04 installed; I made sure the BIOS had the right virtualization flag enabled for the Intel processor by running kvm-ok. I have researched this on Google, and all the instructions that I have found so far are outdated. For example, most instructions talk about booting a virtual machine with the following commands:

        qemu-img create -f qcow2 foo.img 100G    # create a virtual disk for your VM
        kvm --name foo -m 1024 -hda foo.img -cdrom whatever.iso -boot d    # this runs kvm

    This command line is incomplete. First, you need to be root to run it. Second, it is missing an option for the video device; when you run it, you get the following error: "Could not initialize SDL(No available video device) - exiting". I googled this error and looked it up on Stack Overflow: http://stackoverflow.com/questions/4841908/sdl-init-failure-reason-is-no-available-video-device. The answer provided there does not work on Ubuntu 12.04. Googling further, I found out that I need to specify a video device, so I finally ran the following command:

        sudo kvm --name mymachine -m 8096 -hda myimage.img --cdrom ubuntu.iso -boot d -vga cirrus -k en-us -vnc :0

    This was after I had created the myimage.img image on the drive. Now this command does not give me an error, but it just hangs. Does anyone have clear instructions on how to run a VM using KVM on Ubuntu?
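
    One hedged reading of the "hang": with -vnc :0 there is no local window at all - the guest console is served on VNC display :0, so a quiet terminal may just mean the VM is running. Connecting to it (assuming a VNC viewer is installed):

        vncviewer localhost:0

    Alternatively, dropping the -vnc option inside a desktop session normally opens an SDL window directly, and virt-manager wraps all of this in a GUI if the command line keeps fighting back.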


  • Can Spotlight or Media Browser index metadata contained in iPhoto or Aperture in Mac OS X?

    - by jaydles
    It seems silly to go to all the trouble of assigning "Faces" data to thousands of photos but not make it possible to use that data to locate them outside of that application. Is there any way to get Spotlight or the media browser in OS X (Snow Leopard) to index and recognize metadata (Faces, Places, etc.) contained in iPhoto or Aperture? I know that metadata is stored in the "library" database for Aperture/iPhoto, rather than on the actual files (which is too bad). And I can even potentially see why it might create challenges for Spotlight to use it, since Spotlight is presumably a file-index system, not a media organizer, but surely the media browser used across the other OS X apps is intended to use it? The media browser's whole purpose seems to be to let you easily locate and reference the items you organize in one of the iLife apps (iPhoto or Aperture, in this case) from the others (say, iMovie, or Mail). It's particularly vexing since the Photos app on the iPhone sorts by Faces by default. Additionally, the Mac-based media browser does access smart albums and folders, so you could establish a workaround by creating a smart album for each Face, Place, or tag and accessing them that way, but it seems like there must be an easier way. Am I missing something?
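
    A partial workaround, hedged because it only covers exported copies, not the originals in the library: iPhoto can write keywords (and, in later versions, Faces names as keywords) into exported files' IPTC metadata, and Spotlight does index that, so an export with "Include titles and keywords" checked becomes searchable with:

        mdfind "kMDItemKeywords == 'Alice'"    # 'Alice' stands in for a Face or keyword name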


  • HP tx2510us shuts down without warning, now won't boot, no BIOS codes, screen doesn't light

    - by Tim S
    Hey all, my HP tx2510us worked great for a year and a half, though rarely it would shut down instead of sleeping. Then it started running hot sometimes. I installed Win7, it worked great, but it was still running hot. Before Christmas, it gave me evidence of video errors - jagged lines and missing rows. That same night, it started rebooting, then it shut off and wouldn't boot. It gave me a BIOS code, and I shut it off to look up the code online. When I turned it on again, it wouldn't boot at all: the LEDs all light up and the fan, HD, and optical drive all spin up, but the screen never lights up, it doesn't try to boot, and it doesn't blink any BIOS codes. It just acts like it's sleeping and won't wake up. I suspected heat problems, so I disassembled it, cleaned the crap out of the fan, and reassembled it, breaking the stereo mic connector in the process. Oh well. When I reassembled it, it booted into Win7 again but kept shutting down for no discernible reason. After a dozen or so random reboots like that, it is now back to where it was: it turns on but doesn't boot or give BIOS codes; the screen never lights, and everything spins up and then idles. Any ideas? I really can't afford to buy a new one, and I use(d) it to take ALL my notes - that's why I got a tablet in the first place.

