Search Results

Search found 51282 results on 2052 pages for 'empty class'.


  • How to configure DNS so that www.example.com goes to one server, *.example.com to another

    - by fishwebby
    I'm trying to set up my domain as follows, but I'm not actually sure if it's possible. I have a domain where I would like the base and www addresses to go to my static site, but all others to go to my application server. My domain is registered with Dreamhost, and my application is on a VPS at Webbynode. I've set up the domain in Dreamhost to use Webbynode's nameservers:

        ns1.dnswebby.com
        ns2.dnswebby.com
        ns3.dnswebby.com

    And in Webbynode I've set up a wildcard A record pointing to the IP address of my VPS:

        *    1.2.3.4    A

    This works nicely: if I go to app.example.com it resolves to my application server at Webbynode. However, what I'd like is for example.com and www.example.com to go to my static site, hosted back at Dreamhost, whilst still having any other subdomain go to my app. What I've done to try and achieve this is set up these DNS "NS" entries at Webbynode, trying to get Dreamhost to resolve those names:

        (empty)    ns1.dreamhost.com    NS
        (empty)    ns2.dreamhost.com    NS
        (empty)    ns3.dreamhost.com    NS
        www        ns1.dreamhost.com    NS
        www        ns2.dreamhost.com    NS
        www        ns3.dreamhost.com    NS

    (I don't have a fixed IP address at Dreamhost, so I can't just set up simple A records.) However, this doesn't work... does anyone have any idea if this is possible, and if so how it could be done?

    Update: I've got this working now, as above for the domain (i.e. registered with Dreamhost, but using Webbynode's nameservers). To delegate the DNS for www.example.com to Dreamhost, I've got the following DNS entries set up (note the full stops at the end):

        www.example.com.    ns1.dreamhost.com.    NS
        www.example.com.    ns2.dreamhost.com.    NS
        www.example.com.    ns3.dreamhost.com.    NS

    And to get example.com to resolve to my static site, I set up a CNAME record:

        example.com.    www.example.com.    CNAME

    So now example.com and www.example.com go to my static site on Dreamhost (and if they change the IP address of my shared hosting it won't affect me), and all other subdomains go to my application server. This seems to work nicely, but if anyone knows a better way to do it I'd be happy to hear it. Thanks to all who replied.
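
    A quick way to sanity-check a delegation like this from any machine (a hedged sketch reusing the question's placeholder names):

        # the NS records for www should point at the Dreamhost nameservers
        dig +short NS www.example.com @ns1.dnswebby.com
        # the apex CNAME and the wildcard should then resolve as described
        dig +short example.com
        dig +short app.example.com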

    Read the article

  • Excel 2010: if( , , "") not treated the same as blank for pivot table group by date

    - by Confused
    I'm trying to group by date in an Excel 2010 pivot table. The column I want to group by should contain the later of two other dates when neither is blank, and be blank otherwise, i.e. it uses a formula like:

        =IF(AND(A4 <> "", B4 <> ""), MAX(A4,B4), "")

    Normally this "" in the IF() formula acts the same as an empty cell. In this case, it is preventing me from grouping by date in the pivot table. If I filter the date column by (Blanks) and then clear the contents of all those cells, the pivot table does group by date OK. In other words, "" is not being treated the same as an empty cell.

    Read the article

  • Windows file compare (FC) spurious differences

    - by user165568
    I'm getting differences like this:

        a.txt
        Betty Davis
        Cathy Edwards
        b.txt
        Betty Davis
        Cathy Edwards

    Only two lines are listed in the diff, and they are identical in both files, which doesn't make sense. There are no CR/LF/newline funnies. The reported difference just moves down if I delete lines, and I see the same problem on Win7 and Win2K. The difference seems to go away if I remove all empty lines from the files; the empty lines are correctly terminated, too. I'm using /C /W (ignore case, ignore whitespace). Has anyone seen this before? What am I doing wrong, and how can I fix it? There are real diffs in the files - missing, extra, or re-spelled names - but the files are byte-for-byte identical at the listed diff.

    Read the article

  • How to check sshd log?

    - by Eye of Hell
    I have Ubuntu 9.10 installed with sshd, and I can successfully connect to it using a login and password. I have configured an RSA key login and now get "Server refused our key", as expected. OK, now I want to check the sshd log in order to figure out the problem. I have examined /etc/ssh/sshd_config and it has:

        SyslogFacility AUTH
        LogLevel INFO

    But when I look at /var/log/auth.log... it's empty O_O. Changing LogLevel to VERBOSE doesn't help - auth.log is still empty. Any hints on how I can check the sshd log?
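
    A couple of checks worth trying first (a hedged sketch assuming a stock Ubuntu 9.10 layout): sshd only re-reads sshd_config on restart, and nothing reaches auth.log if the syslog daemon isn't running.

        # restart sshd so the LogLevel change actually takes effect
        sudo /etc/init.d/ssh restart
        # confirm a syslog daemon is running, then watch for new entries
        ps aux | grep -i syslog
        tail -f /var/log/auth.log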

    Read the article

  • Where do I learn about IP blocks and subnets? Or is there just a calculator that does it all for me?

    - by cwd
    Amazon's elastic compute tool (among others) requires the IP block format for its commands:

        ec2-authorize websrv -P tcp -p 80 -s 205.192.0.0/16

    I may be doing this wrong, but as far as I can tell I need to use the block format even for a single IP address. 1) So, how would I do that for this IP: 71.75.232.132? Several years ago I took a CCNA class, and I remember going over IPs and subnets, masks, broadcast addresses, class A/B/C networks, etc. However, a lot seems to have changed since then - for example, I don't think you can tell what "class" a network is in just by looking at it anymore; sometimes they could span multiple classes. 2) Anyhow, my second question is: where do I go to get a refresher on all these things? 3) Or should I just be using ipcalc or an online calculator to do it all for me - and if so, which one?
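
    On question 1: in CIDR notation a /32 mask matches exactly one address, so a single host is written as ip/32. A sketch reusing the command from the question (the port and group name are just the question's own examples):

        # allow only 71.75.232.132 to reach port 80 on the websrv group
        ec2-authorize websrv -P tcp -p 80 -s 71.75.232.132/32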

    Read the article

  • How to lock the screen in Linux before hibernating?

    - by Emanuel Ey
    So when I hibernate my laptop the screen doesn't lock automatically. To solve this I've changed /etc/acpi/powerbtn.sh to contain:

        su - myUsername -c "gnome-screensaver-command -l"
        sudo pm-hibernate
        exit 0

    When running this file from a command line it works as intended (i.e. it locks the screen and then hibernates). Unfortunately, when pressing the power button, it still just hibernates without locking the screen - what am I missing? EDIT: I've added the line

        whoami >> ~/Desktop/test.txt

    to verify which user is executing the /etc/acpi/powerbtn.sh script. When pressing the power button, the file test.txt is created, but it is empty. From this I conclude that the script is in fact being called when pressing the power button. What I do not understand is how the output of whoami can be empty...
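
    One common cause (an assumption on my part, not a confirmed diagnosis): scripts run by acpid have no X session environment, so gnome-screensaver-command cannot find the display. A sketch of the script with the session exported explicitly, assuming the desktop session runs on display :0:

        #!/bin/sh
        # hypothetical values: adjust the user and display to match the session
        user=myUsername
        export DISPLAY=:0
        export XAUTHORITY=/home/$user/.Xauthority
        su $user -c "gnome-screensaver-command -l"
        pm-hibernate    # acpid already runs as root, so sudo isn't needed here
        exit 0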

    Read the article

  • Where does Catalyst store the app list for switchable graphics?

    - by noober
    I cannot add an app to the list to manually set it to high performance (Radeon instead of Intel HD). When I browse for an exe, nothing happens - the list stays empty. So maybe I can edit some .cfg or .ini file instead? UPDATE: This is not my screenshot - I actually found it on the net - but the list with iexplore.exe in it is what I meant. When I click 'Browse' and choose any exe (Portal2.exe, for instance), nothing happens. The list remains empty, so I cannot set the mode for Portal2.exe.

    Read the article

  • Hiera + Puppet classes

    - by Amadan
    I'm trying to figure out Puppet (3.0) and how it relates to the built-in Hiera. So this is what I tried - an extremely simple example (I'll make a more complex hierarchy when I manage to get the simple one working):

        # /etc/puppet/hiera.yaml
        :backends:
          - yaml
        :hierarchy:
          - common
        :yaml:
          :datadir: /etc/puppet/hieradata

        # /etc/puppet/hieradata/common.yaml
        test::param: value

        # /etc/puppet/modules/test/manifests/init.pp
        class test ($param) {
          notice($param)
        }

        # /etc/puppet/manifests/site.pp
        include test

    If I apply it directly, it's fine:

        $ puppet apply /etc/puppet/manifests/site.pp
        Scope(Class[Test]): value

    If I go through the puppet master, it's not fine:

        $ puppet agent --test
        Could not retrieve catalog from remote server: Error 400 on SERVER:
        Must pass param to Class[Test] at /etc/puppet/manifests/site.pp:1 on node <nodename>

    What am I missing? EDIT: I just left the office, but a thought struck me: I should probably restart the puppet master so it can see the new hiera.yaml. I'll try that on Monday; in the meantime, if anyone spots some other problem, I'd appreciate it :)
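
    For the record, the restart theory is easy to test (a sketch assuming the master was installed as a system service; the service name varies by platform):

        sudo service puppetmaster restart
        # or, if the master runs under Apache/Passenger, restart the web server instead
        sudo service apache2 restart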

    Read the article

  • Design a large-scale network for an organization

    - by Essam
    I want to design a large-scale network for an organization with an HQ and two branches, and I want to use a class A subnet. If I use the network address 30.0.0.0 for the whole organization, how can it be different from another organization, company, or whatever, which is using the same address in another country? Now, I have the three locations for this organization, so I need 5 subnets: one for the HQ, two for branch A and branch B, one for connecting A to HQ, and one for connecting branch B with HQ, since I will use a central DHCP server at the HQ. Is that number of subnets right? And is it advisable to use class A or class B for this organization in terms of addresses that will be wasted (let's say it is a university with two branches in two different states)?
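
    One illustrative allocation (purely an example, not a recommendation - any five non-overlapping subnets work; note also that using private 10.0.0.0/8 space behind NAT is the usual way to avoid clashing with someone else using the same range):

        30.0.0.0/16      HQ LAN
        30.1.0.0/16      Branch A LAN
        30.2.0.0/16      Branch B LAN
        30.255.0.0/30    HQ <-> Branch A WAN link
        30.255.0.4/30    HQ <-> Branch B WAN link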

    Read the article

  • PXE boot Ubuntu Server - corrupt packages

    - by Stu2000
    I have set up a cobbler PXE boot server and managed to get CentOS 5.8 to install fully automatically. Unfortunately, with Ubuntu 12.04-server-i386 the install stops midway with a message stating that packages are corrupt. I tried following this tip to unzip the Packages.gz file, which results in an empty Packages file. Other people suggested doing a touch command, which essentially produces the exact same thing: an empty Packages file. With that in place I get a different message, stating:

        Couldn't retrieve dists/precise/restricted/binary-i386/Packages. This may be due to a network.....

    Does anyone know how to work around this issue? Hitting continue at the original error (before applying the workaround) resulted in Ubuntu installing fine, but I need the install to run with no manual input. Any advice appreciated, Stu

    Read the article

  • pdftk issue after updating Ubuntu from 9.04 to 9.10

    - by Crazydog
    We upgraded our server from 9.04 to 9.10 the other day, and it all went well except for one rather important program. We're using pdftk to automatically generate filled-out PDF forms. On 9.04, it worked just fine. After updating to 9.10, the PDF forms are no longer filled out - they're just empty. I discovered that if I try to create an FDF file from my PDF form via pdftk, it just creates an empty FDF with no fields. On Windows, pdftk generates the FDF file just fine. Any ideas? Thanks.
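
    For reference, the round trip being described looks like this (a sketch with hypothetical file names):

        # dump the form's field data; on the broken install this FDF comes out empty
        pdftk form.pdf generate_fdf output data.fdf
        # fill the form from an FDF; this is the step producing empty forms after the upgrade
        pdftk form.pdf fill_form data.fdf output filled.pdf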

    Read the article

  • SCCM deploy from VMware XP image

    - by HannesFostie
    We recently set up the latest version of SCCM, and I managed to capture a .wim image from a virtual machine (build and capture task sequence). I want to use this .wim file to deploy Windows XP to different hardware, and therefore need to add device drivers to the task sequence. I created a driver package per laptop type and deployed for the first time. However, I am getting a BSOD (0x0000007B), which leads me to believe there's a problem with the storage drivers. After adjusting the task sequence to try to point to the mass storage drivers (which are applied at F6, I suppose), I do not get a list of compatible drivers - the list is empty. I looked around and found some issues regarding hdc class drivers that are not recognized as mass storage drivers. The workaround suggested changing the INF file to make the driver a SCSIAdapter class driver and importing it again, but to no avail. The list remains empty. Any help is much appreciated.

    Read the article

  • Calling the LWRP from the Exception Handler

    - by Sarah Haskins
    Is it possible to call out to a Provider (LWRP) from a Chef Exception Handler? I think my Provider is out of scope, but I don't know if what I am trying to do is possible, or even advisable. Here is my provider code (cookbooks/config/provider/signal.rb):

        action :failure do
          Chef::Log.info("Yeah success")
        end

    Here is my exception handler code (exception_handler/handlers/exceptionHandler.rb):

        require 'chef/handler'

        config_signal "signal" do
          action :nothing
        end

        class Chef
          class Handler
            class LogCollector < Chef::Handler
              notifies :failure, resources(:config_signal => signal)
            end
          end
        end

    Also, if anyone has a good recommendation for general reading about scope in the context of Chef, I'd appreciate it.

    Read the article

  • SCP command clarification

    - by david.colais
    I'm using the scp command to pull some files from a remote server, and one variation of the command is not working. I have two files named one.xml and two.xml on the remote server, and I can pull both into the current directory using the following commands:

        scp [email protected]:/student/class/Intermediate/one.xml .
        scp [email protected]:/student/class/Intermediate/two.xml .

    The above commands work fine, but if I use a wildcard to pull all the xml files in a single shot, as shown below, it returns scp: No match.

        scp [email protected]:/student/class/Intermediate/*.xml .

    Why does it work when I pull the files individually but not when I pull them using the wildcard?
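
    A hedged guess at the cause: a csh-style local shell expands *.xml against the local directory before scp ever runs, and prints "No match" when nothing matches locally. If so, quoting the remote path so the wildcard is expanded on the server side should fix it:

        scp '[email protected]:/student/class/Intermediate/*.xml' .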

    Read the article

  • Data Protector red tapes

    - by Caesar
    I am using HP Data Protector A.06.11 in my organization with an HP EML E-Series library with 4 drives using LTO-4 tapes, and I am having some problems. Yesterday I put 5 new tapes in the robot and formatted them. At that point the robot had just those 5 empty tapes with free space (all the rest of the tapes are red, or protected). This morning, after the night (1 backup ran during the night), 2 of the new tapes are red. Their properties are:

        Writes     : 2
        Overwrites : 1
        Errors     : 9

    I formatted one of them and checked, for each drive, whether the tape turns red; none of the drives did it. In the main pool properties, under media condition, I have:

        Valid for          : 36 (months)
        Maximum overwrites : 250

    Read the article

  • curl can't verify cert using capath, but can with cacert option

    - by phylae
    I am trying to use curl to connect to a site using HTTPS, but curl is failing to verify the SSL cert:

        $ curl --verbose --capath ./certs/ --head https://example.com/
        * About to connect() to example.com port 443 (#0)
        *   Trying 1.1.1.1... connected
        * Connected to example.com (1.1.1.1) port 443 (#0)
        * successfully set certificate verify locations:
        *   CAfile: none
            CApath: ./certs/
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS alert, Server hello (2):
        * SSL certificate problem, verify that the CA cert is OK. Details:
          error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        * Closing connection #0
        curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
        error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        More details here: http://curl.haxx.se/docs/sslcerts.html

        curl performs SSL certificate verification by default, using a "bundle" of
        Certificate Authority (CA) public keys (CA certs). If the default bundle file
        isn't adequate, you can specify an alternate file using the --cacert option.
        If this HTTPS server uses a certificate signed by a CA represented in the
        bundle, the certificate verification probably failed due to a problem with
        the certificate (it might be expired, or the name might not match the domain
        name in the URL). If you'd like to turn off curl's verification of the
        certificate, use the -k (or --insecure) option.

    I know about the -k option, but I do actually want to verify the cert. The certs directory has been properly hashed with c_rehash . and it contains:

        - A Verisign intermediate cert
        - Two self-signed certs

    The above site should be verified with the Verisign intermediate cert. When I use the --cacert option instead (and point directly to the Verisign cert), curl is able to verify the SSL cert:

        $ curl --verbose --cacert ./certs/verisign-intermediate-ca.crt --head https://example.com/
        * About to connect() to example.com port 443 (#0)
        *   Trying 1.1.1.1... connected
        * Connected to example.com (1.1.1.1) port 443 (#0)
        * successfully set certificate verify locations:
        *   CAfile: ./certs/verisign-intermediate-ca.crt
            CApath: /etc/ssl/certs
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSL connection using RC4-SHA
        * Server certificate:
        *   subject: C=US; ST=State; L=City; O=Company; OU=ou1; CN=example.com
        *   start date: 2011-04-17 00:00:00 GMT
        *   expire date: 2012-04-15 23:59:59 GMT
        *   common name: example.com (matched)
        *   issuer: C=US; O=VeriSign, Inc.; OU=VeriSign Trust Network; OU=Terms of use at https://www.verisign.com/rpa (c)10; CN=VeriSign Class 3 Secure Server CA - G3
        * SSL certificate verify ok.
        > HEAD / HTTP/1.1
        > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15
        > Host: example.com
        > Accept: */*
        >
        < HTTP/1.1 404 Not Found
        HTTP/1.1 404 Not Found
        < Cache-Control: must-revalidate,no-cache,no-store
        Cache-Control: must-revalidate,no-cache,no-store
        < Content-Type: text/html;charset=ISO-8859-1
        Content-Type: text/html;charset=ISO-8859-1
        < Content-Length: 1267
        Content-Length: 1267
        < Server: Jetty(7.2.2.v20101205)
        Server: Jetty(7.2.2.v20101205)
        <
        * Connection #0 to host example.com left intact
        * Closing connection #0
        * SSLv3, TLS alert, Client hello (1):

    In addition, if I try hitting one of the sites using a self-signed cert and the --capath option, it also works. (Let me know if I should post an example of that.) This implies that curl is finding the cert directory and that it is properly hashed. Finally, I am able to verify the SSL cert with openssl, using its -CApath option:

        $ openssl s_client -CApath ./certs/ -connect example.com:443
        CONNECTED(00000003)
        depth=3 /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
        verify return:1
        depth=2 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
        verify return:1
        depth=1 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
        verify return:1
        depth=0 /C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com
        verify return:1
        ---
        Certificate chain
         0 s:/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com
           i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        <cert removed>
        -----END CERTIFICATE-----
        subject=/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com
        issuer=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 1563 bytes and written 435 bytes
        ---
        New, TLSv1/SSLv3, Cipher is RC4-SHA
        Server public key is 2048 bit
        Secure Renegotiation IS NOT supported
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : RC4-SHA
            Session-ID: D65C4C6D52E183BF1E7543DA6D6A74EDD7D6E98EB7BD4D48450885188B127717
            Session-ID-ctx:
            Master-Key: 253D4A3477FDED5FD1353D16C1F65CFCBFD78276B6DA1A078F19A51E9F79F7DAB4C7C98E5B8F308FC89C777519C887E2
            Key-Arg   : None
            Start Time: 1303258052
            Timeout   : 300 (sec)
            Verify return code: 0 (ok)
        ---
        QUIT
        DONE

    How can I get curl to verify this cert using the --capath option?
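
    One sanity check worth running (a hedged sketch assuming the OpenSSL hash-directory layout that --capath relies on): each CA cert in the directory should be reachable through a symlink named after its subject hash.

        # print the hash that will be looked up for this CA
        openssl x509 -noout -hash -in ./certs/verisign-intermediate-ca.crt
        # the directory should contain a matching link, e.g. 1a2b3c4d.0
        ls -l ./certs/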

    Read the article

  • Image manipulation filter needed: unobtrusively hide an area by blurring or smudging?

    - by index
    I would like to hide an empty area of a panorama stitched with hugin (using the GIMP) - hide in the sense of blending it in unobtrusively, i.e. filling the area with the average color of the surroundings and blurring it, or manually smudging the surroundings into the empty area. Is there a filter/plug-in that automatically smudges/blurs the edges into the area? I'm not looking for seam carving. Thanks.

    Read the article

  • Wine not finding some files

    - by Levans
    I'm having strange issues with Wine: if I look at C:\windows\system32\drivers\ in the Wine explorer, the directory looks empty, while the directory ~/.wine/drive_c/windows/system32/drivers is not. Plus, with the H: drive mapped to my home directory, I can look at H:\.wine\drive_c\windows\system32\drivers and it is not empty - the files are there! So it seems Wine has the rights to access these files. Why, then, don't they appear on the C: drive? Some of my programs need them. I'm using Gentoo Linux, and Wine is version 1.7.0 compiled with these USE flags (from eix):

        X alsa cups fontconfig gecko jpeg lcms ldap mono mp3 ncurses nls openal opengl perl png prelink run-exes ssl threads truetype udisks xcomposite xinerama xml -capi -custom-cflags -dos -gphoto2 -gsm -gstreamer -odbc -opencl -osmesa -oss -pulseaudio -samba -scanner -selinux -test -v4l ABI_MIPS="-n32 -n64 -o32" ABI_X86="32 64 -x32" ELIBC="glibc"

    EDIT: I just updated to Wine 1.7.4 and nothing changed.

    Read the article

  • ActiveDirectory - LDAP query for objectCategory unexpected results

    - by FinalizedFrustration
    AD is at the 2003 functional level; some of our DCs run Windows Server 2003, some 2008, some 2008 R2. When using the following query:

        (objectCategory=user)

    I do not expect to see any results where the objectCategory attribute is equal to 'CN=Person,CN=Schema,CN=Configuration,DC=Contoso'; I expect only objects where the objectCategory attribute is equal to 'CN=User,CN=Schema,CN=Configuration,DC=Contoso'. However, the query does indeed return all objects with the objectCategory attribute equal to 'CN=Person,CN=Schema,CN=Configuration,DC=Contoso'. My question then is this: why do I see the search results that I do? Does AD actively translate queries that include (objectCategory=user) to (objectCategory=Person)? I have looked at the schema definitions for both the Person and the User class, but I cannot see any reason for the query results as I am experiencing them. I know that the User class is a subclass of the organizationalPerson class, which is a subclass of Person, but I can't see an attribute value that would explain this translation.
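
    For what it's worth, this behavior is consistent with how AD resolves objectCategory shorthand: when the value is a class name rather than a DN, the server expands it to that class's defaultObjectCategory, which for the user class is the Person category. A quick comparison from the command line (a sketch; the DC hostname is a placeholder and authentication options are omitted):

        ldapsearch -x -H ldap://dc1.contoso.local -b "dc=Contoso" "(objectCategory=user)" dn
        ldapsearch -x -H ldap://dc1.contoso.local -b "dc=Contoso" "(objectCategory=person)" dn
        # if the shorthand is being expanded, both commands return the same DNs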

    Read the article

  • How to figure out which directory is web server root?

    - by matt
    I want to view websites hosted on my Mac when running Windows in VMware Fusion. I have an entry in the Windows hosts file to enable the routing:

        # IP of my Mac    domain I use on the VM to access it
        192.168.1.70      mymac

    However, it resolves to an empty directory, as a 404 is generated. The access log on my Mac shows that everything is OK access-wise. Firefox in VMware reports the following response header:

        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1

    Any ideas how I can figure out which directory is being served? I am lost in a maze of twisty httpd.conf passages. localhost on my Mac resolves to my ~/Sites directory; 192.168.1.70 resolves to the same empty directory/404. Thanks.
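
    Apache can report this directly (a hedged sketch assuming the stock Apache bundled with Mac OS X; the flag itself works on any httpd build):

        # dump the parsed virtual hosts and the config file/line each came from
        apachectl -S
        # then look up the DocumentRoot of whichever vhost answers 192.168.1.70
        grep -ri documentroot /etc/apache2/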

    Read the article

  • After adding data files to a file group, is there a way to distribute the data into the new files?

    - by Blootac
    I have a database with a single file group containing a single data file, and I've added 7 data files to this file group. Is there a way to rebalance the data over the 8 data files other than by telling SQL Server to empty the original? If this is the only way, is it possible to afterwards allow SQL Server to start writing to that file again? MSDN says that once it's empty, it's marked so that no new data will be written to it. What I'm aiming for is 8 equally balanced data files. I'm running SQL Server 2005 Standard Edition. Thanks
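
    For reference, the "empty the original" operation referred to is DBCC SHRINKFILE with the EMPTYFILE option (a hedged sketch; MyDb and MyDb_Data1 are hypothetical database and logical file names, and sqlcmd connection options are omitted):

        sqlcmd -Q "USE MyDb; DBCC SHRINKFILE (MyDb_Data1, EMPTYFILE);"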

    Read the article

  • nginx conditional Accept header

    - by manu_v
    Some mobile devices send the following incorrect requests to our servers:

        GET / HTTP/1.0
        Accept:
        User-Agent: xxx

    The empty Accept header causes our Ruby on Rails server to throw back a 500 error. In Apache, the following directive allows us to rewrite the header before sending it on to the RoR application server, in order to cope with the broken devices:

        RequestHeader edit Accept ^$ "*/*" early

    We're currently setting up nginx, but achieving the same workaround is proving difficult. We are able to set:

        proxy_set_header Accept */*;

    However, this seems to have to be done unconditionally. Whenever we try:

        if ($http_accept !~ ".") {
            proxy_set_header Accept */*;
        }

    it complains with the message:

        "proxy_set_header" directive is not allowed here

    So, using nginx, how can we set the HTTP Accept header to */* when it is empty, before sending the request to the application server?
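
    One approach that should work (a hedged sketch, untested against this exact setup): nginx's map directive, which lives at the http level, can compute a fallback value that proxy_set_header is then free to use in any server or location block.

        # http context
        map $http_accept $effective_accept {
            default  $http_accept;
            ""       "*/*";      # substitute */* when the client sent an empty Accept
        }

        # server/location context
        proxy_set_header Accept $effective_accept;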

    Read the article

  • Does Win 7 still require copying all files over before burning to a DVD-R or BD-R?

    - by Jian Lin
    It seems that Win 7 still needs to copy all files over to a staging folder before it burns them to a DVD-R or BD-R? Since XP or Vista, Windows has always copied everything to a temporary folder before burning to an empty DVD-R: if you just want to burn a 4GB file to an empty DVD-R, it will first make a copy of that file and then burn it, instead of just burning it without making a copy first. And now on Win 7, it seems to still be the case. Most other 3rd-party burning tools won't make an extra copy of the files first... Win 7 is the exception. Is there a way around it (to avoid copying 25GB or 50GB of data before burning)?

    Read the article

  • JFFS2 poor mount performance

    - by Marcin Polkowski
    I run multiple ARM boards with Debian Linux installed. Each board is equipped with 512 MB of NAND memory. I've observed that after ~3 months of continuous running, boot time increased significantly - it now takes over 3 minutes to mount the filesystem (JFFS2). The system was using about 35% of the available storage, so I removed unnecessary files (getting down to ~18%), but this didn't change anything. Then I realized that my software produces directories that are left empty, so I removed ~500 empty and unnecessary dirs. This didn't help either. After the system starts, I see the JFFS2 garbage collector (jffs2_gcd_mtd4) running and occupying over 90% of the CPU. Now my question: is there a way to "optimize" a JFFS2 filesystem for better performance - faster booting (my system has a limited time to boot up)? It would be great if this optimization could be done remotely - I have no physical access to the boards.
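
    One avenue worth investigating (a hedged suggestion - and rebuilding the image is admittedly not remote-friendly): JFFS2's slow mounts typically come from scanning the whole flash at mount time, and summary nodes exist to let the mount skip that scan. A sketch of building a summarized image, assuming a 128 KiB erase block:

        mkfs.jffs2 -r rootfs/ -e 128KiB -o rootfs.jffs2
        # sumtool ships with mtd-utils; the kernel needs CONFIG_JFFS2_SUMMARY=y
        sumtool -e 128KiB -i rootfs.jffs2 -o rootfs-summed.jffs2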

    Read the article

  • To run Linux (Ubuntu) on Windows 7, is using Virtual PC one of the best ways?

    - by Jian Lin
    I need to try Linux (Ubuntu) and feel hesitant to install Ubuntu on top of a Win 7 machine to dual boot (I might need to use Win 7 and Ubuntu at the same time). Is creating a Virtual PC on Win 7 and then installing the latest Ubuntu on that Virtual PC one of the better options? My thinking is that I can create a Virtual PC with an empty virtual hard disk (VHD) of, say, 30GB, and then put in the Ubuntu DVD-R or CD-R to install Ubuntu onto that empty hard disk. Update: for some reason, the first time the Ubuntu 10.04 installation CD-R boots up, it asks for the language and offers "Install Ubuntu", but then the screen shows vertical green bars and the VPC just closes. The 2nd or 3rd time it boots up, there is no asking for the language or "Install Ubuntu" - the VPC just shuts down, sometimes with vertical green bars. I even created another new hard drive, and the same thing happened. I created VPC 02: same thing. I created VPC 03 with a fixed hard drive size of 60GB, and the same thing happened.

    Read the article
