Search Results

Search found 59449 results on 2378 pages for 'disk error'.

  • How to resolve "svn: Can't find a temporary directory: Internal error"?

    - by HorusKol
    I have already googled the message, and I have plenty of disk space available on the SVN server (it's at about 4% usage of 150 GB). I have noticed that when I try echo $TMPDIR at the command prompt on the SVN server I get nothing. What makes this a little confusing is that (as far as I've tested) I only get this message from one location when I do an svn diff - the error does not come up when I try from three other computers (one of which is testing against the exact same repository; the other two use different repositories on the same SVN server). About the only difference I can see is that the broken working copy connects to the server by IP address, whereas all the others use a server name (although this resolves over DNS to the same IP address). I'm hoping that I don't have to scratch the broken working copy and check out a new one - unfortunately, this is a legacy project and not all changes have been properly revisioned.
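    A quick way to test the temporary-directory theory on the broken client (a sketch, assuming svn honours $TMPDIR the way most APR-based tools do) is to point the command at a directory that is known to be writable:

        # check what the broken client currently sees
        echo "TMPDIR=$TMPDIR"
        ls -ld /tmp                 # should be drwxrwxrwt

        # retry the diff with an explicit temporary directory
        TMPDIR=/tmp svn diff

    If that makes the error go away, exporting TMPDIR in the shell profile (or fixing the permissions on /tmp) should make the fix stick.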

  • Installation doesn't detect existing partitions

    - by retrac1324
    I am trying to install Ubuntu 11.10 in a dual boot with my existing Windows 7, but the installer does not detect any existing partitions. I have tried resetting my BCD using EasyBCD and doing fixmbr from the Windows startup disc. A while ago I had to use TestDisk to recover my partition table, so this might be the cause, but I have installed Ubuntu and Windows many times before with no problems. fdisk -l output:

        Disk /dev/sda: 640.1 GB, 640135028736 bytes
        255 heads, 63 sectors/track, 77825 cylinders, total 1250263728 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x360555e5

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048  1250274689   625136321    7  HPFS/NTFS/exFAT

        Disk /dev/sdf: 7803 MB, 7803174912 bytes
        255 heads, 63 sectors/track, 948 cylinders, total 15240576 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x6f795a8d

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdf1   *          63    15240575     7620256+   c  W95 FAT32 (LBA)

  • GNOME 3 freezes on login on a Samsung RV509

    - by Noufal
    I have a Samsung NP-RV509 A0FIN and I tried to install GNU/Linux with GNOME 3.2 on it. I tried Fedora 16, Ubuntu 11.10 and Linux Mint 12 RC, but with no success: all of them freeze upon login into GNOME Shell. I think it is a problem with the graphics driver, so I tried the xorg-edgers PPA on my last installation (Linux Mint). I also tried various Intel graphics packages listed in the Synaptic package manager, but again no success. My device configuration, as reported by Windows 7, is as follows:

        Processor:         Intel(R) Pentium(R) CPU P6200 @ 2.13GHz (subscore 5.6, base score 4.6)
        Memory (RAM):      4.00 GB (subscore 7.2)
        Graphics:          Intel(R) HD Graphics (subscore 4.6)
        Gaming graphics:   1562 MB total available graphics memory (subscore 5.2)
        Primary hard disk: 12 GB free (50 GB total) (subscore 5.9)
        OS:                Windows 7 Ultimate, 32-bit, 2 processor cores, 64-bit capable

        System:   SAMSUNG ELECTRONICS CO., LTD., model RV409/RV509/RV709
        Storage:  418 GB total; C: 12 GB free of 50 GB; D: CD/DVD;
                  E: 526 MB free of 191 GB; F: 101 GB free of 177 GB
        Graphics: Intel(R) HD Graphics; 1562 MB total available graphics memory
                  (64 MB dedicated, 0 MB dedicated system, 1498 MB shared);
                  driver version 8.15.10.2202; primary resolution 1366x768; DirectX 10
        Network:  Realtek PCIe GBE Family Controller; Broadcom 802.11n Network Adapter;
                  Microsoft Virtual WiFi Miniport Adapter

    Any help is appreciated, and thanks in advance.

  • Request Limit Length Limits for IIS's requestFiltering Module

    - by Rick Strahl
    Today I updated my CodePaste.net site to MVC 3 and pushed an update to the site. The update of MVC went pretty smoothly, as did most of the update process to the live site. Short of missing a web.config change in the /views folder that caused blank pages on the server, the process was relatively painless. However, one issue that kicked my ass for about an hour - and not for the first time - was a problem with my OpenId authentication using DotNetOpenAuth. I tested the site operation fairly extensively locally and everything worked no problem, but on the server the OpenId returns resulted in a 404 response from IIS for a nice friendly OpenId return URL like this:

        http://codepaste.net/Account/OpenIdLogon?dnoa.userSuppliedIdentifier=http%3A%2F%2Frstrahl.myopenid.com%2F&dnoa.return_to_sig_handle=%7B634239223364590000%7D%7BjbHzkg%3D%3D%7D&dnoa.return_to_sig=7%2BcGhp7UUkcV2B8W29ibIDnZuoGoqzyS%2F%2FbF%2FhhYscgWzjg%2BB%2Fj10ZpNdBkUCu86dkTL6f4OK2zY5qHhCnJ2Dw%3D%3D&openid.assoc_handle=%7BHMAC-SHA256%7D%7B4cca49b2%7D%7BMVGByQ%3D%3D%7D&openid.claimed_id=http%3A%2F%2Frstrahl.myopenid.com%2F&openid.identity=http%3A%2F%2Frstrahl.myopenid.com%2F&openid.mode=id_res&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.ns.sreg=http%3A%2F%2Fopenid.net%2Fextensions%2Fsreg%2F1.1&openid.op_endpoint=http%3A%2F%2Fwww.myopenid.com%2Fserver&openid.response_nonce=2010-10-29T04%3A12%3A53Zn5F4r5&openid.return_to=http%3A%2F%2Fcodepaste.net%2FAccount%2FOpenIdLogon%3Fdnoa.userSuppliedIdentifier%3Dhttp%253A%252F%252Frstrahl.myopenid.com%252F%26dnoa.return_to_sig_handle%3D%257B634239223364590000%257D%257BjbHzkg%253D%253D%257D%26dnoa.return_to_sig%3D7%252BcGhp7UUkcV2B8W29ibIDnZuoGoqzyS%252F%252FbF%252FhhYscgWzjg%252BB%252Fj10ZpNdBkUCu86dkTL6f4OK2zY5qHhCnJ2Dw%253D%253D&openid.sig=h1GCSBTDAn1on98sLA6cti%2Bj1M6RffNerdVEI80mnYE%3D&openid.signed=assoc_handle%2Cclaimed_id%2Cidentity%2Cmode%2Cns%2Cns.sreg%2Cop_endpoint%2Cresponse_nonce%2Creturn_to%2Csigned%2Csreg.email%2Csreg.fullname&openid.sreg.email=rstrahl%40host.com&openid.sreg.fullname=Rick+Strahl

    A 404 of course isn't terribly helpful - normally a 404 is a resource-not-found error, but the resource is definitely there. So how the heck do you figure out what's wrong?

    If you're just interested in the solution, here's the short version: IIS by default allows only for a 1024 byte query string, which is obviously exceeded by the above. The setting is controlled by the Request Filtering module in IIS 7 and later, which can be configured in ApplicationHost.config (in %windir%\System32\inetsrv\config). To set the value, configure the requestLimits key like so:

        <configuration>
          <security>
            <requestFiltering>
              <requestLimits maxQueryString="2048">
              </requestLimits>
            </requestFiltering>
          </security>
        </configuration>

    This fixed me right up and made the requests work. How do you find out about problems like this? Ah yes, the troubles of an administrator. Read on and I'll take you through a quick review of how I tracked this down.

    Finding the Problem

    The issue with the error returned is that IIS returns a 404 Resource not found error and doesn't provide much information about it. If you're lucky enough to be able to run your site from localhost, IIS is actually very helpful and gives you the right information immediately in a nicely detailed error page. The bottom of the page actually describes exactly what needs to be fixed. One problem with this easy way to find an error: you HAVE TO run localhost.
    On my server, which has about 10 domains running, localhost doesn't point at the particular site I had problems with, so I didn't get the luxury of this nice error page.

    Using Failed Request Tracing to retrieve Error Info

    The first place I go with IIS errors is to turn on Failed Request Tracing in IIS to get more error information. If you have access to the server to make a configuration change, you can enable Failed Request Tracing like this:

    1. Find the Failed Request Tracing Rules in the IIS Service Manager.
    2. Select the option and then Edit Site Tracing to enable tracing.
    3. Add a rule for * (all content) and specify status codes from 100-999 to capture all errors. If you know exactly what error you're looking for, it might help to specify it exactly to keep the number of errors down.
    4. Run your request and let it fail.

    IIS will throw error log files into a folder like C:\inetpub\logs\FailedReqLogFiles\W3SVC5, where the trailing 5 is the instance ID of the site. These files are XML, but they include an XSL stylesheet that provides some decent formatting. In this case it pointed me straight at the offending module: OK, it's the RequestFilteringModule. Request Filtering is built into IIS 7 and later and configured in ApplicationHost.config. This module defines a few basic rules about what paths and extensions are allowed in requests and, among other things, how long a query string is allowed to be. Most of these settings are pretty sensible, but the query string value can easily become a problem, especially if you're dealing with OpenId, since these return URLs are quite extensive.

    Debugging failed requests is never fun, but IIS 7 and forward at least provides us the tools that can help point us in the right direction. The error message in the FRT report isn't as nice as the IIS error message, but it at least points at the offending module, which gave me the clue I needed to look at request restrictions in ApplicationHost.config. This would still be a stretch if you're not intimately familiar with it, but I think with some Google searches it would be easy to track this down in a few tries…

    Hope this was useful to some of you. It's useful to me to put this out as a reminder - I've run into this issue before myself and totally forgot. Next time I've got it, right?

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, Security
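    For completeness, the same limit can usually be raised per-application in web.config instead of editing ApplicationHost.config (handy on shared hosts where you cannot touch the server-wide file) - a sketch, assuming the requestFiltering section is not locked at the server level:

        <configuration>
          <system.webServer>
            <security>
              <requestFiltering>
                <requestLimits maxQueryString="2048" />
              </requestFiltering>
            </security>
          </system.webServer>
        </configuration>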

  • Dual boot XP/XP, now Win7/XP - getting no XP

    - by ped
    I have a laptop with two drives, each with a separate XP install: one barebones for music production, the other a "normal" XP with Office etc. (Unfortunately the BIOS won't give a boot disk choice.) Normally I would be presented with two Windows XPs on booting. Selecting the second one would get me into the "normal" installation on disk 1 (C:). Selecting the first in the boot order would give me D: (disk 2) with the barebones XP. However, I installed Windows 7 Home onto disk 1 (C:), and after that there were no dual boot options anymore, even though I installed DualBoot Pro and added the Windows XP disk D:. The options now show up, but selecting Windows XP just turns into a reboot back to where I started. Any help would be much appreciated.
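    One repair that often brings an XP entry back under the Windows 7 boot manager is recreating the legacy {ntldr} entry by hand from an elevated command prompt - a sketch, assuming the XP boot files live on D: (adjust the drive letter to wherever ntldr, ntdetect.com and boot.ini actually sit):

        bcdedit /create {ntldr} /d "Windows XP"
        bcdedit /set {ntldr} device partition=D:
        bcdedit /set {ntldr} path \ntldr
        bcdedit /displayorder {ntldr} /addlast

    If selecting the entry still just reboots, the usual cause is that those three boot files are missing from the partition the entry points at, or that boot.ini references the wrong disk/partition.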

  • Reset MAAS after losing Juju configuration?

    - by Azendale
    I managed to delete my Juju client configuration without running juju destroy-environment first, leaving my MAAS in a state where I could not deploy to it. I would get the following (conflicting) output:

        $ juju bootstrap
        ERROR environment is already bootstrapped
        $ juju status
        ERROR Unable to connect to environment "".
        Please check your credentials or use 'juju bootstrap' to create a new environment.
        Error details: no instances found

    So I tried running juju destroy-environment with the new config, to see if it would clean up the old Juju environment on the MAAS system. It gave me the error "ERROR gomaasapi: got error back from server: 409 CONFLICT". I went into the MAAS GUI and stopped the leftover machines, then deleted all the nodes and had them go through the discovery and commissioning stages again, but I still got the same errors after all that! Is there a way to reset this?
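    If the machines are already released on the MAAS side, the remaining stale state is usually on the client. For juju 1.x, one heavy-handed reset that has worked for others (a sketch; file locations may differ on your version) is:

        juju destroy-environment --force    # ignore errors if it cannot connect
        rm ~/.juju/environments/*.jenv      # cached environment/bootstrap state
        juju bootstrap

    The .jenv removal is what clears the client-side record of the old environment.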

  • Why can't I resize some virtual thick disks in the VMware vSphere Client?

    - by Paul-Jan
    Let me start off by mentioning I am really, really new to this whole VMware ESX/vSphere thing, so apologies if this is a FAQ or has been asked a million times before with better wording. Also, please correct me if I'm using the wrong terms all over the place. We just got this brand new VMware ESX server, and we started importing virtual machines from around the office. Some of them were originally based on VMware, others were converted from MS Virtual Server. One of these machines needs a bigger disk, but in the vSphere Client the disk size spinedit is disabled. The disk type is thick. However, for other machines with the same disk type, the spinedit is enabled and I can happily resize their disks. So here is the question: why can't I resize this disk in particular? All machines are freshly converted, so I assume there are no snapshots yet, which I read would stop a resize from happening (again, correct me if I'm wrong).
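    The usual culprit for a greyed-out size field is a snapshot hanging off the disk, including leftovers created during conversion, since a disk with snapshots cannot be grown from the client. Check Snapshot Manager for that VM first and delete/consolidate anything there. Failing that, the disk can be grown from the ESX console with vmkfstools - a sketch; the path and size are illustrative, and the VM should be powered off:

        vmkfstools -X 60G /vmfs/volumes/datastore1/myvm/myvm.vmdk

    After growing the disk you still need to extend the partition inside the guest as usual.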

  • Built-in card-reader doesn't work. HP Compaq nx6325 notebook

    - by user10940
    I have an HP Compaq nx6325 notebook with a built-in card reader (SD, MS/Pro, MMC, SM, XD), and Ubuntu (10.10) doesn't see it. I've tried to install it manually, with these steps (and with this tifmxx driver), but it doesn't work. The compile log:

        $ echo /home/tvera/downloads/cr_install
        /home/tvera/downloads/cr_install
        $ make -C /lib/modules/2.6.35-25-generic/build M=/home/tvera/downloads/cr_install
        make[1]: Entering directory `/usr/src/linux-headers-2.6.35-25-generic'
        CC [M] /home/tvera/downloads/cr_install/tifm_core.o
        In file included from /home/tvera/downloads/cr_install/tifm_core.c:12:
        /home/tvera/downloads/cr_install/linux/tifm.h:128: error: field ‘cdev’ has incomplete type
        /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_uevent’:
        /home/tvera/downloads/cr_install/tifm_core.c:69: warning: passing argument 1 of ‘add_uevent_var’ from incompatible pointer type
        include/linux/kobject.h:244: note: expected ‘struct kobj_uevent_env *’ but argument is of type ‘char **’
        /home/tvera/downloads/cr_install/tifm_core.c:69: warning: passing argument 2 of ‘add_uevent_var’ makes pointer from integer without a cast
        include/linux/kobject.h:244: note: expected ‘const char *’ but argument is of type ‘int’
        /home/tvera/downloads/cr_install/tifm_core.c: At top level:
        /home/tvera/downloads/cr_install/tifm_core.c:161: warning: initialization from incompatible pointer type
        /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_free’:
        /home/tvera/downloads/cr_install/tifm_core.c:170: warning: type defaults to ‘int’ in declaration of ‘__mptr’
        /home/tvera/downloads/cr_install/tifm_core.c:170: warning: initialization from incompatible pointer type
        /home/tvera/downloads/cr_install/tifm_core.c: At top level:
        /home/tvera/downloads/cr_install/tifm_core.c:177: error: unknown field ‘release’ specified in initializer
        /home/tvera/downloads/cr_install/tifm_core.c:178: warning: initialization from incompatible pointer type
        /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_alloc_adapter’:
        /home/tvera/downloads/cr_install/tifm_core.c:190: error: implicit declaration of function ‘class_device_initialize’
        /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_add_adapter’:
        /home/tvera/downloads/cr_install/tifm_core.c:211: error: ‘BUS_ID_SIZE’ undeclared (first use in this function)
        /home/tvera/downloads/cr_install/tifm_core.c:211: error: (Each undeclared identifier is reported only once
        /home/tvera/downloads/cr_install/tifm_core.c:211: error: for each function it appears in.)
        /home/tvera/downloads/cr_install/tifm_core.c:212: error: implicit declaration of function ‘class_device_add’
        /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_remove_adapter’:
        /home/tvera/downloads/cr_install/tifm_core.c:237: error: implicit declaration of function ‘class_device_del’
        /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_free_adapter’:
        /home/tvera/downloads/cr_install/tifm_core.c:243: error: implicit declaration of function ‘class_device_put’
        /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_alloc_device’:
        /home/tvera/downloads/cr_install/tifm_core.c:275: error: ‘struct device’ has no member named ‘bus_id’
        /home/tvera/downloads/cr_install/tifm_core.c:275: error: ‘BUS_ID_SIZE’ undeclared (first use in this function)
        make[2]: *** [/home/tvera/downloads/cr_install/tifm_core.o] Error 1
        make[1]: *** [_module_/home/tvera/downloads/cr_install] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-2.6.35-25-generic'
        make: *** [all] Error 2

    The output of lsusb:

        Bus 001 Device 005: ID 05e3:0702 Genesys Logic, Inc. USB 2.0 IDE Adapter
        Bus 003 Device 003: ID 0458:003a KYE Systems Corp. (Mouse Systems) NetScroll+ Mini Traveler
        Bus 003 Device 002: ID 08ff:2580 AuthenTec, Inc. AES2501 Fingerprint Sensor
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
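    The compile errors are a sign of the driver's age rather than of a mistake on your side: class_device_initialize/class_device_add and friends belong to a kernel API that was removed around 2.6.26, and struct device lost its bus_id field later as well, so this out-of-tree tifmxx snapshot cannot build against the 2.6.35 kernel in 10.10. The TI FlashMedia driver has been in the mainline kernel for years, so it is worth trying the stock modules before compiling anything (a sketch; these are the in-tree module names):

        sudo modprobe tifm_7xx1     # TI FlashMedia host controller
        sudo modprobe tifm_sd       # SD/MMC support on top of it
        dmesg | tail                # watch for the socket being detected

    Note that xD and SmartMedia support for these readers has historically been incomplete in the in-tree driver, so SD/MMC may work while the other slots stay silent.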

  • How to recover dpkg from corrupted downloads?

    - by rocker9455
    I just upgraded to the 10.10 RC earlier and had a few problems with graphics drivers (X didn't start), but I have remedied that now. When I run sudo apt-get install -f I get this:

        will@UbuntuBox:/mnt/slax$ sudo apt-get install -f
        [sudo] password for will:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          libmono-wcf3.0-cil openoffice.org-calc openoffice.org-core
        The following NEW packages will be installed:
          libmono-wcf3.0-cil
        The following packages will be upgraded:
          openoffice.org-calc openoffice.org-core
        2 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        16 not fully installed or removed.
        Need to get 0B/32.5MB of archives.
        After this operation, 1,929kB disk space will be freed.
        Do you want to continue [Y/n]? Y
        (Reading database ... 201565 files and directories currently installed.)
        Preparing to replace openoffice.org-calc 1:3.2.1-6ubuntu2~10.04.1 (using .../openoffice.org-calc_1%3a3.2.1-7ubuntu1_i386.deb) ...
        Unpacking replacement openoffice.org-calc ...
        xz: (stdin): Compressed data is corrupt
        dpkg-deb: subprocess <decompress> returned error exit status 1
        dpkg: error processing /var/cache/apt/archives/openoffice.org-calc_1%3a3.2.1-7ubuntu1_i386.deb (--unpack):
         short read on buffer copy for backend dpkg-deb during `./usr/lib/openoffice/basis3.2/program/libscfiltli.so'
        dpkg: regarding .../openoffice.org-core_1%3a3.2.1-7ubuntu1_i386.deb containing openoffice.org-core:
         openoffice.org-core conflicts with openoffice.org-calc (<< 1:3.2.1-7ubuntu1)
         openoffice.org-calc (version 1:3.2.1-6ubuntu2~10.04.1) is present and installed.
        dpkg: error processing /var/cache/apt/archives/openoffice.org-core_1%3a3.2.1-7ubuntu1_i386.deb (--unpack):
         conflicting packages - not installing openoffice.org-core
        Unpacking libmono-wcf3.0-cil (from .../libmono-wcf3.0-cil_2.6.7-3ubuntu1_all.deb) ...
        dpkg-deb (subprocess): data: internal gzip read error: '<fd:0>: data error'
        dpkg-deb: subprocess <decompress> returned error exit status 2
        dpkg: error processing /var/cache/apt/archives/libmono-wcf3.0-cil_2.6.7-3ubuntu1_all.deb (--unpack):
         subprocess dpkg-deb --fsys-tarfile returned error exit status 2
        Errors were encountered while processing:
         /var/cache/apt/archives/openoffice.org-calc_1%3a3.2.1-7ubuntu1_i386.deb
         /var/cache/apt/archives/openoffice.org-core_1%3a3.2.1-7ubuntu1_i386.deb
         /var/cache/apt/archives/libmono-wcf3.0-cil_2.6.7-3ubuntu1_all.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Any idea how I can get the broken packages fixed? Cheers, Will
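    Since the errors ("Compressed data is corrupt", "internal gzip read error") point at damaged files in the package cache rather than at dpkg itself, the usual recovery is to throw the corrupt downloads away and let apt fetch them again - a sketch, with the package names taken from the log above:

        sudo rm /var/cache/apt/archives/openoffice.org-calc_*.deb \
                /var/cache/apt/archives/openoffice.org-core_*.deb \
                /var/cache/apt/archives/libmono-wcf3.0-cil_*.deb
        # or simply clear the whole cache:
        sudo apt-get clean
        sudo apt-get update
        sudo apt-get install -f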

  • How to submit a sitemap when your website has partial HTTPS? - Error: "Not in Domain"

    - by Ralph N
    My website is an ecommerce site that is set up to use http for the item browsing portion, but https for things like the shopping cart, contact us, etc. (anything that has forms on it). I submitted my website a long time ago to Google Webmaster Tools as http://www.mywebsite.com. I also submitted a sitemap with about 40 links - 8 of them are https. I've noticed that for the longest time, Google Webmaster Tools was reporting that 32 out of the 40 links have been crawled. I tested all the links against my robots.txt and realized that my robots.txt was blocking the https links. Google says those links are "Not In Domain". Is there a way I'm supposed to get around this so that I can have a hybrid-SSL site? I understand the concept that one site is mywebsite.com:80 and the other is mywebsite.com:443, but I'd like to avoid submitting and maintaining 2 separate websites in Google Webmaster Tools.
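    Googlebot fetches robots.txt separately for each scheme, so https://www.mywebsite.com/robots.txt governs the https URLs independently of the http:// copy. A minimal https-side file (an illustrative sketch; keep whatever Disallow rules you actually need):

        User-agent: *
        Allow: /

    Whether the https URLs can live in the http property's sitemap is a separate question; Google has generally treated http:// and https:// as distinct sites, which is consistent with the "Not in Domain" message you are seeing.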

  • Begin the Clone Wars Have!

    - by Antony Reynolds
    Creating a New Virtual Machine from an Existing Virtual Disk

    In previous posts I described how I set up an OEL6 machine under VirtualBox that can run an 11gR2 database and FMW 11.1.1.5. That is great if you want the DB and FMW running in the same virtual image, and it has served me well for some proofs of concept and also for some testing of different JVMs. However I also wanted to run some testing of FMW with the database running on a separate physical machine. So in this post I will show how to take a VirtualBox image and create a new image based on the disks from that original image.

    What are my Options?

    There is more than one way to skin a cat, or in this case to create two separate VMs that can run on different hardware. Some of the options include:

    1. Create new virtual disk images for each new VM.
    2. Clone the existing disk images and point the new VM at the cloned images.
    3. Point the new VM at the existing snapshots.

    #1 is too much like hard work: install OEL twice, install a database again, install FMW again, run RCU again! Life is too short!

    #2 is probably the safest way of doing things. VirtualBox allows you to clone a disk image for use in a separate machine. However this of course duplicates the disk and means that it is now occupying 3 times the space: once for the original disk and twice more for the two clones I would need.

    #3 is the most space-efficient way of doing things. It does mean however that I can only run the new "cloned" images if I have access to the original image, because that is where the base snapshots reside. However this is not a problem for me as long as I remember to keep all three images together. So this is the approach we will follow.

    Snapshot, What Snapshot?

    As we are going to create new virtual machines based on existing snapshots, we need to figure out which snapshot to use. We do this by opening the "Media Manager" from within VirtualBox and moving the mouse over the snapshot images until we find the snapshots we want - the snapshot name is identified in the "Attached to:" comment. In my case I wanted the FMW installed snapshot, because that had a database configured for FMW alongside the FMW software. I made a note of the filename of that snapshot (actually I just noted the first 5 characters, as that was all that was needed to uniquely identify the snapshot file). When we create the new machines we will point them at the snapshot filename we have just checked.

    Network or NotWork?

    Because we want the two new machines to communicate with each other when hosted in different physical machines, we can't use the default NAT networking mode without a lot of hassle. But at the same time we need them to have fixed IP addresses relative to each other so that they can see each other whilst also being able to see the outside world.

    To achieve all these requirements I created two network adapters for each machine. Adapter 1 was a standard NAT mapping. This will allow each machine to get a dynamic IP address (10.0.2.15 by default) that can be used to access the external world through the VBox-provided NAT gateway. This is the same as the existing configuration. The second adapter I created as a bridged adapter. This gives the virtual machine direct access to the host network card, and by using fixed IP addresses each machine can see the other. It is important to choose fixed IP addresses that are not routable across your internal network, so you don't get any clashes with other machines on your network.
    Of course you could always get proper fixed IP addresses from your network people, but I have several people using my images, and as long as I don't have two instances of the same VM on the same network segment this is easier and avoids reconfiguring the network every time someone wants a copy of my VM. If it is available I would suggest using the 10.0.3.* network, as 10.0.2.* is the default NAT network. You can check availability by pinging 10.0.3.1 and 10.0.3.2 from your host machine. If it times out then you are probably safe to use that.

    Creating the New VMs

    Now that I had collected the data that I needed, I went ahead and created the new VMs. When asked for a "Boot Hard Disk" I used the "Choose a virtual hard disk file…" link to find the snapshot I had previously selected and set that to be the existing hard disk. I chose the previously existing SOA 11.1.1.5 install for both the new DB and FMW machines, because that snapshot had the database with the RCU completed that I wanted for my DB machine, and it had the SOA software installed which I wanted for my FMW machine.

    After the initial creation of the virtual machine, go into the network settings section and enable a second adapter, which will be bridged. Make a note of the MAC addresses (the last four digits should be sufficient) of the two adapters, so that you can later set the bridged adapter to use a fixed IP and the NAT adapter to use DHCP. We are now ready to start the VMs and reconfigure Linux.

    Reconfiguring Linux

    Because I now have two new machines I need to change their network configuration. In particular I need to change the hostname, update the hosts file and change the network settings.

    Changing the Hostname

    I renamed both hosts by running the hostname command as root:

        hostname vboxfmw.oracle.com

    I also edited /etc/sysconfig/network and set the correct hostname in there:

        HOSTNAME=vboxfmw.oracle.com

    Changing the Network Settings

    I needed to change the network configuration to give the bridged network a fixed IP address. I first explicitly set the MAC addresses of the two adapters, because the order of the virtual adapters in the VirtualBox Manager is not necessarily the same as the order of the adapters in the guest OS. So I went into the System->Preferences->Network Connections screen and explicitly set the "Device MAC address" for the two adapters. Having correctly mapped the Linux adapters to the VirtualBox adapters, I then set the bridged adapter to use fixed IP addressing rather than DHCP. There is no need for additional routing or default gateways, because we expect the two machines to be on the same LAN segment.

    Updating the Hosts File

    Having renamed the machines and reconfigured the network, I then updated the /etc/hosts file to:

    - refer to the new machine name
    - add a new line to provide an additional IP address for my server (the new fixed IP address)
    - add a new line for the fixed IP address of the other virtual machine

        10.0.3.101      vboxdb.oracle.com       vboxdb  # Added by NetworkManager
        10.0.2.15       vboxdb.oracle.com       vboxdb  # Added by NetworkManager
        10.0.3.102      vboxfmw.oracle.com      vboxfmw # Added by NetworkManager
        127.0.0.1       localhost.localdomain   localhost
        ::1     vboxdb.oracle.com       vboxdb  localhost6.localdomain6 localhost6

    To make sure everything takes effect, I restarted the server.
    Reconfiguring the Database on the DB Machine

    Because we changed the hostname, the listener and the EM console no longer start, so I need to modify the listener.ora to use the new hostname, and I also need to rebuild the EM configuration because it also relies on the hostname.

    I edited $ORACLE_HOME/network/admin/listener.ora and changed the listening address to the new hostname:

        (ADDRESS = (PROTOCOL = TCP)(HOST = vboxdb.oracle.com)(PORT = 1521))

    After changing the listener.ora I was able to start the listener using:

        lsnrctl start

    I also had to reconfigure the EM database control. I first deconfigured it using the command:

        emca -deconfig dbcontrol db -repos drop

    This drops the repository and removes any existing registered dbcontrols. I then re-configured it using the following command:

        emca -config dbcontrol db -repos create

    This creates the EM repository and then configures and starts dbcontrol. Now my database machine is ready, so I can close it down and take a snapshot.

    Disabling the Database on the FMW Machine

    I set up the database to start automatically by creating a service called "dbora". On the FMW machine I do not need the database running, so I can prevent it auto-starting by running the following command:

        chkconfig --del dbora

    Note that because I am using a snapshot, it is not a waste of disk space to have the DB installed but not used. As long as I don't run it, it won't cost me anything. I can now close the FMW machine down and take a snapshot.

    Creating a New Domain

    The FMW machine is now ready to create a new domain. When creating the domain I can point it at the second machine, which is running the database. I can potentially run these machines on two separate physical machines, as long as I have the original virtual machine available to both of the physical machines.

    Gotchas in Snapshotting

    VirtualBox does not support the concept of linked machines in a network like some virtualization technologies, so when creating a snapshot it is a good idea to shut both VMs down and then take a snapshot on both of them. This is because we want to keep the database in sync with the middleware. One way to make sure that this happens would be to place all the domain configuration files on the database server via an NFS share; this would mean that all we would need to snapshot would be the database machine, because that would hold all the state and configuration.

    The Sky's the Limit

    We have covered a simple case of having just two machines. I have a more complicated configuration in which two machines run a RAC database off the same base OS image, and two more machines run a SOA cluster based on the same OS image. Just remember which machine holds state, and what the consequences of taking a snapshot are.
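    For anyone who would rather pay the disk space for option 2 above, the clone route is scriptable too (a sketch; the disk file names are illustrative):

        # make a full, independent copy of an existing virtual disk
        VBoxManage clonehd "FMW11g.vdi" "FMW11g-clone.vdi"

    The snapshot approach taken in this post avoids that duplication, at the price of keeping the original machine's snapshot files available wherever the clones run.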

  • USB mouse does not work on boot

    - by Uku Loskit
    My problem is pretty much a duplicate of the one described in USB mouse late to load, but the solution there has not worked for me. I'm running the same OS and experiencing the exact same issue. It disappears after 10 seconds or so. Booting with the options specified in the other question did not fix it :/ Thanks in advance.

        sheepz@sheepz-desktop:~$ dmesg | egrep "hci|usb"
        [    0.188000] usbcore: registered new interface driver usbfs
        [    0.188000] usbcore: registered new interface driver hub
        [    0.188000] usbcore: registered new device driver usb
        [    0.358613] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
        [    0.358627] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
        [    0.358637] uhci_hcd: USB Universal Host Controller Interface driver
        [    0.358683] uhci_hcd 0000:00:1d.0: PCI INT A -> GSI 23 (level, low) -> IRQ 23
        [    0.358691] uhci_hcd 0000:00:1d.0: setting latency timer to 64
        [    0.358695] uhci_hcd 0000:00:1d.0: UHCI Host Controller
        [    0.358726] uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 1
        [    0.358758] uhci_hcd 0000:00:1d.0: irq 23, io base 0x0000e100
        [    0.358927] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 19 (level, low) -> IRQ 19
        [    0.358932] uhci_hcd 0000:00:1d.1: setting latency timer to 64
        [    0.358935] uhci_hcd 0000:00:1d.1: UHCI Host Controller
        [    0.358964] uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 2
        [    0.358991] uhci_hcd 0000:00:1d.1: irq 19, io base 0x0000e200
        [    0.359132] uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18
        [    0.359137] uhci_hcd 0000:00:1d.2: setting latency timer to 64
        [    0.359139] uhci_hcd 0000:00:1d.2: UHCI Host Controller
        [    0.359165] uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 3
        [    0.359193] uhci_hcd 0000:00:1d.2: irq 18, io base 0x0000e300
        [    0.359327] uhci_hcd 0000:00:1d.3: PCI INT D -> GSI 16 (level, low) -> IRQ 16
        [    0.359332] uhci_hcd 0000:00:1d.3: setting latency timer to 64
        [    0.359334] uhci_hcd 0000:00:1d.3: UHCI Host Controller
        [    0.359360] uhci_hcd 0000:00:1d.3: new USB bus registered, assigned bus number 4
        [    0.359387] uhci_hcd 0000:00:1d.3: irq 16, io base 0x0000e400
        [    0.731933] usb 1-1: new full speed USB device using uhci_hcd and address 2
        [    1.023859] usb 1-2: new full speed USB device using uhci_hcd and address 3
        [   16.136175] usb 1-2: device descriptor read/64, error -110
        [   31.352481] usb 1-2: device descriptor read/64, error -110
        [   31.568485] usb 1-2: new full speed USB device using uhci_hcd and address 4
        [   46.680794] usb 1-2: device descriptor read/64, error -110
        [   61.903555] usb 1-2: device descriptor read/64, error -110
        [   62.119671] usb 1-2: new full speed USB device using uhci_hcd and address 5
        [   72.541078] usb 1-2: device not accepting address 5, error -110
        [   72.653194] usb 1-2: new full speed USB device using uhci_hcd and address 6
        [   83.066637] usb 1-2: device not accepting address 6, error -110
        [   83.178615] usb 3-1: new low speed USB device using uhci_hcd and address 2
        [   83.562546] usbcore: registered new interface driver hiddev
        [   83.578827] input: Logitech USB-PS/2 Optical Mouse as /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input3
        [   83.579016] generic-usb 0003:046D:C01D.0001: input,hidraw0: USB HID v1.10 Mouse [Logitech USB-PS/2 Optical Mouse] on usb-0000:00:1d.2-1/input0
        [   83.579244] usbcore: registered new interface driver usbhid
        [   83.579246] usbhid: USB HID core driver
        [114025.224407] usb 3-1: USB disconnect, address 2

  • Why does Dropbox fail with a "dropboxd not found" error?

    - by Kevin
    This is a fresh install of Dropbox on a 12.04 server running on a 32-bit system. The installation does not seem to fail, but when I try to run it, I am told it can't find the file. Following the instructions on the Dropbox site, I get the following message:

        /home/kevin/.dropbox-dist/dropboxd: 10: exec: /home/kevin/.dropbox-dist/dropbox: not found

    Permissions currently being used:

        $ ls -l ~/.dropbox-dist/dropboxd
        -rwxrwxrwx 1 kevin kevin 258 Jun 13 20:40 /home/kevin/.dropbox-dist/dropboxd

    Has anyone had this problem and found a workaround?
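    On a fresh server, an "exec: ... not found" for a file that is plainly there is often the loader's way of saying it cannot run the binary at all - for example, a 64-bit dropbox binary unpacked on a 32-bit system. Two quick checks, as a sketch:

        file ~/.dropbox-dist/dropbox    # compare "ELF 32-bit"/"ELF 64-bit" against the output below
        uname -m

    If the architectures disagree, re-downloading the x86 (32-bit) tarball from Dropbox and unpacking it over ~/.dropbox-dist should clear the error.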

  • Two SATA HDDs connected using a BlacX Duet HDD Docking Station via eSATA to my laptop, second drive

    - by leeand00
    Hi, I am using a BlacX Duet HDD Docking Station to connect a 1TB WD Caviar Black SATA HDD (WD10000LSRTL) and a Hitachi SATA Deskstar (0S00163) to my G51VX (Best Buy) laptop via the eSATA port. When I plug both HDDs into the docking station, connect the docking station to my laptop and start Windows 7 (64-bit Ultimate), only the HDD in the first port of the docking station actually shows up in My Computer and Disk Management. If I swap the drives' positions I can get either one to work, but never both at the same time. I also checked the BIOS settings on the laptop, under Advanced > IDE Configuration > SATA Operation Mode, which displays:

        SATA Operation Mode: [Enhanced]
        AHCI Port0 [Hard Disk]
            Device: Hard Disk
            Vendor: ST9320421AS
            LBA Mode: Supported
            S.M.A.R.T.: Supported
        AHCI Port5 [Hard Disk]
            Device: Hard Disk
            Vendor: Hitachi HDS721010CLA332
            Size: 100.00 GB
            LBA Mode: Supported
            S.M.A.R.T.: Supported

    There should be a third drive, but I'm not certain why it is not being picked up. Additionally, before I played around with the settings in the IDE configuration, it used to display the DVD drive as well.

  • Device being used by VxVM

    - by Onur Bingul
    If you are using VxVM, you may have issues when you try to unconfigure a disk:

        root@techsupport2 # cfgadm -c unconfigure c1::dsk/c1t3d0
        cfgadm: Component system is busy, try again: failed to offline:
             Resource              Information
        ------------------    -------------------------
        /dev/dsk/c1t3d0       Device being used by VxVM

    The "cfgadm unconfigure" command fails here. The way to resolve this is to disable the disk's path from DMP control. Since there is only one path to this disk, the "-f" (force) option needs to be used:

        root@techsupport2 # vxdmpadm -f disable path=c1t3d0s2
        root@techsupport2 # vxdmpadm getsubpaths
        NAME         STATE[A]      PATH-TYPE[M] DMPNODENAME  ENCLR-NAME   CTLR   ATTRS
        ================================================================================
        c1t6d0       ENABLED(A)    -            disk_0       disk         c1     -
        c1t3d0       DISABLED(M)   -            disk_1       disk         c1     -
        c1t0d0s2     ENABLED(A)    -            disk_2       disk         c1     -
        c1t1d0       ENABLED(A)    -            disk_3       disk         c1     -
        c3t47d0      ENABLED(A)    -            sun35100_0   sun35100     c3     -
        c3t47d1      ENABLED(A)    -            sun35100_1   sun35100     c3     -
        c3t47d2s2    ENABLED(A)    -            sun35100_2   sun35100     c3     -
        c3t47d3s2    ENABLED(A)    -            sun35100_3   sun35100     c3     -

    You can see the path is now disabled from DMP:

        root@techsupport2 # cfgadm -c unconfigure c1::dsk/c1t3d0

    Now you can unconfigure the disk.
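    Once the maintenance that required the unconfigure is finished (for example after swapping the disk), the path can be handed back to DMP with the matching enable command and verified the same way:

        root@techsupport2 # vxdmpadm enable path=c1t3d0s2
        root@techsupport2 # vxdmpadm getsubpaths    # the path should show ENABLED again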

  • Using virt-install to mount multiple cdrom drives/images

    - by Dana the Sane
    I would like to create a Windows XP guest from the Windows XP upgrade CD I have, along with one of a few full versions I have around. However, when I reach the stage in the installer where I am prompted to insert a full-version CD, the installer can't find it, i.e.: "Setup could not read the CD you inserted, or the CD is not a valid Windows CD." Is there a workaround for this? My googling didn't uncover anything. I've tried various combinations of mounting .iso files and specifying disks, such as:

        $ sudo virt-install --accelerate --connect qemu:///system -n xpsp1 -r 2048 \
            --disk ./vm/winxp_sp1.iso,device=cdrom \
            --disk ./vm/windows.qcow2,size=12 \
            --vnc --noautoconsole --os-type windows --os-variant winxp \
            --vcpus 2 -c /dev/cdrom --check-cpu

    If I try to specify multiple cdrom drives, I receive an error:

        $ virt-install --accelerate --connect qemu:///system -n xpsp1 -r 2048 \
            --disk ./vm/winxp_sp1.iso,device=cdrom \
            --disk /dev/cdrom,device=cdrom \
            --disk ./vm/windows.qcow2,size=12 \
            --vnc --noautoconsole --os-type windows --os-variant winxp \
            --vcpus 2 --check-cpu
        Starting install...
        ERROR    IDE CDROM must use 'hdc', but target in use.
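    One workaround others have used (a sketch; the domain name comes from the example above and the ISO path is illustrative) is to install with a single cdrom device and then swap the media from the host when the installer asks for the full-version CD:

        # replace the medium in the guest's existing cdrom drive (hdc) while it runs
        virsh attach-disk xpsp1 /path/to/winxp_full.iso hdc --type cdrom --mode readonly

    The "IDE CDROM must use 'hdc'" error suggests the guest only gets one IDE cdrom target, which is why declaring two drives up front fails.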

  • Can't verify my site on Google (error 403 Forbidden). I have other sites on the same host with no problems whatsoever

    - by Rosamunda Rosamunda
    I can't verify my site on Google. I've done this several times for several sites, all on the same host. I've tried the HTML tag method, HTML file upload, the domain name provider (I can't find the options that Google tells me I should activate...), and Google Analytics. I always get this response:

        Verification failed for http://www.mysite.com/ using the Google Analytics method (1 minute ago).
        Your verification file returns a status of 403 (Forbidden) instead of 200 (OK).

    I've checked the server headers, and I get this result:

        REQUESTING: http://www.mysite.com
        GET / HTTP/1.1
        Connection: Keep-Alive
        Keep-Alive: 300
        Accept: */*
        Host: www.mysite.com
        Accept-Language: en-us
        Accept-Encoding: gzip, deflate
        User-Agent: Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 6.0)

        SERVER RESPONSE: HTTP/1.1 403 Forbidden
        Date: Wed, 19 Sep 2012 03:25:22 GMT
        Server: Apache/2.2.19 (Unix) mod_ssl/2.2.19 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 PHP/5.2.17
        Connection: close
        Content-Type: text/html; charset=iso-8859-1

        Final Destination Page (it shows my actual homepage)

    What can I do? The hosting is the very same as for my other sites, where I didn't have any issue at all! Thanks for your help!
    Note: As I have a Drupal 7 site, I've tried a "Drupal solution" first, but haven't found any that solved this issue... How can it be forbidden when I can access the link perfectly OK? Is there any solution to this? Thanks!
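    Since the rest of the site serves fine, the 403 is probably scoped to the verification file itself; on shared cPanel-style hosts like this one (note the mod_bwlimited in the Server header), mod_security rules and overly broad .htaccess deny rules are common culprits. A hedged exception for Apache 2.2, using a placeholder file name, looks like:

        <Files "google1234567890abcdef.html">
            Order allow,deny
            Allow from all
        </Files>

    It is also worth requesting the verification file itself with curl -I (not just the homepage) to confirm whether the 403 applies to the file or to the whole vhost.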

  • Why does my computer work from a live CD without the hard disk, but not respond to anything with the hard disk connected?

    - by Yosef
    Hi, the history of the problem: I formatted my computer (an HP Pavilion desktop). When I restart the computer, it gets to the first screen before boot and does not respond to any keys (F2, F10, Esc, etc.). I took out the motherboard battery, put it back after a while, and powered on the computer - result: the same as before. Then I disconnected the hard disk's cables, inserted an Ubuntu live CD, and powered on the computer - result: it works without the hard disk. What is the root of the problem: a broken hard disk? Bad hard disk cables? The BIOS? Something else? How can I fix the problem (buy a new hard disk, etc.)? Thanks, Yosef

  • Error while trying to run project: Unable to start program '…'. The endpoint was not reachable.

    - by Marko Apfel
    While playing with Entity Framework, I got the error "Error while trying to run project: Unable to start program '…'. The endpoint was not reachable." when running the project from Visual Studio. Outside VS there were no problems, and a similar project ran fine. So I compared both project files. Indeed, the first project file contains the line

        <Prefer32bit>false</Prefer32bit>

    in some property groups. After deleting this line, everything runs fine.
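    For reference, the element lives inside the configuration-specific property groups of the .csproj file, roughly like this (a sketch; the condition shown is the usual Debug one):

        <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
          <Prefer32Bit>false</Prefer32Bit>
        </PropertyGroup>

    Deleting the element, as described above, worked here; setting it consistently across all property groups may be an alternative if you need to keep it.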

  • SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28

    - by pinaldave
    For any good system three things are vital: CPU, memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and memory frequently. However, the disk is often upgraded to improve space, speed or throughput. Today we will look at another IO-related wait type.

    From Book On-Line: ASYNC_IO_COMPLETION - occurs when a task is waiting for I/Os to finish.

    ASYNC_IO_COMPLETION explanation: a task is waiting for I/O to finish. If by any means the application connected to SQL Server is processing the data very slowly, this type of wait can occur. Several long-running database operations like BACKUP, CREATE DATABASE, ALTER DATABASE or other operations can also create this wait type.

    Reducing ASYNC_IO_COMPLETION wait: when it is an issue related to IO, one should check for the following things associated with the IO subsystem:

    - Look at the programming and see if there is any application code which processes the data slowly (like an inefficient loop, etc.). Note that it should be re-written to avoid this wait type.
    - Proper placement of the files is very important. We should check the file system for proper placement of the files: LDF and MDF on separate drives, TempDB on another separate drive, hot-spot tables in a separate filegroup (and on a separate disk), etc.
    - Check the file statistics and see if there is a higher IO read and IO write stall: SQL SERVER - Get File Statistics Using fn_virtualfilestats.
    - Check the event log and error log for any errors or warnings related to IO.
    - If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA queue depth. In one of my recent projects, the SAN was performing really badly, which the SAN administrator would not accept. After some investigation, he agreed to change the HBA queue depth on the development setup (test environment). As soon as we changed the HBA queue depth to a considerably higher value, there was a sudden big improvement in performance.
    - It is very likely that there are no proper indexes on the system and yet there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can effectively reduce lots of CPU, memory and IO (considering the covering index has fewer columns than the clustered table, and all the rest; it depends upon the situation). You can refer to the following two articles I wrote that talk about how to optimize indexes: Create Missing Indexes; Drop Unused Indexes.

    Checking Memory Related Perfmon Counters

    - SQLServer: Memory Manager\Memory Grants Pending (a consistently higher value than 0-2 is bad)
    - SQLServer: Memory Manager\Memory Grants Outstanding (consistently higher value; benchmark)
    - SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better; greater than 90% for a usually smooth-running system)
    - SQLServer: Buffer Manager\Page Life Expectancy (a consistently lower value than 300 seconds is bad)
    - Memory: Available MBytes (information only)
    - Memory: Page Faults/sec (benchmark only)
    - Memory: Pages/sec (benchmark only)

    Checking Disk Related Perfmon Counters

    - Average Disk sec/Read (consistently higher than 4-8 milliseconds is not good)
    - Average Disk sec/Write (consistently higher than 4-8 milliseconds is not good)
    - Average Disk Read/Write Queue Length (consistently higher than the benchmark is not good)

    Read all the posts in the Wait Types and Queue series.
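    Before tuning, it is worth checking how much accumulated wait time this wait type really represents on your instance. A quick sketch against the waits DMV:

        -- accumulated waits since the last service restart (or stats clear)
        SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type IN ('ASYNC_IO_COMPLETION', 'IO_COMPLETION')
        ORDER BY wait_time_ms DESC;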
Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it to a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

  • Fix overlapping partitions

    - by Alex
    I have a problem with overlapping partitions. GParted shows all of my disk as unallocated area; the output of fdisk is below:

        alex@alex-ThinkPad-SL510:~$ sudo fdisk -l /dev/sda

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xfb4b9b90

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048     2457599     1227776    7  HPFS/NTFS/exFAT
        /dev/sda2         2457600   571351724   284447062+   7  HPFS/NTFS/exFAT
        /dev/sda3       571342846   604661759    16659457    5  Extended
        /dev/sda4       604661760   625137663    10237952    7  HPFS/NTFS/exFAT
        /dev/sda5       598650880   604661759     3005440   82  Linux swap / Solaris
        /dev/sda6       571342848   598650879    13654016   83  Linux

        Partition table entries are not in disk order

    Do I understand correctly that the overlapping partitions are sda2 and sda3 (sda2 and sda6 overlap too, because sda6 is the first chunk of sda3, and sda3 has type "extended")? Are sda2 and sda3 the cause of the problem? How can I fix it without deleting partitions? My OS is Ubuntu 12.04, 64-bit. Thanks in advance.

  • What's up with OCFS2?

    - by wcoekaer
    On Linux there are many filesystem choices, and even from Oracle we provide a number of filesystems, all with their own advantages and use cases. Customers often confuse ACFS with OCFS or OCFS2, which then causes assumptions to be made, such as one replacing the other, etc. I thought it would be good to write up a summary of how OCFS2 got to where it is, what we're still up to, how it is different from other options, and how this really is a cool native Linux cluster filesystem that we worked on for many years and that is still widely used.

    Work on a cluster filesystem at Oracle started many years ago, in the early 2000s, when the Oracle Database Cluster development team wrote a cluster filesystem for Windows that was primarily focused on providing an alternative to raw disk devices and helping customers with the deployment of Oracle Real Application Clusters (RAC). Oracle RAC is a cluster technology that lets us make a cluster of Oracle Database servers look like one big database. The RDBMS runs on many nodes and they all work on the same data. It's a shared-disk database design. There are many advantages to doing this, but I will not go into detail, as that is not the purpose of my write-up. Suffice it to say that Oracle RAC expects all the database data to be visible in a consistent, coherent way across all the nodes in the cluster. To do that, there were/are a few options:

    1. use raw disk devices that are shared, through SCSI, FC, or iSCSI
    2. use a network filesystem (NFS)
    3. use a cluster filesystem (CFS), which basically gives you a filesystem that's coherent across all nodes using shared disks. It is sort of (but not quite) combining options 1 and 2, except that you don't do network access to the files; the files are effectively locally visible as if it were a local filesystem.

    So OCFS (Oracle Cluster FileSystem) on Windows was born. Since Linux was becoming a very important and popular platform, we decided that we would also make this available on Linux, and thus the porting of OCFS/Windows started.

    The first version of OCFS was really primarily focused on replacing the use of raw devices with a simple filesystem that lets you create files and provides direct IO to these files, to get basically native raw disk performance. The filesystem was not designed to be fully POSIX compliant, and it did not have anywhere near good/decent performance for regular file create/delete/access operations. Cache coherency was easy, since it was basically always direct IO down to the disk device, and this ensured that any time one issues a write() command it would go directly down to the disk, and not return until the write() was completed. Same for read(): any sort of read from a data file would be a read() operation that went all the way to disk and returned. We did not cache any data when it came down to Oracle data files.

    So while OCFS worked well for that, since it did not have much of a normal filesystem feel, it was not something that could be submitted to the kernel mailing list for inclusion into Linux as another native Linux filesystem (setting aside the Windows porting code...). It did its job well: it was very easy to configure, node membership was simple, locking was disk based (so very slow, but it existed), and you could create regular files and do regular filesystem operations to a certain extent, but anything that was not database-data-file related was just not very useful in general. Log files, OK; standard filesystem use, not so much. Up to this point, all the work was done at Oracle, by Oracle developers.
    Once OCFS (1) was out for a while and there was a lot of use in the database RAC world, many customers wanted to do more and were asking for features that you'd expect in a normal native filesystem: a real "general purpose cluster filesystem". So the team sat down and basically started from scratch to implement what's now known as OCFS2 (Oracle Cluster FileSystem release 2). Some basic criteria were:

    - Design it with a real Distributed Lock Manager and use the network for lock negotiation instead of the disk
    - Make it a Linux native filesystem instead of a native shim layer and a portable core
    - Support standard POSIX compliancy and be fully cache coherent with all operations
    - Support all the filesystem features Linux offers (ACLs, extended attributes, quotas, sparse files, ...)
    - Be modern: support large files, 32/64-bit, journaling, ordered-data journaling, endian neutrality, so we can mount on both endians / across architectures, ...

    Needless to say, this was a huge development effort that took many years to complete. A few big milestones happened along the way...

    OCFS2 was developed in the open; we did not have a private tree that we worked on without external code review from the Linux filesystem maintainers. Great folks like Christoph Hellwig reviewed the code regularly to make sure we were not doing anything out of line, and we submitted the code for review on lkml a number of times to see if we were getting close for it to be included into the mainline kernel. Using this development model is standard practice for anyone who wants to write code that goes into the kernel and have any chance of doing so without a complete rewrite or, shall I say, flamefest when submitted. It saved us a tremendous amount of time by not having to re-fit code for it to be in a Linus-acceptable state. Some other filesystems that were trying to get into the kernel that didn't follow an open development model had a much harder time and received much harsher criticism.

    In March 2006, when Linus released 2.6.16, OCFS2 officially became part of the mainline Linux kernel tree as one of the many filesystems (it was accepted a little earlier, in the release candidates, but 2.6.16 made it official). It was the first cluster filesystem to make it into the kernel tree. Our hope was that it would then end up getting picked up by the distribution vendors, to make it easy for everyone to have access to a CFS. Today the source code for OCFS2 is approximately 85,000 lines of code.

    We made OCFS2 production-quality, with full support for customers that ran Oracle Database on Linux; no extra or separate support contract was needed. OCFS2 1.0.0 started being built for RHEL4 for x86, x86-64, ppc, s390x and ia64; for RHEL5, starting with OCFS2 1.2. SuSE was very interested in high availability and clustering, decided to build and include OCFS2 with SLES9 for their customers, and was, next to Oracle, the main contributor to the filesystem for both new features and bug fixes.
Since the filesystem is in the Linux kernel, it is released under the GPL v2. The release model has always been that new feature development happens in the mainline kernel, and we then built consistent, well-tested snapshots that carried versions: 1.2, 1.4, 1.6, 1.8. These releases were effectively snapshots in time, tested for stability and release quality.
OCFS2 is very easy to use: there is a simple text file that contains the node information (hostname, node number, cluster name) and a file that contains the cluster heartbeat timeouts (see the sketch below for an example). It is very small and very efficient. As Sunil Mushran wrote in the manual: OCFS2 is an efficient, easily configured, quickly installed, fully integrated and compatible, feature-rich, architecture and endian neutral, cache coherent, ordered data journaling, POSIX-compliant, shared disk cluster file system.
Here is a list of some of the important features that are included:
- Variable block and cluster sizes: supports block sizes ranging from 512 bytes to 4 KB and cluster sizes ranging from 4 KB to 1 MB (in increments of powers of 2).
- Extent-based allocations: tracks allocated space in ranges of clusters, making it especially efficient for storing very large files.
- Optimized allocations: supports sparse files, inline data, unwritten extents, hole punching and allocation reservation for higher performance and efficient storage.
- File cloning/snapshots: REFLINK introduces copy-on-write clones of files in a cluster-coherent way.
- Indexed directories: allows efficient access to millions of objects in a directory.
- Metadata checksums: detects silent corruption in inodes and directories.
- Extended attributes: supports attaching an unlimited number of name:value pairs to filesystem objects such as regular files, directories and symbolic links.
- Advanced security: supports POSIX ACLs and SELinux in addition to the traditional file access permission model.
- Quotas: supports user and group quotas.
- Journaling: supports both ordered and writeback data journaling modes to provide filesystem consistency in the event of power failure or system crash.
- Endian and architecture neutral: supports a cluster of nodes with mixed architectures; allows concurrent mounts on nodes running 32-bit and 64-bit, little-endian (x86, x86_64, ia64) and big-endian (ppc64) architectures.
- In-built cluster stack with DLM: includes an easy-to-configure, in-kernel cluster stack with a distributed lock manager.
- Buffered, direct, asynchronous, splice and memory-mapped I/O: supports all modes of I/O for maximum flexibility and performance.
- Comprehensive tools support: provides a familiar EXT3-style tool-set that uses similar parameters for ease of use.
The filesystem was distributed for Linux distributions in separate RPM form, and this had to be built for every single kernel errata release or updated kernel provided by the vendor. We provided builds from Oracle for Oracle Linux (for all kernels released by Oracle) and for Red Hat Enterprise Linux; SuSE provided the modules directly for every kernel they shipped. With the introduction of the Unbreakable Enterprise Kernel (UEK) for Oracle Linux, and our interest in reducing the overhead of building filesystem modules for every minor release, we decided to make OCFS2 available as part of UEK. There was no more need for separate kernel modules; everything was built in, and a kernel upgrade automatically updated the filesystem, as it should.
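As an aside, to illustrate the simple configuration mentioned above, here is a rough sketch of what a two-node /etc/ocfs2/cluster.conf can look like; the node names, addresses and cluster name are made up for illustration:

        node:
                ip_port = 7777
                ip_address = 192.168.1.10
                number = 0
                name = node1
                cluster = mycluster

        node:
                ip_port = 7777
                ip_address = 192.168.1.11
                number = 1
                name = node2
                cluster = mycluster

        cluster:
                node_count = 2
                name = mycluster

With that file in place on every node, using the filesystem is, roughly, a matter of formatting the shared disk once and then mounting it on each node (again a sketch; the device name, label and mount point are assumed):

        # format once, from any one node, with slots for 2 nodes
        mkfs.ocfs2 -L "myvolume" -N 2 /dev/sdb1
        # on each node: bring the o2cb cluster stack online and mount
        service o2cb online
        mount -t ocfs2 /dev/sdb1 /mnt/shared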
UEK allowed us to avoid backporting new upstream filesystem code into an older kernel version; backporting features into older versions introduces risk and requires extra testing, because the code is basically partially rewritten. The UEK model works really well for continuing to provide OCFS2 without that extra overhead. Because the RHEL kernel does not contain OCFS2 as a kernel module (it is in the source tree, but the vendor does not build it in kernel module form), we stopped adding the extra packages to Oracle Linux with its RHEL-compatible kernel, and for RHEL. Oracle Linux customers/users get OCFS2 included as part of the Unbreakable Enterprise Kernel, SuSE customers get it distributed with SLES, and Red Hat can decide to distribute OCFS2 to their customers if they choose to, as it is just a matter of compiling the module and making it available.
OCFS2 today, in the mainline kernel, is pretty much feature complete in terms of integration with every filesystem feature Linux offers, and it is still actively maintained, with Joel Becker being the primary maintainer. Since we use OCFS2 as part of Oracle VM, we continue to look at interesting new functionality to add (REFLINK was a good example), and we continue to enhance the filesystem where it makes sense. Bug fixes, and any code that goes into the mainline Linux kernel and affects filesystems, automatically flow into OCFS2 as well. So it is in the kernel and actively maintained, but there is not a lot of new development happening at this time. We continue to fully support OCFS2 as part of Oracle Linux and the Unbreakable Enterprise Kernel, and other vendors make their own decisions on support, as it is really a native Linux cluster filesystem now, more than something we provide to customers. It is simply part of Linux, like EXT3 or BTRFS; the OS distribution vendors decide.
Do not confuse OCFS2 with ACFS (ASM Cluster Filesystem), also known as Oracle Cloud Filesystem. ACFS is a filesystem provided by Oracle on various OS platforms that integrates into Oracle ASM (Automatic Storage Management). It is a very powerful cluster filesystem, but it is not distributed as part of the operating system; it is distributed with the Oracle Database product and installs with, and lives inside, Oracle ASM. ACFS is fully supported on Linux (Oracle Linux, Red Hat Enterprise Linux), but OCFS2, independently, as a native Linux filesystem, is and continues to be supported as well. ACFS is very much tied into the Oracle RDBMS; OCFS2 is just a standard native Linux filesystem with no ties into Oracle products. Customers running the Oracle database and ASM really should consider using ACFS, as it also provides storage/clustered volume management. Customers wanting a simple, easy-to-use, generic Linux cluster filesystem should consider using OCFS2.
To learn more about OCFS2 in detail, you can find good documentation on http://oss.oracle.com/projects/ocfs2 in the Documentation area, or get the latest mainline kernel from http://kernel.org and read the source.
One final, unrelated note: since I am not always able to publicly answer or respond to comments, I do not want to selectively publish comments from readers. Sometimes I forget to publish comments, sometimes I publish them, and sometimes I would publish them but cannot publicly comment on them for some reason, which makes for a very one-sided stream. So for now I am not going to publish comments from anyone, to be fair to all sides.
You are always welcome to email me, and I will do my best to respond to technical questions; questions about strategy or direction are sometimes not possible to answer, for obvious reasons.

    Read the article

  • Best approach to utilize RamDisk for Chrome?

    - by laggingreflex
    I use a lot of tabs, and after a while the less recently opened tabs take some time to become responsive, which I guess is because they are being un-cached to the HDD as they are not required. So after creating a RAM disk I have two options: use the --disk-cache-dir="G:/" switch so the disk cache lives on the RAM disk, or what I am currently doing: using a directory junction for "[...]\AppData\Local\Google\Chrome\User Data\Default" to move that entire folder over to the RAM disk. I thought this would be better than just moving the disk cache, but what do I know. Is it? As one can guess, it will be a pain saving/loading the RAM disk image each time I start Chrome, but if it really is better than the former approach I will write a script or something.
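For reference, a rough sketch of the two approaches on Windows (the drive letter and paths are illustrative; for option 2 the profile folder would be copied to the RAM disk first, while Chrome is closed):

        rem Option 1: only Chrome's disk cache lives on the RAM disk
        chrome.exe --disk-cache-dir="G:\"

        rem Option 2: replace the profile folder with a junction onto the RAM disk
        mklink /J "%LOCALAPPDATA%\Google\Chrome\User Data\Default" "G:\Default"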

    Read the article

  • SQL SERVER – IO_COMPLETION – Wait Type – Day 10 of 28

    - by pinaldave
    For any good system three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently; however, the disk is often upgraded to improve space, speed or throughput. Today we will look at an IO-related wait type.
From Book On-Line: Occurs while waiting for I/O operations to complete. This wait type generally represents non-data page I/Os. Data page I/O completion waits appear as PAGEIOLATCH_* waits.
IO_COMPLETION Explanation: Tasks are waiting for I/O to finish. This is a good indication that the IO subsystem needs to be looked at.
Reducing IO_COMPLETION wait: When the issue concerns IO, one should look at the following things related to the IO subsystem (a small query sketch follows at the end of this post):
- Proper placement of the files is very important. Check the file system for proper placement of files: LDF and MDF on separate drives, TempDB on another separate drive, hot-spot tables on a separate filegroup (and on a separate disk), etc.
- Check the file statistics and see if there are higher IO Read and IO Write stalls (SQL SERVER – Get File Statistics Using fn_virtualfilestats).
- Check the event log and error log for any errors or warnings related to IO.
- If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA Queue Depth. In one of my recent projects the SAN was performing really badly, but the SAN administrator would not accept it. After some investigation, he agreed to change the HBA Queue Depth on the development (test environment) setup, and as soon as we changed the HBA Queue Depth to a considerably higher value there was a sudden, big improvement in performance.
- It is very possible that there are no proper indexes in the system and there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can effectively reduce CPU, Memory and IO (considering the covering index has fewer columns than the clustered one; it depends upon the situation). You can refer to two articles I wrote about optimizing indexes: Create Missing Indexes and Drop Unused Indexes.
Checking Memory Related Perfmon Counters:
- SQLServer: Memory Manager\Memory Grants Pending (a consistently higher value than 0-2 is a concern)
- SQLServer: Memory Manager\Memory Grants Outstanding (consistent higher value; benchmark)
- SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better; greater than 90% for a usually smooth-running system)
- SQLServer: Buffer Manager\Page Life Expectancy (a consistently lower value than 300 seconds is a concern)
- Memory: Available MBytes (information only)
- Memory: Page Faults/sec (benchmark only)
- Memory: Pages/sec (benchmark only)
Checking Disk Related Perfmon Counters:
- Average Disk sec/Read (a consistently higher value than 4-8 milliseconds is not good)
- Average Disk sec/Write (a consistently higher value than 4-8 milliseconds is not good)
- Average Disk Read/Write Queue Length (a consistently higher value than the benchmark is not good)
Note: The information presented here is from my experience, and there is no way that I claim it to be accurate. I suggest reading Book On-Line for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server.
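To make the checks above concrete, here is a small T-SQL sketch using the standard dynamic management views (thresholds and interpretation vary from system to system, as noted in the post):

        -- cumulative IO_COMPLETION waits since the last restart
        SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type = 'IO_COMPLETION';

        -- per-file read/write stall statistics (the DMV successor of fn_virtualfilestats)
        SELECT DB_NAME(vfs.database_id) AS database_name,
               vfs.file_id,
               vfs.num_of_reads, vfs.io_stall_read_ms,
               vfs.num_of_writes, vfs.io_stall_write_ms
        FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs;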
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Types, SQL White Papers, T SQL, Technology

    Read the article

< Previous Page | 493 494 495 496 497 498 499 500 501 502 503 504  | Next Page >