Search Results

Search found 16665 results on 667 pages for 'nhibernate configuration'.

  • HP Smart Array; Is it possible to convert Raid1 to Raid0 by dropping a failed drive?

    - by Erik Heppler
    I have a server that was running two 60GB drives as a logical RAID1. At some point the second drive was physically removed and the logical drive has been in "Interim Recovery Mode" for several months now. There's no need for the redundancy of RAID1 on this machine, and I have no intention of replacing the missing drive. If possible I would like to convert the current RAID1 to a single-drive RAID0 by simply dropping the failed drive from the current configuration. I'm only interested in doing this if it can be done in-place. Otherwise I'm perfectly content to leave it in "Interim Recovery Mode" indefinitely.
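
    If the controller and firmware support logical-drive RAID migration, the Array Configuration Utility CLI can attempt the conversion in place. A sketch, assuming the controller sits in slot 0 and the logical drive is number 1 (both are assumptions; verify with the show commands first):

        # inspect the current layout
        hpacucli ctrl all show config
        hpacucli ctrl slot=0 ld 1 show
        # attempt an in-place migration of the logical drive to RAID 0
        hpacucli ctrl slot=0 ld 1 modify raid=0

    Whether the migration is offered while a failed member is still listed varies by controller generation, so the failed drive may first need to be removed from the configuration.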

    Read the article

  • How to use a custom Windows 7 system drive letter?

    - by Ivan
    The subject PC has many hard drive partitions dedicated to different purposes: C: is the Windows XP system drive, and F: (actually the next primary partition, placed right after C: physically) is intended to host a newly installed Windows 7 instance (meant for a "dual boot" configuration). The intention was for all the partitions to have exactly the same letters under both OSes. Naturally, Windows 7 detected all of them in a completely different order, which would not be a problem (the non-system drive letters can be changed easily after installation) if it hadn't named its system drive C: (meant to be F:), which I have no idea how to change. Is there a way to set the letter you want? I don't mind reinstalling Windows 7 from scratch if it has to be set at installation time, or even configured in some text files on the installation DVD. I have tried this way, but it renders the Windows 7 desktop unbootable (it gets stuck on "Preparing your desktop..." after "Welcome").

    Read the article

  • Nginx rewrite with Simple Machines Forum

    - by Kevin Worthington
    I am running Nginx 1.5.6 and I use the Simple Machines Forum software. Most rewrite rules seem to work properly, with the exception of the RSS feeds. In my Nginx configuration, I have the following line, which is supposed to handle URLs containing ".xml": rewrite ^/forum/(\.xml|xmlhttp)/?$ "/forum/index.php?pretty;action=$1" last; The above rule produces the following URL for the main forum, which returns a 403 error: http://www.mydomain.com/forum/.xml/?type=rss I would like the rewrite rule to produce this type of URL instead, which returns code 200 (a real page): http://www.mydomain.com/forum/?type=rss;action=.xml Here is the entire block pertaining to the forum rewrites: http://pastebin.com/raw.php?i=tZkAibW3 I would really appreciate some help creating a rewrite rule to do that. Thanks.
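
    As a starting point (a sketch, untested against SMF's pretty-URL handling): match the feed path explicitly and let nginx append the original ?type=rss arguments, which it does automatically when the replacement contains new arguments and does not end in "?":

        # /forum/.xml/?type=rss -> /forum/index.php?action=.xml&type=rss
        rewrite ^/forum/\.xml/?$ /forum/index.php?action=.xml last;

    Note that the desired target uses ";" as the separator (?type=rss;action=.xml); if SMF insists on that exact form, the full target can be assembled by hand using $args in the replacement and a trailing "?" to suppress the automatic append.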

    Read the article

  • Oracle Business Intelligence Applications 10g Bootcamp

    - by mseika
    Oracle Business Intelligence Applications 10g Bootcamp
    12th - 15th February 2012, Reading (UK)

    The Oracle Business Intelligence Applications offer out-of-the-box integration with Siebel CRM and Oracle eBusiness Suite, and provide pre-built operational BI solutions for eBusiness Suite, PeopleSoft, Siebel, and SAP. This training will provide attendees with an in-depth working understanding of the architecture and the technical and functional content of the Oracle Business Intelligence Applications, whilst also providing an understanding of their installation, configuration and extension.

    The course will cover the following topics:
    • Overview of Oracle Business Intelligence Applications
    • Oracle BI Applications Fundamentals and Features
    • Configuring BI Applications for Oracle E-Business Suite
    • Understanding BI Applications Architecture
    • Fundamentals of BI Applications Security

    REGISTER NOW Partner Registration Guide
    Price: FREE

    Venue: Cookham Room, Oracle Corporation UK Ltd, Oracle Parkway, Thames Valley Park, Reading, Berkshire RG6 1RA
    12th - 15th February 2012, 9:30 am – 5:00 pm (UK time)

    Audience
    The seminar is aimed at BI Consultants and Implementation Consultants within Oracle's Gold and Platinum Partners.

    Prerequisites
    • Good understanding of basic data warehousing concepts
    • Hands-on experience in Oracle Business Intelligence Enterprise Edition
    • Hands-on experience in Informatica
    • Some understanding of Oracle BI Applications is required (see Sales & Technical Tutorials for OBI, BI-Apps and Hyperion EPM)
    • Good understanding of any of the following Oracle EBS modules: General Ledger, Accounts Receivable, Accounts Payable

    System Requirements
    Please note that attendees are required to have a laptop.
    Laptop
    • 4GB RAM, recognized by 64-bit Windows
    • 80GB free space on the hard drive or an external device
    • CPU Core 2 Duo or higher
    Operating System Requirements
    • Windows 7, Windows XP, Windows 2003
    • NOT ALLOWED with Windows Vista
    • An administrator user

    For more information please contact [email protected].

    Read the article

  • ODI 11g – Faster Files

    - by David Allan
    Deep in the trenches of ODI development I raised my head above the parapet to read a few odds and ends, and then thought: why don't they know this? Such as this article here – in the past, customers (see forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI. It's quite simple, just a new KM that:
    • improves the out-of-the-box experience – just build the mapping and the appropriate KM is used
    • improves out-of-the-box performance for file-to-file data movement
    This improvement to the out-of-the-box handling of file-to-file data integration cases (from the 11.1.1.5.2 companion CD onwards) dramatically speeds up file integration. In the past I had seen some consultants write Perl versions of the file-to-file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure Java to perform the integration, using java.io classes to read and write the file in a pipe. It uses Java threading in order to super-charge the file processing, and can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop – the KM works with the lightweight agent and regular filesystems. So in my design below, transforming a bunch of files, the IKM File to File (Java) knowledge module was assigned by default. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2 to take advantage of my 2 processors. For my illustration I transformed (you can also filter if desired) and moved about 1.3Gb with 2 threads in 140 seconds (with a single thread it took 220 seconds) – by no means was this on any supercomputer, by the way. The great thing here is that it worked well out of the box from design to execution without any funky configuration. Plus, and it's a big plus, it was much faster than before. So if you are doing any file-to-file transformations, check it out!

    Read the article

  • Cannot connect to network

    - by dany
    I have a problem: I am on Debian and I configured my NIC with a static IP (192.168.1.56). When I try to connect to a network, ifconfig eth2 initially reports (correctly):
    eth2 inet addr:192.168.1.56 ....
         inet6 addr: fe80:221:ff:fe96:4598/64
    but after a few seconds the 192.168.1.56 disappears, and some seconds later the inet6 address disappears too. When I click on the nm-applet it asks me for the password, but in the meantime it keeps trying to connect. At uni, the connection uses DHCP; it works for the first few seconds, but after that it doesn't. Any possible solution? Here is the relevant part of the syslog (static IP configuration): http://pastebin.com/u3BPAsda
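
    If NetworkManager is fighting over an interface that is meant to be static, one common approach on Debian is to declare the interface in /etc/network/interfaces so NetworkManager leaves it alone. A minimal sketch (the gateway value is an assumption):

        # /etc/network/interfaces
        auto eth2
        iface eth2 inet static
            address 192.168.1.56
            netmask 255.255.255.0
            gateway 192.168.1.1

    With managed=false under [ifupdown] in /etc/NetworkManager/NetworkManager.conf, interfaces listed in this file are ignored by nm-applet.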

    Read the article

  • Configuring vsftpd with nginx on ubuntu

    - by arby
    I have vsftpd installed on Ubuntu 12.04 LTS along with nginx, PHP, and SQL on an Amazon EC2 instance. The web server is good to go, but I'm having trouble connecting to the FTP server. I'm not quite sure how to set the privileges or what configuration options I might be missing. By default, the location of the web root is /usr/share/nginx/www and it is owned by root:root. The web server runs as user www-data in the group www-data. I've opened port 21 and set the passive ports in the EC2 backend and the ufw firewall. In vsftpd.conf, I have:
    ...
    anonymous_enable=NO
    local_enable=YES
    local_umask=0027
    chroot_local_user=YES
    pasv_enable=YES
    pasv_max_port=12100
    pasv_min_port=12000
    port_enable=YES
    ...
    Now, I'm unsure how to create the FTP user that, when I log in, displays my web directory with write access. I've tried it a few different ways, but I keep running into errors (either no connection, no write access, or very slow timeouts).
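
    A sketch of one way to set this up, assuming a dedicated FTP account that shares the www-data group (the user name is illustrative):

        # create an FTP user whose home is the web root, sharing the web server's group
        sudo useradd -d /usr/share/nginx/www -G www-data -s /bin/sh ftpuser
        sudo passwd ftpuser
        # give the group write access to the web root
        sudo chown -R www-data:www-data /usr/share/nginx/www
        sudo chmod -R g+w /usr/share/nginx/www

    Note that with chroot_local_user=YES, newer vsftpd builds refuse a chroot directory that is writable by the user, so write access may need to live in a subdirectory of a non-writable home.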

    Read the article

  • Exim redirect all unexisting accounts for local domains to a specific account

    - by tntu
    I want to route all incoming emails for local domains to a single account if no account is set up for that user. I would also like each email to be written to its own file in the user's folder. I have a catchall user with the path /home/catchall/, where I have made a mail folder for this, but so far emails either fail to deliver (so my rule did not work) or they are delivered to the /etc/mail/catchall file. I have been trying to put something together from the Exim configuration documentation, but so far nothing seems to work. http://exim.org/exim-html-current/doc/html/spec_html/ch20.html
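
    A sketch of the usual shape of this in Exim (the directive names are real, but the router placement and paths are assumptions to adapt): a catch-all redirect router placed after the standard local-user router, so it only fires for addresses no real account claims, plus Maildir-style delivery so each message lands in its own file.

        # routers section, placed after the normal localuser router
        catchall_redirect:
          driver = redirect
          domains = +local_domains
          data = catchall@localhost

        # in the appendfile transport delivering for the catchall user, replacing
        # the usual file= option with a directory plus maildir_format writes one
        # file per message:
        #   driver = appendfile
        #   directory = /home/catchall/mail
        #   maildir_format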

    Read the article

  • Postfix - How to configure to send these emails?

    - by Jon
    I want my mail server to send mail from my local application "from" any user-supplied email address "to" my own address, say "[email protected]". The MX records for "mysite.com" actually point to a different server, even though the outgoing mail server is running with mydomain set to "mysite.com". Perhaps this is part of the problem? Postfix is currently causing an SMTPRecipientsRefused error within the Python application. Can anyone point me to what to change in the configuration? Thanks
    postconf -n:
    alias_database = hash:/etc/aliases
    alias_maps = hash:/etc/aliases
    append_dot_mydomain = no
    biff = no
    config_directory = /etc/postfix
    inet_interfaces = all
    mailbox_command = procmail -a "$EXTENSION"
    mailbox_size_limit = 0
    mydestination = mysite.com, localhost.com, , localhost, *
    myhostname = mysite.com
    mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
    myorigin = /etc/mailname
    readme_directory = no
    recipient_delimiter = +
    relayhost =
    smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
    smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
    smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
    smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
    smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
    smtpd_use_tls = yes
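
    One thing that stands out (a guess, not a confirmed diagnosis): because mysite.com is listed in mydestination, Postfix treats mail for that domain as local delivery instead of relaying it to the server the MX records point at. A sketch of the change:

        # stop treating mysite.com as a local destination so mail follows the MX records
        sudo postconf -e 'mydestination = localhost'
        sudo postfix reload

    The stray "*" and the empty entry in the current mydestination line are also worth removing.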

    Read the article

  • Windows Firewall failing after 9-12 hours?

    - by routeNpingme
    I have 2 VM servers with the exact same NIC configuration: Server 2003 R2, one NIC connected to a private (hardware firewall) network in a 10.x private address space, and one NIC connected straight to the public internet. Windows Firewall is enabled for the public internet NIC only. Now, the part that doesn't make sense: this generally fails after 9-12 hours. It's not exact, but once or twice a day traffic will just stop on the internet NIC. No event log entries when it happens, and restarting the Windows Firewall service, as well as stopping or restarting IPSec Services (just for fun), has no effect. Once the server is rebooted, everything is fine again for another half day. Any suggestions?
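
    A couple of era-appropriate diagnostics that may help narrow it down (a sketch; run on the affected server while the NIC is unresponsive):

        netsh firewall show state
        netsh firewall show config

    If the firewall still reports as enabled and correctly configured when traffic stops, the fault is more likely in the NIC or VM layer than in the Windows Firewall service itself.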

    Read the article

  • Oracle VM 3.1.1 build 365 released

    - by wcoekaer
    A few days ago we released a patch update for Oracle VM 3.1.1 (build 365). Oracle VM Manager 3.1.1 build 365 is now available from My Oracle Support, patch ID 14227416. Oracle VM Server 3.1.1 errata updates are, as usual, released on ULN in the ovm3_3.1.1_x86_64_patch channel. Just a reminder: when we publish errata for Oracle VM, the notifications are sent through the oraclevm-errata maillist. You can sign up here.
    Some of the bugfixes in 3.1.1:
    • 14054162 - Removes unnecessary locks when creating VNICs in a multi-threaded operation.
    • 14111234 - Fixes the issue when discovering a virtual machine that has disks in an un-discovered repository or has un-discovered physical disks.
    • 14054133 - Fixes a bug of object not found where vdisks are left stale in certain multi-threaded operations.
    • 14176607 - Fixes the issue where Oracle VM Manager would hang after a restart due to various tasks running jobs in the global context.
    • 14136410 - Fixes the stale lock issue on multithreaded servers where an object not found error happens in some rare situations.
    • 14186058 - Fixes the issue where Oracle VM Manager fails to discover the server or start the server after the server hardware configuration (i.e. BIOS) was modified.
    • 14198734 - Fixes the issue where HTTP cannot be disabled.
    • 14065401 - Fixes the Oracle VM Manager UI time-out issue where the default value was not long enough for storage repository creation.
    • 14163755 - Fixes the issue where, when migrating a virtual machine, the list of target servers (and "other servers") was not ordered by name.
    • 14163762 - Fixes the size of the "Edit Vlan Group" window to display all information correctly.
    • 14197783 - Fixes the issue where the navigation tree (servers) was not ordered by name.
    I strongly suggest everyone use this latest build and also update the server to the latest version. Have at it.

    Read the article

  • Nginx and PHP-FPM on OS X

    - by Seth
    I've been wanting to try out Nginx with PHP-FPM. I installed Nginx via MacPorts. I read that PHP 5.3.3 includes PHP-FPM; however, the PHP 5.3.3 configuration on MacPorts does not enable it. Can anyone explain or refer me to a tutorial on how to install PHP 5.3.3 with PHP-FPM for Nginx on OS X? I'd want to place it in /opt, where Nginx is, to keep it away from the PHP I'm using with Apache in /usr/local. I'm new to command-line stuff. Pardon my ignorance.
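
    Since PHP 5.3.3 bundles FPM behind a configure flag, one route (a sketch, not MacPorts-specific; the prefix path is an assumption) is to build it from source into its own prefix under /opt so it stays clear of the Apache PHP in /usr/local:

        # download and unpack the php-5.3.3 source, then:
        ./configure --prefix=/opt/php53 --enable-fpm --with-zlib
        make
        sudo make install

    After installing, copy the bundled php-fpm configuration from the prefix's etc/ directory and point nginx's fastcgi_pass at the address php-fpm listens on (127.0.0.1:9000 by default).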

    Read the article

  • Fixing dpkg after installation of broken packages? [closed]

    - by Amith KK
    Possible Duplicate: How to remove all associated files and configuration settings of an app installed through 'force architecture' command I installed a broken package, and now apt-get/aptitude fails when trying to remove it. Each time I run an apt operation, this is the message I get: Removing crossplatformui ... ztemtvcdromd: no process found dpkg: error processing crossplatformui (--remove): subprocess installed post-removal script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: crossplatformui E: Sub-process /usr/bin/dpkg returned an error code (1) Running sudo apt-get install -f gives me this: amith@amith-desktop:~$ sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be REMOVED: crossplatformui 0 upgraded, 0 newly installed, 1 to remove and 1123 not upgraded. 1 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? Y (Reading database ... 340804 files and directories currently installed.) Removing crossplatformui ... ztemtvcdromd: no process found dpkg: error processing crossplatformui (--remove): subprocess installed post-removal script returned error exit status 1 Errors were encountered while processing: crossplatformui E: Sub-process /usr/bin/dpkg returned an error code (1) How do I fix this?
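
    The usual workaround (a sketch; back up the script first) is to neutralize the package's failing post-removal script so dpkg can finish the removal:

        sudo sh -c 'printf "#!/bin/sh\nexit 0\n" > /var/lib/dpkg/info/crossplatformui.postrm'
        sudo chmod 755 /var/lib/dpkg/info/crossplatformui.postrm
        sudo apt-get remove crossplatformui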

    Read the article

  • approx via inetd is not open to connections from other machines

    - by Cédric Girard
    I have an approx server to speed up Debian apt updates on my Ubuntu 11.04 desktop PC. It had run fine in the past, but today port 9999 is open from localhost, yet not from other PCs. I have not modified the inetd configuration at all. What can I check and try?
    inetd.conf:
    9999 stream tcp nowait approx /usr/sbin/approx /usr/sbin/approx
    approx.conf:
    # Here are some examples of remote repository mappings.
    # See http://www.debian.org/mirror/list for mirror sites.
    debian http://ftp2.fr.debian.org/debian
    security http://security.debian.org/debian-security
    volatile http://volatile.debian.org/debian-volatile
    # The following are the default parameter values, so there is
    # no need to uncomment them unless you want a different value.
    # See approx.conf(5) for details.
    $cache /espace/Dossiers/approx
    $max_rate unlimited
    $max_redirects 5
    $user approx
    $group approx
    $syslog daemon
    $pdiffs true
    $offline false
    $max_wait 10
    $verbose false
    $debug false
    I tried to allow other PCs to connect with an "ALL: ALL" entry in hosts.allow. ufw is disabled, and the iptables-save output is empty.
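
    If inetd is binding the service to the loopback address only, openbsd-inetd accepts an explicit bind address in front of the service name. A sketch of forcing the listener onto all interfaces:

        # /etc/inetd.conf
        0.0.0.0:9999 stream tcp nowait approx /usr/sbin/approx /usr/sbin/approx

        sudo /etc/init.d/openbsd-inetd restart

    It is also worth confirming with netstat -tlnp whether the listener is on 0.0.0.0:9999 or 127.0.0.1:9999.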

    Read the article

  • XSL-FO intellisense and national languages with Apache FOP

    - by Lukasz Kurylo
    Some time ago I showed how to get intellisense and how to configure FO.NET to produce national characters inside the generated PDF files. Due to the limitations that I mentioned in my previous post, I started playing with Apache FOP. In this post I want to show how to achieve the same results as in the two posts related to FO.NET.

    Intellisense

    To get intellisense for the XSL-FO templates, set the xsi:schemaLocation the same way I showed in this post. The only difference from FO.NET is that, during document generation, the code I showed last time will throw an exception: org.apache.fop.fo.ValidationException: Invalid property encountered on "fo:root": xsi:schemaLocation (See position 6:11). Fortunately there is a very easy way to resolve this without removing the entire attribute along with the intellisense. Tell the FopFactory to ignore the namespace:

    FopFactory fopFactory = FopFactory.newInstance();
    fopFactory.ignoreNamespace("http://www.w3.org/2001/XMLSchema-instance");

    Notice that the URL specified in this method is the namespace of the xmlns:xsi declaration, not the xsi:schemaLocation value.

    Fonts / national characters

    This part is achieved a little differently, but it is no more complicated than it was with FO.NET. To set the fonts in Apache FOP 1.0, we need a configuration file. A sample can be found in the directory where we unpacked the FOP binaries, in the conf subdirectory, in a file called fop.xconf. We must copy this file to our solution. In the simplest case, we can add <auto-detect/> inside the <fonts> tag. Thanks to this, FOP will index all fonts available on the installed operating system. There should be no problem if we have an HTTP handler or a WCF service on the server that serves the generated PDF documents; in this situation we can use all the fonts available on that server.

    To use this config file, we must set the path to it:

    FopFactory fopFactory = FopFactory.newInstance();
    fopFactory.setUserConfig(new File("fop.xconf"));
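
    For reference, a minimal sketch of the relevant fragment of fop.xconf with font auto-detection enabled (the surrounding structure follows the sample file shipped with FOP 1.0):

        <fop version="1.0">
          <renderers>
            <renderer mime="application/pdf">
              <fonts>
                <auto-detect/>
              </fonts>
            </renderer>
          </renderers>
        </fop>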

    Read the article

  • how to pass domain name to backend with pound

    - by FurtiveFelon
    I am using Pound as a way to terminate SSL for the backend, but the bulk of the work is done on Varnish (including the virtual host handling). As a result, I need Pound to forward all other traffic to Varnish verbatim, but it doesn't seem to do that. I am using the default configuration:
    ListenHTTP
        Address 1.2.3.4
        Port 8080
        ## allow PUT and DELETE also (by default only GET, POST and HEAD)?:
        xHTTP 0
        Service
            BackEnd
                Address 127.0.0.1
                Port 80
            End
        End
    End
    So whenever I hit example.com:8080, it always goes to the default backend for Varnish, which I assume is because the domain (Host) header isn't sent along. Anyone know what could be wrong? Thanks a lot! Jason
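
    Pound normally passes the client's headers, including Host, through unchanged, so it is worth capturing the traffic between Pound and Varnish to confirm what actually arrives. If per-domain routing at the Pound layer is wanted anyway, a sketch using Pound's HeadRequire directive (the host name is illustrative):

        Service
            HeadRequire "Host:.*example\.com.*"
            BackEnd
                Address 127.0.0.1
                Port    80
            End
        End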

    Read the article

  • How to get Unity working on dual gpu laptop?

    - by Mourgos
    I recently bought a new laptop (Asus X53S series) that has two GPUs, an NVidia GeForce GT 540M and an integrated Intel GPU (I believe it's called Intel HD Graphics 3000). I installed the recommended restricted NVidia drivers after a clean Ubuntu 11.10 install. In the 'Additional Drivers' program I get the message "This driver is enabled and in use", but when I try to open the NVIDIA X Server Settings it says "You do not appear to be using the NVIDIA X driver", which seems to be the case since Ubuntu only starts Unity 2D. I had the same issue in 11.04 and was forced to use the nouveau driver just to get Unity working, but since I get quite a few crashes with it, I really want to get the proprietary driver working this time around. Since I've never had this issue with older laptops, I can only assume it is caused by the dual-GPU configuration. How do I get Ubuntu to use the proprietary drivers, or is there any workaround to get the integrated Intel GPU ignored by Ubuntu?

    Read the article

  • Can't Start ISC DHCP IPv6 Server

    - by MrDaniel
    Trying to enable the ISC DHCP server for IPv6 only on Ubuntu 12.04 LTS. I have downloaded and installed the DHCP server via the following command: $ sudo apt-get install isc-dhcp-server Then I followed the instructions in these resources: Ubuntu Wiki DHCPv6, SixXS - Configuring ISC DHCPv6 Server, and Linux IPv6 HOWTO - Configuration of the ISC DHCP server for IPv6. From reviewing all those resources, it seems I need to: set a static IPv6 address on the interface the DHCPv6 server will run on, inside the IPv6 subnet but outside the DHCP range; edit /etc/dhcp/dhcpd6.conf to configure the DHCPv6 range etc.; create /var/lib/dhcp/dhcpd6.leases; and manually start the DHCPv6 server.
    Setting the static IP for eth0:
    $ sudo ifconfig eth0 inet6 add 2001:db8:0:1::128/64
    My dhcpd6.conf:
    default-lease-time 600;
    max-lease-time 7200;
    log-facility local7;
    subnet6 2001:db8:0:1::/64 {
        # Range for clients
        range6 2001:db8:0:1::129 2001:db8:0:1::254;
    }
    Created the dhcpd6.leases file as indicated in the dhcpd.leases man page:
    $ touch /var/lib/dhcp/dhcpd6.leases # Tried with sudo as well
    Manually starting the DHCPv6 server, attempted with the following command:
    $ sudo dhcpd -6 -f -cf /etc/dhcp/dhcpd6.conf eth0
    The problem: the DHCP server will not start, failing with the append error below for the dhcpd6.leases file: Can't open /var/lib/dhcp/dhcpd6.leases for append. Any ideas what I might be missing?
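
    The error suggests a permissions problem rather than a configuration one. A sketch of the usual fix on Ubuntu, where the daemon drops privileges to the dhcpd user:

        sudo touch /var/lib/dhcp/dhcpd6.leases
        sudo chown dhcpd:dhcpd /var/lib/dhcp/dhcpd6.leases

    AppArmor also confines dhcpd on Ubuntu, so if the file permissions look right, /var/log/syslog may show a denial from the dhcpd profile.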

    Read the article

  • Connection to website interrupted/reset. Why?

    - by Goje87
    When navigating to some websites (like victorsosea.com, citibank.co.in etc.) on Google Chrome, I get a message that the 'connection to www.site.com was interrupted'. If I try on Firefox, it says 'The connection to the server was reset'. This happens no matter what internet connection I use on my laptop. Be it my home internet or office internet. I have tried using Google DNS address as well but that does not seem to solve the problem. I have been facing this problem randomly for some websites. Kindly let me know how I can resolve the issue. My laptop configuration: Windows 7, 64-bit

    Read the article

  • Oracle Solaris Zones Physical to virtual (P2V)

    - by user939057
    Introduction

    This document describes the process of creating a Solaris 10 image from a physical system and installing it into a virtualized operating system environment using the Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability. Using an example and various scenarios, this paper describes how to take advantage of the P2V capability together with other Oracle Solaris features: optimizing performance with Solaris 10 resource management, advanced storage management with Solaris ZFS, and improved operating system visibility with Solaris DTrace. The most common use for this tool is consolidating existing systems onto virtualization-enabled platforms. In addition, the P2V capability can be used for other tasks, for example backing up physical systems and moving them into a virtualized operating system environment hosted on a Disaster Recovery (DR) site, or building an Oracle Solaris 10 image repository with various configurations and different software packages in order to reduce provisioning time.

    Oracle Solaris Zones

    Oracle Solaris Zones is a virtualization and partitioning technology supported on Oracle Sun servers powered by SPARC and Intel processors. This technology provides an isolated and secure environment for running applications. A zone is a virtualized operating system environment created within a single instance of the Solaris 10 Operating System. Each virtual system is called a zone and runs a unique and distinct copy of the Solaris 10 operating system.

    Oracle Solaris Zones Physical-to-Virtual (P2V)

    This is a new feature in Solaris 10 9/10. It provides the ability to build a Solaris 10 image from a physical system and migrate it into a virtualized operating system environment. There are three main steps to using this tool:
    1. Image creation on the source system; this image includes the operating system and, optionally, the software we want to include within the image.
    2. Preparation of the target system, by configuring a new zone that will host the new image.
    3. Image installation on the target system, using the image created in step 1.
    The host where the image is built is referred to as the source system, and the host where the image is installed is referred to as the target system.

    Benefits of Oracle Solaris Zones Physical-to-Virtual (P2V)

    Here are some benefits of this new feature:
    • Simple - an easy build process using Oracle Solaris 10 built-in commands.
    • Robust - based on Oracle Solaris Zones, a robust and well-known virtualization technology.
    • Flexible - supports migration from V-series servers onto T- or M-series systems. For the latest server information, refer to the Sun Servers web page.

    Prerequisites

    The target Oracle Solaris system should be running with the latest patch cluster applied, and the minimum Solaris version on the target system should be Solaris 10 9/10. Refer to the latest Administration Guide for Oracle Solaris for a complete procedure on how to download and install Oracle Solaris.
    NOTE: If the source system used to build the image runs an older version than the target system, the operating system will be upgraded to Solaris 10 9/10 during the process (update on attach).

    Creating the image used to distribute the software

    We will create an image on the source machine. We can create the image on the local file system and then transfer it to the target machine, or build it on NFS shared storage and mount the NFS file system from the target machine. Optionally, before creating the image, complete the installation of any software we want to include within the image. An image is created by using the flarcreate command:
    Source # flarcreate -S -n s10-system -L cpio /var/tmp/solaris_10_up9.flar
    The command does the following:
    • -S specifies that we skip the disk space check and do not write archive size data to the archive (faster).
    • -n specifies the image name.
    • -L specifies the archive format (i.e. cpio).
    Optionally, we can add descriptions to the archive identification section, which can help to identify the archive later:
    Source # flarcreate -S -n s10-system -e "Oracle Solaris with Oracle DB 10.2.0.4" -a "oracle" -L cpio /var/tmp/solaris_10_up9.flar
    You can see an example of the archive identification section in Appendix A: archive identification section. We can compress the flar image using the gzip command, or by adding the -c option to the flarcreate command:
    Source # gzip /var/tmp/solaris_10_up9.flar
    An md5 checksum can be created for the image in order to ensure no data tampering:
    Source # digest -v -a md5 /var/tmp/solaris_10_up9.flar

    Moving the image onto the target system

    If we created the image on the local file system, we need to transfer the flar archive from the source machine to the target machine:
    Source # scp /var/tmp/solaris_10_up9.flar target:/var/tmp

    Configuring the zone on the target system

    After copying the software to the target machine, we need to configure a new zone in order to host the new image on that zone. For the full zone creation options, see: http://docs.oracle.com/cd/E18752_01/html/817-1592/index.html

    ZFS integration

    A flash archive can be created on a system that is running a UFS or a ZFS root file system.
    NOTE: If you create a Solaris Flash archive of a Solaris 10 system that has a ZFS root, then by default the flar will actually be a ZFS send stream, which can be used to recreate the root pool. This image cannot be used to install a zone. You must create the flar with an explicit cpio or pax archive when the system has a ZFS root. Use the flarcreate command with the -L archiver option, specifying cpio or pax as the method to archive the files (for example, see step 1 in the previous section).
    Optionally, on the target system you can create the zone root folder on a ZFS file system in order to benefit from the ZFS features (clones, snapshots, etc.):
    Target # zpool create zones c2t2d0
    Create the zone root folder and configure the zone:
    Target # chmod 700 /zones
    Target # zonecfg -z solaris10-up9-zone
    solaris10-up9-zone: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:solaris10-up9-zone> create
    zonecfg:solaris10-up9-zone> set zonepath=/zones
    zonecfg:solaris10-up9-zone> set autoboot=true
    zonecfg:solaris10-up9-zone> add net
    zonecfg:solaris10-up9-zone:net> set address=192.168.0.1
    zonecfg:solaris10-up9-zone:net> set physical=nxge0
    zonecfg:solaris10-up9-zone:net> end
    zonecfg:solaris10-up9-zone> verify
    zonecfg:solaris10-up9-zone> commit
    zonecfg:solaris10-up9-zone> exit

    Installing the zone on the target system using the image

    Install the configured zone solaris10-up9-zone by using the zoneadm command with the install -a option and the path to the archive. The following example installs the image and sys-unconfigs the zone:
    Target # zoneadm -z solaris10-up9-zone install -u -a /var/tmp/solaris_10_up9.flar
    Log File: /var/tmp/solaris10-up9-zone.install_log.AJaGve
    Installing: This may take several minutes...
    The following example shows how we can preserve the system identity:
    Target # zoneadm -z solaris10-up9-zone install -p -a /var/tmp/solaris_10_up9.flar

    Resource management

    Some applications are sensitive to the number of CPUs on the target zone. You can match the number of CPUs on the zone using the zonecfg command:
    zonecfg:solaris10-up9-zone> add dedicated-cpu
    zonecfg:solaris10-up9-zone> set ncpus=16

    DTrace integration

    Some applications might need to be analyzed using DTrace on the target zone. You can add DTrace support on the zone using the zonecfg command:
    zonecfg:solaris10-up9-zone> set limitpriv="default,dtrace_proc,dtrace_user"

    Exclusive IP stack

    An Oracle Solaris Container running in Oracle Solaris 10 can have a shared IP stack with the global zone, or it can have an exclusive IP stack (which was released in Oracle Solaris 10 8/07). An exclusive IP stack provides a complete, tunable, manageable and independent networking stack to each zone. A zone with an exclusive IP stack can configure Scalable TCP (STCP), IP routing, IP multipathing, or IPsec. For an example of how to configure an Oracle Solaris zone with an exclusive IP stack, see the following:
    zonecfg:solaris10-up9-zone> set ip-type=exclusive
    zonecfg:solaris10-up9-zone> add net
    zonecfg:solaris10-up9-zone:net> set physical=nxge0
    When the installation completes, use the zoneadm list -i -v options to list the installed zones and verify the status:
    Target # zoneadm list -i -v
    See that the new zone's status is installed:
    ID NAME               STATUS     PATH    BRAND   IP
    0  global             running    /       native  shared
    -  solaris10-up9-zone installed  /zones  native  shared
    Now boot the zone:
    Target # zoneadm -z solaris10-up9-zone boot
    We need to log in to the zone in order to complete the zone setup, or insert a sysidcfg file before booting the zone for the first time (see the example sysidcfg file in Appendix B: sysidcfg file section):
    Target # zlogin -C solaris10-up9-zone

    Troubleshooting

    If an installation fails, review the log file. On success, the log file is in /var/log inside the zone. On failure, the log file is in /var/tmp in the global zone. If a zone installation is interrupted or fails, the zone is left in the incomplete state. Use uninstall -F to reset the zone to the configured state:
    Target # zoneadm -z solaris10-up9-zone uninstall -F
    Target # zonecfg -z solaris10-up9-zone delete -F

    Conclusion

    The Oracle Solaris Zones P2V tool provides the flexibility to build pre-configured images with different software configurations for faster deployment and server consolidation. In this document, I demonstrated how to build and install images and how to integrate them with other Oracle Solaris features like ZFS and DTrace.

    Appendix A: archive identification section

    We can use the head -n 20 /var/tmp/solaris_10_up9.flar command in order to access the identification section that contains the detailed description:
    Target # head -n 20 /var/tmp/solaris_10_up9.flar
    FlAsH-aRcHiVe-2.0
    section_begin=identification
    archive_id=e4469ee97c3f30699d608b20a36011be
    files_archived_method=cpio
    creation_date=20100901160827
    creation_master=mdet5140-1
    content_name=s10-system
    creation_node=mdet5140-1
    creation_hardware_class=sun4v
    creation_platform=SUNW,T5140
    creation_processor=sparc
    creation_release=5.10
    creation_os_name=SunOS
    creation_os_version=Generic_142909-16
    files_compressed_method=none
    content_architectures=sun4v
    type=FULL
    section_end=identification
    section_begin=predeployment
    begin 755 predeployment.cpio.Z

    Appendix B: sysidcfg file section

    Target # cat sysidcfg
    system_locale=C
    timezone=US/Pacific
    terminal=xterms
    security_policy=NONE
    root_password=HsABA7Dt/0sXX
    timeserver=localhost
    name_service=NONE
    network_interface=primary {
        hostname=solaris10-up9-zone
        netmask=255.255.255.0
        protocol_ipv6=no
        default_route=192.168.0.1
    }
    nfs4_domain=dynamic
    We need to copy this file before booting the zone:
    Target # cp sysidcfg /zones/solaris10-up9-zone/root/etc/

    Read the article

  • Help with Apache rewriteengine rules

    - by Vinay
    Hello - I am trying to write a simple rewrite rule using the RewriteEngine in Apache. I want to redirect all traffic destined for a website unless the traffic originates from a specific IP address and the URI contains two specific strings.
    RewriteEngine On
    RewriteLog /var/log/apache2/rewrite_kudithipudi.log
    RewriteLogLevel 1
    RewriteCond %{REMOTE_ADDR} ^199\.27\.130\.105
    RewriteCond %{REQUEST_URI} !/StringOne [NC,OR]
    RewriteCond %{REQUEST_URI} !/StringTwo [NC]
    RewriteRule ^/(.*) http://www.google.com [R=302,L]
    I put these statements in my virtual host configuration, but the RewriteEngine seems to redirect all requests, whether they match the conditions or not. Am I missing something? Thank you. Vinay.
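
    For the stated intent, redirect everything except requests that both come from 199.27.130.105 and contain both strings, the conditions need to be the negation of that whole clause. A sketch (ORed negated conditions; De Morgan applied to "IP AND StringOne AND StringTwo"):

        RewriteEngine On
        RewriteCond %{REMOTE_ADDR} !^199\.27\.130\.105 [OR]
        RewriteCond %{REQUEST_URI} !/StringOne [NC,OR]
        RewriteCond %{REQUEST_URI} !/StringTwo [NC]
        RewriteRule ^/(.*) http://www.google.com [R=302,L]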

    Read the article

  • Setting up a Time Capsule with port forwarding

    - by Kaji
    Our old AirPort Extreme station hit EOL, so we decided to upgrade it to a Time Capsule. Along the way, we're also trying to set it up with a separate guest network and port forwarding/NAT; however, we're having trouble arranging for the Time Capsule to handle the DHCP leases instead of the router. We've got DSL through Verizon, through a Westell modem/router, to the Time Capsule. Done the RTFM thing, and we haven't been able to get it to work. Can anyone explain how to get things set up properly for this configuration?

    Read the article

  • Linux Bridge, Samba netbios name/hostname access

    - by Christopher Wilson
    I am currently running a Linux bridge in the following configuration:
    ADSL modem: 192.168.1.1
    Linux bridge: eth0: 192.168.1.2, eth1: no address
    Wireless router: 192.168.0.1
    My issue is that I cannot access the "Linux bridge" shares using the WINS name of the server from client systems (yes, I understand it is a transparent bridge, but I can access it via the 192.168.1.2 address, which is not on the same subnet as the client systems). This is the global section of my smb.conf:
    [global]
    unix extensions = off
    os level = 20
    netbios name = server
    guest account = nobody
    server string = 447 Server
    security = share
    #unix extensions = no
    #wins support = yes
    #wins server = 192.168.0.1
    name resolve order = wins lmhosts hosts bcast
    interfaces = bridge1 eth0 eth1 lo
    bind interfaces only = yes
    Can I access a bridged server using its WINS name to access Samba shares? Cheers Chris
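
    Since NetBIOS name resolution by broadcast does not cross subnets, one approach (a sketch; it assumes the clients can be pointed at a WINS server) is to let the bridge's Samba act as the WINS server, then configure the clients, or the router's DHCP, to hand out 192.168.1.2 as the WINS address:

        [global]
            wins support = yes
            name resolve order = wins lmhosts hosts bcast

    Alternatively, a static lmhosts entry on each client (192.168.1.2 server) sidesteps WINS entirely.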

    Read the article
