Search Results

Search found 22515 results on 901 pages for 'created'.

  • Oracle Solaris Zones Physical-to-Virtual (P2V)

    - by user939057
    Introduction
    This document describes the process of creating a Solaris 10 image from a physical system and migrating it into a virtualized operating system environment using the Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability. Using an example and various scenarios, this paper describes how to take advantage of the Oracle Solaris 10 Zones P2V capability together with other Oracle Solaris features: optimizing performance with Solaris 10 resource management, advanced storage management with Solaris ZFS, and improved operating system visibility with Solaris DTrace. The most common use for this tool is consolidating existing systems onto virtualization-enabled platforms. In addition, the P2V capability can be used for other tasks, for example backing up physical systems and moving them into a virtualized operating system environment hosted on a Disaster Recovery (DR) site, or building an Oracle Solaris 10 image repository with various configurations and different software packages in order to reduce provisioning time.

    Oracle Solaris Zones
    Oracle Solaris Zones is a virtualization and partitioning technology supported on Oracle Sun servers powered by SPARC and Intel processors. This technology provides an isolated and secure environment for running applications. A zone is a virtualized operating system environment created within a single instance of the Solaris 10 Operating System. Each virtual system is called a zone and runs a unique and distinct copy of the Solaris 10 operating system.

    Oracle Solaris Zones Physical-to-Virtual (P2V)
    This is a new feature in Solaris 10 9/10. It provides the ability to build a Solaris 10 image from a physical system and migrate it into a virtualized operating system environment. There are three main steps when using this tool:
    1. Image creation on the source system; this image includes the operating system and, optionally, the software we want to include within the image.
    2. Preparation of the target system by configuring a new zone that will host the new image.
    3. Image installation on the target system using the image created in step 1.
    The host where the image is built is referred to as the source system, and the host where the image is installed is referred to as the target system.

    Benefits of Oracle Solaris Zones Physical-to-Virtual (P2V)
    Here are some benefits of this new feature:
    - Simple: an easy build process using Oracle Solaris 10 built-in commands.
    - Robust: based on Oracle Solaris Zones, a robust and well-known virtualization technology.
    - Flexible: supports migration from V-series servers onto T-series or M-series systems. For the latest server information, refer to the Sun Servers web page.

    Prerequisites
    The target Oracle Solaris system should be running the latest patch cluster, and the minimum Solaris version on the target system is Solaris 10 9/10. Refer to the latest Administration Guide for Oracle Solaris for a complete procedure on how to download and install Oracle Solaris.
    NOTE: If the source system used to build the image runs an older version than the target system, the operating system will be upgraded to Solaris 10 9/10 during the process (update on attach).

    Creating the image used to distribute the software
    We will create an image on the source machine. We can create the image on the local file system and then transfer it to the target machine, or build it on NFS shared storage and mount the NFS file system from the target machine.
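    If the NFS option is used, a minimal sketch of sharing the image directory from the source and mounting it on the target could look like this (the /export/images path is an assumption for illustration):
    Source # share -F nfs -o ro /export/images
    Target # mount -F nfs source:/export/images /mnt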
    Optionally, before creating the image, we complete the installation of any software that we want to include with the Solaris 10 image.
    An image is created by using the flarcreate command:
    Source # flarcreate -S -n s10-system -L cpio /var/tmp/solaris_10_up9.flar
    The command does the following:
    - -S specifies that we skip the disk space check and do not write archive size data to the archive (faster).
    - -n specifies the image name.
    - -L specifies the archive format (i.e. cpio).
    Optionally, we can add descriptions to the archive identification section, which can help to identify the archive later:
    Source # flarcreate -S -n s10-system -e "Oracle Solaris with Oracle DB 10.2.0.4" -a "oracle" -L cpio /var/tmp/solaris_10_up9.flar
    You can see an example of the archive identification section in Appendix A: archive identification section.
    We can compress the flar image using the gzip command, or by adding the -c option to the flarcreate command:
    Source # gzip /var/tmp/solaris_10_up9.flar
    An md5 checksum can be created for the image in order to ensure no data tampering:
    Source # digest -v -a md5 /var/tmp/solaris_10_up9.flar

    Moving the image onto the target system
    If we created the image on the local file system, we need to transfer the flar archive from the source machine to the target machine:
    Source # scp /var/tmp/solaris_10_up9.flar target:/var/tmp

    Configuring the zone on the target system
    After copying the software to the target machine, we need to configure a new zone in order to host the new image on that zone. To install the new zone on the target machine, first we need to configure the zone (for the full zone creation options see: http://docs.oracle.com/cd/E18752_01/html/817-1592/index.html).

    ZFS integration
    A flash archive can be created on a system that is running a UFS or a ZFS root file system.
    NOTE: If you create a Solaris Flash archive of a Solaris 10 system that has a ZFS root, then by default the flar will actually be a ZFS send stream, which can be used to recreate the root pool. This image cannot be used to install a zone. You must create the flar with an explicit cpio or pax archive when the system has a ZFS root. Use the flarcreate command with the -L archiver option, specifying cpio or pax as the method to archive the files.
    (For example, see step 1 in the previous section.)
    Optionally, on the target system you can create the zone root folder on a ZFS file system in order to benefit from the ZFS features (clones, snapshots, etc.):
    Target # zpool create zones c2t2d0
    Create the zone root folder:
    Target # chmod 700 /zones
    Target # zonecfg -z solaris10-up9-zone
    solaris10-up9-zone: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:solaris10-up9-zone> create
    zonecfg:solaris10-up9-zone> set zonepath=/zones
    zonecfg:solaris10-up9-zone> set autoboot=true
    zonecfg:solaris10-up9-zone> add net
    zonecfg:solaris10-up9-zone:net> set address=192.168.0.1
    zonecfg:solaris10-up9-zone:net> set physical=nxge0
    zonecfg:solaris10-up9-zone:net> end
    zonecfg:solaris10-up9-zone> verify
    zonecfg:solaris10-up9-zone> commit
    zonecfg:solaris10-up9-zone> exit

    Installing the zone on the target system using the image
    Install the configured zone solaris10-up9-zone by using the zoneadm command with the install -a option and the path to the archive.
    The following example shows how to install the image and sys-unconfig the zone:
    Target # zoneadm -z solaris10-up9-zone install -u -a /var/tmp/solaris_10_up9.flar
    Log File: /var/tmp/solaris10-up9-zone.install_log.AJaGve
    Installing: This may take several minutes...
    The following example shows how we can preserve the system identity:
    Target # zoneadm -z solaris10-up9-zone install -p -a /var/tmp/solaris_10_up9.flar

    Resource management
    Some applications are sensitive to the number of CPUs on the target zone. You need to match the number of CPUs on the zone using the zonecfg command:
    zonecfg:solaris10-up9-zone> add dedicated-cpu
    zonecfg:solaris10-up9-zone:dedicated-cpu> set ncpus=16

    DTrace integration
    Some applications might need to be analyzed using DTrace on the target zone. You can add DTrace support on the zone using the zonecfg command:
    zonecfg:solaris10-up9-zone> set limitpriv="default,dtrace_proc,dtrace_user"

    Exclusive IP stack
    An Oracle Solaris Container running in Oracle Solaris 10 can have a shared IP stack with the global zone, or it can have an exclusive IP stack (which was released in Oracle Solaris 10 8/07). An exclusive IP stack provides a complete, tunable, manageable and independent networking stack to each zone. A zone with an exclusive IP stack can configure Scalable TCP (STCP), IP routing, IP multipathing, or IPsec. The following example shows how to configure an Oracle Solaris zone with an exclusive IP stack:
    zonecfg:solaris10-up9-zone> set ip-type=exclusive
    zonecfg:solaris10-up9-zone> add net
    zonecfg:solaris10-up9-zone:net> set physical=nxge0

    When the installation completes, use the zoneadm list -i -v options to list the installed zones and verify the status:
    Target # zoneadm list -i -v
    See that the new zone's status is installed:
    ID NAME               STATUS    PATH   BRAND  IP
    0  global             running   /      native shared
    -  solaris10-up9-zone installed /zones native shared
    Now boot the zone:
    Target # zoneadm -z solaris10-up9-zone boot
    We need to log in to the zone in order to complete the zone setup, or insert a sysidcfg file before booting the zone for the first time (see the example sysidcfg file in Appendix B: sysidcfg file section):
    Target # zlogin -C solaris10-up9-zone

    Troubleshooting
    If an installation fails, review the log file. On success, the log file is in /var/log inside the zone. On failure, the log file is in /var/tmp in the global zone. If a zone installation is interrupted or fails, the zone is left in the incomplete state.
    Use uninstall -F to reset the zone to the configured state:
    Target # zoneadm -z solaris10-up9-zone uninstall -F
    Target # zonecfg -z solaris10-up9-zone delete -F

    Conclusion
    The Oracle Solaris Zones P2V tool provides the flexibility to build pre-configured images with different software configurations for faster deployment and server consolidation. In this document, I demonstrated how to build and install images and how to integrate the images with other Oracle Solaris features like ZFS and DTrace.

    Appendix A: archive identification section
    We can use the head -n 20 /var/tmp/solaris_10_up9.flar command in order to access the identification section that contains the detailed description:
    Target # head -n 20 /var/tmp/solaris_10_up9.flar
    FlAsH-aRcHiVe-2.0
    section_begin=identification
    archive_id=e4469ee97c3f30699d608b20a36011be
    files_archived_method=cpio
    creation_date=20100901160827
    creation_master=mdet5140-1
    content_name=s10-system
    creation_node=mdet5140-1
    creation_hardware_class=sun4v
    creation_platform=SUNW,T5140
    creation_processor=sparc
    creation_release=5.10
    creation_os_name=SunOS
    creation_os_version=Generic_142909-16
    files_compressed_method=none
    content_architectures=sun4v
    type=FULL
    section_end=identification
    section_begin=predeployment
    begin 755 predeployment.cpio.Z

    Appendix B: sysidcfg file section
    Target # cat sysidcfg
    system_locale=C
    timezone=US/Pacific
    terminal=xterms
    security_policy=NONE
    root_password=HsABA7Dt/0sXX
    timeserver=localhost
    name_service=NONE
    network_interface=primary {
    hostname=solaris10-up9-zone
    netmask=255.255.255.0
    protocol_ipv6=no
    default_route=192.168.0.1
    }
    nfs4_domain=dynamic
    We need to copy this file before booting the zone:
    Target # cp sysidcfg /zones/solaris10-up9-zone/root/etc/

  • When is the default storage rule not really the default storage rule?

    - by Kevin Smith
    In 11g, WebCenter Content (WCC) introduced dispersion rules in the vault and weblayout directory paths to better distribute content across the directories. The dispersion rule was based on dRevClassID. The only problem with this is that dRevClassID did not remain the same when you copied content from one WCC instance to another using Archiver, as in a contribution-consumption scenario. This could cause problems because the web-viewable path would not be the same between the contribution and consumption instances.

    In the PS5 (11.1.1.6.0) release of WCC they addressed this by configuring the File Store Provider (FSP) so that all new content would use a storage rule with a dispersion rule based on dDocName, which stays the same when content is copied to another WCC instance. To support migration from older versions of WCC, they left the default storage rule unchanged, created a new storage rule called DispByContentId, and made that the default storage rule for all new content.

    I only stumbled upon this a while back when I was trying to change the FSP configuration so that all content used a webless storage rule. I changed the default storage rule, restarted WCC, and checked in a new content item. To my surprise, the new content was not created as webless. I struggled with this for a while until I noticed there were multiple storage rules defined in the FSP configuration. When I looked at the default value for the xStorageRule field in Configuration Manager, sure enough it was no longer "default", but was now DispByContentId. Once I updated the DispByContentId storage rule to webless and restarted WCC, all my new content was created using the webless storage rule, just like I wanted. I noticed while creating this blog post that the default storage rule is also listed on the File Store Provider Information page, but I guess I didn't see that when I originally did this.

  • Changing LDAP schema renders Confluence AD integration inoperable

    - by Maxim V. Pavlov
    I had our instance of Atlassian Confluence configured to integrate with our Active Directory. In AD, all the users were created under the default Users folder in Active Directory Users and Computers. We decided to introduce cleaner separation and created an Organizational Unit structure in AD: under the root we created a Managed OU, under it a Users OU, and all user accounts were moved under that Users OU. I thought that to let the Confluence AD integration engine "know" where to look for user accounts now, I only needed to adjust the Base DN and prepend it with ou=Managed, so it is aware that it is looking for users under ou=Managed. That didn't work. How should I adjust the LDAP base in the client application so that it can look for users in an OU instead of the default folder?
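    For illustration (the domain components here are hypothetical), the difference between the old and the intended new search base would look like:

    Old Base DN: cn=Users,dc=example,dc=com
    New Base DN: ou=Users,ou=Managed,dc=example,dc=com

    Note that the default Users folder in AD is a container (cn=Users), while newly created organizational units are addressed as ou=...; the naming attribute changes along with the path, which is a common stumbling block in exactly this kind of migration.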

  • Opening DBF files in Oracle 10g

    - by nagaraju
    This is Nagaraju, from Hyderabad, India. I have installed the Oracle 10g trial version on my system (E drive) and created one database with my name (database: nagaraju). In it I created tables, procedures, functions, sequences, etc. for my project. Due to a sudden problem, I formatted my machine's C drive, and now I am not able to open my database; I need all the procedures and tables I created in it. I have now installed Oracle 10g again in another folder. How can I copy my old database into my new installation's database? Or can I copy the scripts of the procedures so that I can run them in the new database? I have all the data in the Oradata folder, like the DBF files, etc. Could you please help me with how to do that?

  • MVC Scaffold Template Not Generating?

    - by monkey9987
    Been working on an MVC project and my templates were not generating. I first created my model inside my "Models" folder, then did a quick compile. Next I went to the Views folder to get the view created: right click, choose "Add View", then check the box to create an edit page. What happened was the template would never seem to pull in my model; it would just have the default header items, but the entire model was missing. My model was defined as follows:

    public class LogOnModel
    {
        [Required]
        [Display(Name = "User name")]
        public string UserName;

        [Required]
        [DataType(DataType.Password)]
        [Display(Name = "Password")]
        public string Password;

        [Display(Name = "Remember me?")]
        public bool RememberMe;
    }

    See anything wrong with that? I couldn't figure out why each time I created my view and selected the option to create the "Edit" scaffold automatically, it would come up blank. Turns out I was missing the get/set accessors on the model class members: the scaffolder picks up properties, not fields. Here's my code with the correct setup:

    public class LogOnModel
    {
        [Required]
        [Display(Name = "User name")]
        public string UserName { get; set; }

        [Required]
        [DataType(DataType.Password)]
        [Display(Name = "Password")]
        public string Password { get; set; }

        [Display(Name = "Remember me?")]
        public bool RememberMe { get; set; }
    }

    I hope that helps someone out. It's pretty simple when I look at it now, but that's always the case! ~ Steve

  • How to install Gyachi on Ubuntu 12.10

    - by Oguz Can Sertel
    I would like to use Gyachi on Ubuntu 12.10. I tried these steps but it doesn't work. I wanted to compile it myself, but it needs some libs and that got me confused, so I gave up:

    sudo add-apt-repository ppa:adilson/experimental
    sudo apt-get update
    sudo apt-get install gyachi

    Thank you for your help. At the first command, the output is:

    sudo add-apt-repository ppa:adilson/experimental
    You are about to add the following PPA to your system:
    Contains packages that are not in the official Debian/Ubuntu repositories and newer versions and snapshots which are not available yet in the repositories. Theses packages are experimental. Use them at your own risk.
    More info: https://launchpad.net/~adilson/+archive/experimental
    Press [ENTER] to continue or ctrl-c to cancel adding it
    gpg: keyring `/tmp/tmp3y3i7p/secring.gpg' created
    gpg: keyring `/tmp/tmp3y3i7p/pubring.gpg' created
    gpg: requesting key 27B81625 from hkp server keyserver.ubuntu.com
    gpg: /tmp/tmp3y3i7p/trustdb.gpg: trustdb created
    gpg: key 27B81625: public key "Launchpad Experimental Packages PPA" imported
    gpg: Total number processed: 1
    gpg: imported: 1 (RSA: 1)
    OK

    And after sudo apt-get update, this is the output of sudo apt-get install gyachi:

    sudo apt-get install gyachi
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    E: Unable to locate package gyachi

  • Ubuntu 12.04 slow boot on ASUS, attached with dmesg and bootchart

    - by stanleyhunk
    I heard that Ubuntu can boot in around 30 seconds, but mine takes more than 60 seconds every time it boots. I also read in some forums that I need to post the dmesg output and a bootchart to identify which processes are slowing down the boot. As I'm not an expert in Ubuntu and wish to learn more about it, I humbly ask any pro here to teach me how.

    My laptop specs:
    Model: ASUS K45VS
    RAM: 8GB
    CPU: Intel(R) Core(TM) i7-3630QM CPU @ 2.40GHz x 8
    Graphics card: nVidia GeForce GT 645M
    HDD: 750GB
    OS: single-boot Ubuntu 12.04 LTS
    System.uname: Linux 3.8.0-39-generic #58~precise1-Ubuntu SMP Fri May 2 21:33:40 UTC 2014 x86_64
    System.release: Ubuntu 12.04.4 LTS
    System.kernel.options: BOOT_IMAGE=/boot/vmlinuz-3.8.0-39-generic root=UUID=c8a71503-bce8-406c-9a5f-5aa8284f5c7c ro quiet splash

    My dmesg (note the huge time-frame gap):
    [ 30.772656] cgroup: libvirtd (1961) created nested cgroup for controller "memory" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
    [ 30.772659] cgroup: "memory" requires setting use_hierarchy to 1 on the root.
    [ 30.772683] cgroup: libvirtd (1961) created nested cgroup for controller "devices" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
    [ 30.772710] cgroup: libvirtd (1961) created nested cgroup for controller "blkio" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
    [ 32.140335] nvidia 0000:01:00.0: irq 46 for MSI/MSI-X
    [ 32.505619] ACPI Error: Field [TMPB] at 1081344 exceeds Buffer [ROM1] size 262144 (bits) (20121018/dsopcode-236)
    [ 32.505624] ACPI Error: Method parse/execution failed [\_SB_.PCI0.PEG0.PEGP._ROM] (Node ffff880224e91f00), AE_AML_BUFFER_LIMIT (20121018/psparse-537)
    [ 802.034422] audit_printk_skb: 69 callbacks suppressed
    [ 802.034428] type=1400 audit(1400914804.392:35): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=1683 comm="cupsd" pid=1683 comm="cupsd" capability=36 capname="block_suspend"
    [ 1581.300901] type=1400 audit(1400915584.816:36): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=1683 comm="cupsd" pid=1683 comm="cupsd" capability=36 capname="block_suspend"

    My Bootchart.png: (attached)
    Looking forward to learning how to improve both my boot time and my knowledge. Thanks in advance :)
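    As a side note on gathering this data, a minimal sketch (package names current as of Ubuntu 12.04; treat the exact names as assumptions):

    dmesg > boot-dmesg.txt                          # capture the kernel boot log
    sudo apt-get install bootchart pybootchartgui   # record boot activity and render charts
    # after the next reboot, the rendered chart lands in /var/log/bootchart/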

  • My PowerShell script won't save a file when run using Task Scheduler; do I need to specify a specific argument?

    - by EGr
    I have a script that downloads a temporary Excel file, copies parts of it into a new file, and saves that to a specific location on the network. The problem I'm having is that the new file is never created/saved. If I run the script locally (through cmd.exe, powershell, or PowerShell ISE), it WILL save the file locally or to the network. If I try running the script via a schedule or on demand via Task Scheduler, the temporary file is created, but the final document is never created or saved. Is there a specific argument I need to pass, or anything I could be doing wrong? This is the command I'm currently using:

    powershell.exe -file C:\path\to\my\powershell\script\thescript.ps1

    Since it uses environment variables and other variables relative to the script's position, I also set "Start in" to C:\path\to\my\powershell\script\
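    A slightly fuller invocation often used for Task Scheduler actions, as a hedged sketch (-NoProfile and -ExecutionPolicy Bypass are standard powershell.exe switches; whether they cure this particular symptom is an assumption):

    powershell.exe -NoProfile -ExecutionPolicy Bypass -File "C:\path\to\my\powershell\script\thescript.ps1"

    Also worth checking: a task set to "Run whether user is logged on or not" executes in a non-interactive session under whatever account the task specifies, so that account needs write permission to the network location and cannot rely on mapped drive letters (use UNC paths).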

  • Setting up Gitosis, where to create the repos?

    - by ReynierPM
    I'm trying to set up Gitosis on CentOS 6.2 but have some doubts/problems about it. I read the docs here, here and here, but it's unclear to me where to configure the location in which repositories are created. My server has a partition /data where I created a directory called /data/gitrepos, and I want all the repos created under that directory. By default, if I run the command:

    gitosis-init < /home/reynierpm/reynierpm.pub

    I get this:

    Initialized empty Git repository in /root/repositories/gitosis-admin.git/
    Reinitialized existing Git repository in /root/repositories/gitosis-admin.git/

    And I want these repos created under /data/gitrepos. Any help? Thanks in advance
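    A hedged hint, based on how gitosis normally behaves (verify against your version): gitosis-init creates its repositories directory under the home directory of the user who runs it, which is why running it as root produced /root/repositories. A minimal sketch, assuming a dedicated user is acceptable (the user name "git" is hypothetical):

    sudo useradd -d /data/gitrepos -m git           # user whose home sits on the /data partition
    sudo -H -u git gitosis-init < /home/reynierpm/reynierpm.pub
    # repositories are then created under /data/gitrepos/repositories/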

  • Why values in my WCF data contract were suddenly wrong...

    - by mipsen
    A WCF service I provided took a very simple data contract as a parameter (containing one string and one int...) and had a very simple task to do. A .NET 3.5 client was created using the VS2008 feature "Add Service Reference". Everything worked as expected. Then a slight change came in: the client was expected to run on machines with .NET 2.0 only. So we set the target framework to .NET 2.0, removed the references to System.ServiceModel, System.Runtime.Serialization and the service reference, and created a new reference to the service using the old "Add Web Reference". A matter of 2 minutes. When testing, the int value in the data contract arriving at the WCF service was suddenly 0 instead of the 38 we expected. What happened? When generating an old web reference against a WCF data contract, an additional boolean field is created for each value-type field, called [Fieldname]Specified (e.g. AgeSpecified), which defaults to "false". WCF inspects these boolean fields to determine whether a value was provided for the value-type field. If the "Specified" field is "false", WCF translates that to the default value of the value-type field; for int this is 0. So we had to set the "Specified" field for the int value to "true", and everything was fine again. That was what we forgot after changing the framework version to 2.0...
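    A minimal C# sketch of the pattern (the type and member names are illustrative assumptions, not the actual generated proxy):

    // Shape of a proxy type generated by "Add Web Reference" (illustrative)
    public partial class PersonData
    {
        public string Name;        // reference type: null simply means "not sent"
        public int Age;            // value type: paired with a Specified flag
        public bool AgeSpecified;  // defaults to false, so Age is treated as unset
    }

    // The client must set the flag, or the service sees the type's default (0):
    PersonData data = new PersonData();
    data.Name = "Smith";
    data.Age = 38;
    data.AgeSpecified = true;  // without this line, the service receives Age = 0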

  • How to create a new text file in DOS with the end result allowing me to rename the file?

    - by Rolo
    I would like an exact duplicate of the following procedure on a Windows computer using mouse clicks: Right Click in a directory and choose New and then Text Document. When one does this, the text file is created with a default name of New Text Document AND it is also highlighted so that I can type in my own file name. I would like to do this in DOS. I don't care what file name is originally created. What I want is for the name of the file to automatically be highlighted / able to be renamed, so that I can rename it. How can DOS execute / simulate a rename command / an F2 being pressed on the keyboard to a file that it has just created?
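    Batch can't truly simulate the F2 inline-rename highlight, but a hedged approximation of the same flow (create the file, then immediately prompt for the final name) looks like this:

    @echo off
    type nul > "New Text Document.txt"
    set /p NEWNAME=Enter file name: 
    ren "New Text Document.txt" "%NEWNAME%.txt"

    The set /p and ren commands are standard cmd.exe built-ins; the prompt stands in for the highlighted-rename step.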

  • Ubuntu nasty error: The panel encountered a problem while loading xxxApplet

    - by Phuong Nguyen
    I have installed Ubuntu and GNOME (say, with the minimum possible number of packages). I can log in and do everything I want. However, there is a nasty thing: whenever I log in, I see this message:

    Error
    The panel encountered a problem while loading "OAFIID:GNOME_FastUserSwitchApplet"
    Do you want to delete the applet from your configuration?
    [Don't Delete] [Delete]

    If I press [Delete], the error is not shown anymore. However, for every newly created account the message is shown again (users are created using sudo adduser user_name). Since I clone this OS into several virtual instances and create new accounts on these instances, I wonder if there is a way to configure my Ubuntu so that new users don't have to see this annoying message? Thanks

  • Windows Azure virtual machine credentials

    - by Georges
    I have created a Windows Server virtual machine, and this also automatically created a disk (VHD). Then I removed the VM and created a new VM from the same VHD; in this process I wasn't asked to supply any credentials. I am no longer able to log in (with RDP) to the new VM: the credentials that I supplied when creating the first machine don't work anymore. Any ideas why this happens? What credentials should I use? Thanks a lot in advance.

  • How can I find the original un-changed configuration file to compare with the *.rpmnew file?

    - by User
    While upgrading from CentOS 5.7 to 5.8 I received the following warnings:

    warning: /etc/sysconfig/iptables-config created as /etc/sysconfig/iptables-config.rpmnew
    warning: /etc/ssh/sshd_config created as /etc/ssh/sshd_config.rpmnew
    warning: /etc/odbcinst.ini created as /etc/odbcinst.ini.rpmnew

    (To understand the reason for such files, and what one can do with them, read: Why do I have .rpmnew file after an update?)

    I want to know what exactly has been changed in the default config file, by comparing the old default file (the original, unchanged configuration file) with the new default file (*.rpmnew). Then I can apply the changes to my modified file (aka diff merge). The problem is I don't know where I can find the original, unchanged configuration file...
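    A hedged sketch of both halves of that workflow (the package name is an example; rpm2cpio and cpio are standard tools, and yumdownloader ships in the yum-utils package):

    # what changed between your live file and the packaged default
    diff -u /etc/ssh/sshd_config /etc/ssh/sshd_config.rpmnew

    # recover the pristine default straight from the package
    mkdir /tmp/pristine && cd /tmp/pristine
    yumdownloader openssh-server
    rpm2cpio openssh-server-*.rpm | cpio -idmv ./etc/ssh/sshd_config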

  • How to receive mail in Qmail?

    - by Ivan
    I have a server that uses Qmail. It was installed by default and is supposed to work. I've created a new domain and a new user (vadddomain + vadduser) without problems, but when I send an email from Gmail to [email protected] (the address I've created), it just disappears. But if I connect to the SMTP server directly (telnet domain.com 25) and post an email, it arrives in the user's queue. What's happening?! Note: If I try to access my user through telnet domain.com 110, it says my password is not correct, and it's the same one I used when I created the user with vadduser.

  • Is my gedit-latex-plugin working properly?

    - by arroy_0209
    I have installed gedit-latex-plugin (0.2 rc3) to be used with gedit (2.30.3) on Ubuntu 10.04. If I use the command gedit file.tex& in a terminal, the file is opened and everything seems to work fine, but lots of debug messages appear in the terminal, some of which are:

    2012-03-31 22:14:27,263 DEBUG resources - Initializing resource locating
    2012-03-31 22:14:27,361 DEBUG Preferences - not found
    2012-03-31 22:14:27,373 DEBUG JobManager - Created JobManager instance 147209196
    2012-03-31 22:14:27,379 DEBUG GeditLaTeXPlugin - activate
    2012-03-31 22:14:27,379 DEBUG WindowContext - init
    2012-03-31 22:14:27,444 DEBUG GeditWindowDecorator - _init_tab_decorators: initialized 0 decorators
    2012-03-31 22:14:27,511 DEBUG GeditWindowDecorator - active_tab_changed
    2012-03-31 22:14:27,511 DEBUG GeditWindowDecorator - ---------- ADJUST: None
    2012-03-31 22:14:27,513 DEBUG GeditWindowDecorator - No window-scope views for this extension
    2012-03-31 22:14:27,513 DEBUG GeditWindowDecorator - _set_selected_bottom_view: 0
    2012-03-31 22:14:27,514 DEBUG GeditWindowDecorator - _set_selected_side_view: 0
    2012-03-31 22:14:27,539 DEBUG GeditWindowDecorator - tab_added
    2012-03-31 22:14:27,952 DEBUG GeditTabDecorator - loaded
    2012-03-31 22:14:27,964 DEBUG GeditTabDecorator - _adjust_editor: URI has changed
    2012-03-31 22:14:27,965 DEBUG LaTeXCompletionHandler - init
    2012-03-31 22:14:27,966 DEBUG LanguageModelFactory - Pickled object found: /home/abcd/.gnome2/gedit/plugins/GeditLaTeXPlugin/latex.pkl
    2012-03-31 22:14:28,075 DEBUG CompletionDistributor - init
    2012-03-31 22:14:28,078 DEBUG WindowContext - Created view LaTeXOutlineView
    2012-03-31 22:14:28,078 DEBUG WindowContext - Created view IssueView
    2012-03-31 22:14:28,079 DEBUG LaTeXEditor - init(file:///home/abcd/dir1/file1.tex)
    2012-03-31 22:14:28,079 DEBUG LaTeXEditor - Parsing document...
    2012-03-31 22:14:28,080 DEBUG IssueView - init
    2012-03-31 22:14:28,082 DEBUG IssueView - init finished
    2012-03-31 22:14:28,092 INFO LaTeXEditor - LaTeXParser.parse: 0.010000
    2012-03-31 22:14:28,092 DEBUG LaTeXEditor - Parsed 1599 bytes of content
    2012-03-31 22:14:28,093 DEBUG LaTeXOutlineView - set_outline
    2012-03-31 22:14:28,093 DEBUG LaTeXOutlineView - init
    2012-03-31 22:14:28,097 DEBUG LaTeXValidator - validate
    2012-03-31 22:14:28,098 DEBUG LanguageModel - set_newcommands:
    2012-03-31 22:14:28,102 DEBUG LaTeXEditor - Parsing finished
    2012-03-31 22:14:28,105 DEBUG GeditWindowDecorator - ---------- ADJUST: .tex
    2012-03-31 22:14:28,119 DEBUG GeditWindowDecorator - _set_selected_bottom_view: 0
    2012-03-31 22:14:28,120 DEBUG GeditWindowDecorator - _set_selected_side_view: 0

    I am not sure if the gedit-latex-plugin is working properly or is facing some problem. Why are there so many debug messages? Can anybody please suggest what I should do?

  • How to create a tree-type CVL in Content Server (UCM)

    - by rajeev.y.ranjan-oracle
    Steps to create a tree choice list:
    1) Create a table "tblStates" with columns "stateID" and "stateName". Click on "Add Recommended".
    2) Create another table "tblCities" with columns "cityID", "stateID" and "cityName".
    3) Then create two views on these tables, namely "tblstateview" and "tblcityview".
    4) In "tblstateview", add two rows with values JH and MH in the stateID column, and Jharkhand and Maharashtra in the stateName column.
    5) Similarly, in "tblcityview", add two rows with values: BO and RA in the cityID column; JH and MH in the stateID column; Bokaro and Mumbai in the cityName column.
    6) Create a relationship with parent info "tblStates"/stateID and child info "tblCities"/stateID.
    7) Create a metadata field named "Newtest": enable the option list, go to Configure, select "Use tree", and click on Edit Definition.
    8) Tree definition at level 1: a) choose "tblstateview"; b) choose the relation "newstatecity". At level 2: choose "tblcityview".
    Log out of the Native UI and Content UI and test the tree created by the name "Newtest".

  • mySQL sphinx Index location on Ubuntu

    - by Marcus Morris
    I am trying to test out a Sphinx cookbook, but I need a database to do so. I have created the database locally, but I need to know the default path for the index of the table I created. This is the error I currently get when trying to run Sphinx, because the path to the index is wrong:

    WARNING: index 'phoneindex': preload: failed to open /var/lib/mysql/mysql.sph: No such file or directory; NOT SERVING
    FATAL: no valid indexes to serve

    Where can I find mysql.sph? Or how/when is that file created? Thanks!
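    For what it's worth, the index files don't exist until the indexer has run; their location comes from the path setting of the index block in sphinx.conf. A minimal sketch (the source and directory names are assumptions):

    index phoneindex
    {
        source = phonesource
        path   = /var/lib/sphinx/phoneindex   # indexer writes the phoneindex.sp* files here
    }

    After that, running indexer --all (or indexer phoneindex) creates the files that searchd then serves.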

  • Setting per-directory umask using ACLs

    - by Yarin
    We want to mimic the behavior of a system-wide 002 umask on a certain directory foo, in order to ensure the following result: All sub-directories created underneath foo will have 775 permissions All files created underneath foo and subdirectories will have 664 permissions 1 and 2 will happen for files/dirs created by all users, including root, and all daemons. Assuming that ACL is enabled on our partition, this is the command we've come up with: setfacl -R -d -m mask:002 foo This seems to be working- I'm basically just looking for confirmation. Is this the most effective way to apply a per-directory umask with an ACL?
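    One caveat worth flagging: in POSIX ACLs the mask entry is an upper bound on group and named-user permissions rather than a subtractive umask, so mask:002 may not express the 775/664 intent. A hedged alternative plus a quick verification, assuming the goals stated above:

    setfacl -R -d -m u::rwx,g::rwx,o::rx foo    # default entries spelling out the 775/664 intent
    getfacl foo                                 # inspect the resulting default ACL
    touch foo/f && ls -l foo/f                  # expect -rw-rw-r--
    mkdir foo/d && ls -ld foo/d                 # expect drwxrwxr-x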

  • Implementing my Entity System. Questions about some problems I have found.

    - by Notbad
    Hi! Well, during this week I have been deciding on the implementation of my entity system. It is a big topic, so it has been difficult to pick one option out of the whole. This has been my decision:
    1) I don't have an entity class; it is just an id.
    2) I have systems that contain a list of components (the list is homogeneous; I mean, RenderSystem will just have RenderComponents).
    3) Components will be just data.
    4) There will be some kind of "entity prototypes" in a manager or something, from which we will create entity instances. Ideally they will define the type of components an entity has and its initialization data.
    5) Prototype code to create an entity (this is from the top of my head):

    int id = World::getInstance()->createEntity("entity template");

    6) This will notify all systems that a new entity has been created, and if the entity needs a component that the system handles, it will add it to the entity.

    OK, those are the ideas. Let's see if someone can help with the problems:
    1) The main problem is the templates that are sent to the systems in the creation process to populate the entity with the needed components. What would you use, an OR(ed) int? A list of strings? (A sketch of the OR(ed)-int option follows below.)
    2) How to do initialization for components when the entity has been created? How to store this in the template? I have thought about having a virtual function in the template that, after the entity is created and populated, gets the components and sets the initialization values.
    3) Don't you think this is a lot of work for just entity creation?

    Sorry for the long post. I have tried to expose my ideas and findings so that others could have a starting point, besides exposing my problems. Thanks in advance, Notbad.
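    As a follow-up to question 1, a minimal C++ sketch of the OR(ed)-int option (all names are hypothetical):

    #include <cstdint>

    // Each component type gets one bit; a template's requirement is the OR of bits.
    enum ComponentBit : std::uint32_t {
        RenderBit    = 1u << 0,
        PhysicsBit   = 1u << 1,
        TransformBit = 1u << 2,
    };

    struct EntityTemplate {
        std::uint32_t components;   // e.g. RenderBit | TransformBit
    };

    // A system tests interest in an entity with a single bitwise AND:
    bool systemWants(std::uint32_t systemMask, const EntityTemplate& t) {
        return (t.components & systemMask) == systemMask;
    }

    The upside over a list of strings is cheap comparison and no parsing; the limit is one bit per component type (switch to std::bitset or a wider integer past 32 types).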

  • How to offset particles from point of origin

    - by Sun
    Hi, I'm having trouble offsetting particles from a point of origin. I want my particles to spread out after a certain radius from the point of origin. For example, this is what I have right now: all particles emitted from a point of origin. What I want is this: particles are offset from the point of origin by some amount, i.e. they start after the circle. What is the best way to achieve this? At the moment, I have the point of origin, the position of each particle and its rotation angle. Sorry for the poor illustrations.
    Edit: I was mistaken. When a particle is created, I have only the point of origin. When the particle is created, I am able to calculate the rotation of the particle in the update method, after it has moved to a new location, using the atan2() method. This is how I create/manage particles: create a new particle at the enemy ship's death location; for every new particle added to the list, call Update and Draw to update its position, calculate the new angle, and draw it.
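    A hedged sketch of the usual approach: push each particle's spawn position out along its own direction by the desired radius, so position = origin + radius * (cos θ, sin θ). In XNA-style C# (Vector2 and the helper name are assumptions):

    // Spawn a particle `radius` units away from the origin along `angle` (radians).
    Vector2 SpawnPosition(Vector2 origin, float angle, float radius)
    {
        Vector2 dir = new Vector2((float)Math.Cos(angle), (float)Math.Sin(angle));
        return origin + dir * radius;
    }

    If the angle is only known after the first Update (as in the edit above), the same offset can be applied once on the first update instead of at creation time.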

  • Is it possible at all to install Ubuntu 10.04 on Windows 7 (64-bit) using its Virtual PC?

    - by Jian Lin
    It was said that Win 7's Virtual PC is not suitable for installing Ubuntu 10.04... Is there any method at all by which it will work? The following is the scenario I ran into: the first time the Ubuntu 10.04 installation CD-R boots up, it asks for the language and "Install Ubuntu", and then the screen shows vertical green bars and the VPC just closes. The 2nd or 3rd time it boots up, there is no asking for language or "Install Ubuntu"; the VPC just shuts down, sometimes with vertical green bars. I even created another new hard drive and the same thing happened. I created VPC 02, and the same thing happened. I created VPC 03 with a fixed hard drive size of 60GB, and the same thing happened.

  • To Run Linux (Ubuntu) on Windows 7, is using Virtual PC one of the best ways?

    - by Jian Lin
    I need to try Linux (Ubuntu) and feel hesitant to install Ubuntu on top of a Win 7 machine to dual boot (I might need to use Win 7 and Ubuntu at the same time). Is creating a Virtual PC on Win 7 and then installing the latest Ubuntu on that Virtual PC one of the better options? I think I can create a Virtual PC with an empty virtual hard disk (VHD), say 30GB, and then put in the Ubuntu DVD-R or CD-R to install Ubuntu onto that empty hard disk. Update: for some reason, the first time the Ubuntu 10.04 installation CD-R boots up, it asks for the language and "Install Ubuntu", and then the screen shows vertical green bars and the VPC just closes. The 2nd or 3rd time it boots up, there is no asking for language or "Install Ubuntu"; the VPC just shuts down, sometimes with vertical green bars. I even created another new hard drive and the same thing happened. I created VPC 02, and the same thing happened. I created VPC 03 with a fixed hard drive size of 60GB, and the same thing happened.

  • Get Zipped Logs from a Remote Server

    - by Jonathan
    I am tasked with finding a way to download zipped logs from a remote server. There are quite a lot of these logs, and they are constantly being created. I do have limited SSH access to the remote server and can scp or rsync the files. However, due to the sheer size of these log files, I do not want to rsync all of them: the logs could reach terabytes, and for rsync to compare them all may take some time. I only want to get files that were created or last updated within the last hour. I am also worried that I will rsync logs that are still in the process of being created, so I was thinking of only transferring files that were last modified more than 3-5 minutes ago. Would anyone be so kind as to help me with such a process? Thank you in advance.
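    A hedged sketch of one way to express that window, combining find's -mmin tests with rsync's --files-from (the paths are assumptions):

    # files modified less than 60 minutes ago but more than 5 minutes ago
    ssh user@remote 'cd /var/log/app && find . -type f -name "*.gz" -mmin -60 -mmin +5' \
      | rsync -avz --files-from=- user@remote:/var/log/app/ /local/logs/

    Both -mmin tests are ANDed by find, which gives exactly the "new, but not still being written" window described above.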

  • Brownfield Support for OVMSS

    - by Owen Allen
    The area of virtualization saw quite a few enhancements with version 12.2. There's one particular virtualization enhancement that can make a big difference for a lot of people: support for brownfield Oracle VM Servers for SPARC. Brownfield refers to Oracle VM Servers for SPARC that were created outside of Ops Center. In older versions of Ops Center, you couldn't really do anything with them - Ops Center could only manage OVM Servers that it created. If you had OVM Servers outside of Ops Center, you'd have to recreate them if you wanted to manage them. In 12.2, though, this problem is cleared up. You can discover and manage OVM Servers for SPARC that you created outside of Ops Center, so long as the LDom Manager is running. When you discover the control domain, all of the logical domains are automatically discovered and managed and appear under the control domain in the Asset tree. If you want to use server pools and migrate the logical domains to a different Oracle VM Server for SPARC system, you'll need to move the metadata to a shared library and use shared Fibre Channel or iSCSI LUNs for the guest domain storage and add the server to a server pool. See the Oracle VM Server for SPARC chapter for more information.
