Search Results

Search found 35692 results on 1428 pages for 'build process'.

Page 234/1428 | < Previous Page | 230 231 232 233 234 235 236 237 238 239 240 241  | Next Page >

  • Do minidump files contain the timestamp of the crash?

    - by Roger Lipscombe
    The MiscInfoStream in a minidump file contains the process create time. I'd like to find out how long the process had been running before the crash. Does a minidump file contain the exception timestamp anywhere? WinDbg displays the following for this dump file, which implies that it's in there somewhere...

        Debug session time: Tue Dec 29 15:49:20.000 2009 (GMT+0)
        System Uptime: not available
        Process Uptime: 0 days 0:33:03.000

    Note that today is Mar 15, so this is almost certainly the timestamp of the crash. I'd like a programmatic way to retrieve that value and the "Process Uptime" value. I found the MINIDUMP_MISC_INFO_3 structure, which contains some timezone information, but it doesn't seem to contain the exception time.
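
    A sketch of the direction I'm exploring, unverified: the dump header itself carries a capture timestamp, and I believe MINIDUMP_HEADER's TimeDateStamp is exactly WinDbg's "Debug session time", which would make "Process Uptime" the difference from the MiscInfoStream's ProcessCreateTime. Minimal C++ (maps the whole dump in one view, error handling omitted, link with dbghelp.lib):

        #include <windows.h>
        #include <dbghelp.h>
        #include <stdio.h>

        int main(int argc, char** argv)
        {
            if (argc < 2) return 1;

            HANDLE file = CreateFileA(argv[1], GENERIC_READ, FILE_SHARE_READ,
                                      NULL, OPEN_EXISTING, 0, NULL);
            HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
            void* base = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);

            // Assumption: this is the dump capture time ("Debug session time").
            MINIDUMP_HEADER* header = (MINIDUMP_HEADER*)base;

            PMINIDUMP_DIRECTORY dir = NULL;
            PVOID stream = NULL;
            ULONG streamSize = 0;
            if (MiniDumpReadDumpStream(base, MiscInfoStream, &dir, &stream, &streamSize))
            {
                MINIDUMP_MISC_INFO* misc = (MINIDUMP_MISC_INFO*)stream;
                if (misc->Flags1 & MINIDUMP_MISC1_PROCESS_TIMES)
                {
                    // Both fields are seconds since the Unix epoch.
                    printf("Process uptime: %lu seconds\n",
                           (unsigned long)(header->TimeDateStamp - misc->ProcessCreateTime));
                }
            }

            UnmapViewOfFile(base);
            CloseHandle(mapping);
            CloseHandle(file);
            return 0;
        }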

    Read the article

  • Ruby and Forking

    - by Cory
    Quick question about Ruby forking - I ran across a bit of forking code in Resque earlier that was sexy as hell but tripped me up for a bit. I'm hoping someone can give me a little more detail about what's going on here. Specifically, it would appear that forking spawns a child (expected) and kicks it straight into the 'else' side of my condition (less expected). Is that expected behavior? A Ruby idiom? My IRB hack here:

        def fork
          return true if @cant_fork

          begin
            if Kernel.respond_to?(:fork)
              Kernel.fork
            else
              raise NotImplementedError
            end
          rescue NotImplementedError
            @cant_fork = true
            nil
          end
        end

        def do_something
          puts "Starting do_something"
          if foo = fork
            puts "we are forking from #{Process.pid}"
            Process.wait
          else
            puts "no need to fork, let's get to work: #{Process.pid} under #{Process.ppid}"
            puts "doing it"
          end
        end

        do_something
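
    For anyone else who trips on this: Kernel#fork returns twice, the child's pid in the parent and nil in the child, so the child landing in the else branch is the documented behavior rather than a Resque trick. A minimal sketch:

        if child_pid = fork
          # Parent: fork returned the child's pid.
          Process.wait(child_pid)
        else
          # Child: fork returned nil, so we fall through to here.
          puts "child #{Process.pid}, forked from #{Process.ppid}"
        end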

    Read the article

  • Error while compiling the Xcode project (IPhone)

    - by Sridhar
    Hello, I added the ffmpeg iPhone port to my library and I am able to use a few of its functions, like avcodec_init(), without any errors. But when I include a call to avcodec_register_all, Xcode gives an error after compilation. The error message is:

        ld: ldr 12-bit displacement out of range (4276 max +/-4096) in _CFRelease$stub in _CFRelease$stub from /Users/foxit/Documents/CameraTest/build/CameraTest.build/Debug-iphoneos/CameraTest.build/Objects-normal/armv6/CameraTest
        Command /Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc-4.2 failed with exit code 1

    Does anyone know what's wrong with this? Regards, Raghu

    Read the article

  • TFS 2010: Is MSBuild going to be dead because of Windows Workflow?

    - by afsharm
    MSBuild in TFS 2010 has been replaced by Windows Workflow 4.0 as the build orchestrator: when you create a Build Definition, you no longer have a TFSBuild.proj to edit; instead you must edit a workflow to customize your build. By the way, am I correct in saying that Microsoft is moving away from MSBuild in TFS 2010, and that learning MSBuild as a TFS 2010 Team Build administrator isn't worthwhile? And one more question: is Microsoft going to replace the Visual Studio project language itself, from MSBuild to something like Windows Workflow? Many thanks

    Read the article

  • How to parameterize a path in ANT?

    - by strelokstrelok
    I have the following defined in a file called build-dependencies.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <project name="build-dependencies">
            ...
            <path id="common-jars">
                <fileset file="artifacts/project-1/jar/some*.jar" />
                <fileset file="artifacts/project-2/jar/someother*.jar" />
            </path>
            ...
        </project>

    I include it at the top of my build.xml file. Now I need to make the artifacts folder a parameter, so it can be changed during execution of different targets. Having this...

        <?xml version="1.0" encoding="UTF-8"?>
        <project name="build-dependencies">
            ...
            <path id="common-jars">
                <fileset file="${artifacts}/project-1/jar/some*.jar" />
                <fileset file="${artifacts}/project-2/jar/someother*.jar" />
            </path>
            ...
        </project>

    ...and defining an "artifacts" property (and changing it) in the target does not work, because the property substitution seems to happen when the path is defined in build-dependencies.xml. How can I solve this? One way I was thinking of was to have a parameterized macro and call it before the path is actually used, but that seems inelegant. Something like this:

        <macrodef name="create-common-jars">
            <attribute name="artifacts"/>
            <sequential>
                <path id="common-jars">
                    <fileset file="@{artifacts}/project-1/jar/some*.jar" />
                    <fileset file="@{artifacts}/project-2/jar/someother*.jar" />
                </path>
            </sequential>
        </macrodef>

    Read the article

  • How to GetGuiResources for all system processes?

    - by Krzysztof
    Hello, I need to measure all GDI objects in use on a Windows XP system. I found the GetGuiResources(__in HANDLE hProcess, __in DWORD uiFlags) function (with the GR_GDIOBJECTS flag). I call it for the current process, whose handle I get from GetCurrentProcess(), defined in WinBase.h. I don't know how to call it for the other system processes, which I get via System::Diagnostics::Process::GetProcesses(), because that function returns an array of Process pointers while GetGuiResources takes a HANDLE. Does anybody know a solution for that? How can I transform a Process pointer into a HANDLE, or get HANDLEs for all running system processes? Thanks for the help in advance!
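
    The route I'd try is to skip the .NET wrapper and enumerate pids with plain Win32, opening each process by id; with System::Diagnostics::Process, p->Id should feed OpenProcess the same way. A sketch (error handling mostly omitted; processes we lack access rights to are skipped):

        #define _WIN32_WINNT 0x0501
        #include <windows.h>
        #include <psapi.h>
        #include <stdio.h>

        // Enumerate all pids, open each process, and query its GDI object count.
        // Link with psapi.lib (EnumProcesses) and user32.lib (GetGuiResources).
        int main()
        {
            DWORD pids[1024], bytes = 0;
            if (!EnumProcesses(pids, sizeof(pids), &bytes))
                return 1;

            for (DWORD i = 0; i < bytes / sizeof(DWORD); ++i)
            {
                HANDLE process = OpenProcess(PROCESS_QUERY_INFORMATION, FALSE, pids[i]);
                if (!process)
                    continue;  // e.g. no access to a system process

                printf("pid %5lu: %lu GDI objects\n",
                       pids[i], GetGuiResources(process, GR_GDIOBJECTS));
                CloseHandle(process);
            }
            return 0;
        }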

    Read the article

  • wamp server not working? or bad php code

    - by lclaud
    I have this PHP code:

        <?php
        $username = "root";
        $password = "******"; // censored out
        $database = "bazadedate";

        // I get "unknown constant localhost" if used instead of the loopback IP
        mysql_connect("127.0.0.1", $username, $password);
        @mysql_select_db($database) or die("Unable to select database");

        $query = "SELECT * FROM backup";
        $result = mysql_query($query);
        $num = mysql_numrows($result);

        $i = 0;
        $raspuns = "";
        while ($i < $num) {
            $data = mysql_result($result, $i, "data");
            $suma = mysql_result($result, $i, "suma");
            $cv   = mysql_result($result, $i, "cv");
            $det  = mysql_result($result, $i, "detaliu");
            $raspuns = $raspuns . "#" . $data . "#" . $suma . "#" . $cv . "#" . $det . "@";
            $i++;
        }
        echo "<b> $raspuns </b>";
        mysql_close();
        ?>

    It should return a single string containing all the data from the table, but the browser says "connection reset" when loading the page. The log is:

        [Tue Jun 15 16:20:31 2010] [notice] Parent: child process exited with status 255 -- Restarting.
        [Tue Jun 15 16:20:31 2010] [notice] Apache/2.2.11 (Win32) PHP/5.3.0 configured -- resuming normal operations
        [Tue Jun 15 16:20:31 2010] [notice] Server built: Dec 10 2008 00:10:06
        [Tue Jun 15 16:20:31 2010] [notice] Parent: Created child process 2336
        [Tue Jun 15 16:20:31 2010] [notice] Child 2336: Child process is running
        [Tue Jun 15 16:20:31 2010] [notice] Child 2336: Acquired the start mutex.
        [Tue Jun 15 16:20:31 2010] [notice] Child 2336: Starting 64 worker threads.
        [Tue Jun 15 16:20:31 2010] [notice] Child 2336: Starting thread to listen on port 80.
        [Tue Jun 15 16:20:35 2010] [notice] Parent: child process exited with status 255 -- Restarting.
        [Tue Jun 15 16:20:35 2010] [notice] Apache/2.2.11 (Win32) PHP/5.3.0 configured -- resuming normal operations
        [Tue Jun 15 16:20:35 2010] [notice] Server built: Dec 10 2008 00:10:06
        [Tue Jun 15 16:20:35 2010] [notice] Parent: Created child process 1928
        [Tue Jun 15 16:20:35 2010] [notice] Child 1928: Child process is running
        [Tue Jun 15 16:20:35 2010] [notice] Child 1928: Acquired the start mutex.
        [Tue Jun 15 16:20:35 2010] [notice] Child 1928: Starting 64 worker threads.
        [Tue Jun 15 16:20:35 2010] [notice] Child 1928: Starting thread to listen on port 80.

    Any idea why it outputs nothing?
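
    One check worth making before anything else (a debugging sketch, not a fix): the @ prefix hides any mysql_select_db failure, so a variant that fails loudly with mysql_error() would at least surface a MySQL-side problem:

        mysql_connect("127.0.0.1", $username, $password) or die("Connect failed: " . mysql_error());
        mysql_select_db($database) or die("Select DB failed: " . mysql_error());
        $result = mysql_query($query) or die("Query failed: " . mysql_error());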

    Read the article

  • Can I specify which VCS module or adaptor is used by a Luntbuild builder or schedule?

    - by pkaeding
    I have Luntbuild set up and working great for my project, with several different builder schedules running on trunk. Now I've just created a branch, and I want to add at least one schedule (and a corresponding builder, if necessary) to build that branch; I want the branch built separately from trunk. Is this possible? Some others in my company have just created separate projects in Luntbuild for their branches, but I feel like that might not be the ideal solution. So, to reiterate, I want to build this branch continuously. If I just add a second module to the current setup, it tries to build both trunk and the branch together. I want them built separately. Thanks for any insight! EDIT: The bounty is about to end, and still no answers. I have worked around the original problem by creating a new project in LB, but I feel like there must be a better way.

    Read the article

  • Ant ignores attempt to override the LANG environment variable

    - by Steen
    We need to test a Java build with LANG set to different values. I can verify manually (i.e. via export LANG=en_DK.UTF-8 and export LANG=en_DK) that the unit tests run by the Ant build script behave differently, but I need to set the environment variable from Ant. I have tried setting it using these methods (with the shell $LANG set to en_DK.UTF-8):

    • using -D on the command line: ant -DLANG=en_DK
    • using a build.properties file with the line LANG=en_DK in it
    • using the following statements in the build.xml file:

        <property environment="ANTENV"/>
        <property name="ANTENV.LANG" value="en_DK"/>

    With any of the three, when run with -debug, ant reports:

        Override ignored for property "LANG"

    What can I do to set the LANG environment variable from within ant?
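
    The closest thing to a workaround I can see (an untested sketch; the classpath reference and fileset are placeholders for the real setup): Ant never overrides its own environment, but tasks that fork a new process accept nested <env> elements, so a forked test VM can be handed its own LANG:

        <junit fork="true">
            <env key="LANG" value="en_DK"/>
            <classpath refid="test.classpath"/>
            <batchtest>
                <fileset dir="test" includes="**/*Test.java"/>
            </batchtest>
        </junit>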

    Read the article

  • Prevent OCUnit tests from running when compilation fails

    - by mhenry1384
    I'm using Xcode 3.2.2 and the built-in OCUnit test support. One problem I'm running into is that every time I do a build, my unit tests are run, even if the build failed. Let's say I make a syntax error in one of my tests: the test fails to compile, and the last successfully compiled version of the unit tests is run. The same thing happens if one of the dependent targets fails to build; the tests are still run. Which is obviously not what I want. How can I prevent the tests from running if the build fails? If this is not possible, then I'd rather have the tests never run automatically; is that possible? Sorry if this is obvious, I'm an Xcode noob. Should I be using a better unit testing framework?

    Read the article

  • Can't update scala on Gentoo

    - by xhochy
    As I wanted to test Scala 2.9.2 on my Gentoo system, I tried updating the package but ended up with this error, and I can't figure out where the problem may be:

        Calculating dependencies ...... done!
        >>> Verifying ebuild manifests
        >>> Emerging (1 of 1) dev-lang/scala-2.9.2
        >>> Failed to emerge dev-lang/scala-2.9.2, Log file:
        >>>  '/var/tmp/portage/dev-lang/scala-2.9.2/temp/build.log'
        >>> Jobs: 0 of 1 complete, 1 failed                 Load avg: 0.23, 0.16, 0.20
         * Package:    dev-lang/scala-2.9.2
         * Repository: gentoo
         * Maintainer: [email protected]
         * USE:        amd64 elibc_glibc kernel_linux multilib userland_GNU
         * FEATURES:   sandbox
        !!! ERROR: Couldn't find suitable VM. Possible invalid dependency string.
        Due to jdk-with-com-sun requiring a target of 1.7 but the virtual machines
        constrained by virtual/jdk-1.6 and/or this package requiring virtual(s) jdk-with-com-sun
         * Unable to determine VM for building from dependencies:
         NV_DEPEND: virtual/jdk:1.6
                    java-virtuals/jdk-with-com-sun
                    !binary? ( dev-java/ant-contrib:0 )
                    app-arch/xz-utils
                    >=dev-java/java-config-2.1.9-r1
                    source? ( app-arch/zip )
                    >=dev-java/ant-core-1.7.0
                    dev-java/ant-nodeps
                    >=dev-java/javatoolkit-0.3.0-r2
                    >=dev-lang/python-2.4
         * ERROR: dev-lang/scala-2.9.2 failed (setup phase):
         *   Failed to determine VM for building.
         *
         * Call stack:
         *   ebuild.sh, line 93: Called pkg_setup
         *   scala-2.9.2.ebuild, line 43: Called java-pkg-2_pkg_setup
         *   java-pkg-2.eclass, line 53: Called java-pkg_init
         *   java-utils-2.eclass, line 2187: Called java-pkg_switch-vm
         *   java-utils-2.eclass, line 2674: Called die
         * The specific snippet of code:
         *   die "Failed to determine VM for building."
         *
         * If you need support, post the output of `emerge --info '=dev-lang/scala-2.9.2'`,
         * the complete build log and the output of `emerge -pqv '=dev-lang/scala-2.9.2'`.
        !!! When you file a bug report, please include the following information:
        GENTOO_VM=  CLASSPATH="" JAVA_HOME=""
        JAVACFLAGS="" COMPILER=""
        and of course, the output of emerge --info
         * The complete build log is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/build.log'.
         * The ebuild environment file is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/die.env'.
         * Working directory: '/var/tmp/portage/dev-lang/scala-2.9.2'
         * S: '/var/tmp/portage/dev-lang/scala-2.9.2/work/scala-2.9.2-sources'
         * Messages for package dev-lang/scala-2.9.2:
         * Unable to determine VM for building from dependencies:
         * ERROR: dev-lang/scala-2.9.2 failed (setup phase):
         *   Failed to determine VM for building.
         *
         * Call stack:
         *   ebuild.sh, line 93: Called pkg_setup
         *   scala-2.9.2.ebuild, line 43: Called java-pkg-2_pkg_setup
         *   java-pkg-2.eclass, line 53: Called java-pkg_init
         *   java-utils-2.eclass, line 2187: Called java-pkg_switch-vm
         *   java-utils-2.eclass, line 2674: Called die
         * The specific snippet of code:
         *   die "Failed to determine VM for building."
         *
         * If you need support, post the output of `emerge --info '=dev-lang/scala-2.9.2'`,
         * the complete build log and the output of `emerge -pqv '=dev-lang/scala-2.9.2'`.
         * The complete build log is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/build.log'.
         * The ebuild environment file is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/die.env'.
         * Working directory: '/var/tmp/portage/dev-lang/scala-2.9.2'
         * S: '/var/tmp/portage/dev-lang/scala-2.9.2/work/scala-2.9.2-sources'

    The following eix output may help:

        % eix java-virtuals/jdk-with-com-sun
        [I] java-virtuals/jdk-with-com-sun
             Available versions:  20111111 {{ELIBC="FreeBSD"}}
             Installed versions:  20111111(16:08:51 18/04/12)(ELIBC="-FreeBSD")
             Homepage:            http://www.gentoo.org
             Description:         Virtual ebuilds that require internal com.sun classes from a JDK

    Both virtual JDKs, 1.6 and 1.7, are installed:

        % eix virtual/jdk
        [I] virtual/jdk
             Available versions:  (1.4) ~1.4.2-r1[1] (1.5) 1.5.0 ~1.5.0-r3[1] (1.6) 1.6.0 1.6.0-r1 (1.7) (~)1.7.0
             Installed versions:  1.6.0-r1(1.6)(23:22:48 10/11/12) 1.7.0(1.7)(23:21:09 10/11/12)
             Description:         Virtual for JDK
        [1] "java-overlay" /var/lib/layman/java-overlay

    Read the article

  • How to get a service to listen on port 80 on Windows Server 2003

    - by Miky D
    I've coded a custom Windows service that listens on TCP port 80, but when I try to install it on a Windows Server 2003 machine it fails to start because some other service is already listening on that port. So far I've disabled the IIS Admin service and the HTTP SSL service, but no luck. When I run

        netstat -a -n -o | findstr 0.0:80

    it gives me process id 4 as the culprit, but in the list of running processes that id belongs to the "System" process. What can I do to get the System process to stop listening on port 80 so that my service can listen instead? P.S. I should point out that the service runs fine if I install it on my Windows XP or Windows 7 development boxes. Also, I should specify that this has nothing to do with it being a service: I've tried starting a regular application that attempts to bind to port 80 on the Windows Server 2003 box, with the same outcome. It fails because another application is already bound to that port.
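
    What I plan to try next, since PID 4 is the kernel: on Server 2003 the kernel-mode HTTP listener (http.sys) can hold port 80 on behalf of IIS, and I believe it is the HTTP service that controls it. A sketch from memory (treat the service name as an assumption; net stop will also warn about and stop dependent services):

        net stop http
        sc config http start= disabled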

    Read the article

  • 'Unable to read symbols' error

    - by cannyboy
    When I 'Build and Go' on the device, the console shows:

        warning: Unable to read symbols for ""/Users/Steve/Blue/build/Debug-iphoneos"/Blue.app/Blue" (file not found).
        warning: Unable to read symbols for ""/Users/Steve/Blue/build/Debug-iphoneos"/Blue.app/Blue" (file not found).

    Is this something I should worry about? If so, where should I look to find the root of the issue? The app works OK, but I'm just worried that this might be an AppStore approval issue.

    Read the article

  • error: Unable to find vcvarsall.bat

    - by okada
    Hello. I'm trying to install dulwich for bzr-git. I use a Python 2.6-based Bazaar, under MSYS. My steps are as follows:

        $ bzr branch lp:dulwich
        $ cd dulwich/
        $ python setup.py install
        running install
        running build
        running build_py
        creating build
        :
        creating build\lib.win32-2.6\dulwich\tests
        :
        running build_ext
        building 'dulwich._objects' extension
        error: Unable to find vcvarsall.bat

    If you know any hints, please tell me. Thanks. Okada.
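
    A standard workaround worth trying under MSYS: on Windows, distutils defaults to the Visual C++ toolchain (that is what the hunt for vcvarsall.bat is), but it can be pointed at MinGW instead, assuming MinGW's gcc is on the PATH. Either per invocation:

        $ python setup.py build --compiler=mingw32 install

    or persistently, in a setup.cfg next to setup.py:

        [build]
        compiler = mingw32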

    Read the article

  • Nant task sysinfo verbose - fails

    - by Broken Link
    NAnt.Core.dll: 0.86.2898.0. I cannot get the following tags working on my machine:

        <sysinfo verbose="true" />
        <sysinfo />

    They give me the following error. If I comment out those two lines, I'm able to build. I googled but found not much help. Any idea?

        NAnt 0.85 (Build 0.85.1932.0; rc3; 4/16/2005)
        Copyright (C) 2001-2005 Gerry Shaw
        http://nant.sourceforge.net

        Buildfile: file:///C:/xyz/source/Default.build
        Target framework: Microsoft .NET Framework 1.1
        Target(s) specified: all

        [tstamp] Thursday, July 30, 2009 3:10:24 PM.
        [sysinfo] Setting system information properties under sys.*

        BUILD FAILED

        Property name 'sys.env.Zen Managed Workstation' is invalid.

        Total time: 0 seconds.

    Read the article

  • Oracle Solaris Zones Physical to virtual (P2V)

    - by user939057
    Introduction

    This document describes the process of creating and installing a Solaris 10 image built from a physical system and migrating it into a virtualized operating system environment using the Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability. Using an example and various scenarios, this paper describes how to combine the Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability with other Oracle Solaris features: optimizing performance using Solaris 10 resource management, advanced storage management using Solaris ZFS, and improved operating system visibility with Solaris DTrace. The most common use for this tool is consolidating existing systems onto virtualization-enabled platforms. Beyond that, the Physical-to-Virtual (P2V) capability can be used for other tasks, for example backing up a physical system and moving it into a virtualized operating system environment hosted on a Disaster Recovery (DR) site, or building an Oracle Solaris 10 image repository with various configurations and different software packages in order to reduce provisioning time.

    Oracle Solaris Zones

    Oracle Solaris Zones is a virtualization and partitioning technology supported on Oracle Sun servers powered by SPARC and Intel processors. This technology provides an isolated and secure environment for running applications. A zone is a virtualized operating system environment created within a single instance of the Solaris 10 Operating System. Each virtual system is called a zone and runs a unique and distinct copy of the Solaris 10 operating system.

    Oracle Solaris Zones Physical-to-Virtual (P2V)

    This is a new feature in Solaris 10 9/10. It provides the ability to build a Solaris 10 image from a physical system and migrate it into a virtualized operating system environment. There are three main steps when using this tool:

    1. Image creation on the source system; this image includes the operating system and, optionally, the software that we want to include within the image.
    2. Preparation of the target system, by configuring a new zone that will host the new image.
    3. Image installation on the target system, using the image created in step 1.

    The host where the image is built is referred to as the source system, and the host where the image is installed is referred to as the target system.

    Benefits of Oracle Solaris Zones Physical-to-Virtual (P2V)

    Here are some benefits of this new feature:

    • Simple: an easy build process using Oracle Solaris 10 built-in commands.
    • Robust: based on Oracle Solaris Zones, a robust and well-known virtualization technology.
    • Flexible: supports migration from V-series servers onto T- or M-series systems. For the latest server information, refer to the Sun Servers web page.

    Prerequisites

    The target Oracle Solaris system should be running the latest patch cluster, and the minimum Solaris version on the target system is Solaris 10 9/10. Refer to the latest Administration Guide for Oracle Solaris for a complete procedure on how to download and install Oracle Solaris. NOTE: If the source system used to build the image runs an older release than the target system, the operating system will be upgraded to Solaris 10 9/10 during the process (update on attach).

    Creating the image used to distribute the software

    We will create an image on the source machine. We can create the image on the local file system and then transfer it to the target machine, or build it onto NFS shared storage and mount the NFS file system from the target machine. Optionally, before creating the image, complete the installation of any software that we want to include within the Solaris 10 image. An image is created by using the flarcreate command:

        Source # flarcreate -S -n s10-system -L cpio /var/tmp/solaris_10_up9.flar

    The command does the following:

    • -S specifies that we skip the disk space check and do not write archive size data to the archive (faster).
    • -n specifies the image name.
    • -L specifies the archive format (i.e. cpio).

    Optionally, we can add descriptions to the archive identification section, which can help to identify the archive later:

        Source # flarcreate -S -n s10-system -e "Oracle Solaris with Oracle DB10.2.0.4" -a "oracle" -L cpio /var/tmp/solaris_10_up9.flar

    You can see an example of the archive identification section in Appendix A. We can compress the flar image using the gzip command, or by adding the -c option to the flarcreate command:

        Source # gzip /var/tmp/solaris_10_up9.flar

    An md5 checksum can be created for the image in order to ensure no data tampering:

        Source # digest -v -a md5 /var/tmp/solaris_10_up9.flar

    Moving the image onto the target system

    If we created the image on the local file system, we need to transfer the flar archive from the source machine to the target machine:

        Source # scp /var/tmp/solaris_10_up9.flar target:/var/tmp

    Configuring the zone on the target system

    After copying the software to the target machine, we need to configure a new zone in order to host the new image. To install the new zone on the target machine, first configure the zone (for the full zone creation options, see http://docs.oracle.com/cd/E18752_01/html/817-1592/index.html).

    ZFS integration

    A flash archive can be created on a system that is running a UFS or a ZFS root file system. NOTE: If you create a Solaris Flash archive of a Solaris 10 system that has a ZFS root, then by default the flar will actually be a ZFS send stream, which can be used to recreate the root pool. Such an image cannot be used to install a zone. You must create the flar with an explicit cpio or pax archive when the system has a ZFS root: use the flarcreate command with the -L archiver option, specifying cpio or pax as the method to archive the files (for example, see step 1 in the previous section). Optionally, on the target system you can create the zone root folder on a ZFS file system in order to benefit from the ZFS features (clones, snapshots, etc.):

        Target # zpool create zones c2t2d0

    Create the zone root folder and configure the zone:

        Target # chmod 700 /zones
        Target # zonecfg -z solaris10-up9-zone
        solaris10-up9-zone: No such zone configured
        Use 'create' to begin configuring a new zone.
        zonecfg:solaris10-up9-zone> create
        zonecfg:solaris10-up9-zone> set zonepath=/zones
        zonecfg:solaris10-up9-zone> set autoboot=true
        zonecfg:solaris10-up9-zone> add net
        zonecfg:solaris10-up9-zone:net> set address=192.168.0.1
        zonecfg:solaris10-up9-zone:net> set physical=nxge0
        zonecfg:solaris10-up9-zone:net> end
        zonecfg:solaris10-up9-zone> verify
        zonecfg:solaris10-up9-zone> commit
        zonecfg:solaris10-up9-zone> exit

    Installing the zone on the target system using the image

    Install the configured zone solaris10-up9-zone by using the zoneadm command with the install -a option and the path to the archive. The following example shows how to install the image and sys-unconfig the zone:

        Target # zoneadm -z solaris10-up9-zone install -u -a /var/tmp/solaris_10_up9.flar
        Log File: /var/tmp/solaris10-up9-zone.install_log.AJaGve
        Installing: This may take several minutes...

    The following example shows how we can preserve the system identity:

        Target # zoneadm -z solaris10-up9-zone install -p -a /var/tmp/solaris_10_up9.flar

    Resource management

    Some applications are sensitive to the number of CPUs on the target zone. You need to match the number of CPUs on the zone using the zonecfg command:

        zonecfg:solaris10-up9-zone> add dedicated-cpu
        zonecfg:solaris10-up9-zone:dedicated-cpu> set ncpus=16

    DTrace integration

    Some applications might need to be analyzed with DTrace inside the target zone; you can add DTrace support to the zone using the zonecfg command:

        zonecfg:solaris10-up9-zone> set limitpriv="default,dtrace_proc,dtrace_user"

    Exclusive IP stack

    An Oracle Solaris Container running in Oracle Solaris 10 can have a shared IP stack with the global zone, or it can have an exclusive IP stack (which was released in Oracle Solaris 10 8/07). An exclusive IP stack provides a complete, tunable, manageable and independent networking stack to each zone. A zone with an exclusive IP stack can configure Scalable TCP (STCP), IP routing, IP multipathing, or IPsec. For an example of how to configure an Oracle Solaris zone with an exclusive IP stack, see the following:

        zonecfg:solaris10-up9-zone> set ip-type=exclusive
        zonecfg:solaris10-up9-zone> add net
        zonecfg:solaris10-up9-zone:net> set physical=nxge0

    When the installation completes, use zoneadm list -i -v to list the installed zones and verify the status; the new zone's status should be "installed":

        Target # zoneadm list -i -v
        ID NAME                STATUS     PATH    BRAND   IP
         0 global              running    /       native  shared
         - solaris10-up9-zone  installed  /zones  native  shared

    Now boot the zone:

        Target # zoneadm -z solaris10-up9-zone boot

    We need to log in to the zone in order to complete the zone setup, or insert a sysidcfg file before booting the zone for the first time (see the example sysidcfg file in Appendix B):

        Target # zlogin -C solaris10-up9-zone

    Troubleshooting

    If an installation fails, review the log file. On success, the log file is in /var/log inside the zone. On failure, the log file is in /var/tmp in the global zone. If a zone installation is interrupted or fails, the zone is left in the incomplete state. Use uninstall -F to reset the zone to the configured state:

        Target # zoneadm -z solaris10-up9-zone uninstall -F
        Target # zonecfg -z solaris10-up9-zone delete -F

    Conclusion

    The Oracle Solaris Zones P2V tool provides the flexibility to build pre-configured images with different software configurations for faster deployment and server consolidation. In this document, I demonstrated how to build and install images, and how to integrate the images with other Oracle Solaris features like ZFS and DTrace.

    Appendix A: archive identification section

    We can use head -n 20 /var/tmp/solaris_10_up9.flar in order to access the identification section that contains the detailed description:

        Target # head -n 20 /var/tmp/solaris_10_up9.flar
        FlAsH-aRcHiVe-2.0
        section_begin=identification
        archive_id=e4469ee97c3f30699d608b20a36011be
        files_archived_method=cpio
        creation_date=20100901160827
        creation_master=mdet5140-1
        content_name=s10-system
        creation_node=mdet5140-1
        creation_hardware_class=sun4v
        creation_platform=SUNW,T5140
        creation_processor=sparc
        creation_release=5.10
        creation_os_name=SunOS
        creation_os_version=Generic_142909-16
        files_compressed_method=none
        content_architectures=sun4v
        type=FULL
        section_end=identification
        section_begin=predeployment
        begin 755 predeployment.cpio.Z

    Appendix B: sysidcfg file section

        Target # cat sysidcfg
        system_locale=C
        timezone=US/Pacific
        terminal=xterms
        security_policy=NONE
        root_password=HsABA7Dt/0sXX
        timeserver=localhost
        name_service=NONE
        network_interface=primary {
            hostname=solaris10-up9-zone
            netmask=255.255.255.0
            protocol_ipv6=no
            default_route=192.168.0.1
        }
        name_service=NONE
        nfs4_domain=dynamic

    We need to copy this file into the zone before booting it:

        Target # cp sysidcfg /zones/solaris10-up9-zone/root/etc/

    Read the article

  • How to compile x264 for iPhone

    - by SUKIYAKI
    Hi, I'm trying to compile x264 for use in an iPhone application. I can compile it using http://github.com/gabriel/ffmpeg-iphone-build (file: build-x264-armv6/7), but only for decoding; I want to use encoding too. When I use build-x264-armv6/7, the console shows me "mp4 output: no". Does anyone know how to compile x264 so that it is able to encode H.264?

    Read the article

  • Deadlock in SQL Server 2005! Two real-time bulk upserts are fighting. WHY?

    - by skimania
    Here's the scenario: I've got a table called MarketDataCurrent (MDC) that has live, updating stock prices. I've got one process called 'LiveFeed' which reads prices streaming from the wire, queues up inserts, and uses a 'bulk upload to temp table, then insert/update to MDC table' procedure (BulkUpsert). I've got another process which then reads this data, computes other data, and saves the results back into the same table, using a similar BulkUpsert stored proc. Thirdly, there is a multitude of users running a C# GUI polling the MDC table and reading updates from it. Now, during the day when the data is changing rapidly, things run pretty smoothly, but after market hours we've recently started seeing an increasing number of deadlock exceptions coming out of the database; nowadays we see 10-20 a day. The important thing to note here is that these happen when the values are NOT changing. Here's all the relevant info:

    Table def:

        CREATE TABLE [dbo].[MarketDataCurrent](
            [MDID] [int] NOT NULL,
            [LastUpdate] [datetime] NOT NULL,
            [Value] [float] NOT NULL,
            [Source] [varchar](20) NULL,
            CONSTRAINT [PK_MarketDataCurrent] PRIMARY KEY CLUSTERED
            (
                [MDID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    I've got a SQL Profiler trace running, catching the deadlocks, and here's what all the graphs look like. Process 258 is calling the following 'BulkUpsert' stored proc repeatedly, while 73 is calling the next one:

        ALTER proc [dbo].[MarketDataCurrent_BulkUpload]
            @updateTime datetime,
            @source varchar(10)
        as
        begin transaction

        update c with (rowlock)
        set LastUpdate = getdate(), Value = t.Value, Source = @source
        from MarketDataCurrent c
        INNER JOIN #MDTUP t ON c.MDID = t.mdid
        where c.lastUpdate < @updateTime
          and c.mdid not in (select mdid from MarketData
                             where LiveFeedTicker is not null
                               and PriceSource like 'LiveFeed.%')
          and c.value <> t.value

        insert into MarketDataCurrent with (rowlock)
        select MDID, getdate(), Value, @source
        from #MDTUP
        where mdid not in (select mdid from MarketDataCurrent with (nolock))
          and mdid not in (select mdid from MarketData
                           where LiveFeedTicker is not null
                             and PriceSource like 'LiveFeed.%')

        commit

    And the other one:

        ALTER PROCEDURE [dbo].[MarketDataCurrent_LiveFeedUpload]
        AS
        begin transaction

        -- Update existing mdid
        UPDATE c WITH (ROWLOCK)
        SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
        FROM MarketDataCurrent c
        INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid;

        -- Insert new MDID
        INSERT INTO MarketDataCurrent with (ROWLOCK)
        SELECT * FROM #TEMPTABLE2
        WHERE MDID NOT IN (SELECT MDID FROM MarketDataCurrent with (NOLOCK))

        -- Clean up the temp table
        DELETE #TEMPTABLE2

        commit

    To clarify, those temp tables are being created by the C# code on the same connection and are populated using the C# SqlBulkCopy class. To me it looks like it's deadlocking on the PK of the table, so I tried removing that PK and switching to a unique constraint instead, but that increased the number of deadlocks 10-fold. I'm totally lost as to what to do about this situation and am open to just about any suggestion. HELP!!

    In response to the request for the XDL, here it is:

        <deadlock-list>
         <deadlock victim="processc19978">
          <process-list>
           <process id="processaf0b68" taskpriority="0" logused="0"
                    waitresource="KEY: 6:72057594090487808 (d900ed5a6cc6)" waittime="718"
                    ownerId="1102128174" transactionname="user_transaction"
                    lasttranstarted="2010-06-11T16:30:44.750" XDES="0xffffffff817f9a40"
                    lockMode="U" schedulerid="3" kpid="8228" status="suspended" spid="73"
                    sbid="0" ecid="0" priority="0" transcount="2"
                    lastbatchstarted="2010-06-11T16:30:44.750"
                    lastbatchcompleted="2010-06-11T16:30:44.750"
                    clientapp=".Net SqlClient Data Provider" hostname="RISKAPPS_VM"
                    hostpid="3836" loginname="RiskOpt" isolationlevel="read committed (2)"
                    xactid="1102128174" currentdb="6" lockTimeout="4294967295"
                    clientoption1="671088672" clientoption2="128056">
            <executionStack>
             <frame procname="MKP_RISKDB.dbo.MarketDataCurrent_BulkUpload" line="28"
                    stmtstart="1062" stmtend="1720"
                    sqlhandle="0x03000600a28e5e4ef4fd8e00849d00000100000000000000">
              UPDATE c WITH (ROWLOCK) SET LastUpdate = getdate(), Value = t.Value,
              Source = @source FROM MarketDataCurrent c INNER JOIN #MDTUP t
              ON c.MDID = t.mdid WHERE c.lastUpdate &lt; @updateTime
              and c.mdid not in (select mdid from MarketData
              where BloombergTicker is not null and PriceSource like &apos;Blbg.%&apos;)
              and c.value &lt;&gt; t.value
             </frame>
             <frame procname="adhoc" line="1" stmtstart="88"
                    sqlhandle="0x01000600c1653d0598706ca7000000000000000000000000">
              exec MarketDataCurrent_BulkUpload @clearBefore, @source
             </frame>
             <frame procname="unknown" line="1"
                    sqlhandle="0x000000000000000000000000000000000000000000000000">
              unknown
             </frame>
            </executionStack>
            <inputbuf>
             (@clearBefore datetime,@source nvarchar(10))exec MarketDataCurrent_BulkUpload @clearBefore, @source
            </inputbuf>
           </process>
           <process id="processc19978" taskpriority="0" logused="0"
                    waitresource="KEY: 6:72057594090487808 (74008e31572b)" waittime="718"
                    ownerId="1102128228" transactionname="user_transaction"
                    lasttranstarted="2010-06-11T16:30:44.780" XDES="0x380be9d8"
                    lockMode="U" schedulerid="5" kpid="8464" status="suspended" spid="248"
                    sbid="0" ecid="0" priority="0" transcount="2"
                    lastbatchstarted="2010-06-11T16:30:44.780"
                    lastbatchcompleted="2010-06-11T16:30:44.780"
                    clientapp=".Net SqlClient Data Provider" hostname="RISKBBG_VM"
                    hostpid="4480" loginname="RiskOpt" isolationlevel="read committed (2)"
                    xactid="1102128228" currentdb="6" lockTimeout="4294967295"
                    clientoption1="671088672" clientoption2="128056">
            <executionStack>
             <frame procname="MKP_RISKDB.dbo.MarketDataCurrentBlbgRtUpload" line="14"
                    stmtstart="840" stmtend="1220"
                    sqlhandle="0x03000600005f9d24c8878f00849d00000100000000000000">
              UPDATE c WITH (ROWLOCK) SET LastUpdate = t.LastUpdate, Value = t.Value,
              Source = t.Source FROM MarketDataCurrent c
              INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid; -- Insert new MDID
             </frame>
             <frame procname="adhoc" line="1"
                    sqlhandle="0x010006004a58132228bf8d73000000000000000000000000">
              MarketDataCurrentBlbgRtUpload
             </frame>
            </executionStack>
            <inputbuf>
             MarketDataCurrentBlbgRtUpload
            </inputbuf>
           </process>
          </process-list>
          <resource-list>
           <keylock hobtid="72057594090487808" dbid="6"
                    objectname="MKP_RISKDB.dbo.MarketDataCurrent"
                    indexname="PK_MarketDataCurrent" id="lock5ba77b00" mode="U"
                    associatedObjectId="72057594090487808">
            <owner-list>
             <owner id="processc19978" mode="U"/>
            </owner-list>
            <waiter-list>
             <waiter id="processaf0b68" mode="U" requestType="wait"/>
            </waiter-list>
           </keylock>
           <keylock hobtid="72057594090487808" dbid="6"
                    objectname="MKP_RISKDB.dbo.MarketDataCurrent"
                    indexname="PK_MarketDataCurrent" id="lock65dca340" mode="U"
                    associatedObjectId="72057594090487808">
            <owner-list>
             <owner id="processaf0b68" mode="U"/>
            </owner-list>
            <waiter-list>
             <waiter id="processc19978" mode="U" requestType="wait"/>
            </waiter-list>
           </keylock>
          </resource-list>
         </deadlock>
        </deadlock-list>

    Read the article

  • Shell script task status monitoring

    - by Bikram Agarwal
    I'm running an Ant task in the background and checking at 60-second intervals whether it is complete. If it is not, every 60 seconds a message should be displayed on screen: "Deploy process is still running. $slept seconds since deploy started", where $slept is 60, 120, 180 and so on. There's a limit of 1200 seconds, after which the script will show the log via the 'ant log' command and ask the user whether to continue. If the user chooses to continue, 300 seconds are added to the time limit and the process repeats. The code that I am using for this task is:

        ant deploy &
        limit=1200

        deploy_check() {
            while [ ${slept:-0} -le $limit ]; do
                sleep 60 && slept=`expr ${slept:-0} + 60`
                if [ $$ = "`ps -o ppid= -p $!`" ]; then
                    echo "Deploy process is still running. $slept seconds since deploy started."
                else
                    wait $! && echo "Application ${New_App_Name} deployed successfully" \
                            || echo "Deployment of ${New_App_Name} failed"
                    break
                fi
            done
        }

        deploy_check

        if [ $$ = "`ps -o ppid= -p $!`" ]; then
            echo "Deploy process did not finish in $slept seconds. Here's the log."
            ant log
            echo "Do you want to kill the process? Press Ctrl+C to kill. Press Enter to continue."
            read log
            limit=`expr ${limit} + 300`
            deploy_check
        fi

    Now, the problem is that this code is not working. It looks like perfectly good code to me, and yet it is not. Can anyone point out what is wrong with it, please?
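
    One simplification that might remove a failure mode (an untested sketch, not a confirmed fix): replace the ps parent-pid comparison with kill -0, which only probes whether the child pid is still alive:

        if kill -0 $! 2>/dev/null; then
            echo "Deploy process is still running. $slept seconds since deploy started."
        else
            wait $! && echo "deploy succeeded" || echo "deploy failed"
        fi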

    Read the article

  • Using Sphinx with a distutils-built C extension

    - by detly
    I have written a Python module including a submodule written in C: the module itself is called foo and the C part is foo._bar. The structure looks like:

        src/
            foo/__init__.py    <- contains the public stuff
            foo/_bar/bar.c     <- the C extension
        doc/                   <- Sphinx configuration
            conf.py
            ...

    foo/__init__.py imports _bar to augment it, and the useful stuff is exposed in the foo module. This works fine when it's built, but obviously won't work in uncompiled form, since _bar doesn't exist until it's built. I'd like to use Sphinx to document the project and use the autodoc extension on the foo module. This means I need to build the project before I can build the documentation. Since I build with distutils, the built module ends up in a variably named dir build/lib.linux-ARCH-PYVERSION, which means I can't just hard-code the directory into Sphinx's conf.py. So how do I configure my distutils setup.py script to run the Sphinx builder over the built module? For completeness, here's the call to setup (the 'fake' things are custom builders that subclass build and build_ext):

        setup(cmdclass = {'fake': fake, 'build_ext_fake': build_ext_fake},
              package_dir = {'': 'src'},
              packages = ['foo'],
              name = 'foo',
              version = '0.1',
              description = desc,
              ext_modules = [module_real])
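
    One direction that avoids hard-coding the platform directory (a sketch; the command name build_docs and the doc paths are placeholders): subclass distutils' build command, which already knows the variably named directory as self.build_lib, and hand it to sphinx-build via PYTHONPATH:

        import os
        import subprocess
        from distutils.command.build import build as _build

        class build_docs(_build):
            """Build the package, then run Sphinx against the built (importable) module."""
            def run(self):
                _build.run(self)  # compiles foo._bar into self.build_lib
                env = dict(os.environ, PYTHONPATH=os.path.abspath(self.build_lib))
                subprocess.check_call(['sphinx-build', 'doc', 'doc/_build/html'], env=env)

        # then: setup(cmdclass={'build_docs': build_docs, ...}, ...)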

    Read the article

  • Static Libraries on iPhone device

    - by Akusete
    I have two projects, a Cocoa iPhone application and a static library which it uses. I've tested it successfully on the iPhone simulator, but when I try to deploy it to my iPhone device I get (symbol not found) link errors. If I remove the dependency on the library, the project builds and runs fine. I have made sure all the build settings are set to iPhoneOS, not the simulator. I'm sure it's something simple, but has anyone run into similar problems moving from the iPhone simulator to the device? EDIT: I have managed to create new projects (one for the application and one for the static library) and successfully get them to run on the iPhone or the simulator. But I have a very strange problem: for each specific project I cannot get it working for BOTH the device and the simulator. I have double-checked the build settings and made sure the libraries being referenced match the active build settings (I believe), but I cannot resolve these linking errors. I think I must be doing something very wrong; all the Apple documentation says 'it's super simple, one click', but this is giving me a lot of problems. This is probably something to do with Xcode build settings, but I cannot seem to understand why selecting the different build platforms and rebuilding the libraries does not work.

    Read the article

  • Web Deployment Project failed to map path for include virtual

    - by Duke
    I am building my site with a Web Deployment Project, but the build fails with a number of errors, all relating to the #include virtual directives in my master page. The includes are necessary to import a set of centrally managed HTML template files. Here is an example of the include directive and the associated error:

        <!-- #include virtual="/v3/sits/pdpdev/assets-templates/inc/head.html" -->

        /PDPRegistration.csproj/Pages/ContentPage.Master(15): error ASPPARSE: Failed to map the path '/v3/sits/pdpdev/assets-templates/inc/head.html'.

    The error for each included file actually appears multiple times. I'm not sure what is being mapped or why, but this was never a problem until I started using WDP (which I wanted so that web.config could vary by build environment, among other things). The project is built locally on my PC and then copied to the web server via a mapped drive. I found a few solutions on the net involving the IIS metabase; they weren't quite clear to me, and I'm not sure if they apply given how I build and deploy the project (that is, would I have to build on the same system as IIS in order to make use of the metabase?). Can anyone suggest how I can get my project to build with WDP?

    Read the article

  • Can a 32-bit RHEL4 userland work with a 64-bit kernel?

    - by James
    Is there a way to change an i386 RHEL4 machine to run an amd64 kernel, but ensure that it still builds software into same i386 binaries? On Debian this seems quite straightforward: just install an amd64 kernel (worst case, build one like this guy: http://www.debian-administration.org/users/jonesy/weblog/1) and prefix everything with "linux32". Then everything that considers uname -m will be unchanged, I just need to handle the few cases that consider uname -r. What is the Red Hat equivalent? Is the only way a full 64-bit installation on another disk and then chrooting back to the 32-bit system before anyone builds anything? (Even the best examples of that seem to be Debian-based.) Background: We make a large system that runs on (a variant of) i386 RHEL4. However, some of the larger RHEL build machines now have enough RAM that they might benefit from going 64-bit (for the kernel and maybe some of the bigger build steps). Our build system doesn't support cross-compilation.

    Read the article

  • C# and com for vb6

    - by Jim
    Hi all, I have an issue with C# and COM. :(

        [Guid("f7d936ba-d816-48d2-9bfc-c18be6873b4d")]
        [ComVisible(true)]
        [ClassInterface(ClassInterfaceType.None)]
        public class Process : IProcess
        {
            public Process()
            {
            }

            public int UpdateBalance(string accountNumber, string adminEventDescription, decimal curAmount)
            {
                return 10;
            }
        }

        [ComVisible(true)]
        [Guid("5c640a0f-0dce-47d4-87df-07cee3b9a1f9")]
        [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
        public interface IProcess
        {
            int UpdateBalance(string accountNumber, string adminEventDescription, decimal curAmount);
        }

    And the VB code:

        Private Sub Command1_Click()
            Dim test As Object
            Set test = New Forwardslash_PlayerTrackingSystem_Api.Process
        End Sub

    I get the following error: ActiveX component can't create object?
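
    In case it is plain registration (the usual suspect with VB6 plus COM-visible C#): the assembly must be registered, with a type library, on the machine running the VB6 code before New can find it. A sketch (the dll name below is a placeholder for the real output assembly; regasm will warn if the assembly isn't strong-named):

        rem placeholder name -- substitute the actual output dll
        regasm PlayerTrackingSystem.Api.dll /codebase /tlb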

    Read the article

< Previous Page | 230 231 232 233 234 235 236 237 238 239 240 241  | Next Page >