Search Results

Search found 22884 results on 916 pages for 'team build'.


  • Is it possible to build this type of program in PHP?

    - by Steven
    I want to build a QA tool that will crawl all the pages of a site (every file under a specified domain name) and report every external link that does not open in a new window (i.e., whose anchor lacks the target="_blank" attribute). I can already write PHP or JavaScript that opens external links in new windows, or that reports the problem links on a single page (the same page the script runs in). What I want is for the QA tool to visit every page of a website and report back what it finds. This "spidering" is the part I have no idea how to do, and I am not sure it is even possible in a language like PHP. If it is possible, how should I go about it?
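
    A minimal spidering sketch in PHP is below; it assumes allow_url_fopen is enabled and uses only the built-in DOMDocument parser. The start domain is a placeholder, and a real crawler would add politeness delays, robots.txt handling, and better URL normalization:

        <?php
        // Breadth-first crawl of one domain; report external links lacking target="_blank".
        $host  = 'example.com';            // hypothetical start domain
        $queue = array("http://$host/");
        $seen  = array();

        while ($url = array_shift($queue)) {
            if (isset($seen[$url])) { continue; }
            $seen[$url] = true;

            $html = @file_get_contents($url);
            if ($html === false) { continue; }

            $doc = new DOMDocument();
            @$doc->loadHTML($html);          // suppress warnings from messy markup

            foreach ($doc->getElementsByTagName('a') as $a) {
                $href = $a->getAttribute('href');
                if ($href === '' || $href[0] === '#') { continue; }
                $linkHost = parse_url($href, PHP_URL_HOST);

                if ($linkHost !== null && $linkHost !== $host) {
                    // External link: flag it unless it opens in a new window.
                    if (strtolower($a->getAttribute('target')) !== '_blank') {
                        echo "$url -> $href (missing target=\"_blank\")\n";
                    }
                } else {
                    // Internal link: resolve relative URLs, strip fragments, queue it.
                    $next = ($linkHost === null) ? "http://$host/" . ltrim($href, '/') : $href;
                    $queue[] = strtok($next, '#');
                }
            }
        }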

    Read the article

  • How to build a drag-and-droppable tree using jQuery for a menu?

    - by Anil
    Hi all, I want to build a menu builder with drag and drop. Here is the jsfiddle of what I am trying to do: This. And I want the functionality to be like: This. I will have the full list of products on one side, and an "Add Column" button. When I click "Add Column", a div should open into which I can drag and drop my products. The first product I drop into a div should become the main item; I should then be able to add children to that particular product, and any number of submenus below those, like this:

        div1
          Mens
            shoes
              SportShoes
              CasualShoe
            Shirts
              Casualshirts
        div2
          Womens
            shoes
              SportShoes
              CasualShoe
            Shirts
              Casualshirts

    This is what I have to do. Can anyone help me with this, please?
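
    For what it's worth, a rough sketch of the drop-and-nest mechanics with jQuery UI's draggable/droppable is below. The element ids (#products, #add-column, #columns) are made up for the example, and persistence of the resulting menu is not shown:

        // Assumes jQuery and jQuery UI are loaded on the page.
        $(function () {
          // The product list on the side: items are dragged out as clones.
          $('#products li').draggable({ helper: 'clone', revert: 'invalid' });

          // Each "Add Column" click creates a new droppable menu column.
          $('#add-column').on('click', function () {
            var $col = $('<ul class="menu-column"></ul>').appendTo('#columns');
            $col.droppable({
              accept: '#products li',
              drop: function (event, ui) {
                var $item = $('<li>').text(ui.draggable.text());
                var $top  = $(this).children('li').first();
                if ($top.length === 0) {
                  $(this).append($item);           // first drop: top-level item
                } else {
                  var $sub = $top.children('ul');  // later drops: nest under it
                  if ($sub.length === 0) { $sub = $('<ul>').appendTo($top); }
                  $sub.append($item);
                }
              }
            });
          });
        });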

    Read the article

  • How do I build two different installers from the same script in inno?

    - by Tim
    I want to make a "standard" installer for external use, but I also want to use the same script and tell it (with a command-line parameter, perhaps?) to include another set of files (PDB files for debugging) for our lab installations. How can I do this? Is it possible? I don't see how to set this up in the [Files] section. NOTE: this is not about offering an option during the install; the choice is made statically at build time. I suppose the fallback is just to create a separate installer for the PDBs.
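
    One way to do this, assuming the Inno Setup Preprocessor (ISPP) is installed alongside your Inno version, is a define passed on the compiler command line; the file paths below are illustrative:

        ; Standard build:  ISCC.exe MySetup.iss
        ; Lab build:       ISCC.exe /DINCLUDEPDB MySetup.iss
        [Files]
        Source: "bin\MyApp.exe"; DestDir: "{app}"
        #ifdef INCLUDEPDB
        Source: "bin\*.pdb"; DestDir: "{app}"
        #endif

    The same define can also switch OutputBaseFilename in [Setup] so the two installers get distinct file names.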

    Read the article

  • How to build your non-GUI Java program into a console program.

    - by Robert
    I don't know how to describe it well, but I will try. I want to build my Java program so that when it opens, it looks and works exactly as it does in the console: it reads input with the Scanner class, prints normally, and does everything it would do if it were running in the console. I've looked around for this and haven't found anything. I can make a GUI Java program fairly easily, but I would rather have a terminal, console-like program that works exactly like the Java console. Thanks.
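
    A minimal sketch of such a program is below; console I/O needs no GUI code at all. The catch is launching it: double-clicking a JAR on most desktops attaches no console window, so run it from a terminal (or a small .bat/.sh wrapper around "java -jar app.jar") to keep the console behavior:

        import java.util.Scanner;

        // Plain console program: reads from System.in, writes to System.out.
        public class ConsoleApp {
            public static void main(String[] args) {
                Scanner in = new Scanner(System.in);
                System.out.print("Enter your name: ");
                String name = in.nextLine();
                System.out.println("Hello, " + name + "!");
            }
        }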

    Read the article

  • How do I get artifacts from one Maven module included in the resources of another in my build?

    - by Hanno Fietz
    I have Maven modules that produce a Flex application as an SWF file. I want to include that file in a web application that is built by another Maven module in the same build. I'm wondering how, and at which lifecycle phase, I get Maven to grab the artifact from the other module and put it inside the appropriate folder of the webapp module. Would I use a separate assembly module? The web app runs on a Jetty server in an OSGi environment (using Pax), and the server side uses Struts. The final artifact as I see it would be a WAR file including my Action (etc.) classes, JSP templates, static content such as CSS and JS, and the SWF movies. I might be better off splitting these across some other setup, but right now I wouldn't know which.
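
    One common pattern, sketched below for the webapp module's pom.xml, binds the maven-dependency-plugin copy goal to the prepare-package phase so the SWF lands inside the exploded webapp before the WAR plugin packages it. The coordinates and output folder here are placeholders:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-dependency-plugin</artifactId>
          <executions>
            <execution>
              <id>copy-swf</id>
              <phase>prepare-package</phase>
              <goals><goal>copy</goal></goals>
              <configuration>
                <artifactItems>
                  <artifactItem>
                    <groupId>com.example</groupId>        <!-- hypothetical -->
                    <artifactId>myapp-flex</artifactId>   <!-- hypothetical -->
                    <version>${project.version}</version>
                    <type>swf</type>
                    <outputDirectory>${project.build.directory}/${project.build.finalName}/swf</outputDirectory>
                  </artifactItem>
                </artifactItems>
              </configuration>
            </execution>
          </executions>
        </plugin>

    The Flex module has to build earlier in the reactor (e.g. be listed as a dependency) so its artifact is available when the webapp module packages.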

    Read the article

  • How to extract data from Google Analytics and build a data warehouse (webhouse) from it?

    - by nkaur301
    I have clickstream data such as referring URLs, top landing pages, and top exit pages, plus metrics such as page views, number of visits, and bounces, all in Google Analytics. I am required to build a data warehouse from scratch (which I believe is known as a web-house) from this data. My questions are: 1) Is it possible? The data grows every day (some of it in metrics and measures such as visits, some of it as new members such as new referring sites), so how would the process of loading the warehouse work? 2) What ETL tool would help me achieve this? Pentaho, I believe, has a way to pull data out of Google Analytics; has anyone used it, and how does that process go? Any references or links would be appreciated, besides answers.
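
    On question 1, the usual approach is an incremental nightly load rather than a full rebuild: dimension rows (new pages, new referrers) are upserted first, then only the new day's metrics are appended to the fact table. A sketch of that append, against a hypothetical star schema fed from a Google Analytics extract staged in staging_ga, might look like:

        -- Append yesterday's staged Google Analytics rows to the fact table.
        INSERT INTO fact_traffic (date_key, page_key, referrer_key,
                                  pageviews, visits, bounces)
        SELECT d.date_key, p.page_key, r.referrer_key,
               s.pageviews, s.visits, s.bounces
        FROM   staging_ga   s
        JOIN   dim_date     d ON d.full_date = s.ga_date
        JOIN   dim_page     p ON p.url       = s.page_url
        JOIN   dim_referrer r ON r.url       = s.referrer_url
        WHERE  s.ga_date = CURRENT_DATE - INTERVAL '1' DAY;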

    Read the article

  • How to get Geo::Coder::Many with cpan?

    - by mnemonic
    Ubuntu is installed for development of a Perl project:

        aptitude search Geo-Coder
        i   libgeo-coder-googlev3-perl    - Perl module providing access to Google Map

    Aptitude does not offer Geo::Coder::Many, and cpan cannot build it. Running:

        sudo cpan Geo::Coder::Many

    gives:

        CPAN: Storable loaded ok (v2.27)
        Going to read '/home/jh/.cpan/Metadata'
          Database was generated on Wed, 16 Oct 2013 06:17:04 GMT
        Running install for module 'Geo::Coder::Many'
        Running make for K/KA/KAORU/Geo-Coder-Many-0.42.tar.gz
        CPAN: Digest::SHA loaded ok (v5.61)
        CPAN: Compress::Zlib loaded ok (v2.033)
        Checksum for /home/jh/.cpan/sources/authors/id/K/KA/KAORU/Geo-Coder-Many-0.42.tar.gz ok
        CPAN: File::Temp loaded ok (v0.22)
        CPAN: Parse::CPAN::Meta loaded ok (v1.4401)
        CPAN: CPAN::Meta loaded ok (v2.110440)
        CPAN: Module::CoreList loaded ok (v2.49_02)
        CPAN: Module::Build loaded ok (v0.38)
        CPAN.pm: Going to build K/KA/KAORU/Geo-Coder-Many-0.42.tar.gz

        Can't locate Geo/Coder/Many/Google.pm in @INC (@INC contains: /etc/perl
        /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5
        /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14
        /usr/local/lib/site_perl .) at /usr/share/perl/5.14/Module/Load.pm line 27.
        Can't locate Geo/Coder/Many/Google in @INC (@INC contains: /etc/perl
        /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5
        /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14
        /usr/local/lib/site_perl .) at /usr/share/perl/5.14/Module/Load.pm line 27.
        BEGIN failed--compilation aborted at Build.PL line 54.
        Warning: No success on command[/usr/bin/perl Build.PL --installdirs site]
        CPAN: YAML loaded ok (v0.77)
        KAORU/Geo-Coder-Many-0.42.tar.gz
        /usr/bin/perl Build.PL --installdirs site -- NOT OK
        Running Build test
          Make had some problems, won't test
        Running Build install
          Make had some problems, won't install
        Could not read metadata file. Falling back to other methods to determine prerequisites

    Any suggestions how to resolve this issue?
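
    Not a guaranteed fix, but one avenue worth trying is cpanminus, which sometimes copes better with unusual Build.PL files, followed by a forced install if the failure turns out to be test-only:

        sudo apt-get install cpanminus     # or: sudo cpan App::cpanminus
        cpanm --verbose Geo::Coder::Many   # shows exactly where the build stops
        cpanm --force Geo::Coder::Many     # last resort: skip failing checks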

    Read the article

  • What is the simplest way to build your own .deb package?

    - by Calvin Fisher
    Having used Ubuntu for several years now, I've assembled a short list of scripts and packages that I always install on my computers. I would like to pack them up into a .deb to make it easier to get set up on a fresh OS installation. I'm imagining, for instance, one package that would install all of my custom BASH scripts for common tasks, and another that would depend on other packages (like w64codecs) that I always install but only remember I need when I go to do something and they're not there. It doesn't even have to be by-the-book; I'm not looking to deploy these publicly. I'm just looking to roll all these tasks up into one sudo dpkg --install. To quantify "simple" or "easy": I'm looking for the method with the fewest steps, requiring the least technical knowledge and, most importantly, taking the least time.
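
    A minimal binary package needs only a DEBIAN/control file plus the payload laid out as it should appear on disk. A sketch (the package name, script, and the w64codecs dependency are just examples):

        mkdir -p mypkg/DEBIAN mypkg/usr/local/bin
        cp my-script.sh mypkg/usr/local/bin/
        cat > mypkg/DEBIAN/control <<'EOF'
        Package: my-setup
        Version: 1.0
        Architecture: all
        Maintainer: You <you@example.com>
        Depends: w64codecs
        Description: Personal scripts and always-installed dependencies
        EOF
        dpkg-deb --build mypkg my-setup_1.0_all.deb
        sudo dpkg --install my-setup_1.0_all.deb
        sudo apt-get -f install    # dpkg itself won't fetch the Depends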

    Read the article

  • What is the crappiest network build you've ever seen?

    - by Ivan Petrushev
    This is only about networks you have seen personally, not heard about from others or seen in pictures on the web. Cables hanging from the ceiling lamps? Cables running through culverts and other tubes? Switches and other equipment in the cleaner's closet? The cleaning lady's rags drying on the cables? The key to the main node's door held only by the janitor (or some other non-technical, completely non-network-related guy)? Switches powered by foreign power adapters (cheaper, and providing unspecified voltage or amperage)? All of this was in my old dormitory. Tell us about your bad experiences.

    Read the article

  • How do I disable a laptop's built-in keyboard on Ubuntu?

    - by David
    I have a Kinesis keyboard that I would ideally like to use by placing it on top of my laptop. When I do that, I wind up pressing keys on the built-in keyboard with my external keyboard. I've been playing around with the GUI keyboard controls and reading the Ubuntu forums for a couple of hours without any real progress. Even a hacky solution, or a pointer to a good library for configuring multiple keyboards differently, would be much appreciated. Thanks.
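
    One hacky approach that often works, assuming an X session with the xinput utility installed (the device name varies by machine):

        # Find the internal keyboard's id (often "AT Translated Set 2 keyboard"):
        xinput list
        # Detach it from the virtual core keyboard so its key presses are ignored:
        xinput float <id>
        # Re-attach it later (the master keyboard's id also comes from xinput list):
        xinput reattach <id> <master-id>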

    Read the article

  • Is there any way to build a RAID system without all the drives?

    - by xenoterracide
    I'm building a RAID 1 system (OK, it will probably be raid10,f2, but with 2 drives the difference isn't much) with two 1 TB drives. However, one of the drives I ordered is bad, so I'm RMA-ing it. I'm wondering if I could partition and install to the one good drive and then rebuild the array when I get the second drive (after I test it, of course). My initial investigation doesn't show me a way of creating the array without specifying all the devices, and the device the second drive would replace holds data that I still need to migrate (besides not being big enough). Is it possible to create an array without specifying all the devices? Or to specify false ones and reconfigure to the right ones later? Or is there some other method I'm not thinking of?
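
    With Linux software RAID this is a standard move: mdadm accepts the literal word "missing" in place of a device, creating a degraded array that can be completed later. A sketch (device names are illustrative):

        # Create a degraded RAID 1 with one real member:
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing
        # Later, once the replacement drive arrives and tests clean:
        mdadm --manage /dev/md0 --add /dev/sdb1    # kicks off the rebuild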

    Read the article

  • Cannot install git-core using macports

    - by robUK
    Hello. Snow Leopard 10.6.4, MacPorts 1.9.1. I have just installed MacPorts and I want to install git-core. However, I get the following errors:

        --->  Computing dependencies for git-core
        --->  Dependencies to be installed: python26 db46 gdbm readline sqlite3 rsync popt
        --->  Building db46
        Error: Target org.macports.build returned: shell command failed
        Log for db46 is at: /opt/local/var/macports/logs/_opt_local_var_macports_sources_rsync.macports.org_release_ports_databases_db46/main.log
        Error: The following dependencies failed to build: python26 db46 gdbm readline sqlite3 rsync popt
        Error: Status 1 encountered during processing.
        To report a bug, see <http://guide.macports.org/#project.tickets>

    I have tried doing a port selfupdate and a port clean all and then installing again, but I still get the same problem. I have also tried installing the dependency db46 on its own. Here is the log message:

        :error:build Target org.macports.build returned: shell command failed
        :debug:build Backtrace: shell command failed
            while executing
        "command_exec build"
            (procedure "portbuild::build_main" line 8)
            invoked from within
        "$procedure $targetname"
        :info:build Warning: the following items did not execute (for db46):
            org.macports.activate org.macports.build org.macports.destroot org.macports.install

    This is my first time using MacPorts. Many thanks for any suggestions.
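
    A reasonable next step is to read the log MacPorts points at and retry the single failing port in debug mode; these are standard port(1) invocations, not a guaranteed fix:

        sudo port clean db46
        sudo port -d install db46     # -d prints the full build transcript
        less /opt/local/var/macports/logs/_opt_local_var_macports_sources_rsync.macports.org_release_ports_databases_db46/main.log

    The transcript usually names the actual compiler error; on Snow Leopard, MacPorts build failures are commonly tied to an Xcode install (with command-line tools) that does not match the OS version.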

    Read the article

  • Install VMware Tools from VMware Workstation 7.1.1 build-282343 on Debian Squeeze: complaint that the gcc path is not valid

    - by Misha Koshelev
    Dear All: I am trying to install VMware Tools on Debian Squeeze. My error:

        Before you can compile modules, you need to have the following installed...
            make
            gcc
            kernel headers of the running kernel
        Searching for GCC...
        The path "/usr/bin/gcc" is not valid path to the gcc binary.
        Would you like to change it? [yes]

    uname -a:

        Linux debian 2.6.32-5-686 #1 SMP Sat Sep 18 02:14:45 UTC 2010 i686 GNU/Linux

    dpkg -l | grep make:

        ii  make          3.81-8     An utility for Directing compilation.

    dpkg -l | grep gcc:

        ii  gcc           4:4.4.4-2  The GNU C compiler
        ii  gcc-4.4       4.4.4-8    The GNU C compiler
        ii  gcc-4.4-base  4.4.4-8    The GNU Compiler Collection (base package)
        ii  libgcc1       1:4.4.4-8  GCC support library

    whereis gcc:

        gcc: /usr/bin/gcc /usr/lib/gcc

    Thank you, Misha
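
    Since make and gcc are clearly installed, one frequent culprit for this prompt is missing kernel headers for the running kernel. A sketch of the usual remedy on Debian (standard package names, not a guaranteed fix):

        sudo apt-get install build-essential linux-headers-$(uname -r)
        sudo vmware-config-tools.pl    # re-run the Tools configurator, accept /usr/bin/gcc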

    Read the article

  • Can I build or test a computer without a case?

    - by jasondavis
    I am in the process of building a really nice new PC right now. It's going to have a nice Lian Li case with the internals powder-coated black, and all the wires will be sleeved. My problem is that I am getting parts in a couple of days, but my case will not be completed for about a month because it is on back order, plus the time to powder-coat it. I am purchasing many of my parts from newegg.com, and they claim you must return any dead parts within 30 days of the invoice for their warranty to replace bad parts. So is it possible for me to set up the PC without a case, just to test that the main parts are working correctly within the timeframe I am allowed? If this is possible, how do I deal with turning the system on and off without a power button? Or is there one on a motherboard? Thanks for any tips/advice.

    Read the article

  • Can I build a VPN on top of Tor?

    - by Thilo
    If I understand correctly, the Tor client works as a combination of a proxy server and application plugins (such as the Firefox Torbutton) that enable use of the proxy and contain additional application-specific privacy features (such as suppressing cookies, sandboxing JavaScript, turning off Flash). That works very well with applications that support it (such as Firefox). But is there a way to establish a VPN over Tor, so that my whole Wifi network can be protected, including applications that do not support proxy configuration and devices like iPod touches?

    Read the article

  • How to build a self-sufficient gcc/glibc/binutils set in a non-standard path?

    - by netvope
    Suppose a set of custom-built gcc/glibc/binutils is in $prefix (e.g. /home/user/path). I want:

      - gcc to look for libraries in $prefix/lib64 instead of /lib64
      - gcc to look for headers in $prefix/include instead of /include
      - to use $prefix/lib64/ld-linux-x86-64.so.2 as the (hard-coded) loader path instead of /lib64/ld-linux-x86-64.so.2
      - the dynamic loader to look for shared libraries in $prefix/lib64 instead of /lib64

    How should I configure the builds? Do I need to modify gcc's specs file, or do anything else?
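
    Baking this into the toolchain itself is typically done by editing gcc's specs file, but the same effect can be sketched per invocation with standard gcc/ld flags (a sketch under the assumption that everything already lives under $PREFIX, not a full recipe):

        $PREFIX/bin/gcc -o app app.c \
            -I$PREFIX/include \
            -L$PREFIX/lib64 -Wl,-rpath=$PREFIX/lib64 \
            -Wl,--dynamic-linker=$PREFIX/lib64/ld-linux-x86-64.so.2

    Here -Wl,--dynamic-linker hard-codes the loader path into the binary, and the rpath makes both link-time and run-time resolution prefer $PREFIX/lib64.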

    Read the article

  • rpmbuild gives seg fault

    - by Deepti Jain
    I am trying to build an rpm using the rpmbuild tool. I have source code whose build produces around 30 GB of binaries. The software I am packaging has dozens of executables. When I copy only the binaries of a single executable (e.g. init) into the rpm, it builds successfully. But when I dump the entire build into the rpm, rpmbuild does everything and then segfaults at the end. Here is my spec file:

        # This is a sample spec file for wget
        %define _topdir /root/mywget
        %define name source
        %define release 1
        %define version 1.12
        %define _builddir /root/mywget/BUILD/glenlivet
        %define _buildrootdir /root/mywget/BUILDROOT
        %define _buildroot /root/mywget/BUILDROOT
        %define _sourcedir /root/mywget/SOURCES

        BuildRoot: %{_buildroot}
        Summary: GNU source
        License: GPL
        Name: %{name}
        Version: %{version}
        Release: %{release}
        Source: %{name}-%{version}.tar.gz
        Prefix: /usr
        Group: Development/Tools

        %description
        The GNU sample program downloads files from the Internet using the command-line.

        %prep
        %setup -q -n glenlivet

        %build
        cd %{_builddir}
        make all

        %install
        rm -rf %{_buildrootdir}
        mkdir -p %{_buildrootdir}/bin
        cp -p -r %{_builddir}/build/obj-x64/* %{_buildrootdir}/bin/

        %files
        %defattr(-,root,root)
        /bin/*

    If I copy only some of the binaries (say, one utility and the binaries it depends on) it works fine. But when I try to copy the entire build, I get a segfault. The crash comes after rpmbuild has executed the %prep, %build, and %install sections and processed my source file:

        Processing files: source-1.12-1
        Finding Provides: ...
        Finding Requires: ...
        Finding Supplements: ...
        Provides: ......
        Requires: ......
        Checking for unpackaged file(s): /usr/lib/rpm/check-files /root/mywget/BUILDROOT
        Checking for unpackaged file(s): /usr/lib/rpm/check-files /root/mywget/BUILDROOT
        Segmentation fault

    Any clue what is going wrong, or where rpmbuild fails? Thanks in advance.
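
    For narrowing down a crash like this, standard rpmbuild options can isolate the failing stage; a diagnostic sketch (not a fix):

        rpmbuild -bi --short-circuit source.spec   # re-run from %install only
        rpmbuild -vv -ba source.spec               # maximum verbosity up to the crash
        ulimit -c unlimited                        # allow a core dump to inspect

    Given that small payloads work and the full 30 GB tree does not, resource exhaustion during the "Processing files" step (the Provides/Requires scan over thousands of binaries) is a plausible suspect.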

    Read the article

  • What postgresql client version should I build against, if server is 8.x?

    - by Ben Voigt
    I'm planning updates to a system that is currently running with 8.x server on Windows, 8.x client on Windows, and 8.x client on Linux. Obviously that seems like a bad choice of platform in a mixed environment, but the Linux machine has no persistent writable storage (as an anti-rootkit measure). I'm concerned with compatibility between versions right now. Can a linux postgresql 9.0.x client connect to a Windows 8.x server? The server is using some third-party binary extensions, so upgrading it is a more involved task and will be done later. If combining a 9.0.x client and 8.x server is discouraged, would latest 8.x clients be able to continue to connect if I did upgrade the server first? META: What tag is appropriate for backward-compatibility questions?

    Read the article

  • How to change the build directory of a Hudson job?

    - by mark
    Dear ladies and sirs. My C: drive is full. I wish to move the builds folder of the job to another location. I can cheat with the help of the JUNCTION utility to redirect the original builds folder, but I am interested to know whether there is a proper Hudson way to do it. Thanks.
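
    For reference, the junction-based workaround might look like the sketch below (paths are illustrative, the Hudson service name varies by installation, and junction.exe is the Sysinternals tool):

        net stop hudson
        robocopy /E C:\Hudson\jobs\myjob\builds D:\hudson-builds\myjob
        rmdir /S /Q C:\Hudson\jobs\myjob\builds
        junction C:\Hudson\jobs\myjob\builds D:\hudson-builds\myjob
        net start hudson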

    Read the article

  • Copied the Reachability test from Apple, but the linker gives a failure

    - by nico
    I have tried to use the Reachability project published by Apple to detect reachability in an example of my own. I copied most of the initialization, but I get this failure from the linker:

        Ld build/switchViews.build/Debug-iphoneos/test.build/Objects-normal/armv6/test normal armv6
            cd /Users/uid04100/Documents/TEST
            setenv IPHONEOS_DEPLOYMENT_TARGET 3.1.3
            setenv PATH "/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
            /Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc-4.2 -arch armv6
                -isysroot /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.1.3.sdk
                -L/Users/uid04100/Documents/TEST/build/Debug-iphoneos
                -F/Users/uid04100/Documents/TEST/build/Debug-iphoneos
                -filelist /Users/uid04100/Documents/TEST/build/switchViews.build/Debug-iphoneos/test.build/Objects-normal/armv6/test.LinkFileList
                -dead_strip -miphoneos-version-min=3.1.3
                -framework Foundation -framework UIKit -framework CoreGraphics
                -o /Users/uid04100/Documents/TEST/build/switchViews.build/Debug-iphoneos/test.build/Objects-normal/armv6/test

        Undefined symbols:
          "_SCNetworkReachabilitySetCallback", referenced from:
              -[Reachability startNotifer] in Reachability.o
          "_SCNetworkReachabilityCreateWithAddress", referenced from:
              +[Reachability reachabilityWithAddress:] in Reachability.o
          "_SCNetworkReachabilityScheduleWithRunLoop", referenced from:
              -[Reachability startNotifer] in Reachability.o
          "_SCNetworkReachabilityGetFlags", referenced from:
              -[Reachability connectionRequired] in Reachability.o
              -[Reachability currentReachabilityStatus] in Reachability.o
          "_SCNetworkReachabilityUnscheduleFromRunLoop", referenced from:
              -[Reachability stopNotifer] in Reachability.o
          "_SCNetworkReachabilityCreateWithName", referenced from:
              +[Reachability reachabilityWithHostName:] in Reachability.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    My delegate.h:

        #import <UIKit/UIKit.h>

        @class Reachability;

        @interface testAppDelegate : NSObject <UIApplicationDelegate> {
            UIWindow *window;
            UINavigationController *navigationController;
            Reachability* hostReach;
            Reachability* internetReach;
            Reachability* wifiReach;
        }

        @property (nonatomic, retain) IBOutlet UIWindow *window;
        @property (nonatomic, retain) IBOutlet UINavigationController *navigationController;

        @end

    My delegate.m:

        #import "testAppDelegate.h"
        #import "SecondViewController.h"
        #import "Reachability.h"

        @implementation testAppDelegate

        @synthesize window;
        @synthesize navigationController;

        - (void) updateInterfaceWithReachability: (Reachability*) curReach
        {
            if (curReach == hostReach) {
                BOOL connectionRequired = [curReach connectionRequired];
                if (connectionRequired) {
                    // in these brackets should be some code with sense, if I get it to run
                } else {
                }
            }
            if (curReach == internetReach) {
            }
            if (curReach == wifiReach) {
            }
        }

        // Called by Reachability whenever status changes.
        - (void) reachabilityChanged: (NSNotification*) note
        {
            Reachability* curReach = [note object];
            NSParameterAssert([curReach isKindOfClass: [Reachability class]]);
            [self updateInterfaceWithReachability: curReach];
        }

        - (void)applicationDidFinishLaunching:(UIApplication *)application
        {
            // Override point for customization after application launch.
            // Observe the kNetworkReachabilityChangedNotification. When that notification
            // is posted, the method "reachabilityChanged" will be called.
            [[NSNotificationCenter defaultCenter] addObserver: self
                                                     selector: @selector(reachabilityChanged:)
                                                         name: kReachabilityChangedNotification
                                                       object: nil];

            // Change the host name here to change the server you're monitoring.
            hostReach = [[Reachability reachabilityWithHostName: @"www.apple.com"] retain];
            [hostReach startNotifer];
            [self updateInterfaceWithReachability: hostReach];

            internetReach = [[Reachability reachabilityForInternetConnection] retain];
            [internetReach startNotifer];
            [self updateInterfaceWithReachability: internetReach];

            wifiReach = [[Reachability reachabilityForLocalWiFi] retain];
            [wifiReach startNotifer];
            [self updateInterfaceWithReachability: wifiReach];

            [window addSubview:[navigationController view]];
            [window makeKeyAndVisible];
        }

        - (void)dealloc
        {
            [navigationController release];
            [window release];
            [super dealloc];
        }

        @end
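
    The undefined symbols above (_SCNetworkReachability*) all live in SystemConfiguration.framework, and the Ld invocation shows only Foundation, UIKit, and CoreGraphics being linked. Adding SystemConfiguration.framework to the target's link phase should make the link line end with something like:

        ... -framework Foundation -framework UIKit -framework CoreGraphics \
            -framework SystemConfiguration -o .../test

    The Reachability sources also expect the matching header:

        #import <SystemConfiguration/SystemConfiguration.h>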

    Read the article

  • Cloud hosted CI for .NET projects

    - by Scott Dorman
    Originally posted on: http://geekswithblogs.net/sdorman/archive/2014/06/02/cloud-hosted-ci-for-.net-projects.aspx

    Continuous integration (CI) is important. If you don’t have it set up…you should. There are a lot of different options available for hosting your own CI server, but they all require you to maintain your own infrastructure. If you’re a business, that generally isn’t a problem. However, if you have some open source projects hosted, for example on GitHub, there haven’t really been any options. That has changed with the latest release of AppVeyor, which bills itself as “Continuous integration for busy developers.”

    What’s different about AppVeyor is that it’s a hosted solution. Why is that important? By being a hosted solution, it means that I don’t have to maintain my own infrastructure for a build server. How does that help if you’re hosting an open source project? AppVeyor has a really competitive pricing plan: for an unlimited number of public repositories, it’s free. That gives you a cloud-hosted CI system for all of your GitHub projects for the cost of some time to set them up, which actually isn’t hard to do at all.

    I have several open source projects (hosted at https://github.com/scottdorman), so I signed up using my GitHub credentials. AppVeyor fully supported my two-factor authentication with GitHub, so I never once had to enter my GitHub password into AppVeyor. Once that was done, I authorized GitHub and it instantly found all of the repositories I have (both the ones I created and the ones I cloned from elsewhere). You can even add “build badges” to your markdown files in GitHub, so anyone who visits your project can see the status of the latest build.

    Out of the box, you can simply select a repository, add the build project, click New Build and wait for the build to complete. You now have a complete CI server running for your project. The best part of this, besides the fact that it “just worked” with almost zero configuration, is that you can configure it either through a web-based interface (which is very streamlined, clean, and easy to use) or through an appveyor.yml file. This means that you can define your CI build process (including any scripts that might need to be run, etc.) in a standard file format (the YAML format) and store it in your repository. The benefits of that are huge: the file becomes a versioned artifact in your source control system, so it can be branched and merged, and it is completely transparent to anyone working on the project. By the way, AppVeyor isn’t limited to just GitHub. It currently supports GitHub, BitBucket, Visual Studio Online, and Kiln.

    I did have a few issues getting one of my projects to build, but the same day I posted the problem to the support forum a fix was deployed, and I had a functioning CI build about 5 minutes after that. Since then, I’ve provided some additional feature requests and had a few other questions, all of which have seen responses within a 24-hour period. I have to say that it’s easily been one of the best customer support experiences I’ve seen in a long time.

    AppVeyor is still young, so it doesn’t yet have full feature parity with some of the older (more established) CI systems available, but it’s getting better all the time and I have no doubt that it will quickly catch up to those other CI systems and then pass them. The bottom line: if you’re looking for a good cloud-hosted CI system for your .NET-based projects, look at AppVeyor.
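
    For a flavor of the YAML side, a minimal appveyor.yml for a .NET solution might look like the sketch below (the field names follow AppVeyor's documented schema; the solution and test paths are placeholders):

        version: 1.0.{build}
        configuration: Release
        before_build:
          - nuget restore MySolution.sln
        build:
          project: MySolution.sln
        test:
          assemblies:
            - '**\*.Tests.dll'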

    Read the article

  • WiX 3 Tutorial: Generating file/directory fragments with Heat.exe

    - by Mladen Prajdic
    In previous posts I’ve shown you our SuperForm test application solution structure and how the main wxs and wxi include files look. In this post I’ll show you how to automate the inclusion of files to install into your build process.

    For our SuperForm application we have a single exe to install. But in the real world we have tens or hundreds of different files, from dlls to resource files like pictures; it all depends on what kind of application you’re building. Writing a directory structure for so many files by hand is out of the question. What we need is an automated way to create this structure. Enter Heat.exe.

    Heat is a command-line utility to harvest a file, directory, Visual Studio project, IIS website, or performance counters. You might ask what harvesting means? Harvesting is converting a source (file, directory, …) into a component structure saved in a WiX fragment (a wxs) file. There are two options you can use:

    1. Create a static wxs fragment with Heat and include it in your project. The pro of this is that you can add or remove components by hand. The con is that you have to do the pro part by hand. Automation always beats manual labor.
    2. Run the Heat.exe command-line utility in a pre-build event of your WiX project. I prefer this way. By always recreating the whole fragment you don’t have to worry about missing any new files you add. The con of this is that you’ll include files that you otherwise might not want to.

    There is no perfect solution, so pick one and deal with it. I prefer the second way. A neat way of overcoming its con is to have a post-build event on your main application project (SuperForm.MainApp in our case) copy the files to be installed into a special location, and have Heat.exe read them from there. I haven’t set this up for this tutorial and I’m simply including all files from the default SuperForm.MainApp \bin directory.

    Remember how we created a System Environment variable called SuperFormFilesDir? This is where we use it for the first time. The command-line text that you have to put into the pre-build event of your WiX project looks like this:

        "$(WIX)bin\heat.exe" dir "$(SuperFormFilesDir)" -cg SuperFormFiles -gg -scom -sreg -sfrag -srd -dr INSTALLLOCATION -var env.SuperFormFilesDir -out "$(ProjectDir)Fragments\FilesFragment.wxs"

    After you install WiX you get the WIX environment variable. In pre/post-build events, environment variables are referenced like this: $(WIX). By using this you don’t have to think about the installation path of WiX. Remember: for 32-bit applications the Program Files folder is named differently between 32- and 64-bit systems. $(ProjectDir) is obviously the path to your project and is a Visual Studio built-in variable. You can view all Heat.exe options by running it without parameters, but I’ll explain the ones that stick out the most:

    - dir "$(SuperFormFilesDir)": tells Heat to harvest the whole directory at the given location (the location we’ve set in our System Environment variable).
    - -cg SuperFormFiles: the name of the component group that will be created. This name is included in our Feature tag, as seen in the previous post.
    - -dr INSTALLLOCATION: the directory reference this fragment will fall under. You can see the top-level directory structure in the previous post.
    - -var env.SuperFormFilesDir: the name of the variable that will replace the SourceDir text that would otherwise appear in the fragment file.
    - -out "$(ProjectDir)Fragments\FilesFragment.wxs": the full path and name under which the fragment file will be saved.

    If you use source control, you have to include the FilesFragment.wxs in your project but remove its source control binding. The auto-generated FilesFragment.wxs for our test app looks like this:

        <?xml version="1.0" encoding="utf-8"?>
        <Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
          <Fragment>
            <ComponentGroup Id="SuperFormFiles">
              <ComponentRef Id="cmp5BB40DB822CAA7C5295227894A07502E" />
              <ComponentRef Id="cmpCFD331F5E0E471FC42A1334A1098E144" />
              <ComponentRef Id="cmp4614DD03D8974B7C1FC39E7B82F19574" />
              <ComponentRef Id="cmpDF166522884E2454382277128BD866EC" />
            </ComponentGroup>
          </Fragment>
          <Fragment>
            <DirectoryRef Id="INSTALLLOCATION">
              <Component Id="cmp5BB40DB822CAA7C5295227894A07502E" Guid="{117E3352-2F0C-4E19-AD96-03D354751B8D}">
                <File Id="filDCA561ABF8964292B6BC0D0726E8EFAD" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.exe" />
              </Component>
              <Component Id="cmpCFD331F5E0E471FC42A1334A1098E144" Guid="{369A2347-97DD-45CA-A4D1-62BB706EA329}">
                <File Id="filA9BE65B2AB60F3CE41105364EDE33D27" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.pdb" />
              </Component>
              <Component Id="cmp4614DD03D8974B7C1FC39E7B82F19574" Guid="{3443EBE2-168F-4380-BC41-26D71A0DB1C7}">
                <File Id="fil5102E75B91F3DAFA6F70DA57F4C126ED" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.vshost.exe" />
              </Component>
              <Component Id="cmpDF166522884E2454382277128BD866EC" Guid="{0C0F3D18-56EB-41FE-B0BD-FD2C131572DB}">
                <File Id="filF7CA5083B4997E1DEC435554423E675C" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.vshost.exe.manifest" />
              </Component>
            </DirectoryRef>
          </Fragment>
        </Wix>

    The $(env.SuperFormFilesDir) will be replaced at build time with the directory where the files to be installed are located. There is nothing too complicated about this. In the end it turns out that this sort of automation is great! There are a few other ways that Heat.exe can compose the wxs file, but this is the one I prefer; it just seems the clearest. Play with its options to see what it can do. It’s one awesome little tool.

    WiX 3 tutorial by Mladen Prajdic, navigation:
    WiX 3 Tutorial: Solution/Project structure and Dev resources
    WiX 3 Tutorial: Understanding main wxs and wxi file
    WiX 3 Tutorial: Generating file/directory fragments with Heat.exe

    Read the article

  • Go for the Deep Dive on Oracle Products and Technology

    - by Oracle OpenWorld Blog Team
    by Karen Shamban

    Oracle University gives you more learning for your conference investment. It’s easier than ever before to get in-depth Oracle product and technology training if you’re attending any of the Oracle conferences this fall, including Oracle OpenWorld, the Oracle Customer Experience Summit @ OpenWorld, the Oracle PartnerNetwork Exchange @ OpenWorld, and MySQL Connect.

    Why is it easier? Because Oracle University preconference training takes place on Sunday, September 30 from 8:00 a.m. to 3:30 p.m. And you’re going to be in town for the conference anyway, right? The training ends early enough in the afternoon that you’ll still be able to get good seats for conference opening keynotes and get psyched for the welcome reception that follows. Each session will be taught by an expert Oracle University instructor and will be fact-packed with demos and tips to help you do more than ever before with your Oracle product and technology investment.

    The training sessions being offered include:

    Applications:
      - PeopleSoft Test Framework Script Creation and Optimization
      - New Integration Technologies for PeopleTools 8.52
      - Oracle Fusion Applications: Security Fundamentals

    Database and Systems:
      - Certification Exam Cram: Oracle Database 11g: New Features for Administrators
      - Exadata Database Machine Administration Workshop
      - Introduction to Big Data
      - Using Oracle Enterprise Manager Cloud Control 12c
      - Using Java - for PL/SQL and Database Developers

    Fusion Middleware:
      - Developing Portable Java EE Applications with the Enterprise JavaBeans 3.1 API and Java Persistence API 2.0
      - Developing Secure Java Web Services
      - How The Latest Java EE and SOA Help in Architecting and Designing Robust Enterprise Applications
      - Oracle Business Intelligence 11g: Overview to Analyses and Dashboards
      - Oracle Fusion Middleware 11g: Build Applications with ADF I
      - Oracle Fusion Middleware 11g Administer Forms Services
      - Oracle SOA Suite 11g Administration
      - WebLogic Server Administration Essentials

    Don’t miss this great opportunity to maximize your Oracle OpenWorld experience and investment. Learn more about Oracle University training sessions.

    Read the article

  • SQL Server Developer Tools: Codename Juneau vs. Red-Gate SQL Source Control

    - by Ajarn Mark Caldwell
    So how do the new SQL Server Developer Tools (previously code-named Juneau) stack up against SQL Source Control? Read on to find out.

    At the PASS Community Summit a couple of weeks ago, it was announced that the previously code-named Juneau software would be released under the name of SQL Server Developer Tools with the release of SQL Server 2012. This replacement for Database Projects in Visual Studio (also known in a former life as Data Dude) has some great new features. I won’t attempt to describe them all here, but I will applaud Microsoft for making major improvements. One of my favorite changes is the way database elements are broken down. Previously every little thing was in its own file. For example, indexes were each in their own file. I always hated that. Now, SSDT uses a pattern similar to Red-Gate’s and puts the indexes and keys into the same file as the overall table definition.

    Of course there are really cool features to keep your database model in sync with the actual source scripts, and the rename refactoring feature is now touted as being more than just a search and replace, but rather a “semantic-aware” search and replace. Funny, it reminds me of SQL Prompt’s Smart Rename feature. But I’m not writing this just to criticize Microsoft and argue that they are late to the party with this feature set. Instead, I do see it as a viable alternative for folks who want all of their source code to be version controlled, but there are a couple of key trade-offs that you need to know about when you choose which tool set to use.

    First, the basics

    Both tool sets integrate with a wide variety of source control systems including the most popular: Subversion, GIT, Vault, and Team Foundation Server. Both tools have integrated functionality to produce objects to upgrade your target database when you are ready (DACPACs in SSDT, integration with SQL Compare for SQL Source Control). If you regularly live in Visual Studio or the Business Intelligence Development Studio (BIDS) then SSDT will likely be comfortable for you. Like BIDS, SSDT is a Visual Studio Project Type that comes with SQL Server, and if you don’t already have Visual Studio installed, it will install the shell for you. If you already have Visual Studio 2010 installed, then it will just add this as an available project type. On the other hand, if you regularly live in SQL Server Management Studio (SSMS) then you will really enjoy the SQL Source Control integration from within SSMS. Both tool sets store their database model in script files. In SSDT, these are on your file system like other source files; in SQL Source Control, these are stored in the folder structure in your source control system, and you can always GET them to your file system if you want to browse them directly.

    For me, the key differentiating factors are 1) a single, unified check-in, and 2) migration scripts. How you value those two features will likely make your decision for you.

    Unified Check-In

    If you do a continuous-integration (CI) style of development that triggers an automated build with unit testing on every check-in of source code, and you use Visual Studio for the rest of your development, then you will want to really consider SSDT. Because it is just another project in Visual Studio, it can be added to your existing Solution, and you can then do a complete, or unified, single check-in of all changes, whether they are application or database changes.

    This is simply not possible with SQL Source Control because it is in a different development tool (SSMS instead of Visual Studio) and there is no way to do one unified check-in between the two. You CAN do really fast back-to-back check-ins, but there is the possibility that the automated build triggered by the first check-in will cause your unit tests to fail and the CI tool to report that you broke the build. Of course, the automated build triggered by the second check-in, which contains the “other half” of your changes, should pass, and so the amount of time that the build was broken may be very, very short. But if that is very, very important to you, then SQL Source Control just won’t work; you’ll have to use SSDT.

    Refactoring and Migrations

    If you work on a mature system, or on a not-so-mature but also not-so-well-designed system, where you want to refactor the database schema as you go along but you can’t have data suddenly disappearing from your target system, then you’ll probably want to go with SQL Source Control. As I wrote previously, there are a number of changes which you can make to your database that the comparison tools (both from Microsoft and Red Gate) simply cannot handle without the possibility (or probability) of data loss. Currently, SSDT only offers you the ability to inject PRE and POST custom deployment scripts. There is no way to insert your own script in the middle to override the default behavior of the tool. In version 3.0 of SQL Source Control (Early Access version now available) you have the ability to create your own custom migration script to take the place of the commands that the tool would have run, and ensure the preservation of your data. Or, even if the default tool behavior would have worked, but you simply know a better way, you can take control and do things your way instead of theirs.

    You Decide

    In the environment I work in, our automated builds are not triggered off of check-ins but off of the clock (currently once per night), and so there is no point at which the automated build and unit tests will be triggered without having both sides of the development effort already checked in. Therefore having a unified check-in, while handy, is not critical for us. As for migration scripts, these are critically important to us. We do a lot of new development on systems that have already been in production for years, and it is not uncommon for us to need to do a refactoring of the database. Because of the maturity of the existing system, that often involves data migrations or other additional SQL tasks that the comparison tools just can’t detect on their own. Therefore, the ability to create a custom migration script to override the tool’s default behavior is very important to us. And so, you can see why we will continue to use Red Gate SQL Source Control for the foreseeable future.
    Read the article
