Search Results

Search found 40998 results on 1640 pages for 'setup project'.


  • Is there a way to easily convert a series of tarballs of a source tree into a git repository?

    - by Hotei
    I'm new to git and I have a moderately large number of weekly tarballs from a long-running project. Each tarball has on average a few hundred files in it. I'm looking for a git strategy that will allow me to add the expanded contents of each tarball to a new git repository, starting from version 1.001 and going through version 1.650. At this stage of the project 99.5% of tarball(n) is just a copy of version(n-1) - in other words, a perfect candidate for git. The desired end result is to have only the master branch remaining at the end of the process. I think I know git well enough to do this "by hand". As I understand it there is no possibility of a merge conflict since there will be no opportunity to change the master before the next version is added and committed. A shell script is my first guess, but I'm not sure how well bash will like it when git checkout branch_n gets processed while bash is executing in branch_n-1. For the purposes of this project the host environment is Ubuntu 10.04; resources available are 8 GB RAM, 500 GB of free disk space and a 4-core CPU at 3 GHz. I don't need someone else to solve the problem but I could use a nudge in the right direction as to how a git expert would approach it. Any advice from someone who's "been there, done that" would be appreciated. Hotei PS: I have looked at the site's suggested "related questions" and found nothing relevant.
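
    A minimal shell sketch of one way to drive this, assuming the tarballs sort in version order, each expands under a single top-level directory, and the path is a hypothetical placeholder (illustrative, not a tested import tool):

        #!/bin/bash
        # Import a series of tarball snapshots as successive commits on master.
        git init repo && cd repo
        for tarball in /path/to/tarballs/project-*.tar.gz; do
            # wipe the previous snapshot, keeping the repository itself
            find . -mindepth 1 -maxdepth 1 ! -name .git -exec rm -rf {} +
            # unpack the next snapshot into the working tree
            tar -xzf "$tarball" --strip-components=1
            git add -A
            git commit -m "Import $(basename "$tarball")"
        done

    Since every snapshot is committed to the same branch in sequence, the script never checks out an older branch while running, which sidesteps the bash-inside-a-changing-directory worry entirely.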

    Read the article

  • ASP.NET confusion - server controls

    - by Brandi
    I have read through the information in this question: http://stackoverflow.com/questions/22084/asp-net-aspxxx-controls-versus-standard-html but am still rather confused. The situation was that I was asked to do a web project involving a wizard. When I was done with the project everyone asked why I had used an <asp:Wizard...>. I thought this was what was being asked for, but apparently not, so after this I was led to believe that server controls were just prototyping tools. However, on the next project I did my DB queries in the C# code-behind and loaded the results via HTML. I was then asked why I had not used a GridView and a DataSet. Does anyone have a list of pros and cons for choosing specific HTML controls over specific server controls, and why? I guess I'm looking for a list... what server controls are okay to use and why? EDIT: I guess this question is open-ended, so I'll clarify with a few more specific questions... Is it okay to use very simple controls such as asp:Label, or do these just end up wasting space? It seems like it would be difficult to access the HTML in the code-behind otherwise. Are there a few controls that should just never be used? Does anyone have a good resource that will show me the pros and cons of each control?
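
    On the asp:Label question specifically, a common middle ground is a plain HTML element marked runat="server": it renders minimal markup but is still reachable from the code-behind. A small hypothetical sketch of the two side by side:

        <%-- server control: convenient, but carries ViewState by default --%>
        <asp:Label ID="StatusLabel" runat="server" />

        <%-- plain HTML element, still accessible from code-behind --%>
        <span id="StatusSpan" runat="server"></span>

        // code-behind (C#): both can be updated, just through different types
        StatusLabel.Text = "Saved.";
        StatusSpan.InnerText = "Saved.";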

    Read the article

  • VS 2008 C++ build output?

    - by STingRaySC
    Why, when I watch the build output from a VC++ project in VS, do I see:

        1>Compiling...
        1>a.cpp
        1>b.cpp
        1>c.cpp
        1>d.cpp
        1>e.cpp
        [etc...]
        1>Generating code...
        1>x.cpp
        1>y.cpp
        [etc...]

    The output looks as though several compilation units are being handled before any code is generated. Is this really what's going on? I'm trying to improve build times, and by using pre-compiled headers I've gotten great speedups for each ".cpp" file, but there is a relatively long pause during the "Generating Code..." message. I do not have "Whole Program Optimization" or "Link Time Code Generation" turned on. If this is the case, then why? Why doesn't VC++ compile each ".cpp" individually (which would include the code generation phase)? If this isn't just an illusion of the output, is there cross-compilation-unit optimization potentially going on here? There don't appear to be any compiler options to control that behavior (I know about WPO and LTCG, as mentioned above). EDIT: The build log just shows the ".obj" files in the output directory, one per line. There is no indication of "Compiling..." vs. "Generating code..." steps. EDIT: I have confirmed that this behavior has nothing to do with the "maximum number of parallel project builds" setting in Tools > Options > Projects and Solutions > Build and Run. Nor is it related to the MSBuild project build output verbosity setting. Indeed, if I cancel the build before the "Generating code..." step, the ".obj" files will not exist for the most recent set of "compiled" files. E.g., if I cancel the build during "c.cpp" above, I will see only "a.obj" and "b.obj".

    Read the article

  • How can I provide users with the functionality of the DBUnit DatabaseOperation methods from a web interface?

    - by reckoner
    I am currently updating a Java-based web application which allows database developers to create stored procedure regression test suites for database testing. Currently, for the test setup, execution and clean-up stages, the user is provided with text boxes where they are able to enter SQL code which is executed by the isql command. I would like to extend the application to use DbUnit's DatabaseOperation methods to provide more ways to set up the state of the database than just SQL statements. The main reason for using DbUnit rather than just SQL statements is to be able to create and store XML and XLS DataSets on a server where they can be associated with their test cases and used for data setup. My question is: How can I provide users with the functionality of the DbUnit DatabaseOperation methods from a web interface? I have considered:
    - Creating a simple programming language and a parser to read some simple syntax involving the DbUnit method names, which accept a parameter giving the file location of an XML or XLS DataSet. I was thinking of allowing the user to register the files they need with the web app, which would catalogue them and provide each file with an identifier which could be passed as a parameter to the methods in this simple programming language.
    - Creating an XML DTD which provides the user with the ability to specify operations and parameters. If I went this approach, how can I execute the methods and their parameters that I parse from the XML document?
    - Creating a table in the database which stores the method and an FK relation to a catalogued DataSet file; however, I don't think this would be a good solution because data entry would be tedious.
    Thanks for your help.
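
    For reference, the calls being wrapped are small. A server-side sketch of executing one catalogued DataSet (assuming DbUnit 2.4+; jdbcConnection and the file are placeholders for whatever the catalogue hands back):

        import java.io.File;
        import org.dbunit.database.DatabaseConnection;
        import org.dbunit.database.IDatabaseConnection;
        import org.dbunit.dataset.IDataSet;
        import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
        import org.dbunit.operation.DatabaseOperation;

        public class DataSetRunner {
            // Applies one catalogued XML DataSet to the database under test.
            public static void applyDataSet(java.sql.Connection jdbcConnection, File dataSetFile) throws Exception {
                IDatabaseConnection connection = new DatabaseConnection(jdbcConnection);
                IDataSet dataSet = new FlatXmlDataSetBuilder().build(dataSetFile);
                // CLEAN_INSERT: delete existing rows for the tables involved, then insert the dataset
                DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
            }
        }

    Since each operation is just a static constant chosen by name, a simple Map<String, DatabaseOperation> lookup keyed on the user's selection ("CLEAN_INSERT", "REFRESH", ...) would cover the common cases without inventing a new language or DTD.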

    Read the article

  • Adding an ADO.NET Entity Data Model throws build errors

    - by user3726262
    I am using Visual Studio 2013 Express. I create a new project and then add a database to that project. But when I add an ADO.NET Entity Framework model to that project and then run the program, I get the four build errors listed below. To try to remedy this myself, I added the namespaces 'System.Data.Entity' and 'System.Data.Entity.Design', but that didn't help. I also uninstalled and re-installed the NuGet package, and uninstalled and re-installed Visual Studio 2013 Express for Windows Desktop, but these measures didn't help the situation either. Please note that I used to use the Entity Data Model just fine. But it was around the time that I did a system restore on my computer, updated VS 2013 with an update offered on the start page, and signed up for MS Azure that I started running into the problem described above. I would think that uninstalling and reinstalling Visual Studio 2013 and then installing the NuGet package would solve all problems. What am I missing here? The errors mentioned above are:

        Error 1 The type or namespace name 'Infrastructure' does not exist in the namespace 'System.Data.Entity' (are you missing an assembly reference?) C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs 14 30 DataLayer
        Error 2 The type or namespace name 'DbContext' could not be found (are you missing a using directive or an assembly reference?) C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs 16 52 DataLayer
        Error 3 The type or namespace name 'DbModelBuilder' could not be found (are you missing a using directive or an assembly reference?) C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs 23 49 DataLayer
        Error 4 The type or namespace name 'DbSet' could not be found (are you missing a using directive or an assembly reference?) C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs 28 16 DataLayer

    Thank you, and I realize that my last attempt at this question was rather rough-draftish. John
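
    For what it's worth, all four missing types (DbContext, DbSet, DbModelBuilder, and the Infrastructure namespace) live in EntityFramework.dll, which comes from the EntityFramework NuGet package (EF 4.1 and later), not from the framework's own System.Data.Entity.dll - so adding namespaces alone can't fix it; the package reference has to actually be restored into the project. A minimal smoke test (type names here are hypothetical):

        // Compiles only when EntityFramework.dll is referenced;
        // the GAC's System.Data.Entity.dll does not contain these types.
        using System.Data.Entity;

        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class RichesContext : DbContext
        {
            public DbSet<Customer> Customers { get; set; }
        }

    If this fails to compile, check that the project's References node shows EntityFramework (not just System.Data.Entity) after the NuGet restore.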

    Read the article

  • Use .bat file to recursively loop through folders and get hold of .class files

    - by user320550
    Hi all, this is what I'm trying to do. I have a .bat file which takes in an argument which is a folder name. First I go one level up (cd ..). In this directory I have 3 folders, each of which has sub-folders containing .class files. What I want to do is recursively loop through the folders and get hold of the .class files. Once this is done I want to echo the containing folder of each .class file as well as the name of the .class file. So for c:\temp\potter\myclass.class I would echo out c:\temp\potter\ and myclass. I'm able to do this by writing a separate .bat file, which works. But when I integrate it with the recursive loop it seems to break. This is what I'm doing:

        :: call the junit classes... and save the results
        echo step 3...
        cd %1
        cd ..
        for /r %%a in (*.class) do set Var=%%a
        echo Full file location %Var%
        for %%i in ("%Var%") do Set CF=%%~dpi
        Set CF=%CF:~0,-1%
        :LOOP
        If "%CF:~-1,1%"=="\" GoTo :DONE
        Set CF=%CF:~0,-1%
        GoTo :LOOP
        :DONE
        Set CF=%CF:~0,-1%
        echo Folder Location %CF%
        ::cd %CF%
        For %%j in ("%Var%") Do Set name=%%~nxj
        :: -6 because of Quotations
        Set name=%name:~0,-6%
        echo File Name %name%
        echo step 3 complete...

    However I only get the output of one directory, while multiple directories have .class files. This is the output I get:

        step 3...
        Full file location C:\NKCV\Project\MyActivities\6_Selenium\htmlTestCasesConverted2JUnit\iexplore\flow2\testCase_app2.class
        Folder Location C:\NKCV\Project\MyActivities\6_Selenium\htmlTestCasesConverted2JUnit\iexplore\flow2
        File Name testCase_app2
        step 3 complete...
        missing argument! usage htmltestCaseLocation for eg., "C:\NKCV\Project\MyActivities\6_Selenium\htmlTestCases"

    Could anyone please let me know what's wrong here? Thanks.
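
    A hedged guess at the culprit: the echo/slicing block runs after the for /r loop has already finished, so Var only ever holds the last match. Calling a subroutine once per file avoids that, and the %~dp / %~n modifiers replace the manual string slicing (a sketch built on the same assumptions as the original script):

        echo step 3...
        cd %1
        cd ..
        for /r %%a in (*.class) do call :process "%%a"
        echo step 3 complete...
        goto :eof

        :process
        echo Full file location %~1
        rem %~dp1 = drive + path of the argument (with trailing backslash)
        echo Folder Location %~dp1
        rem %~n1 = file name without the .class extension (and no quotes)
        echo File Name %~n1
        goto :eof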

    Read the article

  • Windows Service doesn't start process with different credentials

    - by Marcus
    I have a Windows Service, running as a user, that should start several processes under different user credentials. I'm using the following code to start a process:

        Dim winProcess As New System.Diagnostics.Process
        With winProcess
            .StartInfo.Arguments = "some_args"
            .StartInfo.CreateNoWindow = True
            .StartInfo.ErrorDialog = False
            .StartInfo.FileName = "C:\TEMP\ProcessFromService\ProcessFromService\bin\Debug\ProcessFromService.exe"
            .StartInfo.UseShellExecute = False
            .StartInfo.WindowStyle = ProcessWindowStyle.Hidden
            'Specifying WorkingDirectory can sometimes cause problems if the
            'directory is not accessible (permissions) to the specified user.
            'Better not to specify it.
            '.StartInfo.WorkingDirectory = My.Computer.FileSystem.SpecialDirectories.Temp
            .StartInfo.Domain = ""
            .StartInfo.UserName = "MyUserId"
            Dim strPassword As String = "MyPassword"
            Dim ssPassword As New Security.SecureString
            For Each chrPassword As Char In strPassword.ToCharArray
                ssPassword.AppendChar(chrPassword)
            Next
            .StartInfo.Password = ssPassword
            .Start()
        End With

    The process is correctly started when I use the same credentials the Windows Service is running under. The process is not started, without any error, when I use different credentials. In other words: if the Windows Service is running as UserA then I can start a process running as UserA; if the Windows Service is running as UserB then I can not start a process running as UserA. I have created a test project in which I can reproduce this problem. If you put the project in C:\Temp then the paths used will be correct. You can download the test project here: https://dl.dropboxusercontent.com/u/5391091/ProcessFromService.zip NB: I hope this info is enough to explain it. If you need more info, please let me know and I will add it.
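
    Two settings commonly cause exactly this kind of silent failure and may be worth trying first (a hedged sketch; the "." domain assumes the target account is local to the machine):

        With winProcess
            ' ... same settings as the original ...
            .StartInfo.Domain = "."             ' "." = account local to this machine
            .StartInfo.UserName = "MyUserId"
            .StartInfo.LoadUserProfile = True   ' some programs exit at startup without a profile
            .Start()
        End With

    If that doesn't help: Process.Start with credentials uses CreateProcessWithLogonW under the hood, and that API is documented not to work from a process running as LocalSystem - services in that situation usually have to drop down to CreateProcessAsUser via P/Invoke instead.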

    Read the article

  • How to manage sessions in NHibernate unit tests?

    - by Ben
    I am a little unsure as to how to manage sessions within my NUnit test fixtures. In the following test fixture, I am testing a repository. My repository constructor takes an ISession (since I will be using session-per-request in my web application). In my test fixture setup I configure NHibernate and build the session factory. In my test setup I create a clean SQLite database for each test executed.

        [TestFixture]
        public class SimpleRepository_Fixture
        {
            private static ISessionFactory _sessionFactory;
            private static Configuration _configuration;

            [TestFixtureSetUp] // called before any tests in fixture are executed
            public void TestFixtureSetUp()
            {
                _configuration = new Configuration();
                _configuration.Configure();
                _configuration.AddAssembly(typeof(SimpleObject).Assembly);
                _sessionFactory = _configuration.BuildSessionFactory();
            }

            [SetUp] // called before each test method is called
            public void SetupContext()
            {
                new SchemaExport(_configuration).Execute(true, true, false);
            }

            [Test]
            public void Can_add_new_simpleobject()
            {
                var simpleObject = new SimpleObject() { Name = "Object 1" };

                using (var session = _sessionFactory.OpenSession())
                {
                    var repo = new SimpleObjectRepository(session);
                    repo.Save(simpleObject);
                }

                using (var session = _sessionFactory.OpenSession())
                {
                    var repo = new SimpleObjectRepository(session);
                    var fromDb = repo.GetById(simpleObject.Id);
                    Assert.IsNotNull(fromDb);
                    Assert.AreNotSame(simpleObject, fromDb);
                    Assert.AreEqual(simpleObject.Name, fromDb.Name);
                }
            }
        }

    Is this a good approach or should I be handling the sessions differently? Thanks, Ben

    Read the article

  • Exception while hosting a WCF Service in a DependencyInjection Module?

    - by Maciek
    Hello, I've written a small just-for-fun console project using Ninject. I'm pasting some of the code below just so that you get the idea:

    Program.cs:

        using System;
        using Ninject;
        using Ninjections.Modules; // My namespace for my modules

        namespace Ninjections
        {
            class Program
            {
                static void Main(string[] args)
                {
                    IKernel kernel = new StandardKernel();
                    kernel.Load<ServicesHostModule>();
                    Console.ReadKey();
                }
            }
        }

    ServicesHostModule.cs:

        using System;
        using System.ServiceModel;
        using Ninject;
        using Ninject.Modules;

        namespace Ninjections.Modules
        {
            public class ServicesHostModule : INinjectModule
            {
                #region INinjectModule Members
                public string Name { get { return "ServicesHost"; } }

                public void OnLoad(IKernel kernel)
                {
                    if (m_host != null)
                        m_host.Close();
                    else
                        m_host = new ServiceHost(typeof(WCFTestService));
                    m_host.Open(); // (!) EXCEPTION HERE
                }

                public void OnUnLoad(IKernel kernel)
                {
                    m_host.Close();
                }
                #endregion
            }
        }

    ITestWCFService.cs:

        using System.ServiceModel;

        namespace Ninjections.Modules
        {
            [ServiceContract]
            public interface ITestWCFService
            {
                [OperationContract]
                string GetString1();

                [OperationContract]
                string GetString2();
            }
        }

    An auto-generated App.config is in the ServicesHostModule project. I've "added" the existing item (the app config) as a link in the main project. Q: at the m_host.Open(); line, an InvalidOperationException occurs. The message says: "Service "Ninjections.Modules.TestWCFService" has zero application endpoints". What's wrong?
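
    The exception is literal: the host really has no endpoints, and a config file linked from another project only counts if it ends up as the console executable's own output config. Assuming the config is in place, an endpoint declaration along these lines is what's missing (binding and address are illustrative choices; the name attribute must be the concrete class's full type name, not the contract's):

        <system.serviceModel>
          <services>
            <service name="Ninjections.Modules.WCFTestService">
              <endpoint address=""
                        binding="basicHttpBinding"
                        contract="Ninjections.Modules.ITestWCFService" />
              <host>
                <baseAddresses>
                  <add baseAddress="http://localhost:8732/TestWCFService" />
                </baseAddresses>
              </host>
            </service>
          </services>
        </system.serviceModel>

    Note that the error message names "Ninjections.Modules.TestWCFService" while the code hosts typeof(WCFTestService) - whichever name the class really has, the name attribute has to match it exactly.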

    Read the article

  • DataSets to POCOs - an inquiry regarding DAL architecture

    - by alexsome
    Hello all, I have to develop a fairly large ASP.NET MVC project very quickly and I would like to get some opinions on my DAL design to make sure nothing will come back to bite me, since the BL is likely to get pretty complex. A bit of background: I am working with an Oracle backend so the built-in LINQ to SQL is out; I also need to use production-level libraries so the Oracle EF provider project is out; finally, I am unable to use any GPL or LGPL code (Apache, MS-PL, BSD are okay) so NHibernate/Castle Project are out. I would prefer - if at all possible - to avoid dishing out money, but I am more concerned about implementing the right solution. To summarize, these are my requirements:
    - Oracle backend
    - Rapid development
    - (L)GPL-free
    - Free
    I'm reasonably happy with DataSets but I would benefit from using POCOs as an intermediary between DataSets and views. Who knows, maybe at some point another DAL solution will show up and I will get the time to switch it out (yeah, right). So, while I could use LINQ to convert my DataSets to IQueryable, I would like to have a generic solution so I don't have to write a custom query for each class. I'm tinkering with reflection right now, but in the meantime I have two questions: Are there any problems I overlooked with this solution? Are there any other approaches you would recommend for converting DataSets to POCOs? Thanks in advance.
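
    Since reflection is already on the table, the generic DataSet-to-POCO step can be quite small. A sketch (it assumes column names match property names one-to-one and skips type-conversion corner cases):

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Reflection;

        public static class DataTableMapper
        {
            // Materializes each DataRow as a new T, matching columns to writable properties.
            public static IEnumerable<T> ToPocos<T>(DataTable table) where T : new()
            {
                PropertyInfo[] props = typeof(T).GetProperties();
                foreach (DataRow row in table.Rows)
                {
                    var item = new T();
                    foreach (PropertyInfo prop in props)
                    {
                        if (prop.CanWrite
                            && table.Columns.Contains(prop.Name)
                            && row[prop.Name] != DBNull.Value)
                        {
                            prop.SetValue(item, row[prop.Name], null);
                        }
                    }
                    yield return item;
                }
            }
        }

    Caching the PropertyInfo[] per type would be the first optimization if this lands on a hot path, but correctness-wise this is the whole trick.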

    Read the article

  • How do I make the manifest available during a Maven/Surefire unit test run ("mvn test")?

    - by Ernst de Haan
    How do I make the manifest available during a Maven/Surefire unit test run ("mvn test")? I have an open-source project that I am converting from Ant to Maven, including its unit tests. Here's the source repository with the Maven project: http://github.com/znerd/logdoc My question pertains to the primary module, called "base". This module has a unit test that tests the behaviour of the static method getVersion() in the class org.znerd.logdoc.Library. This method returns:

        Library.class.getPackage().getImplementationVersion()

    The getImplementationVersion() method returns the value of a setting in the manifest file. So far, so good. I have tested this in the past and it works well, as long as the manifest is indeed available on the classpath at the path META-INF/MANIFEST.MF (either on the file system or inside a JAR file). Now my challenge is that the manifest file is not available when I run the unit tests with mvn test. Surefire runs the unit tests, but my unit test fails with a message indicating that Library.getVersion() returned null. When I check for the JAR, I find that it has not even been generated: Maven/Surefire runs the unit tests against the classes, before the resources are added to the classpath. So can I either run the unit tests against the JAR (implicitly requiring the JAR to be generated first), or can I make sure the resources (including the manifest file) are generated/copied under target/classes before the unit tests are run? Note that I use Maven 2.2.0 and Java 1.6.0_17 on Mac OS X 10.6.2, with JUnit 4.8.1.
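
    One workaround stays entirely inside the normal lifecycle: plain resources are in fact copied (and optionally filtered) into target/classes during the process-resources phase, which runs before test - it is only the jar plugin's generated manifest that does not exist yet at that point. So a filtered properties file can stand in for the Implementation-Version entry during mvn test (a sketch, not this project's actual configuration):

        <!-- pom.xml: turn on resource filtering -->
        <build>
          <resources>
            <resource>
              <directory>src/main/resources</directory>
              <filtering>true</filtering>
            </resource>
          </resources>
        </build>

        # src/main/resources/org/znerd/logdoc/version.properties
        version=${project.version}

    Library.getVersion() could then fall back to reading that resource whenever getImplementationVersion() returns null, which covers both the Surefire run and the packaged JAR.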

    Read the article

  • E4X filter on more than one child?

    - by Chris
    My XML looks like this:

        <?xml version="1.0" encoding="utf-8" ?>
        <projects>
            <project id="1" thumb="media/images/thumb.jpg" >
                <categories>
                    <id>1</id>
                    <id>2</id>
                </categories>
                <director>Director name</director>
                <name><![CDATA[IPhone commercial]]></name>
                <url><![CDATA[http://www.iphone.com]]></url>
                <description><![CDATA[Description about the project]]></description>
                <thumb><![CDATA[/upload/images/thumb.jpg]]></thumb>
            </project>
        </projects>

    But I cannot figure out how to filter projects based on a category id. Does anybody know how to do it? :)
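
    In E4X the filter predicate can call contains() on the repeated <id> children, which handles exactly this more-than-one-child case. An ActionScript 3 sketch (projectsXml is a hypothetical variable holding the document above):

        // all <project> nodes whose <categories> list contains the id "1"
        var matches:XMLList = projectsXml.project.(categories.id.contains("1"));

        // guarded variant, in case some projects have no <categories> element
        var safe:XMLList = projectsXml.project.(
            hasOwnProperty("categories") && categories.id.contains("1"));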

    Read the article

  • Should the code being tested compile to a DLL or an executable file?

    - by uriDium
    I have a solution with two projects: one for the production code and another for the unit tests. I did this as per the suggestions I got here on SO. I noticed that the Debug folder includes the production code in executable form. I used NUnit to run the tests after removing the executable and they all fail trying to find the executable, so it definitely is trying to find it. I then did a quick read to find out which is better, a DLL or an executable. It seems that a DLL is much faster, since DLLs share the caller's memory space whereas communication between executables is slower. Unfortunately our production code needs to be an executable, so the unit tests will be slightly slower. I am not too worried about that. But the project does rely on code written in another library which is also in executable format at the moment. Should the projects that expose some sort of SDK rather be compiled to DLLs, and the projects that use the SDK be compiled to executables?

    Read the article

  • New projects not built when target platform is set explicitly

    - by stiank81
    I create a new solution with one project, and then change the target platform from "Any CPU" to "x86". After this, newly added projects don't get built by default, and their target platform doesn't follow the global setting. Why?! Looking at the configuration manager, newly added projects are not checked to "Build", and they get target platform "Any CPU" instead of the globally set x86. Why is this happening? I expect new projects to get the globally set and defined x86 target platform. Some things I've tried:
    - Toggling the global platform back to Any CPU, and then to x86 again. No change.
    - Choosing the platform explicitly for the new project. x86 is not available in the list, and when I say <New..> and try adding it I'm not allowed, as "a solution platform with the same name already exists".
    - On the build properties for the new project I can't change the platform in the Configuration section, but I can set "Platform target" to x86 in the General section. It is however not clear whether this actually makes a difference, and it doesn't respond if I change the target platform globally later.
    Initially I thought this was a problem from converting my solution from VS2008 to VS2010, but the problem applies in both: when I create a solution in VS2008 and just stay in VS2008, I still get the problem.

    Read the article

  • Visual Studio crashes when I add a .settings file in a C++ Windows form application

    - by Ant
    I'm trying to add a .settings file to a Windows Forms application by adding a (whatever) file to the project and naming it smthng.settings. Right after it is created, Visual Studio crashes (if I look in the project's directory the file is there, but it's not "in" the project). Am I doing it wrong, or could the problem lie elsewhere? Edit: It seems that it's the settings designer that crashes. Partially solved: if I add a (whatever).config file, then rename it to .settings and change its file type to C/C++ Code (don't ask how I figured this out..), then I can add settings to it. But if I add anything that has a connection to the form, a (whatever).config with the same name spawns automatically (which stays identical to the (whatever).settings even if I change one of them) and a #include "(whatever).h" appears in stdafx.cpp, which is a problem because there is no such header. (If I erase it or just create a blank (whatever).h it doesn't work. Apparently I have to somehow connect all the data in the (w/e).settings to (w/e).h as well, or maybe something else.) Anyone had this problem before? Anyone have any ideas?

    Read the article

  • SQLite Databases and Grid Hosting

    - by jocull
    I'm considering moving my site from a GoDaddy shared hosting account to a Media Temple grid hosting account in anticipation of traffic. However, I first have some concerns with the grid hosting setup. My site stores a large personal set of data on a per-user basis (possibly 3-4MB per user). At that rate I was worried about blowing past a 1GB MySQL limit in no time, so to deal with this I created distributed SQLite databases per user to store large data objects. It's worked wonderfully so far; SQLite is super fast and simple. I know that reading from and writing to files is different in a grid hosting environment, so I need to know if this setup is going to cause serious problems. These databases are not (and will not be) highly trafficked. They are personal to the user and will only be touched from maybe two locations at the same time (one updating the data hourly at most, and one or more reading on demand). I'd like to keep this setup, as getting additional space (beyond 4GB) on a MySQL database seems to be a real trouble point. Will grid hosting cause me serious problems? Thanks.

    Read the article

  • How can I sign a Windows Mobile application for internal use?

    - by AR
    I'm developing a Windows Mobile application for internal company use, using the Windows Mobile 6 Professional SDK. Same old story: I've developed and tested on the emulator and all is well, but as soon as I deploy to a device I get an UnauthorizedAccessException when writing files or creating directories. I'm aware that an application installed to a device needs to be signed, but I'm running into roadblocks at every turn:
    - Using the project properties 'Devices' window I select 'Sign the project output with this certificate' and choose one of the sample certificates from the SDK. This results in a build error: "The signer's certificate is not valid for signing" when running SignTool.
    - If I try to run SignTool.exe from the command line, I get an error telling me to run SignTool.exe from a location in the system's PATH.
    - I can't use the 'Signing' tab in the Project Properties to create a test certificate - this is greyed out (presumably for WinMobile projects?).
    If at all possible, I would like to avoid having to go through VeriSign or the like to get a Mobile2Market certificate. If I have to go that route for the final version that's fine, but I need to at least be able to test the app on real devices. Any advice would be most welcome!

    Read the article

  • Sharing runtime variables between files

    - by nightcracker
    I have a project with a few files that all include the header global.hpp. Those files want to share and update information that is relevant for the whole program during runtime (the data is gathered progressively as the program runs, but the fields are known at compile time). Now my idea was to use a struct like this:

    global.hpp:

        #include <string>

        #ifndef _GLOBAL_SESSION_STRUCT
        #define _GLOBAL_SESSION_STRUCT
        struct session_struct {
            std::string username;
            std::string password;
            std::string hostname;
            unsigned short port;
            // more data fields as needed
        };
        #endif

        extern struct session_struct session;

    main.cpp:

        #include "global.hpp"

        struct session_struct session;

        int main(int argc, char* argv[]) {
            session.username = "user";
            session.password = "secret";
            session.hostname = "example.com";
            session.port = 80;
            // other stuff, etc
            return 0;
        }

    Now every file that includes global.hpp can just read & write the fields of the session struct and easily share information. Is this the correct way to do this? NOTE: For this specific project no threading is used. But please (for future projects and other people reading) clarify in your answer how this (or your proposed solution) works when threaded. Also, for this example/project session variables are shared, but this should also apply to any other form of shared variables.
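
    On the threading note: the pattern is fine single-threaded, but once threads enter the picture every read and write of the shared struct needs synchronization, or two threads can race on the std::string members. A minimal guarded sketch (assuming a C++11 compiler; the mutex, like the struct instance, needs one definition in a .cpp file):

        #include <mutex>
        #include <string>

        struct session_struct {                   // same fields as global.hpp
            std::string username;
            std::string password;
            std::string hostname;
            unsigned short port;
        };

        extern session_struct session;            // defined once in main.cpp
        extern std::mutex session_mutex;          // guards every access to `session`

        void set_credentials(const std::string& user, const std::string& pass)
        {
            std::lock_guard<std::mutex> lock(session_mutex);
            session.username = user;
            session.password = pass;
        }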

    Read the article

  • StructureMap Wiring - Sanity Check Please

    - by Steve Ward
    Hi - I'm new to IoC and StructureMap, have an n-level application, and am looking at how to set up the wirings (ForRequestedType ...). I just want to check with people with more experience that this is the best way of doing it! I don't want my UI application to reference my persistence layer directly, so I am not able to wire everything up in the UI project. I now have it working by defining a Registry class in each project which wires up the types in that project as needed. Each layer registers its own types and also scans the assembly below it for registries, so that all types are registered throughout the hierarchy. E.g. I have UI, Service, Domain, and Persistence libraries. In my service layer the registry looks like:

        Scan(x =>
        {
            x.Assembly("MyPersistenceProject");
            x.LookForRegistries();
        });
        ForRequestedType<IService>().TheDefault.Is.OfConcreteType<MyService>();

    Is this a recommended way of doing this in a setup such as this? Are there better ways, and what are the advantages/disadvantages of these approaches in this case?

    Read the article

  • AnkhSVN: Cannot checkout Subsolution due to existing "versioned" folder

    - by lostiniceland
    Hello everyone. I have been using Subversion for quite some time for Java development, and I have set up a repository on my local NAS. Since I have an MSDN subscription via my company, I recently installed Visual Studio 2010 to do a small project with .NET. Following some "best practices", my project folder looks like this:

        MySolution
            main.sln
            Services
                services.sln
                Service A files
                Service A Test files
            View
                project files
            Persistence
                persistence.sln
                PersistenceXml files
                PersistenceXml Test files
                PersistenceDB files
                PersistenceDB Test files

    The idea is that main.sln only contains the projects for the application, meaning no test projects. The subsolutions contain the project(s) and their corresponding test projects. I was able to put all these projects under version control with AnkhSVN, so I have the same structure in the trunk of the repository. Committing changes was also no problem. Now I would like to check this out on another machine. I was able to check out main.sln, which downloaded everything inside that solution. It skipped services.sln, persistence.sln and all the test projects. Up to here everything is fine. Now comes the problem: when I try to check out a subsolution (e.g. services.sln) I get an error - I think it was UnsupportedOperation. I guess this happens because AnkhSVN is trying to download the folder Service A again and create its hidden .svn folder, which is already present. The only workaround I can think of is installing TortoiseSVN and checking out the whole thing at once. It would be nicer, though, to have everything from within VS. Does anyone know how I can solve this? Is another client the only solution?

    Read the article

  • Errors with redefinitions after upgrade to Xcode 3.2.3

    - by CA Bearsfan
    I recently upgraded to Snow Leopard and Xcode 3.2.5 so I could test on my iPod Touch and iPhone, and ran into some problems with the project I was working on. First it couldn't find a Base SDK, then my old frameworks weren't hooking up correctly. Finally, after setting the Project Format to Xcode 3.1-compatible (3.2 also worked), the Base SDK for all configurations to iOS 4.2, and my iOS deployment target to iOS 3.0, I was able to get the system to find a Base SDK and attempt a build. That's when the frameworks didn't want to cooperate: 4 of the 6 I'm using displayed in red, so I re-routed the path to the iPhone Simulator 4.2 platform, which worked perfectly. I was able to build my project with no errors or warnings, and my app worked fine. I went to work last night thinking I had fixed the problem. This morning I fired up the laptop, went to build my code base, and now have 1142 errors, all of which deem code I haven't written as being redefined. Suggestions? The following is just a small sample of the error list (obviously you don't need to see all 1142):

        /Frameworks/Foundation.framework/Headers/NSZone.h:48: error: redefinition of 'NSMakeCollectable'
        /Frameworks/Foundation.framework/Headers/NSObject.h:65: error: duplicate interface declaration for class 'NSObject'
        /Frameworks/Foundation.framework/Headers/NSObject.h:67: error: redefinition of 'struct NSObject'

    Read the article

  • Flex Modules vs RSL

    - by nil
    Hi, I'm a little bit confused about when it is better to use Flex Modules or RSLs (runtime shared libraries) in Flex 3.5. My goal is to split my project into several smaller projects, so I can test and work on them separately. Let's assume I have a Customer app and a Vendor app. I also have a front-end panel with two buttons; each button launches the Customer app or the Vendor app. These applications do different things, but they share some .as functions and common components. I understand that if I make a main project (for user login and to show the first panel) and two modules (Customer, Vendor), I must have all those components in one Eclipse project, mustn't I? Instead of making modules, should I create an SWC for the Vendor app and another for the Customer app and load them from the main app using RSLs? So, which option is more suitable? What do you advise me? What are the trade-offs of each option? On the other side, this Flex application is integrated with Java through BlazeDS, with iBATIS for persistence management, and is hosted on an Apache web server. I also considered creating independent WAR files to keep this independence, but I thought that does not optimize the Flex code. Am I right? Thank you. Nil

    Read the article

  • How do I use price data in one table for a calculation that is stored in another table?

    - by shane
    I'm still learning PHP/MySQL but have learned quite a bit thanks to coders on Stack Overflow. I'm trying to set up a sort of room reservation system using two tables.
    SETUP:
    - Room price table: has prices for a type of room a client may want to rent, as well as the dates (day of week) they wish to use it. Pricing varies per room and per day of the week. I've set up a different table for each room type, since each room type carries different pricing for each day of the week. So there is an Alpha room table, a Bravo room table, etc. Within the Alpha table are headers for the days of the week, with pricing pre-entered into the rows.
    - Client info table: has the name, address, date of room use, etc. for the specific client.
    EXAMPLE:
    - Alpha-room price table: Sun = $100; Mon = $200; Tue = $300; and so on.
    - Bravo-room price table: Sun = $100; Mon = $200; Tue = $300; and so on.
    - Client data table: ClientName; date-of-room-use; address; day_subtotal; grand_total.
    QUESTION: I'm trying to find PHP code that will: look at the date of room use in the client data table, look up the associated cost for that date in the specific room pricing table, record that unit cost in the day_subtotal of the client data table, and sum a grand total in the grand_total row of the client data table (assuming the room may be used for more than one day by the customer). I know this has something to do with a join, but I'm finding the concept difficult to grasp, and if someone can demonstrate it using this example (see the sketch below), I think I will have a better understanding of how this sort of query works. Thank you all in advance for your suggestions or alternative approaches.
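
    The per-room-table layout is what makes this hard: with one combined price table the whole lookup becomes a single join. A hedged sketch with hypothetical table and column names (MySQL):

        -- room_prices(room, day_of_week, price)   e.g. ('alpha', 'Sun', 100.00)
        -- bookings(client_name, room, use_date)   one row per day a client uses a room

        -- per-day subtotal: match each booked date to its weekday price
        SELECT b.client_name,
               b.use_date,
               p.price AS day_subtotal
        FROM   bookings b
        JOIN   room_prices p
               ON  p.room = b.room
               AND p.day_of_week = DATE_FORMAT(b.use_date, '%a');  -- 'Sun', 'Mon', ...

        -- grand total across a multi-day stay
        SELECT b.client_name,
               SUM(p.price) AS grand_total
        FROM   bookings b
        JOIN   room_prices p
               ON  p.room = b.room
               AND p.day_of_week = DATE_FORMAT(b.use_date, '%a')
        GROUP  BY b.client_name;

    With the join in place there is arguably no need to store day_subtotal and grand_total in the client table at all - they can be derived on demand, which avoids keeping two copies in sync.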

    Read the article

  • Get an array of structures from a native DLL to a C# application

    - by PaulH
    I have a C# .NET 2.0 CF project where I need to invoke a method in a native C++ DLL. This native method returns an array of type TableEntry. At the time the native method is called, I do not know how large the array will be. How can I get the table from the native DLL to the C# project? Below is effectively what I have now.

        // in C# .NET 2.0 CF project
        [StructLayout(LayoutKind.Sequential)]
        public struct TableEntry
        {
            [MarshalAs(UnmanagedType.LPWStr)]
            public string description;
            public int item;
            public int another_item;
            public IntPtr some_data;
        }

        [DllImport("MyDll.dll", CallingConvention = CallingConvention.Winapi, CharSet = CharSet.Auto)]
        public static extern bool GetTable(ref TableEntry[] table);

        SomeFunction()
        {
            TableEntry[] table = null;
            bool success = GetTable( ref table );
            // at this point, the table is empty
        }

        // In Native C++ DLL
        std::vector< TABLE_ENTRY > global_dll_table;

        extern "C" __declspec(dllexport) bool GetTable( TABLE_ENTRY* table )
        {
            table = &global_dll_table.front();
            return true;
        }

    Thanks, PaulH
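
    As posted, the signature can't work: the marshaller has no way to know how long the array is, and assigning to the table parameter inside the DLL changes only the local copy of the pointer. One common pattern is to expose the count and the element pointer separately and copy the structs out one by one - a sketch under those assumptions (the two exports are hypothetical additions to the DLL):

        using System;
        using System.Runtime.InteropServices;

        public static class NativeTable
        {
            [DllImport("MyDll.dll")]
            public static extern int GetTableCount();      // e.g. (int)global_dll_table.size()

            [DllImport("MyDll.dll")]
            public static extern IntPtr GetTablePtr();     // e.g. &global_dll_table.front()

            public static TableEntry[] ReadTable()
            {
                int count = GetTableCount();
                IntPtr ptr = GetTablePtr();
                int size = Marshal.SizeOf(typeof(TableEntry));
                var table = new TableEntry[count];
                for (int i = 0; i < count; i++)
                {
                    // walk the native array element by element
                    IntPtr element = new IntPtr(ptr.ToInt64() + (long)i * size);
                    table[i] = (TableEntry)Marshal.PtrToStructure(element, typeof(TableEntry));
                }
                return table;
            }
        }

    One Compact Framework caveat: if the LPWStr field gives the marshaller trouble there, declaring description as an IntPtr and converting with Marshal.PtrToStringUni is the usual fallback.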

    Read the article

  • How are developers using source control? I am trying to find the most efficient way to do source control

    - by RJ
    I work in a group of 4 .NET developers. We rarely work on the same project at the same time, but it does happen from time to time. We use TFS for source control. My most recent example is a project I placed into production last night that included 2 WCF services and a web application front end. I worked out of a branch called "prod" because the application is brand new and has never seen the light of day. Now that the project is live, I need to branch off the prod branch for features, bugs, etc. So what is the best way to do this? Do I simply create a new branch and sort of archive the old branch and never use it again? Do I branch off and then merge my branch changes back into the prod branch when I want to deploy to production? And what about the file and assembly versions? They are currently at 1.0.0.0. When do they change, and why? If I fix a small bug, which number changes, if any? If I add a feature, which number changes, if any? What I am looking for is what you have found to be the best way to efficiently manage source control. Most places I have worked always seem to bang heads with the source control system in one way or another, and I would just like to find out what you have found works the best.

    Read the article
