Search Results

Search found 6882 results on 276 pages for 'git repository'.

Page 255/276 | < Previous Page | 251 252 253 254 255 256 257 258 259 260 261 262  | Next Page >

  • AnkhSVN: Cannot check out sub-solution due to existing "versioned" folder

    - by lostiniceland
    Hello everyone. I have been using Subversion for quite some time for Java development, and I have set up a repository on my local NAS. Since I have an MSDN subscription via my company, I recently installed Visual Studio 2010 to do a small project with .NET. Following some "best practices", my project folder looks like the following: MySolution (containing main.sln); Services (services.sln) with the Service A and Service A Test projects; View with its project files; and Persistence (persistence.sln) with the PersistenceXml, PersistenceXml Test, PersistenceDB and PersistenceDB Test projects. The idea is that main.sln only contains the projects for the application, meaning no test projects. The sub-solutions contain the project(s) and their corresponding test projects. I was able to put all those projects under version control with AnkhSVN, so I have the same structure in my trunk. Committing changes was also no problem. Now I would like to check this out on another machine. I was able to check out main.sln, which downloaded everything that was inside this solution. It skipped services.sln, persistence.sln and all the test projects. Up to this point everything is fine. Now, here comes the problem: when I try to check out a sub-solution (e.g. services.sln) I get an error; I think it was UnsupportedOperation. I guess this happens because AnkhSVN is trying to download the Service A folder again and create its hidden .svn folder, which is already present. The only workaround I can think of for now is installing TortoiseSVN and checking out the whole thing at once. It would be nicer, though, to have everything from within VS. Does anyone know how I can solve this? Is another client the only solution?

  • How can I implement CRUD operations in a base class for an entity framework app?

    - by hminaya
    I'm working on a simple EF/MVC app and I'm trying to implement some repositories to handle my entities. I've set up a BaseObject class and an IBaseRepository interface to handle the most basic operations so I don't have to repeat myself each time: public abstract class BaseObject<T> { public XA.Model.Entities.XAEntities db; public BaseObject() { db = new Entities.XAEntities(); } public BaseObject(Entities.XAEntities cont) { db = cont; } public void Delete(T entity) { db.DeleteObject(entity); db.SaveChanges(); } public void Update(T entity) { db.AcceptAllChanges(); db.SaveChanges(); } } public interface IBaseRepository<T> { void Add(T entity); T GetById(int id); IQueryable<T> GetAll(); } But then I find myself having to implement 3 basic methods in every repository (Add, GetById & GetAll): public class AgencyRepository : Framework.BaseObject<Agency>, Framework.IBaseRepository<Agency> { public void Add(Agency entity) { db.Companies.AddObject(entity); db.SaveChanges(); } public Agency GetById(int id) { return db.Companies.OfType<Agency>().FirstOrDefault(x => x.Id == id); } public IQueryable<Agency> GetAll() { var agn = from a in db.Companies.OfType<Agency>() select a; return agn; } } How can I get these into my BaseObject class so I don't run into conflict with DRY?
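
    A minimal sketch of one way to pull Add, GetById and GetAll into the base class, assuming EF4's ObjectContext.CreateObjectSet<T>() and entities that expose an int Id property (class and member names other than the interface are illustrative, not from the post):

        using System;
        using System.Data.Objects;
        using System.Linq;
        using System.Linq.Expressions;

        public abstract class RepositoryBase<T> : IBaseRepository<T> where T : class
        {
            protected readonly ObjectContext db;
            private readonly IObjectSet<T> set;

            protected RepositoryBase(ObjectContext context)
            {
                db = context;
                set = db.CreateObjectSet<T>();
            }

            public void Add(T entity)
            {
                set.AddObject(entity);
                db.SaveChanges();
            }

            public virtual T GetById(int id)
            {
                // Build e => e.Id == id at runtime so the member access is on the
                // concrete type T; an interface-typed member would not translate in EF4.
                var e = Expression.Parameter(typeof(T), "e");
                var byId = Expression.Lambda<Func<T, bool>>(
                    Expression.Equal(Expression.Property(e, "Id"), Expression.Constant(id)), e);
                return set.FirstOrDefault(byId);
            }

            public IQueryable<T> GetAll()
            {
                return set;
            }
        }

    With something like that in place, AgencyRepository would mostly shrink to a constructor plus any Agency-specific queries. Note that CreateObjectSet<T>() needs T to be the base type of an entity set, so for Agency living inside the Companies set, GetAll() would instead be an OfType<Agency>() over the base set.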

  • Named previously unnamed branch

    - by Jab
    It seems naming a previously unnamed branch doesn't really work out. It creates a nasty multiple-heads problem that I can't find a solution for. Here is the workflow... UserA starts working on a feature that they expect to be small, so they just start working (off the default branch). The change turns out to be a large project and will need multiple contributors. So UserA issues... hg branch "Feature1" and continues working, committing locally as needed. UserA then pulls down the changes from the central repo so he can push. At this point, why does hg heads return 3 heads? It shows 2 for default and 1 for Feature1. The first head for default is the latest change by another user on the branch (irrelevant). The second default head is the commit prior to the hg branch "Feature1" commit. The central repository has rules enforced so that only 1 head per branch is allowed, so forcing a push isn't an option. The repo doesn't want multiple heads on the default branch. UserA should be able to push these changes so that other users can see the Feature1 branch and help out. I can't seem to find a way to "correct" this. I don't think I can rewrite the branch of the initial commits for the feature from before it was a named branch. I know the initial changes before the named branch are technically on the default branch, but does that mean they will be heads until that Feature1 branch is merged?

  • Moq.Mock<T> - how to set up a method that takes an expression

    - by Paul
    I am mocking my repository interface and am not sure how to set up a method that takes an expression and returns an object. I am using Moq and NUnit. Interface: public interface IReadOnlyRepository : IDisposable { IQueryable<T> All<T>() where T : class; T Single<T>(Expression<Func<T, bool>> expression) where T : class; } Test with the IQueryable already set up, but I don't know how to set up the T Single: private Moq.Mock<IReadOnlyRepository> _mockRepos; private AdminController _controller; [SetUp] public void SetUp() { var allPages = new List<Page>(); for (var i = 0; i < 10; i++) { allPages.Add(new Page { Id = i, Title = "Page Title " + i, Slug = "Page-Title-" + i, Content = "Page " + i + " on page content." }); } _mockRepos = new Moq.Mock<IReadOnlyRepository>(); _mockRepos.Setup(x => x.All<Page>()).Returns(allPages.AsQueryable()); //Not sure what to do here??? _mockRepos.Setup(x => x.Single<Page>() //---- _controller = new AdminController(_mockRepos.Object); }
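
    A sketch of one way to stub the expression-taking method: match any predicate with It.IsAny and evaluate it against the in-memory list (using FirstOrDefault rather than a strict single-match is a choice made here, not something stated in the post; it needs using Moq, System.Linq and System.Linq.Expressions):

        // Whatever predicate the controller passes, run it against allPages, so
        // Single<Page>(p => p.Id == 3) behaves like it would against a real store.
        _mockRepos
            .Setup(x => x.Single<Page>(It.IsAny<Expression<Func<Page, bool>>>()))
            .Returns((Expression<Func<Page, bool>> predicate) =>
                allPages.AsQueryable().FirstOrDefault(predicate));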

  • Can the Subversion client (svn) dereference symbolic links as if they were files?

    - by Ryan B. Lynch
    I have a directory on a Linux system that mostly contains symlinks to files on a different filesystem. I'd like to add the directory to a Subversion repository, dereferencing the symlinks in the process (treating them as the files they point to, rather than links). Generally, I'd like to be able to handle any working-copy operations with this behavior, but the 'svn add' command is where it starts, I think. The SVN client utility doesn't appear to have any options related to symlink dereferencing in the working copy. I didn't find any references to this in the manual (http://svnbook.red-bean.com/en/1.5/index.html), either. I found a poster on the SVN users mailing list who asked the same question but never received an answer, here: http://markmail.org/message/ngchfnzlmm43yj7h (That poster ended up using hard links instead of symlinks. That technique is not an option, in my case, because the real underlying files reside on a separate filesystem.) I'm using Subversion v1.6.1 on Fedora 11. For what it's worth, I know that there are alternative tools/techniques that could help approximate this behavior, but which I have to discard for various reasons. I've already considered [and dust-binned] these possibilities: - a "union" mount, merging all of the directories containing the real files, with the SVN working-copy directory as the "top" layer in the union; - copying/moving the real files to the same filesystem as the SVN working copy, and using hardlinks instead of symlinks; - non-SVN version control systems. These were all neat ideas, and I'm sure they are good solutions to other problems, but they won't work given the constraints of this environment and situation.

  • Hudson Maven build fails using workspace POM, works when pointing to development copy

    - by Deejay
    I'm developing a series of web applications using Eclipse IDE, Maven, SVN, and Hudson for CI. When I specify the "Root POM" option in my Hudson job to be the copy of pom.xml in its workspace directory, the build fails citing compilation failure due to missing classpath entries. [ERROR] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Compilation failure C:\Users\djones\.hudson\jobs\Store\workspace\trunk\src\main\java\com\app\store\model\User.java:[24,42] package org.hibernate.validator.constraints does not exist C:\Users\djones\.hudson\jobs\Store\workspace\trunk\src\main\java\com\app\store\dao\UserGroupHibernateSupportDao.java:[8,20] package org.hibernate does not exist C:\Users\djones\.hudson\jobs\Store\workspace\trunk\src\main\java\com\app\store\dao\UserGroupHibernateSupportDao.java:[10,49] package org.springframework.orm.hibernate3.support does not exist When I specify the "Root POM" to be the copy of pom.xml in my Eclipse workspace, it builds just fine. It builds fine from Eclipse too. I want to move Hudson over to a separate machine so several developers can use it, so I can't very well point to my own development workspace to give it a POM. If I try putting an SVN URL in the "root pom.xml" option, it says file not found. What should I be entering here for a project worked on by several developers, and hosted in an SVN repository?

  • INSERT stored procedure does not work?

    - by vikitor
    Hello, I'm trying to make an insertion from one database called Suspension into the table called Notification in the Animals database. My stored procedure is this: ALTER PROCEDURE [dbo].[spCreateNotification] -- Add the parameters for the stored procedure here @notRecID int, @notName nvarchar(50), @notRecStatus nvarchar(1), @notAdded smalldatetime, @notByWho int AS BEGIN -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; -- Insert statements for procedure here INSERT INTO Animals.dbo.Notification values (@notRecID, @notName, @notRecStatus, null, @notAdded, @notByWho); END The NULL is inserted to fill one column that would otherwise not be filled; I've tried different ways, like also listing the column names after the table name and then only including the fields I have in VALUES. I know it is not a problem with the stored procedure, because I executed it from SQL Server Management Studio and it works when I enter the parameters. So I guess the problem must be in the repository where I call the stored procedure: public void createNotification(Notification not) { try { DB.spCreateNotification(not.NotRecID, not.NotName, not.NotRecStatus, (DateTime)not.NotAdded, (int)not.NotByWho); } catch { return; } } It does not record the value in the database. I've been debugging and getting mad about this, because it works when I execute it manually, but not when I automate the process in my application. Does anyone see anything wrong with my code? Thank you
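
    One thing worth doing before anything else is to stop swallowing the exception in the repository wrapper, so whatever the call actually fails with (permissions, a cast on a null NotAdded, a constraint, an uncommitted transaction) becomes visible. This is a hedged sketch of that change, assuming the same generated data-context call as in the post:

        public void createNotification(Notification not)
        {
            try
            {
                DB.spCreateNotification(not.NotRecID, not.NotName, not.NotRecStatus,
                    (DateTime)not.NotAdded, (int)not.NotByWho);
            }
            catch (Exception ex)
            {
                // Surface the failure instead of silently returning, so the real
                // cause shows up in the logs or in the debugger.
                System.Diagnostics.Trace.TraceError("spCreateNotification failed: {0}", ex);
                throw;
            }
        }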

  • CRONTAB doesn't finish svndump

    - by Andrew
    I just discovered that the automated dumps I've been creating of my SVN repository have been getting cut off early and basically only half the dump is there. It's not an emergency, but I hate being in this situation. It defeats the purpose of making automated backups in the first place. The command I'm using is below. If I execute it manually in the terminal, it completes fine; the output.txt file is 16 megs in size with all 335 revisions. But if I leave it to crontab, it bails at the halfway mark, at around 8.1 megs and only the first 169 revisions. # m h dom mon dow command 18 00 * * * svnadmin dump /var/svn/repos/myproject > /home/andrew/output.txt I actually save to a dated gzipped file, and there's no shortage of space on the server, so this is not a disk space issue. It seems to bail after two seconds, so this could be a time issue, but the file size is the same every single time for the past month, so I don't think it's that either. Does crontab execute within a limited memory space?

  • C++: best way to implement globally scoped data

    - by bobobobo
    I'd like to make program-wide data in a C++ program, without running into pesky LNK2005 errors when all the source files #include this "global variable repository" file. I have 2 ways to do it in C++, and I'm asking which way is better. The easiest way to do it in C# is just public static members. C#: public static class DataContainer { public static Object data1 ; public static Object data2 ; } In C++ you can do the same thing. C++ global data, way #1: class DataContainer { public: static Object data1 ; static Object data2 ; } ; Object DataContainer::data1 ; Object DataContainer::data2 ; However, there's also extern. C++ global data, way #2: class DataContainer { public: Object data1 ; Object data2 ; } ; extern DataContainer * dataContainer ; // instantiate in .cpp file In C++, which is better, or is there possibly another way which I haven't thought about? The solution has to not cause LNK2005 "object already defined" errors.

  • Should I return IEnumerable<T> or IQueryable<T> from my DAL?

    - by Gary '-'
    I know this could be opinion, but I'm looking for best practices. As I understand, IQueryable implements IEnumerable, so in my DAL, I currently have method signatures like the following: IEnumerable<Product> GetProducts(); IEnumerable<Product> GetProductsByCategory(int cateogoryId); Product GetProduct(int productId); Should I be using IQueryable here? What are the pros and cons of either approach? Note that I am planning on using the Repository pattern so I will have a class like so: public class ProductRepository { DBDataContext db = new DBDataContext(<!-- connection string -->); public IEnumerable<Product> GetProductsNew(int daysOld) { return db.GetProducts() .Where(p => p.AddedDateTime > DateTime.Now.AddDays(-daysOld )); } } Should I change my IEnumerable<T> to IQueryable<T>? What advantages/disadvantages are there to one or the other?
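
    One practical difference, shown as a sketch (the AddedDateTime property comes from the post; a Products table on the DataContext is assumed): a Where applied to an IQueryable<Product> is composed into the SQL sent to the database, while a Where applied to an IEnumerable<Product> runs in memory over whatever the DAL already pulled back.

        // Composed into the SQL query: only recent products cross the wire.
        IQueryable<Product> q = db.Products;
        var recentSql = q.Where(p => p.AddedDateTime > DateTime.Now.AddDays(-30));

        // LINQ to Objects: the full Products table is enumerated, then filtered in memory.
        IEnumerable<Product> e = db.Products;
        var recentMemory = e.Where(p => p.AddedDateTime > DateTime.Now.AddDays(-30));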

  • Basic Team Foundation Server 2010 Question - System Resource Usage?

    - by user127954
    Guys/gals, I have a really basic Team Foundation Server 2010 question. For those of you who have played around with TFS 2010: is it a lot more lightweight than TFS 2008? I remember installing all the pieces needed for TFS 2008 on one machine at work. I remember it being a pain to install (I know 2010 is supposed to be much better). We wanted to play around with it a little bit to see if it met our needs. Well, it brought that machine to a screeching halt. I need a source control repository for home, and I thought why not just install TFS 2010 so I can get familiar with it, and maybe in the future I can make a better case to my organization and FINALLY get them to move off of SourceSafe. But my concern is that I only have one server at home (granted, I already have SQL Server installed) and I don't want to buy a machine just for this purpose. I'd also like to get more familiar with CI too. Anyway, if TFS is going to be too heavy I'll just use Subversion, but I'd like to use TFS if possible. Any help would be appreciated. thanks, Ncage

  • Is there any way to provide a custom factory for the creation of entities in EF4?

    - by ILICH
    There are a lot of posts about how cool POCO objects are and how Entity Framework 4 supports them. I decided to try it out with a domain-driven-development oriented architecture and ended up with domain entities that have dependencies on services. So far so good. Imagine my Products are POCO objects. When I query for objects like this: NorthwindContext db = new NorthwindContext(); var products = db.Products.ToList(); EF creates the instances of the products for me. Now I want to inject dependencies into my POCO objects (products). The only way I see is to add a method to NorthwindContext that does something like the pseudo-code below: public List<Product> GetProducts(){ var products = database.Products.ToList(); container.BuildUp(products); //inject dependencies return products; } But what if I want to make my repository more flexible, like this: public ObjectSet<Product> GetProducts() { ... } So, I really need a factory to make it more lazy and LINQ-friendly. Please help!
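
    One option to look at is the ObjectContext.ObjectMaterialized event in EF4, which fires for every entity instance the context creates, whether it came from a query, lazy loading, or an Include(). A sketch of wiring the container call from the post into it (how the container instance is obtained is an assumption):

        var db = new NorthwindContext();

        // Inject dependencies into every Product the context materializes.
        db.ObjectMaterialized += (sender, e) =>
        {
            var product = e.Entity as Product;
            if (product != null)
            {
                container.BuildUp(product); // the same container call used in the post
            }
        };

        var products = db.Products.ToList(); // each instance has been through BuildUp

    This keeps repository methods free to return ObjectSet<Product> or IQueryable<Product> directly, since the injection happens at materialization time rather than inside each query method.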

  • If the dependency is a unique snapshot version and install is called, what does maven select?

    - by chrsk
    Imagine two projects. The first is the framework-core project, which is at version 1.1.0 and has several snapshot builds. The other is the example-business project, which has the following dependency on framework-core at build iteration number 9. <dependency> <groupId>org.example</groupId> <artifactId>framework-core</artifactId> <version>1.1.0-20100518.134928-9</version> </dependency> What happens if mvn install is called on framework-core? I found out that the artifact is copied to the folder and is named *.1.1.0-SNAPSHOT.jar (as expected). This led me to the assumption that this version is only used if the 1.1.0-SNAPSHOT version itself is defined as the dependency, and not the precise build. To test something locally without deploying it to the Maven repository: call mvn install, change the dependency to 1.1.0-SNAPSHOT -- and the artifact just installed is used? Or is it possible to overwrite the specific build (using the install lifecycle phase)?

  • Question about instance creation using the Spring framework

    - by Arthur Ronald F D Garcia
    Here is a command object which needs to be populated from a Spring form: public class Person { private String name; private Integer age; /** * on-demand initialized */ private Address address; // getter's and setter's } And Address: public class Address { private String street; // getter's and setter's } Now suppose the following MultiActionController: @Component public class PersonController extends MultiActionController { @Autowired @Qualifier("personRepository") private Repository<Person, Integer> personRepository; /** * mapped To /person/add */ public ModelAndView add(HttpServletRequest request, HttpServletResponse response, Person person) throws Exception { personRepository.add(person); return new ModelAndView("redirect:/home.htm"); } } Because the Address attribute of Person needs to be initialized on demand, I need to override newCommandObject to create an instance of Person that initializes the address property. Otherwise, I will get a NullPointerException: @Component public class PersonController extends MultiActionController { /** * code as shown above */ @Override public Object newCommandObject(Class clazz) throws Exception { if(clazz.isAssignableFrom(Person.class)) { Person person = new Person(); person.setAddress(new Address()); return person; } } } OK, Expert Spring MVC and Web Flow says: Options for alternate object creation include pulling an instance from a BeanFactory or using method injection to transparently return a new instance. The first option, pulling an instance from a BeanFactory, can be written as @Override public Object newCommandObject(Class clazz) throws Exception { /** * Will retrieve a prototype instance from the ApplicationContext whose name matches clazz.getSimpleName() */ return getApplicationContext().getBean(clazz.getSimpleName()); } But what does he mean by "using method injection to transparently return a new instance"? Can you show how I would implement what he said? Note: I know this functionality could be handled by a SimpleFormController instead of a MultiActionController, but it is shown here just as an example, nothing else.

  • EF/LINQ: Where() against a property of a subtype

    - by ladenedge
    I have a set of POCOs, all of which implement the following simple interface: interface IIdObject { int Id { get; set; } } A subset of these POCOs implement this additional interface: interface IDeletableObject : IIdObject { bool IsDeleted { get; set; } } I have a repository hierarchy that looks something like this: IRepository<T> <: BasicRepository<T> <: ValidatingRepository<T> (where T is IIdObject). I'm trying to add a FilteringRepository to the hierarchy such that all of the POCOs that implement IDeletableObject have a Where(p => p.IsDeleted == false) filter applied before any other queries take place. My goal is to avoid duplicating the hierarchy solely for IDeletableObjects. My first attempt looked like this: public override IQueryable<T> Query() { return base.Query().Where(t => ((IDeletableObject)t).IsDeleted == false); } This works well with LINQ to Objects, but when I switch to an EF backend I get: "LINQ to Entities only supports casting Entity Data Model primitive types." I went on to try some fancier parameterized solutions, but they ultimately failed because I couldn't make T covariant in the following case, for some reason I don't quite understand: interface IQueryFilter<out T> // error { Expression<Func<T, bool>> GetFilter(); } I'd be happy to go into more detail on my more complicated solutions if it would help, but I think I'll stop here for now in the hope that someone might have an idea for me to try. Thanks very much in advance!
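
    A sketch of one way to keep a single FilteringRepository<T> without the interface cast: check at runtime whether T implements IDeletableObject and, if so, build the t => t.IsDeleted == false predicate against the concrete type T, so LINQ to Entities never sees a cast it cannot translate (needs System.Linq.Expressions; whether this fits the rest of the hierarchy is an assumption):

        public override IQueryable<T> Query()
        {
            var query = base.Query();

            if (typeof(IDeletableObject).IsAssignableFrom(typeof(T)))
            {
                // Member access is resolved on T itself, which EF can translate.
                var t = Expression.Parameter(typeof(T), "t");
                var notDeleted = Expression.Lambda<Func<T, bool>>(
                    Expression.Equal(Expression.Property(t, "IsDeleted"), Expression.Constant(false)),
                    t);
                query = query.Where(notDeleted);
            }

            return query;
        }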

  • Tagging in Subversion - how do I make the decision about continuing to work on my trunk vs. the new

    - by Howiecamp
    I'm running TortoiseSVN to manage a project. Obviously the principles around tagging apply to any implementation of SVN, but in this question I'll be referring to some TortoiseSVN-specific dialog boxes and messages. My working directory and the Subversion repository structure both have a Source root directory with the Trunk, Tags and Branches directories underneath. (I couldn't figure out how to do a multilevel indented hierarchy in markdown without using bullets, so if someone could edit and fix this I'd appreciate it.) I'm working out of the Trunk directory in my working copy and it's pointing at the Trunk directory in the repo. I want to apply a "Release1" tag, so I click the "Branch/tag..." menu option and set the repo path to my "[repo_path]/bla/Source/Tags/Release1" tag. This dialog box gives me the option to "Switch my working copy to new branch/tag". I understand that if this option is left unchecked, the new "Release1" branch under "/Tags" will be created but my working copy will remain on the previous "Trunk" path. If I do check this option (or use the Switch command) I understand that my working copy will switch to the new "Release1" branch under "/Tags". Where I'm missing a concept is how to make this decision. It doesn't seem like I want to switch my working directory to the recently created tag, since by definition (?) I want that tag to be a snapshot of my code as of a point in time. If I don't switch the working directory, I'll continue working off Trunk, and when I'm ready to take another snapshot I'll make another tag. And so on... Am I understanding this right, or am I stating something incorrectly in the previous paragraph (e.g. the statement about not wanting to switch to the tag, since the tag should represent a point-in-time snapshot), or otherwise missing something regarding how to make this decision?

  • Netbeans 6.1 Incorrect CVS Status on a file that does not exist

    - by Coder
    Hi, I have been trying to figure this out off and on for a few hours now and can't work it out. I committed a lot of binary files (jar files) to CVS and they worked fine, but in one of the 6 directories NetBeans thinks there is a file that it keeps trying to commit, even though it doesn't actually exist in the file system. There is also another file in the same directory that I did commit, and the NetBeans CVS status says that it's an unknown file; but when I delete the directory and check it out again, the file shows up fine, yet NetBeans can't get the correct CVS status for it. I looked in the repository and all looks fine. There is only one file present, as it should be. Looking at the CVS directory in the checkout folder also reveals nothing suspicious. I don't know what to do about this. I don't know why NetBeans thinks there is a file in that directory that is not actually there. I did a search in my working directory and my NetBeans project directory for any file containing a reference to this file, but there is nothing.

  • What could possibly be different between the table in a DataContext and an IQueryable<Table> when do

    - by Nate Bross
    I have a table where I need to do a case-insensitive search on a text field. If I run this query in LINQPad directly on my database, it works as expected: Table.Where(tbl => tbl.Title.Contains("StringWithAnyCase")) In my application, I've got a repository which exposes IQueryable objects and does some initial filtering; it looks like this: var dc = new MyDataContext(); public IQueryable<Table> GetAllTables() { var ret = dc.Tables.Where(t => t.IsActive == true); return ret; } In the controller (it's an MVC app) I use code like this in an attempt to mimic the LINQPad query: var rpo = new RepositoryOfTable(); var tables = rpo.GetAllTables(); // for some reason, this does a CASE SENSITIVE search which is NOT what I want. tables = tables.Where(tbl => tbl.Title.Contains("StringWithAnyCase")); return View(tables); The column is defined as an nvarchar(50) in SQL Server 2008. Any help or guidance is greatly appreciated!
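
    If the column's collation turns out to be case-sensitive, one common workaround is to normalise both sides inside the query, sketched below (whether collation is actually the culprit here is an assumption; ToLower() is translated into LOWER() in the generated SQL, so the comparison no longer depends on the column's collation):

        var term = "StringWithAnyCase".ToLower();
        tables = tables.Where(tbl => tbl.Title.ToLower().Contains(term));
        return View(tables);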

  • Good case for a Null Object Pattern? (Provide some service with a mailservice)

    - by fireeyedboy
    For a website I'm working on, I made a Media Service object that I use in the front end, as well as in the backend (CMS). This Media Service object manipulates media in a local repository (DB); it provides the ability to upload/embed videos and upload images. In other words, website visitors are able to do this in the front end, but administrators of the site are also able to do this in the backend. I'd like this service to mail the administrators when a visitor has uploaded/embedded a new medium in the frontend, but refrain from mailing them when they upload/embed a medium themselves in the backend. So I started wondering whether this is a good case for passing a null object, one that mimics the mail functionality, to the Media Service in the backend. I thought this might come in handy if they later decide the backend needs mail functionality as well. In simplified terms I'd like to do something like this: Frontend: $mediaService = new MediaService( new MediaRepository(), new StandardMailService() ); Backend: $mediaService = new MediaService( new MediaRepository(), new NullMailService() ); How do you feel about this? Does this make sense? Or am I setting myself up for problems down the road?

  • Error while compiling pjsip 2 for Xcode 4.3.2 and SDK 5.1

    - by Linus Persson
    I'm trying to compile pjsip version 2 using the terminal and I'm getting constant errors no matter what I try. I've been looking for the answer all over the internet, including Stack Overflow. I downloaded pjsip version 2 from their Subversion repository today, so all files should be up to date. When following this guide: http://trac.pjsip.org/repos/wiki/Getting-Started/iPhone I get this error after running "make dep && make clean && make": ld: symbol(s) not found for architecture armv7 collect2: ld returned 1 exit status make[2]: *** [../bin/pjsua-arm-apple-darwin9] Error 1 make[1]: *** [pjsua] Error 2 make: *** [all] Error 1 When using the above guide combined with this guide: http://lists.pjsip.org/pipermail/pjsip_lists.pjsip.org/2011-October/013481.html I get this error after running "make dep && make clean && make": ld: symbol(s) not found for architecture arm collect2: ld returned 1 exit status make[2]: *** [../bin/pjsua-arm-apple-darwin10] Error 1 make[1]: *** [pjsua] Error 2 make: *** [all] Error 1 I've added /pjlib/include/pj/config_site.h with the following code: #define PJ_CONFIG_IPHONE 1 #include <pj/config_site_sample.h> How do I get pjsip to compile without errors? Please consider that I'm new to this, thank you!

  • odd nullreference error at foreach when rendering view

    - by giddy
    This error is so weird, I just can't figure out what is really wrong! In UserController I have: public virtual ActionResult Index() { var usersmdl = from u in RepositoryFactory.GetUserRepo().GetAll() select new UserViewModel { ID = u.ID, UserName = u.Username, UserGroupName = u.UserGroupMain.GroupName, BranchName = u.Branch.BranchName, Password = u.Password, Ace = u.ACE, CIF = u.CIF, PF = u.PF }; if (usersmdl != null) { return View(usersmdl.AsEnumerable()); } return View(); } My view is declared with @model IEnumerable<UserViewModel> at the top. This is what happens: Where and what exactly IS null!? I create the users from a fake repository with Moq. I also wrote unit tests, which pass, to ensure the right number of mocked users is returned. Maybe someone can point me in the right direction here? The top of the stack trace is: at lambda_method(Closure , User ) at System.Linq.Enumerable.WhereSelectArrayIterator`2.MoveNext() at ASP.Index_cshtml.Execute() Is it something to do with LINQ here? Tell me if I should include the full stack trace.
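
    Given that the stack trace fails inside the projection lambda, a plausible suspect is a User whose UserGroupMain or Branch navigation property is null. A hedged sketch of a projection that tolerates that (property names are taken from the post; whether this is the actual cause is an assumption):

        var usersmdl = from u in RepositoryFactory.GetUserRepo().GetAll()
                       select new UserViewModel
                       {
                           ID = u.ID,
                           UserName = u.Username,
                           // Guard the navigation properties so a user with no group
                           // or branch does not blow up the whole projection.
                           UserGroupName = u.UserGroupMain == null ? null : u.UserGroupMain.GroupName,
                           BranchName = u.Branch == null ? null : u.Branch.BranchName,
                           Password = u.Password,
                           Ace = u.ACE,
                           CIF = u.CIF,
                           PF = u.PF
                       };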

  • eclipse - starting with android sdk

    - by dontHaveName
    I want to start programming for Android. What I have: Windows 7; Eclipse Classic 4.2; all the required files downloaded as described at http://developer.android.com/sdk/installing/adding-packages.html; the ADT plugin. I want to install the new ADT plugin. At first I tried to download it from http://dl-ssl.google.com/android/eclipse; I added it, but if I select it there is only "pending..." and nothing loads (maybe an internet connection issue? I have selected Native connection in the preferences; after pending it writes: Unable to connect to repository http://dl-ssl.google.com/android/eclipse/content.xml org.eclipse.equinox.p2.core.ProvidesException). That's why I downloaded the ADT plugin. So if I select the downloaded ADT plugin, its content loads (developer tools and NDK plugin), so I select all and click next. It loads and writes this: "Cannot complete the install because one or more required items could not be found. Software being installed: Android Development Tools 20.0.3.v201208082019-427395 (com.android.ide.eclipse.adt.feature.group 20.0.3.v201208082019-427395) Missing requirement: Android Development Tools 20.0.3.v201208082019-427395 (com.android.ide.eclipse.adt.feature.group 20.0.3.v201208082019-427395) requires 'org.eclipse.wst.sse.core 0.0.0' but it could not be found" The requires 'org.eclipse.wst.sse.core 0.0.0' problem is described here: http://developer.android.com/resources/faq/troubleshooting.html#installeclipsecomponents but there is a solution only for versions 3.3 and 3.4 (I have 4.2). I tried it anyway: I looked for updates but nothing was found. I really don't know where the problem could be. Thanks for any answer. (Sorry for my English.) I will send 1€ to somebody who can solve my problem ;) (I think all the problems are due to the internet connection but I can't set it up.)

  • Recommendations for Continuous integration for Mercurial/Kiln + MSBuild + MSTest

    - by TDD
    We have our source code stored in Kiln/Mercurial repositories; we use MSBuild to build our product and we have unit tests that use MSTest (Visual Studio unit tests). What solutions exist to implement a continuous integration machine (i.e. build machine)? The requirements for this are: (1) a build should be kicked off when necessary (i.e. code has changed in the repositories we care about); (2) before the actual build, the latest version of the source code must be acquired from the repository we are building from; (3) the build must build the entire product; (4) the build must build all unit tests; (5) the build must execute all unit tests; (6) a summary of success/failure must be sent out after the build has finished, and this must include information about the build itself but also about which unit tests failed and which ones succeeded; (7) the summary must contain which changesets were in this build that were not yet in the previous successful (!) build; (8) the system must be configurable so that it can build from multiple branches (/repositories). Ideally, this system would run on a single box (our product isn't that big) without any server components. What solutions are currently available? What are their pros/cons? From the list above, what can be done and what cannot be done? Thanks

  • Exception when retrieving records using NHibernate

    - by Muhammad Akhtar
    I am new to NHibernate and have only just started. I have a very simple table containing Id (int, primary key, auto-incremented), Name (varchar(100)) and Description (varchar(100)). Here is my XML: <class name="DevelopmentStep" table="DevelopmentSteps" lazy="true"> <id name="Id" type="Int32" column="Id"> </id> <property name="Name" column="Name" type="String" length="100" not-null="false"/> <property name="Description" column="Description" type="String" length="100" not-null="false"/> Here is how I want to get all the records: public List<DevelopmentStep> getDevelopmentSteps() { List<DevelopmentStep> developmentStep; developmentStep = Repository.FindAll<DevelopmentStep>(new OrderBy("Name", Order.Asc)); return developmentStep; } But I am getting this exception: The element 'id' in namespace 'urn:nhibernate-mapping-2.2' has incomplete content. List of possible elements expected: 'urn:nhibernate-mapping-2.2:meta urn:nhibernate-mapping-2.2:column urn:nhibernate-mapping-2.2:generator'. Please advise me. Thanks
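
    The schema error is listing the child elements the <id> mapping still expects (meta, column, generator), which suggests the missing piece is a generator. A likely fix for an auto-incremented identity column (the exact strategy depends on the database, so treat this as a guess) is:

        <id name="Id" type="Int32" column="Id">
          <generator class="native" />
        </id>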

  • Incremental deploy from a shell script

    - by WishCow
    I have a project where I'm forced to use FTP as the means of deploying files to the live server. I'm developing on Linux, so I hacked together a bash script that makes a backup of the FTP server's contents, deletes all the files on the FTP server, and uploads all the fresh files from the Mercurial repository (taking care of user-uploaded files and folders, making post-deploy changes, etc). It's working well, but the project is starting to get big enough to make the deployment process too long. I'd like to modify the script to look up which files have changed and only deploy the modified files (the backup is fine as it is at the moment). I'm using Mercurial as a VCS, so my idea is to somehow request the changed files between two revisions from it, iterate over the changed files, upload each modified file, and delete each removed file. I can use hg log -vr rev1:rev2, and from the output I can carve out the changed files with grep/sed/etc. Two problems: I have heard the horror stories that parsing the output of ls leads to insanity, so my guess is that the same applies here; if I try to parse the output of hg log, the variables will undergo word splitting and all kinds of transformations. Also, hg log doesn't tell me whether a file was modified, added or deleted; differentiating between modified and deleted files would be the bare minimum. So, what would be the correct way to do this? I'm using yafc as an FTP client, in case it's needed, but I'm willing to switch.
