Search Results

Search found 64186 results on 2568 pages for 'access control service'.

  • Managing aesthetic code changes in git

    - by Ollie Saunders
    I find that I make a lot of small changes to my source code, often things that have almost no functional effect. For example:
      - refining or correcting comments;
      - moving function definitions within a class for a more natural reading order;
      - spacing and lining up declarations for readability;
      - collapsing something that spans multiple lines onto one;
      - removing an old piece of commented-out code;
      - correcting inconsistent whitespace.
    I guess I have a formidable attention to detail in my code. But the problem is that I don't know what to do about these changes, and they make it difficult to switch between branches etc. in git. I find myself not knowing whether to commit the minor changes, stash them, or put them in a separate branch of little tweaks and merge that in later. None of those options seems ideal. The main problem is that these sorts of changes are unpredictable. If I were to commit them, there would be so many commits with the message "Minor code aesthetic change.", because the second I make such a commit I notice another similar issue. What should I do when I make a minor change, a significant change, and then another minor change? I'd like to merge the three minor changes into one commit. It's also annoying seeing files as modified in git status when the change barely warrants my attention. I know about git commit --amend, but I also know that's bad practice since it makes my repo inconsistent with remotes.
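
    One possible pattern, sketched with plain git (the --fixup/--autosquash flags need git 1.7.4 or later, and the sha below is a placeholder): record each tweak as a fixup commit aimed at one "aesthetics" commit, then fold them all together in a single interactive rebase before publishing. git stash still covers the case where a tweak has to get out of the way of a branch switch.

        # the first minor tweak becomes the target commit
        git commit -am "Minor code aesthetic changes."
        # every later tweak is recorded as a fixup of that commit
        git commit -a --fixup=<sha-of-aesthetics-commit>
        # before pushing, squash all fixups into their targets in one pass
        git rebase -i --autosquash origin/master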

  • 2-Version software: Best VCS approach?

    - by Tom R
    I suppose I'd better explain my situation: I'm in the process of developing some software, and I'm at the stage where I'd like to split my project into two branches which differ in features. It so happens that this application is an Android application which I will be deploying on the Market, which has the constraint that every app must have a unique package identifier (sensible, no?). My current approach has been to clone the git repo of my original project, but this causes issues with package names. I want the system to be robust enough so that a bugfix/new feature on one branch will merge into another branch, but only when I want it to. Does anyone have any suggestions?
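
    One common arrangement (a sketch; branch names are hypothetical): keep both variants as branches of a single repository, each with its own package identifier committed, and move individual bugfixes or features across with cherry-pick, so nothing crosses over until explicitly asked for.

        git checkout free-version
        git commit -am "Fix crash in upload code"
        # carry just that one fix to the other variant, leaving everything else alone
        git checkout paid-version
        git cherry-pick free-version    # picks the branch tip; a sha works too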

  • What do you choose, protected or internal?

    - by brickner
    Suppose I have a class with a method that I want to be both protected and internal, i.e. only derived classes inside the same assembly should be able to call it. Since protected internal means protected or internal, you have to make a choice. What do you choose in this case - protected or internal?
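
    For reference, a minimal sketch of what each modifier actually permits (this is standard C# semantics; only much later, in C# 7.2, did private protected arrive to mean exactly protected AND internal):

        public class Base
        {
            // derived classes OR any code in this assembly may call this
            protected internal void A() { }

            // derived classes only, even from other assemblies
            protected void B() { }

            // any code in this assembly only, derived or not
            internal void C() { }
        }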

  • Unity Configuration and Same Assembly

    - by tyndall
    I'm currently getting an error trying to resolve my IDataAccess class: The value of the property 'type' cannot be parsed. The error is: Could not load file or assembly 'TestProject' or one of its dependencies. The system cannot find the file specified. (C:\Source\TestIoC\src\TestIoC\TestProject\bin\Debug\TestProject.vshost.exe.config line 14) This is inside a WPF Application project. What is the correct syntax to refer to the assembly you are currently in? Is there a way to do this? I know in a larger solution I would be pulling types from separate assemblies, so this might not be an issue, but what is the right way to do this for a small self-contained test project? Note: I'm only interested in doing the XML config at this time, not the C# (in code) config. UPDATE: see all comments. My XML config:

        <configuration>
          <configSections>
            <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration" />
          </configSections>
          <unity>
            <typeAliases>
              <!-- Lifetime manager types -->
              <typeAlias alias="singleton" type="Microsoft.Practices.Unity.ContainerControlledLifetimeManager, Microsoft.Practices.Unity" />
              <typeAlias alias="external" type="Microsoft.Practices.Unity.ExternallyControlledLifetimeManager, Microsoft.Practices.Unity" />
              <typeAlias alias="IDataAccess" type="TestProject.IDataAccess, TestProject" />
              <typeAlias alias="DataAccess" type="TestProject.DataAccess, TestProject" />
            </typeAliases>
            <containers>
              <container name="Services">
                <types>
                  <type type="IDataAccess" mapTo="DataAccess" />
                </types>
              </container>
            </containers>
          </unity>
        </configuration>
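
    One low-risk diagnostic sketch (not a fix in itself): print the assembly-qualified name of the concrete type from inside the running project and compare it against the typeAlias entries - mismatches in assembly name, namespace, or casing show up immediately.

        // temporarily, e.g. in App startup code:
        System.Diagnostics.Debug.WriteLine(
            typeof(TestProject.DataAccess).AssemblyQualifiedName);
        // prints something like
        // "TestProject.DataAccess, TestProject, Version=..., Culture=..., PublicKeyToken=..."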

  • SQL Server CE 3.5 SP1 Stored Procedures

    - by Robert
    I have been tasked with taking an existing WinForms application and modifying it to work in an "occasionally-connected" mode. The plan is to achieve this with SQL Server CE 3.5 on each user's laptop, syncing server and client via either SQL Server Merge Replication or Microsoft's Sync Framework. Currently, the application connects to our SQL Server and retrieves, inserts, and updates data using stored procedures. I have read that SQL Server CE does not support stored procedures. Does this mean that all my stored procedures will need to be converted to straight SQL statements, either in my code or as a query inside a TableAdapter? If this is true, what are my alternatives?
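
    SQL Server CE indeed has no stored procedure support, so inline parameterized commands are the usual substitute; a minimal sketch (table, column, and connection string are hypothetical):

        using System;
        using System.Data.SqlServerCe;

        static string LookUpCustomerName(int customerId)
        {
            using (var conn = new SqlCeConnection(@"Data Source=|DataDirectory|\App.sdf"))
            using (var cmd = new SqlCeCommand(
                "SELECT Name FROM Customers WHERE Id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", customerId);
                conn.Open();
                // ExecuteScalar returns the first column of the first row, or null
                return (string)cmd.ExecuteScalar();
            }
        }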

  • How do people manage changes to common library files stored across multiple (Mercurial) repositories?

    - by mckoss
    This is perhaps not a question unique to Mercurial, but that's the SCM I've been using most lately. I work on multiple projects and tend to copy source code for libraries or utilities from a previous project to get a leg up on starting a new one. The problem comes when I want to merge all the changes I made in my latest project back into a "master" copy of those shared library files. Since the files stored in disjoint repositories have distinct version histories, Mercurial won't be able to perform an intelligent merge if I just copy the files back to the master repo (or even between two independent projects). I'm looking for an easy way to preserve the change history so I can merge library files back to the master with a minimum of external record keeping (which is one of the reasons I'm using SVN less: its merges require remembering when copies were made across branches). Perhaps I need to do a bit more up-front organization of my repository to prepare for a future merge back to a common master.
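
    One shape that up-front organization can take (a sketch; URLs and paths are placeholders, and subrepositories require Mercurial 1.3+): give the shared library its own repository and pull it into each project as a subrepo, so every project edits the same history rather than a disconnected copy.

        # inside a project working copy:
        hg clone https://hg.example.com/common-lib lib
        echo "lib = https://hg.example.com/common-lib" > .hgsub
        hg add .hgsub
        hg commit -m "Track common-lib as a subrepository"
        # committing in the project now records lib's exact revision as well,
        # and changes made under lib/ can be pushed straight back upstream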

  • Good practice to create extension methods that apply to System.Object?

    - by Christian
    Hello, I'm wondering whether I should create extension methods that apply at the object level or whether they should be located at a lower point in the class hierarchy. What I mean is something along the lines of:

        public static string SafeToString(this Object o)
        {
            if (o == null || o is System.DBNull)
                return "";
            if (o is string)
                return (string)o;
            return "";
        }

        public static int SafeToInt(this Object o)
        {
            if (o == null || o is System.DBNull)
                return 0;
            if (o.IsNumeric())
                return Convert.ToInt32(o);
            return 0;
        }

        // same for double.. etc

    I wrote those methods since I have to deal a lot with database data (from the OleDbDataReader) that can be null (shouldn't be, though), since the underlying database is unfortunately very liberal with columns that may be null. And to make my life a little easier, I came up with those extension methods. What I'd like to know is whether this is good style, acceptable style or bad style. I kinda have my worries about it since it kinda "pollutes" the Object class. Thank you in advance & Best Regards :) Christian P.S. I didn't tag it as "subjective" intentionally.
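
    A narrower alternative sketch that leaves Object alone entirely (the method name is hypothetical; IDataRecord is the interface OleDbDataReader already implements, and this needs using System.Data;):

        public static string SafeGetString(this IDataRecord record, int ordinal)
        {
            // the DBNull check lives on the reader, so the helper only shows up
            // where database values are actually being read
            return record.IsDBNull(ordinal) ? "" : record.GetString(ordinal);
        }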

  • Checking whether an object exists in StructureMap container

    - by Kevin Pang
    I'm using StructureMap to handle the creation of NHibernate's ISessionFactory and ISession. I've scoped ISessionFactory as a singleton so that it's only created once for my web app, and I've scoped ISession as a hybrid so that it will only be opened once per web request. I want to make sure that at the end of each web request, I properly dispose of the ISession if one was created for that request. I figured I could put some code in my Application_EndRequest routine to first check whether an ISession was created, and if so, call ISession.Dispose. My current workaround is to just open an ISession in Application_BeginRequest and then dispose of it in Application_EndRequest, but that seems somewhat wasteful in that static file requests for images, CSS files and whatnot will create an ISession without ever using it. I know that the overall performance hit is negligible since ISessions are very lightweight, but it's getting annoying seeing all those ISessions being created inside NHProf.
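
    A minimal sketch of the usual cleanup hook (assumes StructureMap 2.x, where the hybrid lifecycle parks objects in HttpContext): releasing the HTTP-scoped objects at end of request disposes an ISession if one was resolved and does nothing otherwise, so no eager BeginRequest session is needed.

        protected void Application_EndRequest(object sender, EventArgs e)
        {
            // disposes whatever the hybrid/HTTP lifecycle stored for this
            // request; effectively a no-op when no ISession was ever asked for
            ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects();
        }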

  • What are .git/info/grafts for?

    - by Big 40wt Svetlyak
    I am trying to figure out what 'grafts' are in Git. For example, in one of the latest comments here, Tobu suggests using git-filter-branch and .git/info/grafts to join two repositories. But I don't understand why I need these grafts - it seems that everything works without the last two commands.
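
    For reference, a minimal sketch of the mechanism (the shas are placeholders): each line of .git/info/grafts names a commit followed by the parent(s) git should pretend it has. The graft is a local illusion only - running git filter-branch afterwards rewrites history so the fake parenthood becomes real and the grafts file can be deleted; without that step the join would not survive cloning.

        # format: <commit> <fake-parent> [<more-fake-parents>...]
        echo "<first-commit-of-repo-B> <last-commit-of-repo-A>" >> .git/info/grafts
        git filter-branch -- --all    # bake the graft into permanent history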

  • Separation of interfaces and implementation

    - by bonefisher
    From an assembly (or module) perspective, what do you think of separating an interface (first assembly) from its implementation (second assembly)? This way we can use an IoC container to achieve a more decoupled design. Say we have an assembly 'A', which contains interfaces only. Then we have an assembly 'B' which references 'A' and implements those interfaces; it depends only on 'A'. In assembly 'C' we can then use the IoC container to create objects typed as 'A''s interfaces, using dependency injection to supply the objects from 'B'. This way 'B' and 'C' are completely unaware of (not dependent on) each other.
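
    A minimal sketch of the layout (names are hypothetical; the container wiring would live in configuration, as in the Unity question above):

        // Assembly A: contracts only, no dependencies
        public interface IDataAccess
        {
            string Load(int id);
        }

        // Assembly B: references A and implements its contracts
        public class DataAccess : IDataAccess
        {
            public string Load(int id) { return "record " + id; }
        }

        // Assembly C: references only A; the container resolves IDataAccess
        // to B's DataAccess at runtime, so C never takes a compile-time
        // dependency on B.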

  • Branching Strategies

    - by Craig H
    The company I work for is starting to have issues with its current branching model, and I was wondering what different kinds of branching strategies the community has been exposed to. Are there any good ones for different situations? What does your company use? What are their advantages and disadvantages?

  • Launching applications on remote windows xp from ubuntu 10.04 machine

    - by Aniket Vibhute
    I have Ubuntu 10.04 installed on my machine. I want to execute commands on a remote Windows XP machine (I have the username and password of an admin account on the remote machine) so as to launch applications like Internet Explorer or Notepad, or run a .bat script. Is there any command line utility to do this from Ubuntu? I tried rdesktop, winexe, ssh and telnet, but they were not much use. Can you please suggest some other way?
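
    One detail possibly worth double-checking before abandoning winexe (a sketch; host, user and password are placeholders): by default the remote process starts in the service session, not on the logged-in user's visible desktop, which can make a GUI launch look like a failure. winexe has an option to request desktop interaction:

        winexe --interactive=1 -U 'Administrator%password' //192.168.1.10 notepad.exe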

  • VBA: Difference in two ways of declaring a new object? (Trying to understand why my solution works)

    - by Matt
    I was creating a new object within a loop and adding that object to a collection, but when I read back the collection afterwards, it was always filled entirely with the last object I had added. I've come up with two ways around this, but I simply do not understand why my initial implementation was wrong. Original:

        Dim oItem As Variant
        Dim sOutput As String
        Dim i As Integer
        Dim oCollection As New Collection

        For i = 0 To 10
            Dim oMatch As New clsMatch
            oMatch.setLineNumber i
            oCollection.Add oMatch
        Next

        For Each oItem In oCollection
            sOutput = sOutput & "[" & oItem.lineNumber & "]"
        Next
        MsgBox sOutput

    This resulted in every lineNumber being 10; I was obviously not creating new objects, but instead using the same one each time through the loop, despite the declaration being inside the loop. So I added Set oMatch = Nothing immediately before the Next line, and this fixed the problem: it was now 0 to 10. So if the old object was explicitly destroyed, then it was willing to create a new one? I would have thought the next iteration through the loop would cause anything declared within the loop to be destroyed due to scope. Curious, I tried another way of declaring a new object: Dim oMatch As clsMatch: Set oMatch = New clsMatch. This, too, results in 0 to 10. Can anyone explain to me why the first implementation was wrong?
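
    A short note on the semantics at play (standard VBA behaviour, not specific to this code): Dim is a compile-time declaration, not a statement executed on each pass, and As New makes the variable auto-instantiating - the object is created lazily on first member access and the variable then keeps pointing at that same object. Setting it to Nothing (or using an explicit Set ... = New, as in the second variant) is what forces a fresh instance:

        Dim oMatch As clsMatch
        For i = 0 To 10
            Set oMatch = New clsMatch   ' a genuinely new object every pass
            oMatch.setLineNumber i
            oCollection.Add oMatch
        Next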

  • How to best deal with photos passed to IFilter?

    - by sharptooth
    I'm implementing an IFilter for indexing image formats. One problem is photos - many users have tons of them, photos are huge, and loading them and searching them for text is time-consuming. Yes, sometimes people use cameras instead of scanners for digitizing documents, but IMO the potential problems far outweigh the possibility of encountering a document digitized with a photo camera. So my implementation will not extract text from photos at all. What should the IFilter do once it detects that a given file is a photo: indicate an error or return empty text?

  • How to export all changed/added files from Git?

    - by dr Hannibal Lecter
    Hi all! I am very new to Git and I have a slight problem. In SVN [this feels like an Only Fools and Horses story by uncle Albert... "during the war..."], when I wanted to update a production site with my latest changes, I'd do a diff in TSVN and export all the changed/added files between two revisions. As you can imagine, it was easy to get those files onto the production site afterwards. However, I seem unable to find an "export changed files" option in Git. I can do a diff and see the changes, I can get a list of files, but I can't actually export them. Is there a reasonable way to do this? Am I missing something simple? Just to clarify once again: I need to export all the changes between two specific commits. Thanks in advance!
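
    A sketch of one way with stock git (rev1/rev2 are placeholders): list the files that were added, copied, modified or renamed between the two commits, then let git archive pack those paths as they exist in the newer commit. Deleted files are filtered out, since there is nothing left to export for them.

        # note: paths containing spaces would need extra quoting/xargs handling
        git archive -o export.zip rev2 \
            $(git diff --name-only --diff-filter=ACMR rev1 rev2)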

  • How does mercurial's bisect work when the range includes branching?

    - by Joshua Goldberg
    If the bisect range includes multiple branches, how does hg bisect's search work? Does it effectively bisect each sub-branch (I would think that would be inefficient)? For instance, borrowing, with gratitude, a diagram from an answer to this related question, what if the bisect got to changeset 7 on the "good" right-side branch first?

        @  12:8ae1fff407c8:bad6
        |
        o  11:27edd4ba0a78:bad5
        |
        o  10:312ba3d6eb29:bad4
        |\
        | o  9:68ae20ea0c02:good33
        | |
        | o  8:916e977fa594:good32
        | |
        | o  7:b9d00094223f:good31
        | |
        o |  6:a7cab1800465:bad3
        | |
        o |  5:a84e45045a29:bad2
        | |
        o |  4:d0a381a67072:bad1
        | |
        o |  3:54349a6276cc:good4
        |/
        o  2:4588e394e325:good3
        |
        o  1:de79725cb39a:good2
        |
        o  0:2641cc78ce7a:good1

    Will it then look only between 7 and 12, missing the real first-bad that we care about (thus using "dumb" numerical order)? Or is it smart enough to use the full topology and know that the first bad could be below 7 on the right-side branch, or could still be anywhere on the left-side branch? The purpose of my question is both (a) just to understand the algorithm better, and (b) to understand whether I can liberally extend my initial bisect range without thinking hard about what branch I go to. I've been in high-branching bisect situations where it kept asking me after every test to extend beyond the next merge, so that the whole procedure was essentially O(n). I'm wondering if I can just throw the first "good" marker way back past some nest of merges without thinking about it much, and whether that would save time and give correct results.

  • Is a base class with shared fields and functions good design?

    - by eych
    I've got a BaseDataClass with shared fields and functions:

        Protected Shared dbase As SqlDatabase
        Protected Shared dbCommand As DBCommand
        ...
        ' also a sync object used by the derived classes for SyncLock'ing
        Protected Shared ReadOnly syncObj As Object = New Object()

        Protected Shared Sub Init()      ' initializes fields, sets connections
        Protected Shared Sub CleanAll()  ' closes connections, disposes, etc.

    I have several classes that derive from this base class. The derived classes have all Shared functions that can be called directly from the BLL with no instantiation. The functions in these derived classes call the base Init(), call their specific stored procs, call the base CleanAll() and then return the results. So if I have 5 derived classes with 10 functions each, totalling 50 possible function calls, since they are all Shared, the CLR only calls one at a time, right? All calls are queued to wait until each Shared function completes. Is there a better design that keeps Shared functions in the DAL and still has base class functions? Or, since I have a base class, is it better to move towards instance methods within the DAL?

  • How does git save space and stay fast at the same time?

    - by eSKay
    I just saw the first git tutorial at http://blip.tv/play/Aeu2CAI. How does git store all the versions of all the files and still be more economical in space than Subversion, which saves only the latest version of the code? I know this can be done using compression, but that would come at the cost of speed - yet git is also said to be much faster (though where it gains the most is the fact that most of its operations are offline). So my guess is that git compresses data extensively and is still faster because decompression + work is still faster than network_fetch + work. Am I correct? Even close?

  • Which IoC containers run in medium trust?

    - by Rippo
    Hi, I am trying to get a website running on Mosso that has Castle Windsor as my IoC, however I am getting the following error: [SecurityException: That assembly does not allow partially trusted callers.] GoldMine.WindsorControllerFactory..ctor() in WindsorControllerFactory.cs:33 GoldMine.MvcApplication.Application_Start() in Global.asax.cs:70 My questions are:
      - Does Castle Windsor run under medium trust?
      - Can I download the DLLs without having to recompile with NAnt? (I don't have that set up and don't know NAnt at all.)
      - Or is there another IoC container that I can download and that works in medium trust?
    Thanks

  • iPhone app throwing EXC_BAD_ACCESS with dictionary from contents of file

    - by Mark Szymanski
    I have the code

        NSArray *paths = [[NSArray alloc] initWithArray:
            NSSearchPathForDirectoriesInDomains(NSLibraryDirectory, NSUserDomainMask, YES)];
        NSString *docsDirectory = [[NSString alloc] initWithString:[paths objectAtIndex:0]];
        NSLog(@"This app's documents directory: %@", docsDirectory);
        NSString *docsDirectoryWithPlist = [[NSString alloc] initWithFormat:@"%@/Stuff.plist", docsDirectory];
        BOOL fileExists = [[NSFileManager defaultManager] fileExistsAtPath:docsDirectoryWithPlist isDirectory:NO];
        if (fileExists) {
            chdir([docsDirectory UTF8String]);
            NSMutableDictionary *readDict = [[NSMutableDictionary alloc] initWithContentsOfFile:@"Stuff.plist"];

    in an application's applicationDidFinishLaunching method, and whenever it gets to the last line it crashes, throwing EXC_BAD_ACCESS along the way. Thanks in advance!

  • svn track brand new code base

    - by Fire Crow
    I'm at a company where we keep receiving new codebases from a third-party vendor, and we'd like to track the changes in Subversion. Is there a way to replace a branch with the new code and track the changes? Currently we just delete all files in the branch, then add the new files and commit. We'd like to track the files, but I haven't found a tool that will easily deal with all the .svn directories found in subfolders. Does anyone know a tool that will replace an svn directory with a new branch and create the respective modify, add and delete records, as if the code base had been organically modified?
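
    One tool worth a look (a sketch; the URL, directory and tag names are placeholders): svn_load_dirs.pl, distributed with Subversion's contrib scripts for exactly this vendor-drop workflow. It compares the incoming tree against the branch and generates the adds, deletes and modifies itself, so the .svn folders never get in the way.

        svn_load_dirs.pl -t 2.1 \
            http://svn.example.com/repos/vendor/libfoo \
            current /path/to/vendor-drop-2.1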

  • What is the maximum amount of data we can store in a file in Salesforce?

    - by Ritesh Mehandiratta
    I searched a little for the maximum file size in Salesforce and found this link: http://help.salesforce.com/HTViewHelpDoc?id=collab_files_size_limits.htm&language=en_US It shows that a file can be up to 2 GB. I have to store IDs in a text file and want it to scale to roughly 1 million records, which works out to a file size of about 15 MB. Can anyone please point me to a good tutorial on how to create such files and use them in Apex for retrieving and updating data?
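
    A minimal Apex sketch for the writing side (object and file names are hypothetical; a Document in the running user's personal folder is just one convenient container for a Blob body):

        List<String> ids = new List<String>();
        for (Account a : [SELECT Id FROM Account LIMIT 10000]) {
            ids.add(a.Id);
        }
        Document doc = new Document(
            Name = 'account-ids.txt',
            FolderId = UserInfo.getUserId(),            // personal folder
            Body = Blob.valueOf(String.join(ids, '\n'))
        );
        insert doc;

        // reading the IDs back later:
        Document stored = [SELECT Body FROM Document WHERE Id = :doc.Id LIMIT 1];
        List<String> readIds = stored.Body.toString().split('\n');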
