Search Results

Search found 5845 results on 234 pages for 'commit protocol'.


  • Building an Issue Tracker Plugin for TortoiseSVN

    - by Oded
    I've read a lot about the IBugTraqProvider interface and about hooking an issue tracker into the commit dialog of TortoiseSVN. The IBugTraqProvider interface is described here. Is there a simpler way to do this, i.e. to build the plug-in and install it in TortoiseSVN? The documentation is not that clear about how a developer can create their own plugin. I'm working with SalesForce as the issue tracker, and I have retrieved the WSDL file to integrate with the work items. Now I need to know how to connect it to TortoiseSVN. Any suggestions?
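
    Edit: for reference, the shape I think the plug-in needs (a rough, unverified sketch: the interface is assumed to come from the interop assembly shipped with TortoiseSVN's issue-tracker-plugins contrib sources, and the class name, GUID and SalesForce lookup are placeholders of mine) is a COM-visible .NET class along these lines:

       using System;
       using System.Runtime.InteropServices;
       using Interop.BugTraqProvider; // interop assembly from the TortoiseSVN contrib sources (assumption)

       [ComVisible(true)]
       [Guid("11111111-2222-3333-4444-555555555555")] // replace with a GUID generated for your own plug-in
       [ClassInterface(ClassInterfaceType.None)]
       public class SalesforceIssueProvider : IBugTraqProvider
       {
           // Called when TortoiseSVN validates the parameter string configured for the hook.
           public bool ValidateParameters(IntPtr hParentWnd, string parameters)
           {
               return true;
           }

           // Text for the button that appears in the commit dialog.
           public string GetLinkText(IntPtr hParentWnd, string parameters)
           {
               return "Choose SalesForce issue";
           }

           // Called when the user clicks the button: show a picker fed by the SalesForce
           // web service generated from the WSDL, then return the new log message.
           public string GetCommitMessage(IntPtr hParentWnd, string parameters, string commonRoot,
                                          string[] pathList, string originalMessage)
           {
               string issueId = "CASE-0001"; // placeholder: look this up via the SalesForce client
               return originalMessage + Environment.NewLine + "Issue: " + issueId;
           }
       }

    After building, the assembly would be registered with regasm /codebase so that it shows up under TortoiseSVN's Settings > Hook Scripts > Issue Tracker Integration; again, this is only the outline I'm working from, not a verified implementation.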

    Read the article

  • NHibernate mapping error SQL Server 2008 Express

    - by developer
    Hi all, I tried an example from the NHibernate in Action book, and when I run the app it throws an exception saying "Could not compile the mapping document: HelloNHibernate.Employee.hbm.xml". Below is my code.

    Employee.hbm.xml

       <?xml version="1.0" encoding="utf-8" ?>
       <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" auto-import="true">
         <class name="HelloNHibernate.Employee, HelloNHibernate" lazy="false" table="Employee">
           <id name="id" access="field">
             <generator class="native"/>
           </id>
           <property name="name" access="field" column="name"/>
           <many-to-one access="field" name="manager" column="manager" cascade="all"/>
         </class>

    Program.cs

       using System;
       using System.Collections.Generic;
       using System.Linq;
       using System.Text;
       using NHibernate;
       using System.Reflection;
       using NHibernate.Cfg;

       namespace HelloNHibernate
       {
           class Program
           {
               static void Main(string[] args)
               {
                   CreateEmployeeAndSaveToDatabase();
                   UpdateTobinAndAssignPierreHenriAsManager();
                   LoadEmployeesFromDatabase();
                   Console.WriteLine("Press any key to exit...");
                   Console.ReadKey();
               }

               static void CreateEmployeeAndSaveToDatabase()
               {
                   Employee tobin = new Employee();
                   tobin.name = "Tobin Harris";
                   using (ISession session = OpenSession())
                   {
                       using (ITransaction transaction = session.BeginTransaction())
                       {
                           session.Save(tobin);
                           transaction.Commit();
                       }
                       Console.WriteLine("Saved Tobin to the database");
                   }
               }

               static ISession OpenSession()
               {
                   if (factory == null)
                   {
                       Configuration c = new Configuration();
                       c.AddAssembly(Assembly.GetCallingAssembly());
                       factory = c.BuildSessionFactory();
                   }
                   return factory.OpenSession();
               }

               static void LoadEmployeesFromDatabase()
               {
                   using (ISession session = OpenSession())
                   {
                       IQuery query = session.CreateQuery("from Employee as emp order by emp.name asc");
                       IList<Employee> foundEmployees = query.List<Employee>();
                       Console.WriteLine("\n{0} employees found:", foundEmployees.Count);
                       foreach (Employee employee in foundEmployees)
                           Console.WriteLine(employee.SayHello());
                   }
               }

               static void UpdateTobinAndAssignPierreHenriAsManager()
               {
                   using (ISession session = OpenSession())
                   {
                       using (ITransaction transaction = session.BeginTransaction())
                       {
                           IQuery q = session.CreateQuery("from Employee where name='Tobin Harris'");
                           Employee tobin = q.List<Employee>()[0];
                           tobin.name = "Tobin David Harris";
                           Employee pierreHenri = new Employee();
                           pierreHenri.name = "Pierre Henri Kuate";
                           tobin.manager = pierreHenri;
                           transaction.Commit();
                           Console.WriteLine("Updated Tobin and added Pierre Henri");
                       }
                   }
               }

               static ISessionFactory factory;
           }
       }

    Employee.cs

       using System;
       using System.Collections.Generic;
       using System.Linq;
       using System.Text;

       namespace HelloNHibernate
       {
           class Employee
           {
               public int id;
               public string name;
               public Employee manager;

               public string SayHello()
               {
                   return string.Format("'Hello World!', said {0}.", name);
               }
           }
       }

    App.config

       <?xml version="1.0" encoding="utf-8" ?>
       <configuration>
         <configSections>
           <section name="hibernate-configuration" type="NHibernate.Cfg.ConfigurationSectionHandler,NHibernate"/>
         </configSections>
         <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
           <session-factory>
             <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
             <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property>
             <property name="connection.connection_string">
               Data Source=SQLEXPRESS2008;Integrated Security=True;
               User ID=SQL2008;Password=;initial catalog=HelloNHibernate
             </property>
             <property name="dialect">NHibernate.Dialect.MsSql2008Dialect</property>
             <property name="show_sql">false</property>
           </session-factory>
         </hibernate-configuration>
       </configuration>
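
    Edit: since "Could not compile the mapping document" usually wraps a more specific inner exception (for example the .hbm.xml not being compiled as an Embedded Resource so AddAssembly cannot find it, or the mapping XML not being well-formed; note that the mapping above never closes </hibernate-mapping>), I'm using this small hedged snippet to print the real cause:

       using System;
       using System.Reflection;
       using NHibernate.Cfg;

       class MappingCheck
       {
           static void Main()
           {
               try
               {
                   var cfg = new Configuration();
                   // Employee.hbm.xml must have Build Action = Embedded Resource for this to find it.
                   cfg.AddAssembly(Assembly.GetExecutingAssembly());
                   cfg.BuildSessionFactory();
                   Console.WriteLine("Mapping compiled fine.");
               }
               catch (Exception ex)
               {
                   // The useful detail is usually nested one or two InnerExceptions down.
                   for (Exception e = ex; e != null; e = e.InnerException)
                       Console.WriteLine(e.Message);
               }
           }
       }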

    Read the article

  • Using sub-repo with hgwebdir difficulties in mercurial

    - by Ton
    Alright, I got myself into a deadlock with Mercurial and sub-repos... Here's what happened: I had a large Mercurial repo that I serve via Apache and hgweb.cgi. Due to the size of the repo I decided to move to sub-repositories and share them with hgwebdir.cgi. Using the convert tool with the filemap option I created several sub-repositories:

       /main/foo
       /main/bar

    I then created an entry for each sub-repository in .hgsub:

       foo = foo
       bar = bar

    And I set hgwebdir.cgi up to show $/** as the root folder. Now when I went to my site (foo.com/hg) I saw my sub-repositories with one empty repository among them (no name, no content), but I could not download it (archive location unknown). That was all right until I added a new sub-repository. I could not push the new .hgsub file to foo.com/hg, since that page is served by hgwebdir. The only workflow that works for me at the moment is to switch from hgwebdir to hgweb, commit .hgsubstate, and switch back to hgwebdir. Does someone have a good setup for such a mess?

    Read the article

  • Tortoise SVN tree conflict with myself

    - by Jesse Pepper
    Has anyone had the experience of moving a file in TortoiseSVN and committing successfully, only to later commit a different change and be told of a tree conflict where:

       - the file in its original location has been deleted, but in TortoiseSVN is marked as missing
       - the file in its new location is there, but is marked as already added

    (I use TortoiseSVN, and we have client and server on 1.6.0.) Nobody else changed either the directory or the file (according to svn log). Why is this happening? Is there a way to avoid it happening? If it does happen, is there a more elegant way of fixing the problem than deleting the whole folder and updating again?

    Read the article

  • Transfer file using BITS without using IIS as the server?

    - by rwmnau
    I know that I can transfer files using BITS and a wrapper like SharpBITS, but it seems that I need an IIS server on one end - either to upload to or download from. Is there a way to use the BITS protocol to transfer a file without requiring IIS? Some kind of a "BITS Server" or "Listener" project that my client's BITS service can connect to. I'm looking for functionality that's exactly what BITS provides, but I'd prefer not to require that IIS be installed (though if I have to, I can). Thanks!
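
    Edit: as far as I can tell, BITS download jobs only need an HTTP/1.1 server that answers HEAD and Range requests (IIS itself is only required for BITS uploads, which use the BITS server extension), so in principle even a small HttpListener stub can stand in for IIS on the serving side. A rough, untested sketch (the file path and URL prefix are hypothetical, and it loads the whole file into memory, so it is only a proof of concept):

       using System;
       using System.IO;
       using System.Net;

       class TinyBitsDownloadServer
       {
           static void Main()
           {
               byte[] bytes = File.ReadAllBytes(@"C:\share\payload.zip"); // hypothetical file to serve

               HttpListener listener = new HttpListener();
               listener.Prefixes.Add("http://+:8080/"); // may need a URL ACL or admin rights
               listener.Start();

               while (true)
               {
                   HttpListenerContext ctx = listener.GetContext();
                   long start = 0, end = bytes.Length - 1;

                   string range = ctx.Request.Headers["Range"]; // e.g. "bytes=0-1048575"
                   if (range != null && range.StartsWith("bytes="))
                   {
                       string[] parts = range.Substring(6).Split('-');
                       if (parts[0] != "") start = long.Parse(parts[0]);
                       if (parts.Length > 1 && parts[1] != "") end = long.Parse(parts[1]);
                       ctx.Response.StatusCode = 206; // Partial Content
                       ctx.Response.AddHeader("Content-Range",
                           string.Format("bytes {0}-{1}/{2}", start, end, bytes.Length));
                   }

                   ctx.Response.AddHeader("Accept-Ranges", "bytes");
                   ctx.Response.ContentLength64 = end - start + 1;
                   if (ctx.Request.HttpMethod != "HEAD")
                       ctx.Response.OutputStream.Write(bytes, (int)start, (int)(end - start + 1));
                   ctx.Response.Close();
               }
           }
       }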

    Read the article

  • Can Tornado communicate with Cassandra, in Non-blocking asynchronous style?

    - by takaomag
    I'm working on a web project which has to process a large number of client requests, so I am considering using Cassandra and Tornado. Tornado seems to have a built-in client (tornado.httpclient.AsyncHTTPClient) which can make non-blocking HTTP requests. But Cassandra uses the Thrift protocol, and when using Thrift, Tornado seems to block while querying Cassandra. Has anyone got experience with this? Please suggest how I should do it. Or is there any add-on module for this purpose? Thanks.

    Read the article

  • GData for my own API?

    - by Malax
    Hi StackOverflow! I'm currently planning to build an API for my service. I want to use GData because it fits the application scheme and there are libraries available for many programming languages. The first question that arose: am I allowed to do that? I mean, Google put lots of work into the GData specification and holds some sort of copyright. Does anyone know anything about this issue, or has done this before? You could extend the question to the case where you specifically mimic an API that uses GData, such as the YouTube API, so that my API is 100% compliant. This is not my case, but I was wondering about that too. :-) Thank you for any input, Malax. Edit: Note that I want to use it for my own service. So I am implementing an API using the GData protocol, not using one of the Google APIs.

    Read the article

  • How to determine the UID of a message in IMAP

    - by Emanuel
    I'm working on a mail client project in C#. I'm using both the POP and IMAP protocols to communicate with the server. The problem is that I cannot figure out why, when I ask for the UID of a message, the results from the POP server and the IMAP server are different.

    POP:

       C: UIDL 1
       S: +OK 1 UID2-1269789826

    and IMAP:

       C: $ FETCH 1 (UID)
       S: * 1 FETCH (UID 2)
       S: $ OK Fetch completed.

    Why are the results for obtaining the UID so different? Is there another command for this in IMAP? Any help is welcome. Thanks.

    Read the article

  • How does the timeout work in Restlet's client class?

    - by Greg Noe
    Here's some code:

       Client client = new Client(Protocol.HTTP);
       client.setConnectTimeout(1); // milliseconds
       Response response = client.post(url, paramRepresentation);
       System.out.println("timed out");

    What I would expect to happen is that it prints "timed out" before the resource has time to process. Instead, nothing happens with the timeout and it doesn't print "timed out" until after the resource returns. Even if I put a Thread.sleep(5000) at the resource that's handling the request, the entire sleep is performed, like the timeout did nothing. Anyone have experience with this? I'm using Restlet 1.1.1. Thanks.

    Read the article

  • Persisting details in Master Detail relation EF4 POCO

    - by Roger Alsing
    Scenario: Entity Framework 4, POCO templates, and a master-detail relation. Let's say I have a master type like this:

       // partial implementation of master entity
       partial class Master
       {
           public void AddDetail(x, y, z)
           {
               var detail = new Detail()
               {
                   X = x,
                   Y = y,
                   Z = z,
               };
               // add the detail to the master
               this.Details.Add(detail);
           }
       }

    If I then add a master instance to my context and commit, the details will not be saved:

       var masterObject = new Master();
       masterObject.AddDetail(1, 2, 3);
       myContext.MasterSet.AddObject(masterObject);

    Is there any way to make the details be persisted by reachability when using the POCO templates? Or any other way? The Details collection in the Master entity is a FixupCollection, so it ought to track the changes IMO. So, any ideas how to make this work without killing the POCO-ness too much?
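
    Edit: for reference, the minimal sequence I would expect to persist the whole graph with the POCO templates is sketched below (the context and set names are hypothetical); AddObject is supposed to add every entity reachable from the master as Added, and nothing hits the database until SaveChanges runs.

       using (var context = new MyEntities()) // hypothetical ObjectContext name
       {
           var master = new Master();
           master.AddDetail(1, 2, 3);           // detail becomes reachable from master

           context.MasterSet.AddObject(master); // adds master *and* its reachable details as Added
           context.SaveChanges();               // nothing is persisted until this call
       }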

    Read the article

  • Advice on HTTPS connections using Ruby on Rails

    - by user502052
    Since I am developing a "secure" OAuth protocol for my RoR3 apps, I need to send protected information over the internet, so I need to use HTTPS connections (SSL/TLS). I read the "How to Cure Net::HTTP's Risky Default HTTPS Behavior" article, which mentions the 'always_verify_ssl_certificates' gem, but since I want to stay as "pure" as possible (meaning: I do not want to install other gems; I try to do everything with Ruby on Rails), I want to do this without installing new gems. I read about 'open_uri' (it is also mentioned in the linked article: "open_uri is a common exception - it gets things right!"), which is part of the Ruby standard library, and I think it can do the same job. So, for my needs, is 'open_uri' the best choice (although it is more complicated than the 'always_verify_ssl_certificates' gem)? If so, can someone help me use it (with an example, if possible), because I have not found good guides about it?

    Read the article

  • Save a UIImageView using NSUserDefaults

    - by Magician Software
    How do you save an image using NSUserDefaults? The main image is set in IB; the secondary image is set here:

       - (IBAction)changeImage {
           CATransition *fadeThing = [CATransition animation];
           fadeThing.type = kCATransitionFade;
           fadeThing.subtype = kCATransitionFade;
           fadeThing.duration = 1;
           [CATransaction begin];
           [background.layer addAnimation:fadeThing forKey:@"superCoolSloMoFadingAnimation"];
           [background setImage:[UIImage imageNamed:@"Mainbackground.png"]];
           [CATransaction commit];
       }

    I will have different actions for different images. Is there any way of setting this up so I can maybe use a toggle button and change the image as well as save it? Thanks.

    Read the article

  • Comparison of Code Review Tools/Systems

    - by SytS
    There are a number of tools/systems available aimed at streamlining and enhancing the code review process, including:

       - CodeStriker
       - Review Board, the code review system in use at VMware
       - Code Collaborator, a commercial product by SmartBear
       - Rietveld, based on Mondrian, the code review system in use at Google
       - Crucible, a commercial product by Atlassian

    These systems all have varying feature sets, and differ in degrees of maturity and polish; the selection is a little bewildering for someone who is evaluating code review systems for the first time. Some of these tools have already been mentioned in other questions/answers on StackOverflow, but I would like to see a more comprehensive comparison of the more popular systems, especially with respect to:

       - integration with source control systems
       - integration with bug tracking systems
       - supported workflow (reviews pre/post commit, review of contiguous/non-contiguous revision ranges, etc.)
       - deployment/maintenance requirements

    Read the article

  • Test Column exists, Add Column, and Update Column

    - by david.clarke
    I'm trying to write a SQL Server database update script. I want to test for the existence of a column in a table, then if it doesn't exist add the column with a default value, and finally update that column based on the current value of a different column in the same table. I want this script to be runnable multiple times: the first run updates the table, and subsequent runs should be ignored. My script currently looks like the following:

       IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
                      WHERE TABLE_NAME = 'PurchaseOrder' AND COLUMN_NAME = 'IsDownloadable')
       BEGIN
           ALTER TABLE [dbo].[PurchaseOrder]
           ADD [IsDownloadable] bit NOT NULL DEFAULT 0

           UPDATE [dbo].[PurchaseOrder]
           SET [IsDownloadable] = 1
           WHERE [Ref] IS NOT NULL
       END

    SQL Server returns the error "Invalid column name 'IsDownloadable'", i.e. I need to commit the DDL before I can update the column. I've tried various permutations but I'm getting nowhere fast.
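
    Edit: my understanding is that the whole batch is parsed before anything runs, so the UPDATE is compiled while the column does not exist yet. One direction I'm trying (untested sketch below) is to defer the UPDATE into dynamic SQL with sp_executesql so it is only parsed after the ALTER TABLE has executed:

       IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
                      WHERE TABLE_NAME = 'PurchaseOrder' AND COLUMN_NAME = 'IsDownloadable')
       BEGIN
           ALTER TABLE [dbo].[PurchaseOrder]
               ADD [IsDownloadable] bit NOT NULL DEFAULT 0;

           -- Dynamic SQL is compiled at EXEC time, after the new column exists.
           EXEC sp_executesql
               N'UPDATE [dbo].[PurchaseOrder] SET [IsDownloadable] = 1 WHERE [Ref] IS NOT NULL;';
       END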

    Read the article

  • I need git-svn to act as a Subversion v1.5+ client

    - by Ben Ward
    I'm running git 1.7 on Mac OS X, installed via Homebrew. I'm trying to use git svn to work with a Subversion server that requires Subversion 1.5 clients (a restriction enforced via a pre-commit hook). Running git svn --version reveals that, as far as git is concerned, git svn is equivalent to svn v1.4.4. I can't establish whether git svn is a complete clone of Subversion, and thus needs to be updated itself to offer Subversion 1.5 functionality, or whether git is compiled against, or just points to, some version of Subversion under the hood. I've struggled to find anyone else trying to upgrade the version of git-svn, and I'm guessing this client version restriction is unusual, but I'm stuck with it (corporate environment). Is it possible to have git operate as an svn 1.5 client?

    Read the article

  • setDelegate:self, how does it work?

    - by fuzzygoat
    I have a query regarding how delegates work. My understanding was that delegates take responsibility for doing certain tasks on behalf of another object.

       locationManager = [[CLLocationManager alloc] init];
       [locationManager setDelegate:self];
       [locationManager setDistanceFilter:kCLDistanceFilterNone];
       [locationManager setDesiredAccuracy:kCLLocationAccuracyBest];
       [locationManager startUpdatingLocation];

    Am I right in thinking that, in the example code above, the instance of CLLocationManager is created on a new thread so that it can get on with trying to find the location information it needs? When it completes its task (or encounters an error) it calls back using the appropriate methods implemented in self, e.g. locationManager:didUpdateToLocation:fromLocation:. Essentially locationManager sends messages to self (which conforms to the correct delegate protocol) when things happen. Cheers, Gary

    Read the article

  • ASIHTTPRequest code design

    - by nico
    I'm using ASIHTTPRequest to communicate with the server asynchronously. It works great, but I'm making requests in different controllers, and now duplicated methods live in all of those controllers. What is the best way to abstract that code (the requests) into a single class, so I can easily re-use it and keep the controllers simpler? I could put it in a singleton (or in the app delegate), but I don't think that's a good approach. Or maybe I should make my own protocol for it with delegate callbacks. Any advice on a good design approach would be helpful. Thanks.

    Read the article

  • Subversion import error 200030 (accessed from SCM in Xcode 3.2.2, OS X 10.6.3)

    - by Global nomad
    Hi, I'm encountering the following error when attempting to (svn) import from within Xcode:

       Import Failed
       Error: 200030 (SQLite error)
       Description: no such table: rep_cache

    This is a new repository. The svnserve process runs normally. Existing repositories work fine (import, commit, and export) from within Xcode. Neither MacPorts nor Fink is installed. The binaries in /usr/bin come with Mac OS X 10.6. I've googled but am unable to find others encountering the same issue. Thanks in advance for any shared insights.

    Read the article

  • Pulling and pushing between two google code repositories

    - by Kim L
    I'll start by quoting Google's blog:

       "Project owners can now create multiple repositories for their project, and they can choose to make any of those new repositories a clone of any of the project's other repositories. These project clones share the same commit access permissions as the original project and make it easier for project members to work together on new features. A common pattern in the Mercurial world is to place each "official" branch into a separate repository with naming conventions like "project-crew", "project-stable", and so on."

    I've done exactly this. I have my default repository, and I've cloned that repository to a repo named "dev". I intend to use the default repository as my stable repo and the dev repo as my primary development repo. Now I'm just wondering how on earth I should go about pulling and pushing between the default and the dev repositories?

    Read the article

  • MySQL: Can the table comment length be increased?

    - by Victor Kimura
    I read the MySQL comment length questions on StackOverflow here:

       http://stackoverflow.com/questions/391323/table-comment-length-in-mysql
       http://stackoverflow.com/questions/2473934/how-to-increase-mysql-table-comments-length

    The first link suggests that it can be done and the second suggests it cannot. I don't know why there is this limitation, as the comments are very useful. Imagine if there were a limit of 60 characters for comments in your programs. I wrote about this on my site and have some snapshots of the phpMyAdmin and dbForge MySQL IDEs:

       http://mysql.tutorialref.com/mysql-table-comment-length-limit.html

    Is there a way to change this in phpMyAdmin, or perhaps even on the CLI? There is a bug report with a commit from MySQL on this particular problem (follow the first StackOverflow link), and it seems to state that the length problem is fixed. I have MySQL 5.1.42. Thank you, Victor

    Read the article

  • JEE Web Applications vs Web Services

    - by Zac
    Can someone confirm or clarify this for me? From what I can tell, JEE web apps consist of a Servlet- and/or JSP-driven dynamic web page being fed back in the HTTP response, triggered by the JEE server receiving an HTTP GET or POST request. From what I can tell, JEE web services also make use of Servlets as the web-tier components; however, a WS Servlet receives a SOAP message and validates the contents of that message against whatever WSDL the Servlet is WARed with. The response is also packaged in SOAP and sent back to the requestor. So, from what I can tell, both JEE web apps and web services use Servlets as the web components, with the only real difference being the protocol used (raw HTTP vs SOAP, which is an extension of HTTP). This is the best I could come up with - am I right? Totally wrong? Close?

    Read the article

  • Hook Response.Cache to memcache

    - by dvr
    Has anyone done this before? I have a 32-bit Win 2003 server running ASP.NET 2.0 and have read the MS engineers' blog about the min(60%, 1800 MB) cache limits, and our site (ASP.NET 2.0 / 3.5) caches a lot. It throws System.OutOfMemoryException when the worker process is around 1.3 GB (unfortunately it is the 2.0 apps), and I would like to push a lot over to memcache, but I am worried because at the moment the site is efficient using Response.Cache as is (though memory is an issue). I want to move most items over to memcache and have concerns about (a) how to do this (an implementation of Response.Cache that reads/writes from memcache) and (b) what the performance will be like. Before I commit to doing this and possibly spending a few days running tests, I would like to hear from you whether this has been done already and get some feedback. (And please don't tell me to buy an x64 machine – I have already requested this!) By the way, I ran a test requesting a single image 1000 times and Response.Cache was over 50% quicker than using the application cache. Does Response.Cache bypass the page lifecycle?
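
    Edit: since the output cache only becomes pluggable with OutputCacheProvider in ASP.NET 4.0, my current plan is not to redirect Response.Cache itself but to cache the expensive objects through a small read-through helper around the Enyim memcached client. A rough sketch (it assumes Enyim is referenced and configured, values must be serializable, and Func<T> needs .NET 3.5):

       using System;
       using Enyim.Caching;            // Enyim memcached client (assumption: referenced and configured)
       using Enyim.Caching.Memcached;

       public static class MemCache
       {
           // The client is expensive to create, so keep a single instance per AppDomain.
           private static readonly MemcachedClient client = new MemcachedClient();

           // Read-through helper: return the cached value, or build it and store it for 'ttl'.
           public static T GetOrAdd<T>(string key, TimeSpan ttl, Func<T> build)
           {
               object cached = client.Get(key);
               if (cached is T)
                   return (T)cached;

               T value = build();
               client.Store(StoreMode.Set, key, value, ttl);
               return value;
           }
       }

    Usage would then be something like MemCache.GetOrAdd("top-products", TimeSpan.FromMinutes(5), LoadTopProducts) instead of going through Response.Cache, leaving the ASP.NET output cache for the pages/images where it already performs well.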

    Read the article

  • Ubuntu to Ubuntu VNC over SSH tunnel

    - by rxt
    I have a Linux Ubuntu desktop at home: SSH enabled, VNC server installed, router rule configured. It all works, and at home I can connect from my Mac via the local network. From the outside I can log in via SSH. I've configured PuTTY as follows:

       Session: host name and port number
       Connection > SSH > Tunnels: forwarded port L5900|192.168.0.23

    The local address is 192.168.1.45. When I make the connection I can log in to the remote machine. Then I open Remote Desktop Viewer and click Connect:

       protocol: vnc
       host: ?
       use host as ssh tunnel: ?

    I don't know what to use for the last two options. Which IP addresses should I use?

    Read the article

  • TortoiseGit - representing branches in a tree - visual issue

    - by richard
    This is a little hard to explain with text, but I'll do my best, and you try to keep up; if something isn't clear at first, don't hesitate to ask, and I'll try to clarify. When TortoiseGit has one branch, it looks approximately like this:

       o
       |
       o   <-- a commit sign
       |
       x

    When I split my work into a new branch, it looks like this:

       o
       |
       o--o
       |
       x

    When I split from the master to another new branch, it looks like this:

       o--o
       |
       o--o
       |
       x

    Is there a way for every new branch that I make, and work on, to have its own "line"? What I mean:

       o-----o
       |
       o--o
       |
       x

    That way they don't "vertically overlap", so every branch has its own vertical line I can follow (for some reason the way it's done now looks rather confusing to me). Do any other Git clients for Windows do this differently?

    Read the article

  • bbcode hyperlink issue (help!!)

    - by Jorm
    I'm having an annoying problem :) I use the regexes from this post:

       http://forums.codecharge.com/posts.php?post_id=77123

    If you enter [url]www.bob.com[/url] it leads to http://localhost/test/www.bobsbar.com. So I added http:// before $1 in the replacement. That fixed it, but then [url]http://www.bob.com[/url] will lead to http://http://www.bobsbar.com. How would you fix this? I want my users to be able to post links with AND without http://, and I want it to redirect to the site -_- Hope you understand this. Jorm

    Edit:

       function bbcode_format($str) {
           $str = htmlentities($str);
           $find = array(
               '/\[url\](.*?)\[\/url\]/is',                 // hyperlink
               '/\[url\](http[s]?:\/\/)(.*?)\[\/url\]/is'   // hyperlink http-protocol
           );
           $replace = array(
               '<a href="$1" rel="nofollow" title="$1">$1</a>',
               '<a href="$1$2" rel="nofollow" title="$2">$2 THIS WORKS</a>'
           );
           $str = preg_replace($find, $replace, $str);
           return $str;
       }

    Read the article
