Search Results

Search found 23341 results on 934 pages for 'command history'.

  • Looking for an all-in-one DRM/installer/CD creation kit

    - by user30997
    The company I work for has a download manager in place that handles distribution, DRM, and installation of our products when a user gets them off our website. However, we're using a clunky system for packaging and protecting our products when we do press releases or make retail CDs. Part of the antiquation problem is that the automated system that works with our installer- and DRM-creation software is a disaster that needs to be put out of my misery. Here is the list of products we currently produce, which a new system MUST be capable of producing:
    - Retail CDs, with a certain level of obfuscation to make copying difficult.
    - Downloadable installers that time out after a few hours of use of the product. After the time has expired, removing and reinstalling the product will leave you still blocked from use.
    - Installers that will fail to work after a certain date.
    I'd love to be able to just feed a tool the directory where a complete product resides and have the installer generated with a couple of command-line operations. (The command-line requirement is non-negotiable: this will be called by an automated tool.) A single-solution package would be far preferable. Software with royalty-based or per-unit licensing is not an option.

  • How to replace pairs of strings in two files with identical IDs?

    - by Péter Török
    Sorry if the title is not very intelligible, I couldn't come up with anything better. Hopefully my explanation is clear enough: I have a pair of rather large log files with very similar content, except that some strings are different between the two. A couple of examples:
      UnifiedClassLoader3@19518cc  | UnifiedClassLoader3@d0357a
      JBossRMIClassLoader@13c2d7f  | JBossRMIClassLoader@191777e
    That is, wherever the first file contains UnifiedClassLoader3@19518cc, the second contains UnifiedClassLoader3@d0357a, and so on. [Update] There are about 40 distinct pairs of such identifiers. [/Update] I want to replace these with identical IDs so that I can spot the really important differences between the two files. That is, I want to replace all occurrences of both UnifiedClassLoader3@19518cc in file1 and UnifiedClassLoader3@d0357a in file2 with UnifiedClassLoader3@1; all occurrences of both JBossRMIClassLoader@13c2d7f in file1 and JBossRMIClassLoader@191777e in file2 with JBossRMIClassLoader@2; etc. Using the Cygwin shell, so far I have managed to list all distinct identifiers occurring in one of the files with
      grep -o -e 'ClassLoader[0-9]*@[0-9a-f][0-9a-f]*' file1.log | sort | uniq
    However, now the original order is lost, so I don't know which ID pairs with which in the other file. With grep -n I can get the line number, so the sort would preserve the order of appearance, but then I can't weed out the duplicate occurrences. Unfortunately grep cannot print only the first match of a pattern. I figured I could save the list of identifiers produced by the above command into a file, then iterate over the patterns in the file with grep -n | head -n 1, concatenate the results and sort them again. The result would be something like
      2 ClassLoader3@19518cc
      137 ClassLoader@13c2d7f
      563 ClassLoader3@1267649
      ...
    Then I could (either manually or with sed itself) massage this into a sed command like
      sed -e 's/ClassLoader3@19518cc/ClassLoader3@2/g' -e 's/ClassLoader@13c2d7f/ClassLoader@137/g' -e 's/ClassLoader3@1267649/ClassLoader3@563/g' file1.log > file1_processed.log
    and similarly for file2. However, before I start, I would like to verify that my plan is the simplest possible working solution. Is there any flaw in this approach? Is there a simpler way?
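
    For what it's worth, a minimal Python sketch of the plan above (an assumption-laden illustration, not tested against the real logs): it collects identifiers from both files in order of first appearance, pairs them up positionally, and rewrites each file with shared IDs. It mirrors the grep pattern but adds a prefix match, and it assumes the two files really do yield their identifiers in the same order, which is the weak point of any positional pairing.

      import re

      PATTERN = re.compile(r'[A-Za-z]+ClassLoader[0-9]*@[0-9a-f]+')

      def ids_in_order(path):
          """Return distinct identifiers in order of first appearance."""
          seen, order = set(), []
          with open(path) as f:
              for line in f:
                  for ident in PATTERN.findall(line):
                      if ident not in seen:
                          seen.add(ident)
                          order.append(ident)
          return order

      def rewrite(path, mapping, out_path):
          with open(path) as src, open(out_path, 'w') as dst:
              for line in src:
                  dst.write(PATTERN.sub(lambda m: mapping[m.group(0)], line))

      ids1 = ids_in_order('file1.log')
      ids2 = ids_in_order('file2.log')
      assert len(ids1) == len(ids2), "files must contain the same number of IDs"

      # Pair the n-th ID of file1 with the n-th ID of file2; both map to a shared name.
      map1, map2 = {}, {}
      for n, (a, b) in enumerate(zip(ids1, ids2), start=1):
          shared = a.split('@')[0] + '@' + str(n)   # e.g. UnifiedClassLoader3@1
          map1[a], map2[b] = shared, shared

      rewrite('file1.log', map1, 'file1_processed.log')
      rewrite('file2.log', map2, 'file2_processed.log')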

  • Capturing a CMD batch file parameter list; writing it to a file for later processing

    - by BobB
    I have written a batch file that is launched as a post-processing utility by a program. The batch file reads ~24 parameters supplied by the calling program, stores them in variables, and then writes them to various text files. Since the highest directly addressable parameter in CMD is %9, it's necessary to use the 'shift' command to repeatedly read and store these individually in named variables. Because the program outputs several similar batch files, the result is opening several CMD windows sequentially, assigning variables and writing data files. This ties up the calling program for too long. It occurs to me that I could free up the calling program much faster if there were a way to write a very simple batch file that just writes all the command parameters to a text file, where I can process them later. Basically, just grab the parameter list, write it, and done. Q: Is there some way to treat an entire series of parameter data as one big text string, write it to one big variable, and then echo the whole thing to one text file? Then later read the string into %n variables when there's no program waiting to resume? The parameter list is something like 25-30 words, less than 200 characters. Sample parameter list:
      "First Name" "Lastname" "123 Steet Name Way" "Cityname" ST 12345 1004968 06/01/2010 "Firstname+Lastname" 101738 "On Account" 20.67 xy-1z 1 8.95 3.00 1.39 0 0 239 8.95
    Items in quotes are processed as string variables. The list is space delimited. Any suggestions?
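
    One hedged sketch of the "grab and go" half: in CMD, %* expands to the entire parameter list, so a one-line batch file along the lines of echo %* >> params.log should capture everything in one shot (worth verifying how your quoting survives the echo). The deferred processing can then be done by any scripting language; below is a small Python illustration using the standard shlex module, which splits on spaces while keeping quoted items together. The file name is made up for the example.

      import shlex

      # Each line of params.log is one captured parameter list, e.g.:
      # "First Name" "Lastname" "123 Steet Name Way" "Cityname" ST 12345 ...
      with open("params.log") as f:
          for line in f:
              fields = shlex.split(line)   # honors the double quotes
              if not fields:
                  continue
              print(len(fields), "fields:", fields[:4], "...")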

  • How well do (D)VCS cooperate with workflows involving several people editing files in the same directory?

    - by frankster
    Imagine, because of tradition, that your team's preferred development method involves several people with a shared login, editing files on a build server using vim. [Note that there are well-known issues with only one person being able to edit a file at once, people going away from their desk and leaving the file locked in vim, and system builds/restarts requiring everybody to stop debugging while they occur. This is not what the question is about.] If source control were introduced without changing the workflow, would there be much benefit? I am guessing that the commit history won't be much use, as it will contain all changes by everybody in big lumps. So it wouldn't really be possible to rewind individual changes, except at a very coarse level.

  • Can Boost program_options separate comma-separated argument values?

    - by lrm
    If my command line is:
      > prog --mylist=a,b,c
    can Boost's program_options be set up to see three distinct argument values for the mylist argument? I have configured program_options as:
      namespace po = boost::program_options;
      po::options_description opts("blah");
      opts.add_options()
          ("mylist", po::value<std::vector<std::string> >()->multitoken(), "description");
      po::variables_map vm;
      po::store(po::parse_command_line(argc, argv, opts), vm);
      po::notify(vm);
    When I check the value of the mylist argument, I see one value: a,b,c. I'd like to see three distinct values, split on the comma. This works fine if I specify the command line as:
      > prog --mylist=a b c
    or
      > prog --mylist=a --mylist=b --mylist=c
    Is there a way to configure program_options so that it sees a,b,c as three values that should each be inserted into the vector, rather than one? I am using boost 1.41 and g++ 4.5.0 20100520, and have enabled c++0x experimental extensions.

  • SQL Server query slow from PHP, but fast from SQL Management Studio - why?

    - by Ray
    I have a fast-running query (sub 1 sec) when I execute it in SQL Server Management Studio, but when I run the exact same query from PHP (on the same db instance) using FreeTDS v8 and mssql_query(), it takes much longer (70+ seconds). The tables I'm hitting have an index on a date field that I'm using in the WHERE clause. Could it be that PHP's mssql functions aren't utilizing the index? I have also tried putting the query inside a stored procedure and executing the SP from PHP - the same time difference occurs. I have also tried adding a WITH ( INDEX( .. ) ) hint on the table that has the date index, but no luck either. Here's the query:
      SELECT 1 History, h.CUSTNMBR CustNmbr,
             CONVERT(VARCHAR(10), h.ORDRDATE, 120) OrdDate,
             h.SOPNUMBE OrdNmbr, h.SUBTOTAL OrdTotal, h.CSTPONBR PONmbr,
             h.SHIPMTHD Shipper, h.VOIDSTTS VoidStatus, h.BACHNUMB BatchNmbr,
             h.MODIFDT ModifDt
      FROM SOP30200 h WITH (INDEX (AK2SOP30200))
      WHERE h.SOPTYPE = 2
        AND h.DOCDATE >= DATEADD(dd, -61, GETDATE())
        AND h.VOIDSTTS = 0
        AND h.MODIFDT = CONVERT(VARCHAR(10), DATEADD(dd, -1*@daysAgo, GETDATE()), 120);

  • How do I install the websocket module for Node.js on a Debian VPS?

    - by Ollie Shaw
    I currently am renting a VPS from Dreamhost which runs Debian. I am still learning the command line on this OS, but fast! I have successfully installed Node.js; now I want to install the websocket module found here: https://github.com/Worlize/WebSocket-Node From the root user, I have run the following command:
      npm install websocket
    The error thrown is:
      [websocket v1.0.7] Native code compile failed!!
      On Windows, native extensions require Visual Studio and Python.
      On Unix, native extensions require Python, make and a C++ compiler.
      Start npm with --websocket:verbose to show compilation output (if any).
    What commands should I issue to install this websocket module and its requirements? Thanks very much! Edit: When I run
      sudo apt-get install gcc make
    I get this message:
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      gcc is already the newest version.
      gcc set to manually installed.
      make is already the newest version.
      0 upgraded, 0 newly installed, 0 to remove and 44 not upgraded.
    And the same error when trying to install WebSocket.

  • Windows Service Installation

    - by Goober
    Scenario: I have a server that has NO Visual Studio installed. It literally has a normal command prompt and nothing else installed yet. We don't want to install anything (except the .NET Framework, which we have already done). We just want to install a bunch of C# Windows services that we have written. So far I have been installing and running the Windows service on my local machine using a "setup and deploy" project that I built into the application, which I could then use to install the service locally. Question: How can I install the service on the server? I imagine it can be done from the command prompt only, but what else, if anything, do I need? And where do I put the files that I want to install BEFORE I install them? I imagine I will have to compile the application on my local machine in Visual Studio, then copy it over to the server, and then run an install utility to install it on the server? Any help would be greatly appreciated.

  • Ruby on Rails Mongrel web server stuck when MySQL service is running

    - by Marcos Buarque
    Hi, I am a Ruby on Rails newbie and already have a problem. I have started the Mongrel web server and it works fine when the MySQL service isn't running. But when MySQL is on, Mongrel hangs and stops serving pages. So far, I have tested the localhost:3000 URL. When MySQL is off, it serves the page. When I click "about application's environment", I get the message (of course) "Can't connect to MySQL server on 'localhost' (10061)". After starting the MySQL service and refreshing, I get no more answers and Mongrel does not serve the webpage. It gets stuck with no answer to the browser. Then I have to stop the web server and restart it. I have installed the mysql2 gem with the command gem install mysql2. I was able to create the _test and _development databases with the command line rake db:create. I have tested with the MySQL root user and blank password, and also with a superuser I created. No success. Here is the server log:
      Started GET "/rails/info/properties" for 127.0.0.1 at Fri Dec 24 17:41:25 -0200 2010
      Mysql2::Error (Can't connect to MySQL server on 'localhost' (10061)):
      Rendered C:/Ruby187/lib/ruby/gems/1.8/gems/actionpack-3.0.3/lib/action_dispatch/middleware/templates/rescues/_trace.erb (1.0ms)
      Rendered C:/Ruby187/lib/ruby/gems/1.8/gems/actionpack-3.0.3/lib/action_dispatch/middleware/templates/rescues/_request_and_response.erb (5.0ms)
      Rendered C:/Ruby187/lib/ruby/gems/1.8/gems/actionpack-3.0.3/lib/action_dispatch/middleware/templates/rescues/diagnostics.erb within rescues/layout (35.0ms)
    I am running on a Windows 7 environment with the firewall down.

  • Getting table schema from a query

    - by Appu
    As per MSDN, SqlDataReader.GetSchemaTable returns column metadata for the query executed. I am wondering whether there is a similar method that will give table metadata for a given query - I mean, which tables are involved and what aliases they have. In my application, I get the query and I need to append the WHERE clause programmatically. Using GetSchemaTable(), I can get the column metadata and the table each column belongs to. But even though the table has an alias, it still returns the real table name. Is there a way to get the alias name for that table? The following code shows getting the column metadata:
      const string connectionString = "your_connection_string";
      string sql = "select c.id as s, c.firstname from contact as c";
      using (SqlConnection connection = new SqlConnection(connectionString))
      using (SqlCommand command = new SqlCommand(sql, connection))
      {
          connection.Open();
          SqlDataReader reader = command.ExecuteReader(CommandBehavior.KeyInfo);
          DataTable schema = reader.GetSchemaTable();
          foreach (DataRow row in schema.Rows)
          {
              foreach (DataColumn column in schema.Columns)
              {
                  Console.WriteLine(column.ColumnName + " = " + row[column]);
              }
              Console.WriteLine("----------------------------------------");
          }
          Console.Read();
      }
    This gives me the column details correctly. But when I look at BaseTableName for the column id, it gives contact rather than the alias c. Is there any way to get the table schema and aliases from a query like the above? Any help would be great!

  • Newb Question: scanf() in C

    - by riemannliness
    So I started learning C today, and as an exercise I was told to write a program that asks the user for numbers until they type a 0, then adds the even ones and the odd ones together. Here it is (don't laugh at my bad style):
      #include <stdio.h>

      int main()
      {
          int esum = 0, osum = 0;
          int n, mod;
          puts("Please enter some numbers, 0 to terminate:");
          scanf("%d", &n);
          while (n != 0) {
              mod = n % 2;
              switch (mod) {
              case 0:
                  esum += n;
                  break;
              case 1:
                  osum += n;
              }
              scanf("%d", &n);
          }
          printf("The sum of evens:%d,\t The sum of odds:%d", esum, osum);
          return 0;
      }
    My question concerns the mechanics of the scanf() function. It seems that when you enter several numbers at once separated by spaces (e.g. 1 22 34 2 8), the scanf() function somehow remembers each distinct number in the line and steps through the while loop for each one. Why/how does this happen? Example interaction within the command prompt:
      Please enter some numbers, 0 to terminate:
      42 8 77 23 11 (enter)
      0 (enter)
      The sum of evens:50, The sum of odds:111
    I'm running the program through the command prompt; it's compiled for win32 platforms with Visual Studio.
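
    The short answer (a general C fact, not specific to this program): scanf does not remember anything itself - stdin is buffered, and each %d conversion consumes just one whitespace-delimited token from the buffer, leaving the rest for the next call. Here is a rough Python analogy of that token-streaming behaviour, assuming input consists only of whitespace-separated integers:

      import sys

      def int_tokens(stream):
          """Yield whitespace-separated ints one at a time, like repeated scanf("%d")."""
          for line in stream:            # a whole line lands in the buffer at once...
              for token in line.split(): # ...but each call consumes only one token
                  yield int(token)

      esum = osum = 0
      for n in int_tokens(sys.stdin):
          if n == 0:
              break
          if n % 2 == 0:
              esum += n
          else:
              osum += n
      print("The sum of evens:%d,\t The sum of odds:%d" % (esum, osum))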

  • How can I mark a file as descended from 2 other files in Mercurial?

    - by Matt Joiner
    I had 2 similar Python scripts that I've since merged into one (it now takes some parameters to vary the behaviour appropriately). Both of the previous files are in the tip of my Mercurial repository. How can I indicate that the new file is a combination of the 2 older files that I intend to remove? Also note that 1 file has been chosen in favour of the other, and some code moved across, so if it's not possible to create a version-controlled file with a new name, then assimilating one file's history into the other will suffice.

  • Regex: match a non-nested code block

    - by Sylvanas Garde
    I am currently writing a small text editor. With this text editor, users are able to create small scripts for a very simple scripting engine. For a better overview I want to highlight code blocks that use the same command, like GoTo(x,y) or Draw(x,y). To achieve this I want to use regular expressions (I am already using them to highlight other things, like variables). Here is my expression (I know it's very ugly):
      /(?<!GoTo|Draw|Example)(^(?:GoTo|Draw|Example)\(.+\)*?$)+(?!GoTo|Draw|Example)/gm
    It matches the following:
      lala GoTo(5656) -> MATCH 1
      sdsd GoTo(sdsd) --comment -> MATCH 2
      GoTo(23329); -> MATCH 3
      Test() GoTo(12) -> MATCH 4
      LALA Draw(23) -> MATCH 5
      Draw(24) -> MATCH 6
      Draw(25) -> MATCH 7
    But what I want to achieve is that the complete "blocks" of the same command are matched. In this case matches 2 & 4 and matches 5 & 6 & 7 should each be one match. Tested with http://regex101.com/; the programming language is VB.NET. Any advice would be very useful. Thanks in advance!
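
    One way to express "a run of consecutive lines that all use the same command" is to capture the command name once and require each following line to repeat it via a backreference. A hedged sketch in Python follows (the \1 backreference idea carries over to .NET regexes, but the exact pattern below is untested against the real scripts and assumes each command sits at the start of its own line):

      import re

      script = """lala GoTo(5656)
      GoTo(sdsd)
      GoTo(23329)
      Draw(23)
      Draw(24)
      Draw(25)
      """

      # Capture the command name once, then let a backreference (\1) demand that
      # each consecutive following line repeats the very same command.
      block = re.compile(
          r"^(GoTo|Draw|Example)\(.*\)[ \t]*"   # first line of the block
          r"(?:\r?\n\1\(.*\)[ \t]*)*",          # zero or more lines, same command
          re.MULTILINE,
      )

      for m in block.finditer(script):
          print("block of %s:" % m.group(1))
          print(m.group(0))
          print("---")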

  • Moving information between databases

    - by williamjones
    I'm on Postgres and have two databases on the same machine, and I'd like to move some data from database Source to database Dest. In database Source:
    - Table User has a primary key
    - Table Comments has a primary key
    - Table UserComments is a join table with two foreign keys, for User and Comments
    Dest looks just like Source in structure, but already has information in its User and Comments tables that needs to be retained. I'm thinking I'll probably have to do this in a few steps:
    1. Dump Source using the Postgres COPY command.
    2. In Dest, add a temporary second_key column to both User and Comments, and a new SecondUserComments join table.
    3. Import the dumped file into Dest using COPY again, with the keys loaded into the second_key columns.
    4. Add rows to UserComments in Dest based on the contents of SecondUserComments, only using the real primary keys this time. Could this be done with a SQL command, or would I need a script?
    5. Delete the SecondUserComments table and remove the second_key columns.
    Does this sound like the best way to do this, or is there a better way I'm overlooking?
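
    An alternative sketch that avoids the temporary columns entirely, written with psycopg2 (an assumption-heavy illustration: the table and column names below are guesses, and serial primary keys are assumed): copy each Source row into Dest, record the old-id to new-id mapping as you go, then remap the join table through those dictionaries.

      import psycopg2

      src = psycopg2.connect(dbname="source")
      dst = psycopg2.connect(dbname="dest")
      s, d = src.cursor(), dst.cursor()

      def copy_rows(table, columns, id_col="id"):
          """Copy all rows of `table` into Dest, returning {old_id: new_id}."""
          mapping = {}
          cols = ", ".join(columns)
          s.execute(f"SELECT {id_col}, {cols} FROM {table}")
          for old_id, *values in s.fetchall():
              d.execute(
                  f"INSERT INTO {table} ({cols}) VALUES "
                  f"({', '.join(['%s'] * len(values))}) RETURNING {id_col}",
                  values,
              )
              mapping[old_id] = d.fetchone()[0]
          return mapping

      user_map = copy_rows("users", ["name", "email"])             # guessed columns
      comment_map = copy_rows("comments", ["body", "created_at"])  # guessed columns

      # Remap the join table through the two dictionaries.
      s.execute("SELECT user_id, comment_id FROM user_comments")
      for user_id, comment_id in s.fetchall():
          d.execute(
              "INSERT INTO user_comments (user_id, comment_id) VALUES (%s, %s)",
              (user_map[user_id], comment_map[comment_id]),
          )

      dst.commit()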

  • Android depth buffer issue: advice for anyone experiencing this problem

    - by Andrew Smith
    I've wasted around 30 hours this week writing and re-writing code, believing that I had misunderstood how the OpenGL depth buffer works. Everything I tried failed. I have now resolved my problem by finding what may be an error in the Android implementation of OpenGL. See this API entry: http://www.opengl.org/sdk/docs/man/xhtml/glClearDepth.xml
      void glClearDepth(GLclampd depth);
      Specifies the depth value used when the depth buffer is cleared. The initial value is 1.
    Android's implementation has two versions of this command:
    - glClearDepthx, which takes an integer value, clamped 0-1
    - glClearDepthf, which takes a floating point value, clamped 0-1
    If you use glClearDepthf(1) then you get the results you would expect. If you use glClearDepthx(1), as I was doing, then you get different results. (Note that 1 is the default value, but calling the command with the argument 1 produces different results than not calling it at all.) Quite what is happening I do not know, but the depth buffer was being cleared to a value different from what I had specified.
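
    A plausible reading of this behaviour, assuming the standard OpenGL ES signature glClearDepthx(GLfixed depth): the x suffix denotes GLfixed, a 16.16 fixed-point type, not a plain integer. Under that reading, glClearDepthx(1) asks for a depth of 1/65536 (almost 0.0), while a depth of 1.0 would be spelled glClearDepthx(65536), i.e. 1 << 16. A tiny sketch of the conversion:

      # 16.16 fixed point: the stored integer is the real value scaled by 2**16.
      def to_fixed(x):
          return int(x * 65536)   # 1.0 -> 65536, the argument that means "depth 1.0"

      def from_fixed(n):
          return n / 65536.0      # 1 -> 0.0000152587890625, effectively depth 0

      print(to_fixed(1.0), from_fixed(1))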

  • Is there a better tool than postcat for viewing postfix mail queue files?

    - by Geekman
    So I got a call early this morning about a client needing to see what email they have waiting to be delivered, sitting in our secondary mail server. Their link to the main server had been down for two days (it still is) and they needed to see their email. So I wrote up a quick Perl script to use mailq in combination with postcat to dump each email for their address into separate files, tar'd it up and sent it off. Horrible code, I know, but it was urgent. My solution works OK in that it at least gives a raw view, but I thought tonight it would be nice if I had a solution where I could provide their email attachments and maybe remove some "garbage" header text as well. Most of the important emails seem to have a PDF or similar attached. I've been looking around, but the only method of viewing queue files I can see is the postcat command, and I really don't want to write my own parser - so I was wondering if any of you have already done so, or know of a better command to use? Here's the code for my current solution:
      #!/usr/bin/perl
      # Pull queue IDs for the client's address out of mailq output.
      $qCmd = "mailq | grep -B 2 \"someemailaddress\@isp\" | cut -d \" \" -f 1";
      @data = split(/\n/, `$qCmd`);
      $i = 0;
      foreach $line (@data) {
          $i++;
          next if ($i % 2 == 0);                  # skip alternating lines of the grep -B output
          next if ($line =~ /\(/ || $line eq ""); # skip status and empty lines
          print "Processing: " . $line . "\n";
          `postcat -q $line > $line.email.txt`;
          $subject = `cat $line.email.txt | grep "Subject:"`;
          #print "SUB" . $subject;
          #`cat $line.email.txt > \"$subject.$line.email.txt\"`;
      }
    Any advice appreciated.
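
    In the meantime, a rough Python sketch of the attachment half (assumptions flagged in the comments: it operates on the <queue-id>.email.txt dumps the script above produces, and it guesses where postcat's envelope preamble ends, so verify the heuristic against your Postfix version):

      import email
      import os
      import re
      import sys

      def extract_attachments(dump_path, out_dir="attachments"):
          with open(dump_path) as f:
              text = f.read()
          # Heuristic: skip postcat's envelope/record preamble and start at the
          # first line that looks like a standard RFC 822 header.
          m = re.search(r"^(Received|Return-Path|From):", text, re.MULTILINE)
          if not m:
              print("No message headers found in", dump_path)
              return
          msg = email.message_from_string(text[m.start():])
          os.makedirs(out_dir, exist_ok=True)
          for part in msg.walk():
              name = part.get_filename()
              if not name:
                  continue  # skip non-attachment parts
              payload = part.get_payload(decode=True) or b""
              with open(os.path.join(out_dir, os.path.basename(name)), "wb") as out:
                  out.write(payload)
              print("Saved", name)

      if __name__ == "__main__":
          for path in sys.argv[1:]:
              extract_attachments(path)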

  • Gacutil.exe successfully adds assembly, but assembly missing from GAC. Why?

    - by Ben McCormack
    I'm running GacUtil.exe from within Visual Studio Command Prompt 2010 to register a dll (CatalogPromotion.dll) to the GAC. After running the utility, it says "Assembly successfully added to the cache", and running gacutil /l CatalogPromotionDll shows that the GAC contains the assembly, but I can't see the assembly when I navigate to C:\WINDOWS\assembly from Windows Explorer. Why can't I see the assembly in C:\WINDOWS\assembly from Windows Explorer when I can see it using gacutil.exe? Background: Here's what I typed into the command prompt for VS Tools:
      C:\_Dev Projects\VS Projects\bmccormack\CatalogPromotion\CatalogPromotionDll\bin\Debug>gacutil /i CatalogPromotionDll.dll
      Microsoft (R) .NET Global Assembly Cache Utility.  Version 4.0.30319.1
      Copyright (c) Microsoft Corporation.  All rights reserved.

      Assembly successfully added to the cache

      C:\_Dev Projects\VS Projects\bmccormack\CatalogPromotion\CatalogPromotionDll\bin\Debug>gacutil /l CatalogPromotionDll
      Microsoft (R) .NET Global Assembly Cache Utility.  Version 4.0.30319.1
      Copyright (c) Microsoft Corporation.  All rights reserved.

      The Global Assembly Cache contains the following assemblies:
        CatalogPromotionDll, Version=1.0.0.0, Culture=neutral, PublicKeyToken=9188a175f199de4a, processorArchitecture=MSIL

      Number of items = 1
    However, the assembly doesn't show up in C:\WINDOWS\assembly.

  • What does it mean when git pull causes a conflict but git pull --rebase doesn't?

    - by Jason Baker
    I'm pulling from a repository that only I have access to. As far as I know, I've only pushed to it from one repository. A couple of times, I've pulled from it and gotten this:
      To [email protected]:tsched_dev.git
       ! [rejected]        master -> master (non-fast-forward)
      error: failed to push some refs to '[email protected]:tsched_dev.git'
      To prevent you from losing history, non-fast-forward updates were rejected
      Merge the remote changes before pushing again.  See the 'Note about
      fast-forwards' section of 'git push --help' for details.
    Generally, that just means that I have to do a git pull (although all the changes should be fast-forwardable). When I do a git pull, I get conflicts. If I do a git pull --rebase, it works fine. What am I doing wrong?

  • What is a fast way to set debugging code at a given line in a function?

    - by Josh O'Brien
    Preamble: R's trace() is a powerful debugging tool, allowing users to "insert debugging code at chosen places in any function". Unfortunately, using it from the command line can be fairly laborious. As an artificial example, let's say I want to insert debugging code that will report the between-tick interval calculated by pretty.default(). I'd like to insert the code immediately after the value of delta is calculated, about four lines up from the bottom of the function definition. (Type pretty.default to see where I mean.) To indicate that line, I need to find which step in the code it corresponds to. The answer turns out to be step list(c(12, 3, 3)), which I zero in on by running through the following steps:
      as.list(body(pretty.default))
      as.list(as.list(body(pretty.default))[[12]])
      as.list(as.list(as.list(body(pretty.default))[[12]])[[3]])
      as.list(as.list(as.list(body(pretty.default))[[12]])[[3]])[[3]]
    I can then insert debugging code like this:
      trace(what = 'pretty.default',
            tracer = quote(cat("\nThe value of delta is: ", delta, "\n\n")),
            at = list(c(12, 3, 3)))
      ## Try it
      a <- pretty(c(1, 7843))
      b <- pretty(c(2, 23))
      ## Clean up
      untrace('pretty.default')
    Questions: So here are my questions: Is there a way to print out a function (or a parsed version of it) with the lines nicely labeled by the steps to which they belong? Alternatively, is there another easier way, from the command line, to quickly set debugging code for a specific line within a function? Addendum: I used the pretty.default() example because it is reasonably tame, but with real/interesting functions, repeatedly using as.list() quickly gets tiresome and distracting. Here's an example:
      as.list(as.list(as.list(as.list(as.list(as.list(as.list(as.list(as.list(body(
        model.frame.default))[[26]])[[3]])[[2]])[[4]])[[3]])[[4]])[[4]])[[4]])[[3]]

  • What for Visual Studio?! ml, cl, and link executables would suffice

    - by AntonIO
    It says in /library article /9s7c9wdw: "You can start this tool [cl.exe] only from the Visual Studio command prompt. You cannot start it from a system command prompt or from Windows Explorer." The corresponding (v=VS.80) page, geared towards Visual Studio 2005, makes no such mention. Moreover, there is this Q&A. Thing is: why should anybody spend anything on VS? ml is provided free of charge - necessarily so, since it poses no value addition. The combined size of the other two is 895 KB, uncompressed. The GUI is a disservice; I myself have found half a dozen bugs. However, if the above is true, you'd need the IDE. MSFT fanboys, please step up. Background is that I have the 2008 Pro edition. The official Firefox builds use VS 2005, which I have on another system. To me no redundancy is acceptable. That's when I started pondering about boiling down VS and merely copying over the essential binaries, then extended the thought to synthetically updating V$.

  • Overriding content_type for Rails Paperclip plugin

    - by Fotios
    I think I have a bit of a chicken-and-egg problem. I would like to set the content_type of a file uploaded via Paperclip. The problem is that the default content_type is based only on the file extension, but I'd like to base it on something else (the file command). I seem to be able to set the content_type with before_post_process:
      class Upload < ActiveRecord::Base
        has_attached_file :upload
        before_post_process :foo

        def foo
          logger.debug "Changing content_type"
          # This works
          self.upload.instance_write(:content_type, "foobar")
          # This fails because the file does not actually exist yet
          self.upload.instance_write(:content_type, file_type(self.upload.path))
        end

        # Returns the file type based on the file command (assume it works)
        def file_type(path)
          `file -ib '#{path}'`.split(/;/)[0]
        end
      end
    But I cannot base the content type on the file itself, because Paperclip doesn't write the file until after_create, and I can't seem to set the content_type after it has been saved, either with an after_create callback or back in the controller. So I would like to know if I can somehow get access to the actual file object (assume there are no processors doing anything to the original file) before it is saved, so that I can run the file_type command on it. Or is there a way to modify the content_type after the objects have been created?

  • Git-Based Source Control in the Enterprise: Suggested Tools and Practices?

    - by Bob Murphy
    I use git for personal projects and think it's great. It's fast, flexible, powerful, and works great for remote development. But now it's mandated at work and, frankly, we're having problems. Out of the box, git doesn't seem to work well for centralized development in a large (20+ developer) organization with developers of varying abilities and levels of git sophistication - especially compared with other source-control systems like Perforce or Subversion, which are aimed at that kind of environment. (Yes, I know, Linus never intended it for that.) But - for political reasons - we're stuck with git, even if it sucks for what we're trying to do with it. Here are some of the things we're seeing:
    - The GUI tools aren't mature.
    - Using the command-line tools, it's far too easy to screw up a merge and obliterate someone else's changes.
    - It doesn't offer per-user repository permissions beyond global read-only or read-write privileges.
    - If you have permission to ANY part of a repository, you can do that same thing to EVERY part of the repository, so you can't do something like make a small-group tracking branch on the central server that other people can't mess with.
    - Workflows other than "anything goes" or "benevolent dictator" are hard to encourage, let alone enforce.
    - It's not clear whether it's better to use a single big repository (which lets everybody mess with everything) or lots of per-component repositories (which make for headaches trying to synchronize versions).
    - With multiple repositories, it's also not clear how to replicate all the sources someone else has by pulling from the central repository, or to do something like get everything as of 4:30 yesterday afternoon.
    However, I've heard that people are using git successfully in large development organizations. If you're in that situation - or if you generally have tools, tips and tricks for making it easier and more productive to use git in a large organization where some folks are not command-line fans - I'd love to hear what you have to suggest. BTW, I've asked a version of this question already on LinkedIn, and got no real answers but lots of "gosh, I'd love to know that too!"

  • Direct web URL to PayPal transaction

    - by tags2k
    Having implemented PayPal's Website Payments Standard, I'd like to link to the details view of a transaction from my site's back end - just a simple direct web URL to the PayPal side. I don't know why this is tricky, but when I try to get it while logged in to the PayPal system it seems very obfuscated, in this form:
      history.paypal.com/uk/cgi-bin/webscr?cmd=_history-details&info=[looks like some kind of GUID]&ptype=4&history_cache=[huge encoded string]
    I'm guessing it's by design, but it's not very helpful if you want a quick way to jump to a transaction's details. I've tried the https://www.paypal.com/vst/id=1234 form (also with co.uk, as I am UK-based) recommended on a few sites I saw in my search, but I am told that:
      The transaction ID in your link is invalid.
    This happens even when copying the transaction ID directly from PayPal's back-end order listing. Is there a reliable way to link directly to an order/transaction details page in PayPal?

  • How can the Three-Phase Commit Protocol (3PC) guarantee atomicity?

    - by AndiDog
    I'm currently exploring worst-case scenarios of atomic commit protocols like 2PC and 3PC, and am stuck at the point that I can't find out why 3PC can guarantee atomicity. That is, how does it guarantee that if cohort A commits, cohort B also commits? Here's the simplified 3PC from the Wikipedia article. Now let's assume the following case:
    - Two cohorts participate in the transaction (A and B)
    - Both do their work, then vote for commit
    - The coordinator now sends precommit messages...
    - A receives the precommit message, acknowledges, and then goes offline for a long time
    - B doesn't receive the precommit message (whatever the reason might be) and is thus still in the "uncertain" state
    The results:
    - The coordinator aborts the transaction because not all precommit messages were sent and acknowledged successfully
    - A, who is in the precommit state, is still offline, thus times out and commits
    - B aborts in any case: it either stays offline and times out (causing an abort) or comes online and receives the abort command from the coordinator
    And there you have it: one cohort committed, another aborted. The transaction is screwed. So what am I missing here? In my understanding, if the automatic commit on timeout (in the precommit state) were replaced by waiting indefinitely for a coordinator command, that case should work fine.
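
    To make the failure concrete, here is a minimal Python sketch of the scenario above (a toy state machine, not a full 3PC implementation): two cohorts that have voted commit, a coordinator that delivers the precommit to A only, and the per-state timeout rules from the question.

      from enum import Enum

      class State(Enum):
          UNCERTAIN = "uncertain"   # voted commit, no precommit received yet
          PRECOMMIT = "precommit"   # precommit received and acknowledged
          COMMITTED = "committed"
          ABORTED = "aborted"

      class Cohort:
          def __init__(self, name):
              self.name = name
              self.state = State.UNCERTAIN

          def receive_precommit(self):
              self.state = State.PRECOMMIT

          def timeout(self):
              # The 3PC timeout rules described above:
              # - uncertain + timeout -> abort
              # - precommit + timeout -> commit
              if self.state is State.UNCERTAIN:
                  self.state = State.ABORTED
              elif self.state is State.PRECOMMIT:
                  self.state = State.COMMITTED

      a, b = Cohort("A"), Cohort("B")
      a.receive_precommit()   # A gets the precommit, then goes offline;
                              # B never receives it
      a.timeout()             # A times out in precommit -> commits
      b.timeout()             # B times out in uncertain -> aborts
      print(a.name, a.state, "/", b.name, b.state)  # A committed / B aborted: not atomic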

  • What are the uses of svn copy?

    - by nav.jdwdw
    Example:
      $ svn copy foo.txt bar.txt
      A         bar.txt
    When would you use this technique, and why? Will this command (taken from svn's "red book") create a copy of <foo.txt> while preserving its history, to be shared with <bar.txt>? If I'm changing <bar.txt>, what will happen to <foo.txt>? What are the equivalents of this in other modern systems (ClearCase, AccuRev, Perforce)? Clarification: Let me emphasize the point I'm searching for: is this a kind of branching at the file level? What happens if you use it in the same branch, i.e. create a copy of a file and then start changing that new file, all in the same branch? I understand that it is also used for tagging, but what interests me is what to expect when performing <svn copy> at the file level.
