Search Results

Search found 21331 results on 854 pages for 'require once'.


  • Excel validation range limits

    - by richardtallent
    When Excel saves a file, it attempts to combine identical Validation settings into a single rule with multiple ranges. This creates one of three issues, depending on the file type you choose to save:
    1. When saving as a standard Excel file (Office 2000 BIFF), a maximum of 1024 non-contiguous ranges can share the same validation setting.
    2. When saving as a SpreadsheetML (Office 2002/2003 XML) file, you are limited to the number of non-contiguous ranges that can be represented, comma-delimited in R1C1 format, in 1024 characters.
    3. When saving as an Office Open XML (Office 2007 *.xlsx) file, a maximum of 511 non-contiguous ranges can share the same validation setting. (I don't have Office 2007; I'm using the file converter for Office 2003.)
    Once you bust any of these limits, the remaining ranges with the same Validation settings have their Validation settings wiped. For (1) and (3), Excel warns you that it can't save all of the formatting, but for (2) it does not.

    Read the article

  • I don't understand how Westpac Payway API and NET work

    - by spirytus
    Been googling all day, reading numerous PDFs, and still getting confused by the concepts of sending data to the Payway system from Westpac (a bank in Australia). They offer access via an API but also via what they call NET. The way I understand it, when a client wants to pay on my website, in the case of NET the client gets to a page (hosted by the bank or hosted by me) where they are provided with a form to enter credit card details. This form is then submitted via a normal POST call to Payway's specific https address. It is processed, and the browser returns to the URL I specified as one of the parameters sent in a hidden field. In the case of the API the story is similar, except that the user fills in the form and the data is sent to my backend (not Payway's). My backend then calls the Payway API with the data provided and, once the answer is received, returns a confirmation page to the client. Is my understanding right? Please explain, as I have a feeling I am missing something basic here.
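    If that reading is right, the API-style round trip can be sketched as below. This is a minimal, hypothetical Python sketch: the endpoint URL, field names, and response handling are assumptions for illustration, not Payway's documented API.

        import requests  # assumes the 'requests' library is installed

        def charge_card(card_number, expiry, amount_cents):
            # Hypothetical server-to-server call; the URL and field names are
            # placeholders, not Payway's real interface. Check the Payway PDFs.
            response = requests.post(
                "https://example-gateway.example.com/api/transaction",
                data={
                    "card_number": card_number,
                    "expiry": expiry,
                    "amount": amount_cents,
                },
                timeout=30,
            )
            return response.text  # parse the gateway's approve/decline answer here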

    Read the article

  • Dynamically determining table name given field name in SQL server

    - by Salman A
    Strange situation: I am trying to remove some hard coding from my code. There is a situation where I have a field, let's say "CityID", and using this information, I want to find out which table contains a primary key called CityID. Logically, you'd say that it's probably a table called "City" but it's not... that table is called "Cities". There are some other inconsistencies in database naming, hence I can never be sure that removing the string "ID" and finding the plural will be sufficient. Note: once I figure out that CityID refers to a table called Cities, I will perform a join to replace CityID with the city name on the fly. I would also appreciate it if someone could tell me how to find the first varchar field in a table given its name.
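    Both lookups can be done against SQL Server's INFORMATION_SCHEMA views. A sketch, with the key-column filter left as an assumption (as written it finds tables containing the column, not strictly primary keys):

        -- tables that contain a column named CityID
        SELECT TABLE_NAME
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE COLUMN_NAME = 'CityID';

        -- first varchar column of a given table, by ordinal position
        SELECT TOP 1 COLUMN_NAME
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = 'Cities' AND DATA_TYPE = 'varchar'
        ORDER BY ORDINAL_POSITION;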

    Read the article

  • WCF client proxy initialization

    - by 123Developer
    I am consuming a WCF service and created its proxy using the VS 2008 service reference. I am looking for the best pattern for calling the WCF service methods:
    1. Should I create the client proxy instance every time I call a service method and close the client as soon as I am done with it? When I profiled my client application, I could see that it takes a lot of time to get the channel while initializing the proxy client.
    2. Should I use a singleton pattern for the client proxy, so that I use only one instance and get rid of the re-initialization overhead? Is there any hidden problem with this approach?
    I am using .NET Framework 3.5 SP1 and basicHttpBinding with little customization.
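    One common middle ground, sketched here against a hypothetical IMyService contract and endpoint name, is to cache the expensive ChannelFactory once and create cheap per-call channels from it:

        using System.ServiceModel;

        [ServiceContract]
        public interface IMyService
        {
            [OperationContract]
            void DoWork(); // hypothetical operation
        }

        public static class MyServiceClient
        {
            // Building the factory is the expensive part; do it once.
            private static readonly ChannelFactory<IMyService> Factory =
                new ChannelFactory<IMyService>("BasicHttpBinding_IMyService");

            public static void Call()
            {
                IMyService channel = Factory.CreateChannel(); // cheap per call
                try
                {
                    channel.DoWork();
                    ((IClientChannel)channel).Close();
                }
                catch
                {
                    ((IClientChannel)channel).Abort(); // never Close a faulted channel
                    throw;
                }
            }
        }

    This also sidesteps the main hidden problem with a singleton proxy: once its channel faults, every subsequent call fails until the proxy is recreated.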

    Read the article

  • sending control+c (SIGINT) to NSPIPE in objective-c

    - by Ron
    Hello, I am trying to terminate an openvpn task, spawned via NSTask. My question: should I send Ctrl+C (SIGINT) to the input NSPipe for my NSTask?

        inputPipe = [NSPipe pipe];
        taskInput = [inputPipe fileHandleForWriting];
        NSString *dataString = @"\x03"; // 0x03 is the Ctrl+C control character; @"\cC" is not a valid escape
        [taskInput writeData:[dataString dataUsingEncoding:[NSString defaultCStringEncoding]]];

    Alternatively, I was thinking about using kill(pid, SIGINT); but it would be much more complicated, since the process ID has to be determined via a detour ([task processIdentifier] does not help here). The original NSTask calls: /bin/bash -c sudo -S | mypassword .... That's not nice, I know, but it is only called once and the sudo password has been entered in that case already. Thanks for any suggestions/opinions/etc. Ron
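    One hedged sketch of the kill() detour, assuming the process can be found by name with pgrep (and that signalling it is permitted from this process):

        #include <signal.h>

        // Ask pgrep for openvpn's pid, then send it SIGINT directly.
        NSTask *pgrep = [[NSTask alloc] init];
        [pgrep setLaunchPath:@"/usr/bin/pgrep"];
        [pgrep setArguments:[NSArray arrayWithObject:@"openvpn"]];
        NSPipe *outPipe = [NSPipe pipe];
        [pgrep setStandardOutput:outPipe];
        [pgrep launch];
        [pgrep waitUntilExit];

        NSData *pidData = [[outPipe fileHandleForReading] readDataToEndOfFile];
        NSString *pidString = [[[NSString alloc] initWithData:pidData
                                                     encoding:NSUTF8StringEncoding] autorelease];
        pid_t pid = (pid_t)[pidString intValue];
        if (pid > 0) {
            kill(pid, SIGINT); // may fail if openvpn runs as root and we do not
        }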

    Read the article

  • How to skip "Loose Object" popup when running 'git gui'

    - by Michael Donohue
    When I run 'git gui' I get a popup that says This repository currently has approximately 1500 loose objects. It then suggests compressing the database. I've done this before, and it reduces the loose objects to about 250, but that doesn't suppress the popup. Compressing again doesn't change the number of loose objects. Our current workflow requires significant use of 'rebase' as we are transitioning from Perforce, and Perforce is still the canonical SCM. Once Git is the canonical SCM, we will do regular merges, and the loose objects problem should be greatly mitigated. In the meantime, I'd really like to make this 'helpful' popup go away.
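    Two hedged possibilities, depending on the git-gui version in use (gui.gcwarning is reported to control exactly this prompt, but verify it exists in your build):

        # reported to suppress git-gui's loose-object prompt
        git config --global gui.gcwarning false

        # or repack thoroughly, which also drives the loose-object count down
        git gc --aggressive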

    Read the article

  • Scraping paginated items from a website using scrapy

    - by Mridang Agarwalla
    I'm using scrapy to scrape items from a site, and I'm unable to implement this scraping pattern. The site I'm trying to scrape is a forum, and I scrape the site once a day. Each page has a table containing posts. New posts are added to the top of the table, and as more and more posts are posted to the site, the older posts go further into the pages due to pagination. This is a very simple scenario, and we will assume that the order of the posts never changes. I would like to scrape this site and collect all the "new" records until the last scraped post from yesterday is encountered. I have configured my spider to paginate endlessly, and when it encounters yesterday's last scraped post, it should stop. How can I implement this? (My Scrapy installation works with my Django installation using django-dynamic-scraper.)
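    Setups differ (especially through django-dynamic-scraper), but as a plain-Scrapy sketch of the stop condition, with hypothetical selectors and a last_scraped_id persisted from yesterday's run:

        import scrapy
        from scrapy.exceptions import CloseSpider

        class ForumSpider(scrapy.Spider):
            name = "forum"
            start_urls = ["http://example.com/forum?page=1"]  # placeholder
            last_scraped_id = "12345"  # loaded from storage in a real run

            def parse(self, response):
                for row in response.css("table tr.post"):  # hypothetical selector
                    post_id = row.attrib.get("id")
                    if post_id == self.last_scraped_id:
                        # yesterday's newest post reached: stop paginating
                        raise CloseSpider("reached last scraped post")
                    yield {"id": post_id, "title": row.css("a::text").get()}
                next_page = response.css("a.next::attr(href)").get()  # hypothetical
                if next_page:
                    yield response.follow(next_page, callback=self.parse)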

    Read the article

  • Customised email alerts through MailChimp API

    - by user1293351
    I am building a site that runs an automated process every 30 minutes to match up new flights with their respective users. Once this process is completed, I want to email the flight details out to each respective user. However, the flight info will be different for every single user, with 0-300+ potential emails. Is this something that the MailChimp API will allow or do? I found this page http://apidocs.mailchimp.com/api/how-to/transactional-campaigns.php which I am not sure applies to me. Is the STS more suited to this? http://apidocs.mailchimp.com/sts/1.0/ Thanks Alex

    Read the article

  • How can I have MySQL write outfiles as a different user?

    - by David Locke
    I'm working with a MySQL query that writes into an outfile. I run this query once every day or two and so I want to be able to remove the outfile without having to resort to su or sudo. The only way I can think of making that happen is to have the outfile written as owned by someone other than the mysql user. Is this possible? Edit: I am not redirecting output to a file, I am using the INTO OUTFILE part of a select query to output to a file. If it helps: mysql --version mysql Ver 14.12 Distrib 5.0.32, for pc-linux-gnu (x86_64) using readline 5.2
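    For contrast, a sketch of why the ownership differs (paths and query are placeholders): INTO OUTFILE is written by the mysqld server process, so the file belongs to the mysql user, while redirecting the client's output creates a file owned by whoever ran the client:

        # written by the server process, hence owned by the mysql user
        mysql -e "SELECT * FROM t INTO OUTFILE '/tmp/report.csv'"

        # written by the client process, hence owned by you
        # (but without OUTFILE's field/line formatting options)
        mysql --batch -e "SELECT * FROM t" > /tmp/report.tsv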

    Read the article

  • How to convert JavaScript dictionary into Python syntax

    - by Sputnix
    When writing out a JavaScript dictionary from inside a JavaScript-enabled application (such as Adobe's) into an external .jsx file (or any other .txt file), the content of the resulting file looks like: ({one:"1", two:"2"}) (Note that the dictionary keys are written as if they were variable names, which is not true.) The next step is to read this .jsx file with Python. I need to find a way to convert ({one:"1", two:"2"}) into Python dictionary syntax such as: {'one':"1", 'two':"2"} It has already been suggested that instead of using JavaScript's built-in dict.toSource() it would make more sense to use JSON, which would write the dictionary content in a syntax similar to Python's. But unfortunately using JSON is not an option for me. I need to find a way to convert ({one:"1", two:"2"}) into {'one':"1", 'two':"2"} using Python alone. Any suggestions on how to achieve this? Once again, the problem is mostly the dictionary key syntax, which to Python looks like variable names instead of string dictionary keys: one vs "one"
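    A pure-Python sketch of that conversion: quote the bare keys with a regular expression, then parse the result safely with ast.literal_eval. It assumes flat dictionaries whose keys are simple identifiers and whose string values do not themselves contain text shaped like ", name:".

        import ast
        import re

        raw = '({one:"1", two:"2"})'             # content read from the .jsx file

        s = raw.strip().lstrip('(').rstrip(')')  # drop the surrounding parentheses
        # quote bare identifiers used as keys: {one: -> {"one":
        s = re.sub(r'([{,]\s*)([A-Za-z_]\w*)\s*:', r'\1"\2":', s)

        d = ast.literal_eval(s)                  # safe, unlike eval()
        print(d)                                 # {'one': '1', 'two': '2'}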

    Read the article

  • Running pl/sql in Korn Shell (AIX)

    - by learner135
    I have a file to execute in ksh, written by someone else. It has a set of commands to execute in sqlplus. It starts with sqlplus -s $UP <<- END followed by a set of DDL commands such as create, drop, etc. When I execute the file in the shell, I get an error on the starting line quoted above. I understand that "-s" starts sqlplus in silent mode and that $UP is the connection string with username/password, but I couldn't make heads or tails of the "<<- END" part (many sites from Google say input redirection is "<<", not "<<-"). So I presumed the error must be in that part and removed it from the file. Now it reads sqlplus -s $UP, but once I execute the file, it waits for input from the shell instead of reading the rest of the lines from the file. How would I make sqlplus execute the DDL commands in the rest of the file? Thanks in advance.
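    For reference, <<- is legitimate here-document syntax: it behaves like << but strips leading tab characters from the body and from the closing delimiter, so both can be indented. A hedged sketch with placeholder DDL:

        sqlplus -s "$UP" <<- END
                whenever sqlerror exit failure
                drop table old_stuff;
                create table new_stuff (id number);
        END

    Note that the indented lines above must be indented with literal tabs (not spaces) for <<- to strip them; with spaces, the shell never finds the END delimiter and keeps waiting for input.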

    Read the article

  • "Don't Allow" in LocationManager keeps logging errors

    - by stephanie.moreau
    I have an app that checks for location. It asks the user for permission to use location, and if the user says no in the menu, there is an issue when I load the map view. Once I select the mapView it asks for the user's location again. If the user says no again, my console keeps displaying errors/warnings as well as my NSLog from the didFailWithError: of my location manager class. Is there a way of stopping locationManager:didFailWithError: if the user has already said no? I don't think Apple would accept my app if the log file gets filled up by the LocationManager. Here is an example of what gets repeated in the console: ERROR,Time,290362745.002,Function,"void CLClientHandleDaemonDataRegistration(__CLClient*, const CLDaemonCommToClientRegistration*, const __CFDictionary*)",server did not accept client registration 1 WARNING,Time,290362745.005,Function,"void CLClientHandleDaemonInvalidation(__CFMessagePort*, void*)",client 1035.0 has been disconnected from daemon 2010-03-15 12:19:05.002 SAQ[1035:207] LocationManager Error Denied by user
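    A hedged guard for that era's CoreLocation API (whether locationServicesEnabled is a class method or an instance property depends on the SDK version):

        // before starting updates: respect an earlier "Don't Allow"
        if ([CLLocationManager locationServicesEnabled]) {
            [locationManager startUpdatingLocation];
        }

        // in the delegate: stop retrying after an explicit denial
        - (void)locationManager:(CLLocationManager *)manager
               didFailWithError:(NSError *)error {
            if ([error code] == kCLErrorDenied) {
                [manager stopUpdatingLocation];
                manager.delegate = nil; // no further callbacks
            }
        }

    If the map view itself triggers the prompt, its showsUserLocation flag may also need to stay NO after a denial.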

    Read the article

  • BizTalk: Sample: Context routing and Throttling with orchestration

    - by Leonid Ganeline
    The sample demonstrates using an orchestration for throttling, together with context routing. Usually throttling is implemented at the host level (in BizTalk 2010 we can also use host instance level throttling). Here, throttling is demonstrated with an orchestration convoy that slows down the message flow from some customers. The sample implements a sort of quality-of-service agreement layer for different kinds of customers. The sample also demonstrates context routing between orchestrations. It has several advantages over content routing: for example, we don't have to create a property schema and promote properties on the schemas, and we don't have to change the message content to change the routing.

    Use case: The BizTalk application has a main processing orchestration that processes all input messages. The application usually works as an OLTP application: input messages come in random order without peaks, a typical scenario for on-line users. But sometimes big batch payloads come in. These batches overload the processing orchestrations. All processes activated by on-line users after the payload go to the same queue and are processed only after the payload. As a result, on-line users can see significant delays in processing, minutes or hours depending on the batch size.

    Requirements: On-line users' processing should work without delays. Big batches cannot disturb on-line users. There should be a higher priority for on-line users and a lower priority for batches.

    Design: The decision is to divide the message flow into two branches, one for on-line users and a second for batches. The batch branch feeds the processing line with low priority, and the on-line users' branch with high priority. All messages arrive through a high-speed receive port. The BTS.ReceivePortName context property is used for routing. The Router orchestration separates the messages sent by on-line users from the batch messages. But the Router does not use the BizTalk-provided value of this property; the Router sets this value itself, using the content of the messages to decide whether a message is from on-line users or from batches. The BTS.ReceivePortName context property is changed accordingly; its value works as a recipient address, the "To" address for the next recipient orchestrations. Those next orchestrations are the BatchBottleneck and MainProcess orchestrations. Messages with context equal to "ToBatch" are filtered up by the BatchBottleneck orchestration. It is a uniform sequential convoy orchestration, and it throttles the message flow, delaying message delivery to the MainProcess orchestration. The BatchBottleneck orchestration changes the message context to "ToProcess" and sends the messages one after another with a small delay in between. The delay can be configured in the BizTalk config file as:

        <appSettings>
            <add key="GLD_Tests_TwoWayRouting_BatchBottleneck_DelayMillisec" value="100"/>
        </appSettings>

    Of course, messages with context equal to "ToProcess" are filtered up by the MainProcess orchestration.
    NOTES:
    Filters with string values: in orchestrations (the first Receive shape in the orchestration) use string values WITH quotes; in send ports use string values WITHOUT quotes. Filters on send ports are dynamic; we can change them at run time. Filters on orchestrations are static; we can change them only at design time.
    To check the existence of a promoted property inside an orchestration, use an Expression shape with a construction like this: if (BTS.ReceivePortName exists myMessage) { ...; } It is not possible in a Message Assignment shape, because using the "if" statement inside a Message Assignment is prohibited.
    Several predefined context properties behave in specific ways. Take MessageTracking.OriginatingMessage or XMLNORM.DocumentSpecName: they require that internal rules be applied to the format or usage of these properties. MessageTracking.* parameters require tracking to be in use, and you can get unexpected run-time errors in some cases. My recommendation is: use a very limited set of the predefined context properties.
    To "attach" a new promoted property to a message, we have to use a correlation; the correlation type should include this property. [Here is a good explanation by Saravana]
    The sample code is here [sorry, temporary troubles with CodePlex].
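    For illustration, the Router's context write looks roughly like this in a Message Assignment shape (message names are hypothetical, XLANG/s syntax):

        // inside a Construct Message / Message Assignment shape
        msgOut = msgIn;
        msgOut(BTS.ReceivePortName) = "ToBatch";
        // per the note above, a correlation set including BTS.ReceivePortName
        // must be initialized on the send so the new value is promoted for routing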

    Read the article

  • Oracle NoSQL Database Exceeds 1 Million Mixed YCSB Ops/Sec

    - by Charles Lamb
    We ran a set of YCSB performance tests on Oracle NoSQL Database using SSD cards and Intel Xeon E5-2690 CPUs with the goal of achieving 1M mixed ops/sec on a 95% read / 5% update workload. We used the standard YCSB parameters: 13 byte keys and 1KB data size (1,102 bytes after serialization). The maximum database size was 2 billion records, or approximately 2 TB of data. We sized the shards to ensure that this was not an "in-memory" test (i.e. the data portion of the B-Trees did not fit into memory). All updates were durable and used the "simple majority" replica ack policy, effectively 'committing to the network'. All read operations used the Consistency.NONE_REQUIRED parameter, allowing reads to be performed on any replica. In the past we have achieved 100K ops/sec using SSD cards on a single shard cluster (replication factor 3), so for this test we used 10 shards on 15 Storage Nodes, with each SN carrying 2 Rep Nodes and each RN assigned to its own SSD card. After correcting a scaling problem in YCSB, we blew past the 1M ops/sec mark with 8 shards and proceeded to hit 1.2M ops/sec with 10 shards.

    Hardware Configuration

    We used 15 servers, each configured with two 335 GB SSD cards. We did not have homogeneous CPUs across all 15 servers available to us, so 12 of the 15 were Xeon E5-2690 (2.9 GHz, 2 sockets, 32 threads, 193 GB RAM) and the other 3 were Xeon E5-2680 (2.7 GHz, 2 sockets, 32 threads, 193 GB RAM). There might have been some upside in having all 15 machines configured with the faster CPU, but since CPU was not the limiting factor we don't believe the improvement would be significant. The client machines were Xeon X5670 (2.93 GHz, 2 sockets, 24 threads, 96 GB RAM). Although the clients had 96 GB of RAM, neither the NoSQL Database nor the YCSB clients require anywhere near that amount of memory, and the test could just as easily have been run with much less. Networking was all 10GigE.

    YCSB Scaling Problem

    We made three modifications to the YCSB benchmark. The first was to allow the test to accommodate more than 2 billion records (effectively ints vs. longs). To keep the key size constant, we changed the code to use base 32 for the user ids. The second change involved the way we run the YCSB client, in order to make the test itself horizontally scalable. The basic problem has to do with the way the YCSB test creates its Zipfian distribution of keys, which is intended to model "real" loads by generating clusters of key collisions. Unfortunately, the percentage of collisions on the most contentious keys remains the same even as the number of keys in the database increases. As we scale up the load, the number of collisions on those keys increases as well, eventually exceeding the capacity of the single server used for a given key. This is not a workload that is realistic or amenable to horizontal scaling. YCSB does provide alternate key distribution algorithms, so this is not a shortcoming of YCSB in general. We decided that a better model would be for the key collisions to be limited to a given YCSB client process. That way, as additional YCSB client processes (i.e. additional load) are added, they each maintain the same number of collisions they encounter themselves, but do not increase the number of collisions on a single key in the entire store. We added client processes proportionally to the number of records in the database (and therefore the number of shards).
    This change to the use of YCSB better models a use case where new groups of users are likely to access either just their own entries, or entries within their own subgroups, rather than all users showing the same interest in a single global collection of keys. If an application finds every user having the same likelihood of wanting to modify a single global key, that application has no real hope of getting horizontal scaling.

    Finally, we used read/modify/write (also known as "Compare And Set") style updates during the mixed phase. This uses versioned operations to make sure that no updates are lost. This mode of operation provides better application behavior than the way we have typically run YCSB in the past, and is only practical at scale because we eliminated the shared key collision hotspots. It is also a more realistic testing scenario. To reiterate, all updates used a simple majority replica ack policy, making them durable.

    Scalability Results

    In the table below, the "KVS Size" column is the number of records with the number of shards and the replication factor. Hence, the first row indicates 400m total records in the NoSQL Database (KV Store), 2 shards, and a replication factor of 3. The "Clients" column indicates the number of YCSB client processes. "Threads" is the number of threads per process, with the total number of threads in parentheses; hence, 90 threads per YCSB process for a total of 360 threads. The client processes were distributed across 10 client machines.

    Shards | KVS Size (records) | Clients | Threads   | Overall Throughput (ops/sec) | Read Latency av/95%/99% (ms) | Write Latency av/95%/99% (ms)
    2      | 400m (2x3)         | 4       | 90 (360)  | 302,152                      | 0.76/1/3                     | 3.08/8/35
    4      | 800m (4x3)         | 8       | 90 (720)  | 558,569                      | 0.79/1/4                     | 3.82/16/45
    8      | 1600m (8x3)        | 16      | 90 (1440) | 1,028,868                    | 0.85/2/5                     | 4.29/21/51
    10     | 2000m (10x3)       | 20      | 90 (1800) | 1,244,550                    | 0.88/2/6                     | 4.47/23/53
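    To illustrate the second change conceptually (this is not YCSB code, and the numbers are invented): each client draws its Zipfian-skewed offsets inside its own disjoint key range, so one client's hot keys can never collide with another client's traffic.

        import numpy as np

        RECORDS_PER_CLIENT = 100_000_000  # invented sizing

        def next_key(client_id, rng):
            # Skewed offset: small offsets (the "hot" keys) dominate.
            offset = min(rng.zipf(1.5), RECORDS_PER_CLIENT) - 1
            # Disjoint per-client range: collisions stay within one client.
            return client_id * RECORDS_PER_CLIENT + offset

        rng = np.random.default_rng(0)
        print(next_key(client_id=3, rng=rng))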

    Read the article

  • Would Mercurial help me work from 2 PCs?

    - by rikh
    I currently use Perforce for source control, but want to start working on the code from 2 different PCs at the same time (desktop and laptop). The laptop would not be able to access the Perforce server very often, which makes Perforce a poor choice in this setup. Distributed source control tools like Mercurial seem better suited to the task, but I am still not clear whether this would work or not. Does anyone have any experience of using Mercurial to work on 2 machines at once (e.g. desktop in the week, laptop in the evenings and weekends)? Does it help, or is it still a pain in the butt keeping everything in sync and knowing what is going on?
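    A sketch of the usual two-machine Mercurial rhythm, assuming the desktop repository is reachable over ssh (host and path are placeholders):

        # on the desktop, end of the work day
        hg commit -m "work in progress"

        # on the laptop, in the evening
        hg pull ssh://desktop//home/me/repo
        hg update

        # ...hack...
        hg commit -m "evening work"
        hg push ssh://desktop//home/me/repo

    Because commits are local, the laptop can keep committing offline and only sync when the desktop is reachable.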

    Read the article

  • Need help with foreach and XML

    - by danit
    I have the following output (via link) which displays the var_dump of some XML I'm generating: http://bit.ly/aoA3qY At the very bottom of the page you will see some output, generated by this code:

        foreach ($xml->feed as $entry) {
            $title = $entry->title;
            $title2 = $entry->entry->title;
        }
        echo $title;
        echo $title2;

    For some reason $title2 is only output once, even though there are multiple entries. I'm using $xml = simplexml_load_string($data); to create the XML.
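    A hedged sketch of the likely explanation and fix: the echo calls sit outside the loop, so they run once with the last values assigned, and foreach ($xml->feed ...) iterates the single wrapper element rather than its repeating children. Assuming an Atom-style feed (adjust the path if the root element of $data differs):

        $xml = simplexml_load_string($data);

        // iterate the repeating <entry> children, not the single <feed> wrapper;
        // if $xml itself wraps a <feed>, use $xml->feed->entry instead
        foreach ($xml->entry as $entry) {
            echo $entry->title, "\n";  // runs once per entry, inside the loop
        }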

    Read the article

  • Reinstall TeamCity when Tomcat becomes corrupt

    - by dodegaard
    I've got a TeamCity 4 installation where Tomcat has bit the dust with the following error: "The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path". It appears this started happening once the JDK was installed on the server to allow for a compile. The JDK has been removed and the JRE reinstalled, but still no go. My question is: should I reinstall TeamCity completely, or is there a way to simply reinstall Tomcat so I don't hose the configuration? Your help is greatly appreciated.

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale.

    However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members).

    The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance.

    As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state).
    The finest granularity of this recovery is a single partition, and the single service thread can not accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution of course is to mitigate each of those factors but in many cases this may be challenging.

    Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level.

    In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.
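    For reference on the WKA configuration mentioned above, a sketch of the operational override file with placeholder addresses (element layout per the Coherence 3.x operational deployment descriptor):

        <!-- tangosol-coherence-override.xml; addresses are placeholders -->
        <coherence>
          <cluster-config>
            <unicast-listener>
              <well-known-addresses>
                <socket-address id="1">
                  <address>10.0.0.1</address>
                  <port>8088</port>
                </socket-address>
                <socket-address id="2">
                  <address>10.0.0.2</address>
                  <port>8088</port>
                </socket-address>
              </well-known-addresses>
            </unicast-listener>
          </cluster-config>
        </coherence>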

    Read the article

  • Unable to call WMP's controls.play() function in VisualBasic

    - by A.J.
    I have the following code: http://pastebin.com/EgjbzqA2 which is basically just a stripped-down version of http://www.dreamincode.net/forums/topic/57357-mymusic-player/. I want the program to play one file repeatedly; however, this function doesn't work for some reason. The program plays each file once and then stops.

        Private Sub Player3_PlayStateChange(ByVal NewState As Integer) Handles Player3.PlayStateChange
            Static PlayAllowed As Boolean = True ' note: "Static Dim" is not valid syntax
            Select Case CType(NewState, WMPLib.WMPPlayState)
                Case WMPLib.WMPPlayState.wmppsReady
                    If PlayAllowed Then
                        Player3.controls.play()
                    End If
                Case WMPLib.WMPPlayState.wmppsMediaEnded
                    ' Start protection (without it the next track wouldn't play)
                    PlayAllowed = False
                    ' Play track
                    Player3.controls.play()
                    ' End protection
                    PlayAllowed = True
                    updatePlayer()
            End Select
        End Sub
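    If the goal is simply to repeat the current file, a hedged alternative is WMP's built-in loop mode, which avoids calling play() from inside the PlayStateChange handler (a context where the ActiveX control is often unresponsive):

        ' Sketch: enable the player's native repeat mode once, e.g. in Form_Load
        Player3.settings.setMode("loop", True)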

    Read the article

  • I want to use 960 or Blueprint, but I also want to use lots of Padding and Borders, is it a good fit

    - by viatropos
    I started using 960 today and thought it would be really easy. However, trying to translate a site to 960 quickly proved tough, for many reasons. The first is that I can't use any padding, unless of course I add many more divs. Same thing with borders. The question is: if I want to use lots of padding and borders (where padding and borders are either 5px "thin" or 10px "thick" styles), are 960 and Blueprint overkill? It seems pretty easy to create a custom grid, but once I add padding and borders, 99% of the work is making sure the grid doesn't break. I am still going to end up lining everything up to a 960 grid with 12 columns, but I want to have padding and borders included in the width, and it seems that's not easily possible with 960 or Blueprint. What are your thoughts?
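    If the requirement is padding and borders counted inside the declared column width, the CSS box model itself can do that; a hedged sketch (the class name is borrowed from 960's conventions, and browser support depends on the browsers you target):

        /* with border-box sizing, padding and border are included in 'width',
           so a fixed-width grid column keeps its footprint */
        .grid_3 {
            -moz-box-sizing: border-box; /* older Firefox */
            box-sizing: border-box;
            padding: 5px;                /* "thin" padding */
            border: 5px solid #ccc;      /* "thin" border */
        }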

    Read the article

  • Jasper Reports Crosstab Query

    - by Sean McDaid
    I'm using JasperReports/iReport crosstabs to create a matrix of students and results. So, for example, Jim is doing subjects A, B, C and Sally is doing A, C. What I want is something like:

            Subj-A   Subj-B   Subj-C
    Jim     P        M        D
    Sally   D                 D

    But as my SQL orders by name then subject, I get:

            Subj-A   Subj-B   Subj-C   Subj-A   Subj-C
    Jim     P        M        D
    Sally                              D        D

    As you can see, the results are correct but the formatting is woeful. Is there any way I can generate the report to use each name and subject only once, filling in the values from there? This probably isn't clear.

    Read the article

  • How to rebase onto a private branch with conflicts in gerrit/git?

    - by edwardmlyte
    Aim: I want to rebase commit G from "bravo" onto commit F from "alpha". From this:

          G          bravo
         /
      D--E--F        alpha
     /
    A--B--C          mainline

    To this:

             G       bravo
            /
      D--E--F        alpha
     /
    A--B--C          mainline

    "alpha" has been successfully rebased onto the latest mainline work. I cherry-pick "alpha" onto C, and when I cherry-pick "bravo", it comes up with all the merge conflicts. Once I fix those, if I do commit --amend, the commit message has all the information for alpha, whereas I'd expect the information for bravo. So I tried again after hard-resetting to C, doing a pull (as opposed to a cherry-pick) for alpha and then a pull for bravo. I fixed the conflicts and just ran commit. The commit message lists it as a merge and has merge information. Though the commit succeeds, I can't push this to gerrit, as it says I don't have the rights to push merges. When I've read about rebase, it's always just onto mainline, but I want to rebase private branches. Where am I going wrong?
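    A hedged sketch using rebase instead of cherry-pick, which keeps bravo's own commit message (and thus its gerrit Change-Id). "E" below stands for bravo's old base commit from the diagram, however it is actually named in your history:

        # replay only the commits unique to bravo (i.e. G) onto the rebased alpha
        git rebase --onto alpha E bravo

        # the short form often works too, since rebase skips patch-identical commits
        git rebase alpha bravo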

    Read the article

  • jquery fadeout, load, fadein

    - by John
    Hi, I am using jQuery, and what I want to happen is: the div fades out using the fadeOut command, then it loads content from a URL using the load command, then once the content is loaded it fades back in using the fadeIn command. The code I have is: $("#myDiv").fadeOut().load('www.someurl.com').fadeIn() However, this does not work. It kind of flashes, then fades out, then fades in. I think the problem is that the fading is happening before the load is complete. What should I do? Thanks
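    The callback-based version of that chain, sketched with the asker's placeholder URL: each step starts only when the previous one finishes.

        // fade out, then load, then fade back in, strictly in that order
        $("#myDiv").fadeOut(function () {
            $(this).load('www.someurl.com', function () {
                $(this).fadeIn();
            });
        });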

    Read the article

  • Duplicate all rows in sql database table

    - by Andrew Welch
    I have a table which contains house details, called property. I am creating a localised application, and I have a db table called propertylocalised. This table holds duplicates of the data plus a culture column, e.g.:

    key   culture   propertyname
    1     en        helloproperty
    1     fr        bonjourproperty

    At the moment I have all my en culture rows inserted, but I want to duplicate all of those rows and, for every duplicated row, insert fr into culture. I obviously only want to do this once, for the purpose of setting up the localisation. Thanks Andy
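    A hedged sketch of the one-off duplication, assuming the table and columns shown above (square brackets because "key" is a reserved word in SQL Server; adjust the quoting for a different engine):

        -- run once: clone every en row as an fr row
        INSERT INTO propertylocalised ([key], culture, propertyname)
        SELECT [key], 'fr', propertyname
        FROM propertylocalised
        WHERE culture = 'en';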

    Read the article

  • Visual Studio 2008 adding incorrect working folders to TFS Workspace

    - by Bryan Rowe
    I am using Visual Studio 2008 with TFS. I have one workspace set up with one working folder. I map the root source control folder $/ to C:\TFS and get all code. When working on any project under the root, Visual Studio will randomly add incorrectly mapped working folders to my workspace. For example, it might map $/WebProject/ to C:\TFS\WebProject\DataAccess -- where the real files exist at C:\TFS\WebProject. Once it incorrectly adds these working folders, I can no longer open the solution. I am forced to remove the working folders that Visual Studio added and get latest from TFS. Has anyone experienced this? Is there something I can do to avoid running into this?
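    Until the root cause is found, a hedged cleanup sketch with the tf.exe command line (the path is the example above, and option spelling may vary between TFS client versions):

        REM list the current folder mappings for this workspace
        tf workfold

        REM remove the incorrectly added mapping
        tf workfold /unmap C:\TFS\WebProject\DataAccess

        REM then get latest from TFS again
        tf get /all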

    Read the article
