Search Results

Search found 4995 results on 200 pages for 'svn merge reintegrate'.

Page 185/200

  • Using Mercurial in a Large Organization

    - by Kristopher Johnson
    I've been using Mercurial for my own personal projects for a while, and I love it. My employer is considering a switch from CVS to SVN, but I'm wondering whether I should push for Mercurial (or some other DVCS) instead. One wrinkle with Mercurial is that it seems to be designed around the idea of having a single repository per "project". In this organization, there are dozens of different executables, DLLs, and other components in the current CVS repository, hierarchically organized. There are a lot of generic reusable components, but also some customer-specific components, and customer-specific configurations. The current build procedures generally get some set of subtrees out of the CVS repository. If we move from CVS to Mercurial, what is the best way to organize the repository/repositories? Should we have one huge Mercurial repository containing everything? If not, how fine-grained should the smaller repositories be? I think people will find it very annoying if they have to pull and push updates from a lot of different places, but they will also find it annoying if they have to pull/push the entire company codebase. Anybody have experience with this, or advice?
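    For reference, one way to stay per-component without forcing everyone to pull dozens of repositories by hand is a thin top-level repository whose .hgsub file pins the component repositories as Mercurial subrepositories; cloning the top repository then pulls in exactly the pieces it lists. The paths and URLs below are invented for illustration:

        lib/common            = https://hg.example.com/lib/common
        lib/reporting         = https://hg.example.com/lib/reporting
        customers/acme        = https://hg.example.com/customers/acme
        customers/acme/config = https://hg.example.com/customers/acme-config

    Different top-level repositories (one per product or customer) can then list different subsets of components, which roughly mirrors the "get some set of subtrees" build procedure described above.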

    Read the article

  • entity framework navigation property further filter without loading into memory

    - by cellik
    Hi, I've got two entities with a 1-to-N relation between them. Let's say Books and Pages. Book has a navigation property named Pages. Book has BookId as an identifier, and Page has an auto-generated id and a scalar property named PageNo. LazyLoading is set to true. I've generated this using VS2010 & .NET 4.0 and created a database from it. In the partial class of Book, I need a GetPage function like below: public Page GetPage(int PageNumber) { //checking whether it exists etc. is not included for simplicity return Pages.Where(p=>p.PageNo==PageNumber).First(); } This works. However, since the Pages property on Book is an EntityCollection, it has to load all Pages of a book into memory in order to get the one page (this slows down the app when this function is hit for the first time for a given book). I.e., the framework does not merge the queries and run them at once; it loads the Pages into memory and then uses LINQ to Objects to do the second part. To overcome this I've changed the code as follows: public Page GetPage(int PageNumber) { MyContainer container = new MyContainer(); return container.Pages.Where(p=>p.PageNo==PageNumber && p.Book.BookId==BookId).First(); } This works considerably faster; however, it doesn't take into account the pages that have not yet been saved to the db. So both options have their cons. Is there any trick in the framework to overcome this situation? This must be a common scenario where you don't want all of the objects of a navigation property loaded into memory when you don't need them.
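    For what it's worth, in EF 4 an EntityCollection exposes CreateSourceQuery(), which queries the store directly instead of materializing the whole collection; a rough sketch along those lines (only the names taken from the question are real, the rest is assumption, and pages that were added but not yet saved still have to come from the in-memory collection):

        public partial class Book
        {
            public Page GetPage(int pageNumber)
            {
                // Hits the database for just the matching row; does not trigger
                // full lazy loading of Pages.
                Page fromStore = Pages.CreateSourceQuery()
                                      .Where(p => p.PageNo == pageNumber)
                                      .FirstOrDefault();
                if (fromStore != null)
                    return fromStore;

                // Fall back to whatever is tracked locally (e.g. unsaved pages).
                // Note this path may still lazy-load the collection.
                return Pages.FirstOrDefault(p => p.PageNo == pageNumber);
            }
        }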

    Read the article

  • Handling Denormalized Schema with Eclipselink

    - by iamrohitbanga
    Hello All. I have a denormalized table containing employee information. The fields are employee id, name and department name. The primary key is a composite one consisting of all three fields. An employee can belong to multiple departments. I want to read/write the objects in the table using the EclipseLink Dynamic Persistence API (which is in fact a wrapper on top of JPA descriptors etc.). Example data (id, name, department):
    1 e1 dep1
    2 e1 dep2
    3 e2 dep1
    4 e2 dep3
    5 e3 dep1
    5 e3 dep2
    5 e3 dep3
    A normal ReadAllQuery (select query) on the table returns a DynamicEntity corresponding to each row in the table. However, I want to group all entities by emp id and return all the departments the employee belongs to as a list. I can merge the entities after retrieving them, but it would be better if I could use some EclipseLink feature out of the box. One way to do the read is the following. I create two dynamic types corresponding to employee:
    - one having id, name as the primary key
    - one having id, department as the primary key
    I create a OneToManyMapping from the first type to the second one. Then when I query the first type, it does return the departments to which an employee belongs as a list of DynamicEntity of the second type. This satisfies the read scenario. Is there a better way of doing this? Is this inherently supported by EclipseLink or JPA? I cannot get the same dynamic type configuration working for the write scenario. This is because when I write the changes using the writeObject method of UnitOfWork, it generates insert queries which put the following entries in the table:
    id name department
    102 emp_102
    102 st
    102 dep_102
    102 dep_102
    102 dep_102
    instead of:
    id name department
    102 emp_102 st
    102 emp_102 dep_102
    102 emp_102 dep_102
    102 emp_102 dep_102
    Is there any way I can get write to work with this schema using EclipseLink? I want to avoid doing the heavy lifting of merging the rows for such a denormalized schema or generating each row before doing a write. Is there no clean way of doing this using EclipseLink or JPA? Thanks in advance.

    Read the article

  • Is there a free, small-scale, not web-based issue/bug tracking system?

    - by Doc Brown
    I know there were posts here on SO before concerning issue or bug tracking systems, like this one, but the given answers point either to commercial systems or web-based systems, which both seem to be oversized for our needs. What I am looking for is a non-commercial tool for a team of 3 to 4 developers, which can be used on an existing fileserver, without the need of installing additional server software like a C/S database or a web server. Some things I expect from such a system:
    - lets us record bugs (with a priority) and issues / ideas for new features (mostly without a priority)
    - a description of the issue, perhaps some additional remarks
    - short info on who entered the bug/issue entry
    - one or more tags allowing us to group or filter the list
    Any suggestions? EDIT: I should have said that we are using MS Windows clients, Visual Studio development, and Tortoise SVN (the latter works fine without a Subversion server). And yes, I am strict on "no server software", since all server-based solutions I have seen so far seem much too oversized/heavyweight/too-much-effort-to-be-worth-it. In fact, if no one has a better idea, we are going to use a spreadsheet, but I can't believe there are no ready-made, lightweight solutions.

    Read the article

  • Bazaar offline + branches

    - by cheez
    I have a Bazaar repository on Host A with multiple branches. This is my main repository. Until now, I have been doing checkouts on my other machines and committing directly to the main repository. However, now I am consolidating all my work to my laptop and multiple VMs. I need to be working offline regularly. In particular, I need to create/delete/merge branches all while offline. I was thinking of continuing to have the master on Host A, with a clone of the repository on the laptop and each VM doing checkouts of the clone. Then, when I go offline, I could do bzr unbind on the clone and bzr bind when I am back online. This failed as soon as I tried to bzr clone, since bzr clone only clones a branch(!!!!) I need some serious help. If Hg would handle this better please let me know (I need Windows support.) However, at this moment I cannot switch from Bazaar as it is too close to some important deadlines. Thanks in advance!
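    For reference, the layout usually suggested for this case is a Bazaar shared repository on the laptop holding one branch per line of work, with bind/unbind used only on the branch that mirrors Host A. The commands below are stock bzr, but the paths and URLs are invented:

        bzr init-repo ~/work                   # shared repository: branches share storage
        cd ~/work
        bzr branch bzr+ssh://hostA/srv/bzr/project/trunk trunk
        bzr branch trunk experimental          # create/delete/merge branches while offline
        cd trunk
        bzr bind bzr+ssh://hostA/srv/bzr/project/trunk   # back online: commits go to Host A again
        bzr unbind                             # going offline

    The VMs can then check out from the laptop's shared repository instead of from Host A.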

    Read the article

  • Implementation help... Subclass NSManagedObject?

    - by Canada Dev
    I'm working on an app where I have some products that I download in a list. The downloaded products are displayed in a table and each one is shown in a detail view with more information. These same products can be saved as favorites, and for this I am using Core Data. I'd like to be able to re-use a bunch of views for displaying the products, which means the stored object and the downloaded object would have to be of the same kind. Now, how would I best go about implementing the objects? Can I make a class such as this: FavoriteProduct : NSManagedObject // implementation and then subclass Product : FavoriteProduct // implementation ? The Core Data class just doesn't give me everything. What would be the best way to merge these two object classes so I have as little work ahead of me as possible in terms of implementing the different views for each object? Basically, I just want to be able to call the same methods, etc. on the Product objects as I would on the ones that are FavoriteProduct objects, and re-use views for both kinds. There's only a bit of difference between the two (one is of course stored as a favorite and has some extra values such as notes and tags, while the Product one doesn't). Thanks in advance
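    One common way to get view re-use without forcing an inheritance relationship between a plain model object and an NSManagedObject subclass is a shared protocol that both classes adopt, so the views only ever see an id conforming to that protocol. A sketch, with all names invented:

        // Both product types adopt this; views take id<ProductDisplay>.
        @protocol ProductDisplay <NSObject>
        @property (nonatomic, readonly) NSString *name;
        @property (nonatomic, readonly) NSString *details;
        @end

        // Downloaded, in-memory product.
        @interface Product : NSObject <ProductDisplay>
        @end

        // Persisted favorite, managed by Core Data, with favorite-only extras.
        @interface FavoriteProduct : NSManagedObject <ProductDisplay>
        @property (nonatomic, retain) NSString *notes;
        @end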

    Read the article

  • Generate regular expression to match strings from the list A, but not from list B

    - by Vlad
    I have two lists of strings, ListA and ListB. I need to generate a regular expression that will match all strings in ListA and will not match any string in ListB. The strings could contain any combination of characters, numbers and punctuation. If a string appears in ListA it is guaranteed that it will not be in ListB. If a string is not in either of these two lists I don't care what the result of the matching should be. The lists typically contain thousands of strings, and the strings are fairly similar to each other. I know the trivial answer to this question, which is to just generate a regular expression of the form (Str1)|(Str2)|(Str3), where StrN is a string from ListA. But I am looking for a more efficient way to do this. The ideal solution would be some sort of tool that will take two lists and generate a Java regular expression for this. Update 1: By "efficient", I mean generating an expression that is shorter than the trivial solution. The ideal algorithm would generate the shortest possible expression. Here are some examples.
    ListA = { C10, C15, C195 }
    ListB = { Bob, Billy }
    The ideal expression would be /^C1.+$/
    Another example, note the third element of ListB:
    ListA = { C10, C15, C195 }
    ListB = { Bob, Billy, C25 }
    The ideal expression is /^C[^2]{1}.+$/
    The last example:
    ListA = { A, D, E, F, H }
    ListB = { B, C, G, I }
    The ideal expression is the same as the trivial solution, which is /^(A|D|E|F|H)$/
    Also, I am not looking for the ideal solution; anything better than trivial would help. I was thinking along the lines of generating the trivial solution and then trying to merge the common substrings while watching that we don't wander into ListB territory. Update 2: I am not particularly worried about the time it takes to generate the RegEx; anything under 10 minutes on a modern machine is acceptable.
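    As a baseline for comparison, the trivial alternation mentioned above can at least be generated safely by quoting each string, since the inputs may contain regex metacharacters; the class and method names below are made up:

        import java.util.List;
        import java.util.regex.Pattern;
        import java.util.stream.Collectors;

        class TrivialRegexBuilder {
            // Builds ^(\Qs1\E|\Qs2\E|...)$, matching exactly the strings in listA
            // (and therefore nothing from listB, since the lists are disjoint).
            static Pattern build(List<String> listA) {
                String body = listA.stream()
                                   .map(Pattern::quote)            // escape punctuation
                                   .collect(Collectors.joining("|"));
                return Pattern.compile("^(" + body + ")$");
            }
        }

    Anything cleverer (prefix merging, character classes) can then be validated simply by running the candidate pattern over ListB and rejecting it on any match.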

    Read the article

  • How to format dates in Jahia 6 CMS?

    - by dpb
    I am helping a friend of mine put up a site for his business. I've read different posts and sites trying to find the ideal CMS tool, but people have different views of what is best, so I finally just picked one of them at random. So I went for an evaluation of Jahia 6.0-CE. As you've probably guessed by now, I don't have much experience with CMS tools. I just want to set up the CMS, write the templates for the site and let my friend manage the content from there on. So I extracted the sources from SVN and went for a test drive. I managed to create some simple templates to get the hang of things, but now I have an issue with a date format. In my definitions.cnd I declared the field like so: date myDateField (datetimepicker[format='dd.MM.yyyy']) This is formatted in the page and the selector also presents this in the dd.MM.yyyy format when inserting the content. But what about sites in other countries, countries that represent the date as MM.dd.yyyy for example? If I specify the format in the CND, hard-coded, how can I change this later on so that it adapts based on the browser's language? Do I extract the content from the repository and format it by hand in the JSP template based on a Locale, or is there a better way? Thank you.
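    If it does come down to formatting in the JSP template, the standard JSTL route formats per request locale rather than with a hard-coded pattern; a minimal sketch, assuming the date value has already been pulled into a page variable (how it is read from Jahia's repository is not shown here):

        <%@ taglib prefix="fmt" uri="http://java.sun.com/jsp/jstl/fmt" %>
        <%-- Rendered according to the browser's Accept-Language locale --%>
        <fmt:formatDate value="${myDateField}" type="date" dateStyle="medium" />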

    Read the article

  • Newbie - eclipse workflow (PHP development)

    - by engil
    Hi all - this is a bit of a newbie question but I'm hoping I can get some guidance. I've been playing around with Eclipse for a couple of months, yet I'm still not completely comfortable with my setup, and it seems like every time I install it on a new system I end up with different results. What I'm hoping to achieve is (I think) fairly standard. In my environment I'd like SVN (currently using Subclipse), FTP support (currently using the Aptana plugin), debugging (going to use XDebug) and all the usual bells and whistles of development (code completion, refactoring, etc.) My biggest current issue is how to set up my environment to support both a 'development' and a 'production' server. Optimally I would be able to work directly against the dev server (Eclipse on my Vista desktop against the VM Ubuntu dev server) and then push to the production server (shared hosting). I'd prefer to work directly against the dev server (with no local project files, just using the Connections provided by Aptana) but I'm guessing this won't allow for code completion or all the other bells and whistles provided for development. Any thoughts? Kind of an open-ended question, but maybe this could be an opportunity for some of you with a great deal of experience using Eclipse to describe your setups so people like me can get some insight into good ways to get set up.

    Read the article

  • Persist data when the table was not mapped (JPA EclipseLink)

    - by enrique
    Hi everybody, I need some help with persisting data into a table that has not been mapped... The issue is that our database has a table in which all of the columns are foreign keys, so when the whole database was mapped, all of the tables were mapped correctly. However, that table, called "category", is not mapped. The way in which we can browse the data is by going through the table I mentioned using the @JoinTable annotation, which was set by the system in the other tables with which "category" has a relation. So we can go ahead and use the collections and perform a query. But the issue comes when I want to persist data into that table, because there is no entity for it. We tried to persist through the collections but had no luck. Then I tried creating the entity, with its PK and facade, all by hand. However, when I try to persist using the merge method, the system tries to perform an Insert when it is supposed to perform an Update, so obviously it returns an error. Does anybody have an idea about this situation? Thanks.-

    Read the article

  • Building a J2EE dev/test setup on a single PC

    - by John
    It's been a while since I did Java work, and even then I was never responsible for starting a large project from the very start... there were test/staging/production systems already running, etc, etc. Now I am looking to start a J2EE project from scratch on my trusty workstation, which has never been used for Java development and runs Windows 7 64bit. First of all, I'll be getting Eclipse. As far as writing the code goes I'm pretty happy. And running it through Eclipse is OK, but what I'd really want is to have a VM running MySQL and Tomcat on which I can properly deploy my project and run/debug it 'remotely' from my dev PC. And I guess this should be done using Ant instead of letting Eclipse build the WAR for me, so that I don't end up with a dependence on Eclipse. I'm certain Eclipse can do this, so you hit a button and it runs Ant scripts, deploys and debugs for instance, but I'm very hazy on it. Are there any good guides on this? I don't want to be taught Java, or even Ant, but rather the 'glue' parts like getting my test VM up and running under Windows, getting a build/test/deploy/run pipeline running through Eclipse, etc. One point: I only plan to use Windows... hosting a Windows VM on my Windows desktop. And while I can use command-line tools like ant/svn, I'm much more a GUI person who loves IDE integration... I'd rather this didn't end up an argument about Linux or Vi, etc! I am looking for free, but am a MAPS subscriber, and run Win7 Ultimate in case that makes a difference as far as free VM solutions.
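    As a starting point for the 'build the WAR outside Eclipse' part, a minimal Ant build might look roughly like this; every path and property name below is a placeholder, and deployment is left as a simple file copy (Tomcat also ships optional Ant tasks for deploying through its manager application):

        <project name="myapp" default="war" basedir=".">
          <property name="build.dir" value="build"/>
          <property name="dist.dir"  value="dist"/>

          <target name="compile">
            <mkdir dir="${build.dir}/classes"/>
            <javac srcdir="src" destdir="${build.dir}/classes" includeantruntime="false">
              <classpath><fileset dir="lib" includes="*.jar"/></classpath>
            </javac>
          </target>

          <target name="war" depends="compile">
            <mkdir dir="${dist.dir}"/>
            <war destfile="${dist.dir}/myapp.war" webxml="web/WEB-INF/web.xml">
              <fileset dir="web" excludes="WEB-INF/web.xml"/>
              <classes dir="${build.dir}/classes"/>
              <lib dir="lib"/>
            </war>
          </target>

          <target name="deploy" depends="war">
            <!-- Simplest possible 'deploy': copy the WAR into the VM's Tomcat webapps
                 directory (assumes it is reachable as a network share). -->
            <copy file="${dist.dir}/myapp.war" todir="\\devvm\tomcat\webapps"/>
          </target>
        </project>

    Eclipse can run such a target from an External Tools / Ant launch configuration, which keeps the build itself independent of the IDE.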

    Read the article

  • How to run stored procedures and ad-hoc scripts asynchronously with "loosely" connected SQL Server 2

    - by sanga
    Is there a way to initiate a script against an instance of SQL Server when it is not connected, and then have it run on the instance the next time it connects? This needs to happen without any intervention from me. Background situation, if you are interested: We have about 120 machines, each with its own instance of SQL Server 2000. Most of them are laptops. We have merge replication set up with each one. From time to time, there is a need to delete "rogue" GUIDs from some tables in some instances that overwrite legitimate records on the main publisher, as well as to perform administrative tasks via stored procedure or ad-hoc SQL statements. The problem is there is no telling when each machine is going to be connected to the network. Some folks turn their machines completely off at the end of the day. Others disconnect their machines and take them on business trips, home for the weekend, etc. Did I mention that about 35 of these machines are in utility trucks and "attempt" to sync over a wireless connection? Thanks in advance for any assistance or suggestions. Sanga

    Read the article

  • git: How to move last N commits made to master, into own branch?

    - by amn
    Hi all, I have a repository where I have been working on the master branch, having last committed some 10 or so commits which I now wish were in another branch, as they describe work that I now consider experimental (I am still learning good git practices). Basically I would like these last 10 commits, starting from a point in master, to form another branch instead, so that I can have my master in a release state (which is what I strive for). So, this is what I have (the rightmost X is the last commit good for release):
           b--b (feature B)
          /
    X--X--X--Z--Z--Z--Z--Z--Z (master)
          \
           a--a--a (feature A)
    You can see that both X and Z are on master, while I want the commits marked by Z (my feature Z work) to lie on their own feature branch, so that the rightmost X is at the tip of master, forming a good master branch tip. I guess this is what I want:
           b--b (feature B)
          /
    X--X--X (master)
          \ \
           \ Z--Z--Z--Z--Z--Z (feature Z - the branch I want Z on)
            \
             a--a--a (feature A)
    That way I will have my master always ready for release, and merge the A, B and Z features when the time comes. Hope I am making sense here...
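    For the record, the usual recipe for this situation (assuming the Z commits have not been pushed anywhere shared yet) is to label the current tip and then move master back to the last good X; the branch name and the commit count below are placeholders:

        git branch feature-z          # keep a name pointing at the current tip (the last Z)
        git reset --hard HEAD~6       # move master (and the working tree) back to the last good X
        git checkout feature-z        # carry on with the experimental work over here

    HEAD~6 matches the six Z commits drawn above; use the actual number of commits, or the hash of the last good X, instead.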

    Read the article

  • [PHP] Local/Dev/Live deployment - best workflow

    - by Adam Kiss
    Hello,
    situation: We are a little company with 3 people; each has a localhost webserver and most projects (previous and current) are on one shared network disk. We have a virtual server, where some of our clients' sites and our own site are hosted. Our standard workflow is: Coder PC -> Programmer localhost -> dev domain (client.company.com) -> live version (client.com). It often happens that there are two or three guys working on the same projects at the same time - one is on the dev version, two are on localhost. When finished, we try to synchronize the files on the dev version and ideally not mess up any files, which *knock knock* doesn't happen often. And then one of us deploys the dev version on the live webserver.
    question: We are looking for a way to simplify this workflow while updating websites - ideally some sort of diff uploader or, probably, a VCS (Git/SVN/...), but we are not completely sure where to begin or what way would be ideal, therefore I ask you, fellow stackoverflowers, for your experience with website / application deployment and recommended workflow. We will probably also need to use a Mac in the process, so if that won't be a problem, that would be even better. Thank you

    Read the article

  • Need to sort 3 arrays by one key array

    - by jeff6461
    I am trying to get 3 arrays sorted by one key array in Objective-C for the iPhone; here is an example to help out...
    Array 1  Array 2  Array 3  Array 4
    1        15       21       7
    3        12       8        9
    6        7        8        0
    2        3        4        8
    When sorted I want this to look like:
    Array 1  Array 2  Array 3  Array 4
    1        15       21       7
    2        3        4        8
    3        12       8        9
    6        7        8        0
    So arrays 2, 3 and 4 are moving with Array 1 when sorted. Currently I am using a bubble sort to do this, but it lags so badly that it crashes my app. The code I am using to do this is:
    int flag = 0;
    int i = 0;
    int temp = 0;
    do {
        flag = 1;
        for (i = 0; i < distancenumber; i++) {
            if (distance[i] > distance[i+1]) {
                temp = distance[i];
                distance[i] = distance[i + 1];
                distance[i + 1] = temp;
                temp = FlowerarrayNumber[i];
                FlowerarrayNumber[i] = FlowerarrayNumber[i+1];
                FlowerarrayNumber[i + 1] = temp;
                temp = BeearrayNumber[i];
                BeearrayNumber[i] = BeearrayNumber[i + 1];
                BeearrayNumber[i + 1] = temp;
                flag = 0;
            }
        }
    } while (flag == 0);
    where distancenumber is the number of elements in all of the arrays, distance is array 1, my key array, and the other two are getting sorted. If anyone can help me get a merge sort (or something faster; it is running on an iPhone so it needs to be quick and light) to do this, that would be great. I cannot figure out how the recursion works in that method and so am having a hard time getting the code to work. Any help would be greatly appreciated.
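    For what it's worth, one way to avoid hand-rolling a multi-array sort entirely is to qsort an array of indices by the key array and then read every other array through those indices; a sketch, under the assumption that the arrays are plain C int arrays as above:

        #include <stdlib.h>

        static const int *g_keys;   /* key array (e.g. distance), used by the comparator */

        static int compare_by_key(const void *a, const void *b) {
            int ia = *(const int *)a, ib = *(const int *)b;
            return (g_keys[ia] > g_keys[ib]) - (g_keys[ia] < g_keys[ib]);
        }

        /* Fills order[0..n-1] so that keys[order[0]] <= keys[order[1]] <= ... */
        void sort_order(const int *keys, int *order, int n) {
            for (int i = 0; i < n; i++) order[i] = i;
            g_keys = keys;
            qsort(order, n, sizeof(int), compare_by_key);
        }

    After sort_order(distance, order, distancenumber), row i in sorted order is distance[order[i]], FlowerarrayNumber[order[i]], BeearrayNumber[order[i]], and so on; qsort is O(n log n), so the bubble-sort lag goes away without swapping several arrays in lock-step.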

    Read the article

  • Mercurial repository narrow clone?

    - by Berry Langerak
    Hi. I'm currently in the process of moving from Subversion to Mercurial, and I have to say I don't regret that decision. However, when trying to convert my project, I ran into a problem with Mercurial which I can't seem to get fixed. I have two distinct projects: one is a framework, and the other is an application that relies on that framework. Here's what the repositories look like: The Framework repository: docs/ deploy/ lib/ tests/ The Application repository: application/ config/ lib/ tests/ www/ What I'd like is for the application's lib directory to contain a copy of the framework's lib/ directory. I used to do this using svn:externals. Now, I am aware that Mercurial supports the concept of subrepositories, but that doesn't seem like the "correct" solution, as it doesn't actually pull in the lib/ directory like I wanted; you'll still have to pull and push changes manually. That, plus once you clone the framework repository, you'll get all of it, not just the lib/ directory. I only need the lib/ directory, not the tests or the docs. Now, I thought up two different solutions to this problem, but I wonder which is the best. The first solution would be to clone the framework in a different directory altogether and create a symlink in the application's lib/ directory which points to the framework's lib/ directory. Putting the symlink in .hgignore should make sure all is well, I think? That means that you could edit the framework's code and commit that, and you could edit the application's code and commit that, too. The other option is to have multiple repositories. The framework gets pulled as a whole, which means you'll get the docs/, deploy/, test/ etc. directories, which are not needed for usage of the framework. I thought maybe creating a repository purely for the library might be a solution, although I sincerely doubt it, as the unit tests are very dependent upon the library itself. Does anyone know a decent solution for this problem?

    Read the article

  • Setting up a NAS with Citrix XenServer

    - by JasonBrown
    Just a quick query for anyone who's worked with XenServer: I want to set up a NAS at home but with virtualization (I've looked into VMware Server and KVM, and I quite like KVM!) but I was told about XenServer 5.5. I have commodity hardware (ASUS board, dual core 2.66GHz CPU with 8Gb RAM), and I need to set up a fileserver to house about 2-3Tb worth of data (big chunky video - not porn!). I need to run Linux (preferably CentOS) but also run Windows virtualised for testing. I was thinking of going the XenServer route; however, I want to be able to offer a VM access to the 2-3Tb of HDDs (5 HDD drives) directly so it can do its thing (maybe using FreeNAS). Would this be possible with XenServer? Or will I have to do more work - and another box - to offer this? My goals are to use FreeNAS (ZFS!) for the fileserver, CentOS for SVN and other bits we need to use (LAMP stack), and Windows for our win32 testing, all on one box. I see the iSCSI target bits and get scared.

    Read the article

  • Why isn't obliterate an essential feature of Subversion?

    - by Dimitri C.
    For some years now, I've been waiting for Subversion to feature a "delete permanently" (obliterate) function. I hesitate to make the transition to Subversion (coming from Visual SourceSafe :p), because I think this is an essential feature, as otherwise I'd expect the repository to grow unstoppably. However, for one reason or the other, the feature gets postponed over and over again. So I'm beginning to wonder if there is some other feature or workaround which makes the obliterate function dispensable. What do you do when you want to shrink the SVN central repository?
    Example 1: I check in a large third-party library, and after a few weeks I realize it is not suited to my needs. I don't want to store and back up that large amount of data forever.
    Example 2: I have 10 versions of 10 big third-party libraries in the repository, but I only use the latest versions.
    Example 3: I accidentally checked in sensitive information (as suggested by John).
    Example 4: I accidentally checked in some big files that were never meant to be put in the repository.
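    For completeness, the workaround that usually stands in for obliterate is an offline dump/filter/load cycle on the server; the repository paths and the excluded path below are placeholders, and the repository is unavailable while this runs:

        svnadmin dump /srv/svn/repo > repo.dump
        svndumpfilter exclude /trunk/thirdparty/biglib < repo.dump > filtered.dump
        svnadmin create /srv/svn/repo-new
        svnadmin load /srv/svn/repo-new < filtered.dump

    This covers the 'accidentally committed huge or sensitive files' cases, at the cost of rewriting the repository and generally invalidating existing working copies.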

    Read the article

  • I have a slight confusion with setting up Mercurial on my webserver...

    - by littlejim84
    I'm starting to use Mercurial on my web server (in this case MediaTemple's Grid). I've used SVN previously, though I'm not an expert on version control systems. I just need a little help clearing up some confusion about getting it set up optimally. I have a 'data' folder which is outside the web server root and which the browser cannot access. It was recommended to me before to have my Mercurial repositories set up here; then I would clone from here locally on my computer. I would also have a 'domains' folder that is basically the web server root, and inside there are my actual domains where my websites are actually served to the browser - these would need to be updated from the 'data' repositories too. But with this in mind, after setting it up, it seems inefficient... I'm cloning to my local machine (that makes sense), adding, committing, pushing. That's fine... But then I'm updating in my data repository folder and then updating in my domains folder to actually update my websites. Surely, I don't actually need this 'data' folder for repositories? Wouldn't my actual live 'domains' folders be the main repositories themselves? So I'm cloning locally and updating from these? Please help me clear up some of the confusion with all this (if you can).

    Read the article

  • Sorting a very large text file in Java

    - by Alice
    Hi, I have a large text file I need to sort in Java. The format is: word [tab] frequency [new line]. The algorithm for sorting is:
    1. Read some of the file, filtering for purely alphabetic words.
    2. Once you have X number of alphabetic words, call Collections.sort and write the result to a file.
    3. Repeat until you have finished reading the file.
    4. Start reading two sorted files, comparing line by line for the word with the higher frequency, and writing at the same time to a new file, so as to not load much into memory.
    5. Repeat until all files are merged into one large file.
    Right now I've divided the large file into smaller ones (sorted by descending frequency) with 10,000 lines each. I know I need to somehow merge these files back together, but I'm not sure how to go about this. I've created a LinkedList to keep track of all the files created. The algorithm says to compare each line in the two files, but I've tried a case where, say, file1 = 8,6,5,3,1 and file2 = 9,8,8,8,8. Then if I compare them line by line I would get file3 = 9,8,8,6,8,5,8,3,8,1 which is incorrectly sorted (they should be in decreasing order). I think I'm misunderstanding some part of the algorithm. If someone could point out what I should do instead, I'd greatly appreciate it. Thanks. edit: Yes, this is an assignment. We aren't allowed to increase memory, unfortunately :(
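    For reference, the merge step only advances the file whose line was just written; the other line is held and compared again on the next round, which is exactly what goes wrong in the 9,8,8,6,... example above. A rough sketch of that two-way merge, assuming every line is "word<TAB>frequency" and the files are sorted by descending frequency (class and method names are invented):

        import java.io.*;

        class FileMerger {
            // Parses the frequency out of a "word\tfrequency" line.
            static int freq(String line) {
                return Integer.parseInt(line.substring(line.indexOf('\t') + 1).trim());
            }

            // Merges two descending-frequency files into one descending-frequency file.
            static void merge(File a, File b, File out) throws IOException {
                try (BufferedReader ra = new BufferedReader(new FileReader(a));
                     BufferedReader rb = new BufferedReader(new FileReader(b));
                     PrintWriter w = new PrintWriter(new BufferedWriter(new FileWriter(out)))) {
                    String la = ra.readLine(), lb = rb.readLine();
                    while (la != null && lb != null) {
                        if (freq(la) >= freq(lb)) { w.println(la); la = ra.readLine(); }
                        else                      { w.println(lb); lb = rb.readLine(); }
                    }
                    for (; la != null; la = ra.readLine()) w.println(la);   // drain the rest
                    for (; lb != null; lb = rb.readLine()) w.println(lb);
                }
            }
        }

    Repeatedly merging pairs of files this way keeps only one line per file in memory at a time, which fits the constraint of not increasing memory.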

    Read the article

  • What do I need to do besides code?

    - by user151841
    I'm a single-person operation for my small employer. I'm working on a couple of web applications that have grown to medium size. We have backups going and everything is in version control with Subversion. I have comments in my code, but documentation outside of the code is "spotty at best", and frequently things change. What do I need to do to bring it to the next level, beyond a pile of (version-controlled, well-commented) code? What would you say is required for a robust set of documentation outside of the codebase itself, covering where the project is at and where it's going? Ideally I would like some integrated system that would go from brainstorm, to requirements, to tracking bugs and features in svn check-in messages, to documentation. Would Trac or Redmine do something like this? I would like to show my boss, "This is the prioritized list of features, this is where we are now, this is how long I spent on this feature, how long I spent on this bug", and I'd like to spend the minimum amount of time managing the projects :) What about ERD and UML diagrams? Is a project incomplete without them?

    Read the article

  • Will array_unique work also with array of objects?

    - by Richard Knop
    Is there a better way than using array_walk in combination with serialize/unserialize? I have two arrays which contain objects. The objects can be the same or can be different. I want to merge both arrays and keep only unique objects. This seems to me like a very long solution for something so trivial. Is there any other way?
    class Dummy {
        private $name;
        public function __construct($theName) { $this->name = $theName; }
    }
    $arr = array();
    $arr[] = new Dummy('Dummy 1');
    $arr[] = new Dummy('Dummy 2');
    $arr[] = new Dummy('Dummy 3');
    $arr2 = array();
    $arr2[] = new Dummy('Dummy 1');
    $arr2[] = new Dummy('Dummy 2');
    $arr2[] = new Dummy('Dummy 3');
    function serializeArrayWalk(&$item) { $item = serialize($item); }
    function unSerializeArrayWalk(&$item) { $item = unserialize($item); }
    $finalArr = array_merge($arr, $arr2);
    array_walk($finalArr, 'serializeArrayWalk');
    $finalArr = array_unique($finalArr);
    array_walk($finalArr, 'unSerializeArrayWalk');
    var_dump($finalArr);
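    A more compact spelling of the same serialize-based trick (same behaviour, and the same caveats about private members and object identity apply; this is just the above collapsed into built-ins):

        <?php
        // Serialize each object, de-duplicate the strings, then unserialize back.
        $finalArr = array_map('unserialize',
            array_unique(array_map('serialize', array_merge($arr, $arr2))));
        var_dump($finalArr);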

    Read the article

  • Why is joining two vectors simply not working?

    - by Jim
    I have two vectors of MyObj structs. MyObj is defined as follows:
    struct MyObj {
        float x, y;
        unsigned int data[8];
        unsigned int tmp[1];
        MyObj(const MyObj &m) {
            x = m.x;
            y = m.y;
            tmp[0] = 0;
            for (int i = 0; i < 8; ++i) {
                data[i] = m.data[i];
            }
        }
    };
    I then have two vectors...
    vector<MyObj> v1;
    vector<MyObj> v2;
    // both get data eventually.
    v1.insert(v1.end(), v2.begin(), v2.end());
    v2 has 3535004 elements in my experiment. v1 is similarly sized. I've also tried building a new vector and just using .push_back to build it from both vectors. Essentially, when I try to merge the two vectors I just get an error from Visual Studio saying "Debug error! R6010, abort() has been called". Very non-useful... So my question is: what could be causing this error, and how can I solve it? Thank you
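    One thing worth ruling out at this size: with roughly 3.5 million elements per vector, insert may have to reallocate, which briefly needs room for both the old and the new buffer, and an unhandled allocation failure ends in terminate/abort, which a Visual Studio debug build reports as R6010. Reserving the final size up front and catching bad_alloc is a cheap way to test that theory; a sketch, assuming MyObj and the vectors are as above:

        #include <new>      // std::bad_alloc
        #include <vector>

        void appendAll(std::vector<MyObj>& v1, const std::vector<MyObj>& v2) {
            try {
                v1.reserve(v1.size() + v2.size());           // one allocation, no repeated doubling
                v1.insert(v1.end(), v2.begin(), v2.end());
            } catch (const std::bad_alloc&) {
                // If this fires, the merged vector simply does not fit in available memory.
                throw;
            }
        }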

    Read the article

  • Speeding up a group by date query on a big table in postgres

    - by zaius
    I've got a table with around 20 million rows. For argument's sake, let's say there are two columns in the table - an id and a timestamp. I'm trying to get a count of the number of items per day. Here's what I have at the moment:
    SELECT DATE(timestamp) AS day, COUNT(*)
    FROM actions
    WHERE DATE(timestamp) >= '20100101'
    AND DATE(timestamp) < '20110101'
    GROUP BY day;
    Without any indices, this takes about 30s to run on my machine. Here's the explain analyze output:
    GroupAggregate (cost=675462.78..676813.42 rows=46532 width=8) (actual time=24467.404..32417.643 rows=346 loops=1)
      -> Sort (cost=675462.78..675680.34 rows=87021 width=8) (actual time=24466.730..29071.438 rows=17321121 loops=1)
           Sort Key: (date("timestamp"))
           Sort Method: external merge Disk: 372496kB
           -> Seq Scan on actions (cost=0.00..667133.11 rows=87021 width=8) (actual time=1.981..12368.186 rows=17321121 loops=1)
                Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
    Total runtime: 32447.762 ms
    Since I'm seeing a sequential scan, I tried to index on the date aggregate:
    CREATE INDEX ON actions (DATE(timestamp));
    Which cuts the run time by about 50%:
    HashAggregate (cost=796710.64..796716.19 rows=370 width=8) (actual time=17038.503..17038.590 rows=346 loops=1)
      -> Seq Scan on actions (cost=0.00..710202.27 rows=17301674 width=8) (actual time=1.745..12080.877 rows=17321121 loops=1)
           Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
    Total runtime: 17038.663 ms
    I'm new to this whole query-optimization business, and I have no idea what to do next. Any clues how I could get this query running faster?
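    One variation often tried at this point: compare the raw timestamp column against range bounds instead of wrapping it in DATE(), so that a plain index on the column remains usable and the per-row function call in the filter disappears; whether the planner actually picks the index depends on how much of the table the year covers, since scanning most of the table may still be cheapest. A sketch:

        CREATE INDEX actions_timestamp_idx ON actions (timestamp);

        SELECT DATE(timestamp) AS day, COUNT(*)
        FROM actions
        WHERE timestamp >= DATE '2010-01-01'
          AND timestamp <  DATE '2011-01-01'
        GROUP BY day
        ORDER BY day;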

    Read the article

  • Merging multiple docx files to one

    - by coding
    I am developing a desktop application in C#. I have coded a function to merge multiple docx files, but it does not work as expected. I don't get the content exactly as it was in the source files. A few blank lines are added in between, the content extends to the next pages, header and footer information is lost, page margins get changed, etc. How can I concatenate the docs as they are, without any change? Any suggestions will be helpful. This is my code:
    public bool CombineDocx(string[] filesToMerge, string destFilepath)
    {
        Application wordApp = null;
        Document wordDoc = null;
        object outputFile = destFilepath;
        object missing = Type.Missing;
        object pageBreak = WdBreakType.wdPageBreak;
        try
        {
            wordApp = new Application { DisplayAlerts = WdAlertLevel.wdAlertsNone, Visible = false };
            wordDoc = wordApp.Documents.Add(ref missing, ref missing, ref missing, ref missing);
            Selection selection = wordApp.Selection;
            foreach (string file in filesToMerge)
            {
                selection.InsertFile(file, ref missing, ref missing, ref missing, ref missing);
                selection.InsertBreak(ref pageBreak);
            }
            wordDoc.SaveAs(
                ref outputFile, ref missing, ref missing, ref missing,
                ref missing, ref missing, ref missing, ref missing,
                ref missing, ref missing, ref missing, ref missing,
                ref missing, ref missing, ref missing, ref missing);
            return true;
        }
        catch (Exception ex)
        {
            Msg.Log(ex);
            return false;
        }
        finally
        {
            if (wordDoc != null)
            {
                wordDoc.Close();
            }
            if (wordApp != null)
            {
                wordApp.DisplayAlerts = WdAlertLevel.wdAlertsAll;
                wordApp.Quit();
                Marshal.FinalReleaseComObject(wordApp);
            }
        }
    }
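    One adjustment often suggested for this kind of InsertFile loop is to separate the files with a next-page section break rather than a plain page break; headers, footers and page margins belong to Word sections, so giving each inserted file its own section gives it a better chance of keeping its own page setup. A sketch of the change, confined to the loop above:

        // Assumption: replaces the plain page break between files in the loop above.
        object sectionBreak = WdBreakType.wdSectionBreakNextPage;
        foreach (string file in filesToMerge)
        {
            selection.InsertFile(file, ref missing, ref missing, ref missing, ref missing);
            selection.InsertBreak(ref sectionBreak);   // one section per merged file
        }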

    Read the article
