Search Results

Search found 4640 results on 186 pages for 'unique'.


  • Avoid an "out of memory error" in Java(eclipse), when using large data structure?

    - by gnomed
    OK, so I am writing a program that unfortunately needs to use a huge data structure to complete its work, but it is failing with an "out of memory error" during initialization. While I understand entirely what that means and why it is a problem, I am having trouble overcoming it, since my program needs to use this large structure and I don't know any other way to store it.

    The program first indexes a large corpus of text files that I provide. This works fine. Then it uses this index to initialize a large 2D array with n×n entries, where "n" is the number of unique words in the corpus. For the relatively small chunk I am testing on (about 60 files) it needs approximately 30,000×30,000 entries, and this will probably be bigger once I run it on my full intended corpus. It consistently fails every time, after indexing, while it is initializing the data structure (to be worked on later).

    Things I have done include:

    - revamping my code to use a primitive int[] instead of a TreeMap
    - eliminating redundant structures, etc.
    - running Eclipse with "eclipse -vmargs -Xmx2g" to max out my allocated memory

    I am fairly confident this is not going to be a simple one-line fix, but is most likely going to require a very new approach. I am looking for what that approach is. Any ideas? Thanks, B.
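
    One direction worth considering: a dense 30,000 × 30,000 int matrix is roughly 3.4 GB on its own, but word co-occurrence matrices are typically overwhelmingly sparse. Below is a minimal sketch of a sparse counter in Java (the class and method names are hypothetical, and it assumes Java 8+ for Map.merge); only non-zero cells cost memory:

        import java.util.HashMap;
        import java.util.Map;

        // Sparse 2D counter: only non-zero cells are stored.
        // A (row, col) pair is packed into a single long key.
        class SparseCooccurrence {
            private final Map<Long, Integer> cells = new HashMap<>();

            private static long key(int row, int col) {
                return ((long) row << 32) | (col & 0xFFFFFFFFL);
            }

            void increment(int row, int col) {
                cells.merge(key(row, col), 1, Integer::sum);
            }

            int get(int row, int col) {
                return cells.getOrDefault(key(row, col), 0);
            }
        }

    If even the non-zero cells don't fit, the usual next step is to take the structure off-heap entirely (memory-mapped files or an embedded key-value store).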


  • Protecting critical sections based on a condition in C#

    - by NAADEV
    Hello, I'm dealing with a curious scenario. I'm using Entity Framework to save (insert/update) into a SQL database in a multithreaded environment. The problem is I need to query the database to see whether a record with a particular key has already been created, in order to set a field to one value (executing) if it exists, or to a different value (pending) if it is new. Those records are identified by a unique GUID.

    I've solved this problem by setting a lock, since I know the same GUID will not be present in any other process, and it seems to be working fine. It looks something like this:

        static readonly object LockableObject = new object();

        static void SaveElement(Entity e)
        {
            lock (LockableObject)
            {
                Entity e2 = Repository.FindByKey(e);
                if (e2 == null)
                {
                    Repository.Insert(e);
                }
                else
                {
                    Repository.Update(e2);
                }
            }
        }

    But this implies that when I have a huge amount of requests to be saved, they will all be queued behind a single lock. I wonder if there is something like this (please, take it just as an idea):

        static void SaveElement(Entity e)
        {
            using (var protector = new ThisWouldBeAClassToProtectBasedOnACondition(x => x.UniqueId))
            {
                Entity e2 = Repository.FindByKey(e);
                if (e2 == null)
                {
                    Repository.Insert(e);
                }
                else
                {
                    Repository.Update(e2);
                }
            }
        }

    The idea would be a kind of protection that locks based on a condition, so each entity e would have its own lock based on its e.UniqueId property. Any idea?
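
    One common pattern for this, sketched below under the assumption that .NET 4's ConcurrentDictionary is available, is to keep one lock object per unique id, so only writers of the same entity queue behind each other (Entity and Repository are the types from the question):

        using System;
        using System.Collections.Concurrent;

        static class EntityLocks
        {
            // One lock object per entity id; GetOrAdd is thread-safe,
            // so two threads asking for the same id get the same gate.
            private static readonly ConcurrentDictionary<Guid, object> Locks =
                new ConcurrentDictionary<Guid, object>();

            public static void SaveElement(Entity e)
            {
                object gate = Locks.GetOrAdd(e.UniqueId, _ => new object());
                lock (gate)
                {
                    Entity e2 = Repository.FindByKey(e);
                    if (e2 == null)
                        Repository.Insert(e);   // new: pending
                    else
                        Repository.Update(e2);  // already present: executing
                }
            }
        }

    The dictionary grows by one entry per distinct id; if that matters, entries can be evicted after saving, or a fixed array of N locks indexed by id.GetHashCode() % N (lock striping) bounds the memory instead.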


  • How can I use SQL to select duplicate records, along with counts of related items?

    - by mipadi
    I know the title of this question is a bit confusing, so bear with me. :)

    I have a (MySQL) database with a Person record. A Person also has a slug field. Unfortunately, slug fields are not unique. There are a number of duplicate records, i.e., the records have different IDs but the same first name, last name, and slug. A Person may also have 0 or more associated articles, blog entries, and podcast episodes.

    I would like to produce a list of records that match these criteria: duplicate records (i.e., same slug field) for people who also have at least 1 article, blog entry, or podcast episode.

    I have a SQL query that will list all records with the same slug fields:

        SELECT id, first_name, last_name, slug, COUNT(slug) AS person_records
        FROM people_person
        GROUP BY slug
        HAVING (COUNT(slug) > 1)
        ORDER BY last_name, first_name, id;

    But this includes records for people that may not have at least 1 article, blog entry, or podcast. Can I tweak this to fit the second criterion?
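
    A hedged sketch of one way to add that second criterion, assuming the related tables are named article, blog_entry, and podcast_episode with a person_id foreign key (adjust to the real schema):

        SELECT p.id, p.first_name, p.last_name, p.slug,
               COUNT(p.slug) AS person_records
        FROM people_person p
        WHERE EXISTS (SELECT 1 FROM article a         WHERE a.person_id = p.id)
           OR EXISTS (SELECT 1 FROM blog_entry b      WHERE b.person_id = p.id)
           OR EXISTS (SELECT 1 FROM podcast_episode e WHERE e.person_id = p.id)
        GROUP BY p.slug
        HAVING COUNT(p.slug) > 1
        ORDER BY p.last_name, p.first_name, p.id;

    The WHERE clause runs before GROUP BY, so the duplicate count only covers people who have at least one related item, which matches the criteria as stated.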


  • XML jquery shortcuts

    - by Llamabomber
    I am writing a bit of code that appends my site nav with an extra ul that gives a description about where each link takes you. I need to use our CMS's built-in nav structure, so appending via jQuery was the best solution, and XML makes the data easier to manage. My question is this: is there a more efficient way to write out the JS? What I have so far is this:

        $(document).ready(function() {
            $.ajax({
                type: "GET",
                url: "/js/sitenav.xml",
                dataType: "xml",
                success: function parseXml(xml) {
                    // WORK
                    $(xml).find("CaseStudies").each(function() {
                        $("li#case_studies").append('<ul><li>' +
                            $(this).find("NavImage").text() +
                            $(this).find("NavHeader").text() +
                            $(this).find("NavDescription").text() +
                            $(this).find("NavLink").text() +
                            '</li></ul>');
                    });
                }
            });
        });

    and the XML structure resembles this:

        <SiteNav>
          <Work>
            <CaseStudies>
              <NavImage></NavImage>
              <NavHeader></NavHeader>
              <NavDescription></NavDescription>
              <NavLink></NavLink>
            </CaseStudies>
          </Work>
        </SiteNav>

    I'm happy with my XML structure, but is there a more compact/efficient method of writing out the jQuery? Every li in the nav has a unique id as well, in case that helps...
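
    One way to tighten the success handler is to loop over the tag names instead of repeating .find() four times. A sketch, assuming the same XML layout as above:

        success: function (xml) {
            var fields = ["NavImage", "NavHeader", "NavDescription", "NavLink"];
            $(xml).find("CaseStudies").each(function () {
                var $node = $(this);
                // Collect the text of each child tag, in document order.
                var parts = $.map(fields, function (tag) {
                    return $node.find(tag).text();
                });
                $("li#case_studies").append("<ul><li>" + parts.join("") + "</li></ul>");
            });
        }

    The fields array also becomes the single place to edit when the CMS adds or renames a nav element.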


  • How to split up levels? (cocos2d,box2d,iphone) to save CPU and memory?

    - by cocos2dbeginner
    Hi, so I'm going to create large levels. But there's a problem: there's a lot of unseen space (it's a jump'n'run like Mario Bros.) and this wastes memory and CPU. So how could I split up my levels? I'm using Box2D + cocos2d for iPhone. Any ideas?

    Maybe just set the visible property to NO? But the sprites would still be in memory. And what about the Box2D bodies? Destroying and recreating them would be too heavy on the FPS, and the physics should not be recreated anyway: each body builds up its own unique movement over the course of the level, and if I destroy and recreate it, it would start that movement over from the beginning, which I don't want.

    Should I make fixed points where I want to split the level up, so that when the player is within 200 px of the next part I preload it, and once the player is 200 px past the previous part I unload it? But there would still be the problem with the physics. Other ideas?
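
    One middle ground, sketched below, is to leave bodies allocated but toggle them in and out of the simulation. This assumes Box2D 2.1+, where b2Body::SetActive exists (later versions renamed it SetEnabled); an inactive body keeps its position and velocities, so its motion resumes where it left off instead of restarting:

        #include <Box2D/Box2D.h>
        #include <cmath>

        // Deactivate bodies far outside the viewport; reactivate on approach.
        // Inactive bodies are skipped by the solver and broadphase, so they
        // cost almost nothing per step, yet nothing is destroyed or rebuilt.
        void updateLevelChunk(b2Body* body, float playerX, float margin) {
            float bodyX = body->GetPosition().x;
            bool shouldBeActive = std::fabs(bodyX - playerX) < margin;
            if (body->IsActive() != shouldBeActive) {
                body->SetActive(shouldBeActive);
            }
        }

    Sprites can be handled the same way: keep them loaded but remove them from the scene graph (or set visible to NO) outside the active window, and only truly unload textures at the fixed split points.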


  • Is it possible in Perl to require a subroutine call is made?

    - by MitchelWB
    I don't know enough about Perl to even know what I'm asking for exactly, but I'm writing a series of subroutines to be available for many individual scripts that all process different incoming flat files. The process is far from perfect, but it's what I've got to deal with, and I'm trying to build myself a small library of subs that make it easier to manage it all.

    Each script handles a different incoming flat file with its own formatting, sorting, grouping, and outputting requirements. One common aspect is that we have small text files that hold counters used to name the output files, so that we have no duplicate file names. Because the processing of the data is different for each file, I'd like one sub that opens the counter file and retrieves the counter value; the script then runs its own specific processing code; and I'd like a second sub that lets me write the counter back once I've processed the data.

    Is there a way to make calling the second sub a requirement if the first one is called? Ideally it would even be an error that prevents the script from running at all, much like a syntax error.

    EDIT: Here is a little [ugly and simplified] pseudo-code to give a better feel for what the current process is:

        require "importLibrary.plx";

        # open data source file
        open DataIn, $filename;

        # call getCounterInfo from importLibrary.plx to get the counter
        # value from the counter file
        $counter = &getCounterInfo($counterFileName);

        while (<DataIn>) {
            # process data based on unique formatting and requirements;
            # output to task files named using $counter;
            # increment $counter
        }

        # update counter file with new value of $counter
        &updateCounterInfo($counter);
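
    Perl can't make a later call mandatory at compile time, but a guard object gets close at run time: if the library hands back the counter wrapped in an object whose destructor complains when no update happened, forgetting the second call becomes a loud warning (or a fatal error) the moment the object goes out of scope. A sketch, with the file I/O left as placeholders and CounterGuard being a hypothetical name:

        package CounterGuard;
        use strict;
        use warnings;

        sub new {
            my ($class, $file) = @_;
            my $value = 0;   # placeholder: really read the counter from $file
            return bless { file => $file, value => $value, updated => 0 }, $class;
        }

        sub value { $_[0]->{value} }

        sub update {
            my ($self, $new) = @_;
            # placeholder: really write $new back to $self->{file}
            $self->{updated} = 1;
        }

        sub DESTROY {
            my $self = shift;
            warn "counter file $self->{file} was read but never updated!"
                unless $self->{updated};
        }

        1;

    The calling script would do my $guard = CounterGuard->new($counterFileName); ... $guard->update($counter); and any code path that skips the update triggers the DESTROY warning.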


  • Aggregate path counts using HierarchyID

    - by austincav
    Business problem - understand process fallout using analytics data. Here is what we have done so far:

    - Build a dictionary table with every possible process step
    - Find each process "start"
    - Find the last step for each start
    - Join the dictionary table to the last step to find the path to the final step

    In the final report output we end up with a list of paths from each start to each final step:

        User    Fallout Step HierarchyID.ToString()
        A       1/1/1
        B       1/1/1/1/1
        C       1/1/1/1
        D       1/1/1
        E       1/1

    What this means is that five users (A-E) started the process. Assume only user B finished; the other four did not. Since this is a simple example (without branching), we want the output to look as follows:

        Step    Unique Users
        1       5
        2       5
        3       4
        4       2
        5       1

    The easiest solution I could think of is to take each hierarchyid.ToString(), parse it out into a set of subpaths, JOIN back to the dictionary table, and output using GROUP BY. Given the volume of data, I'd like to use the built-in HierarchyID functions instead, e.g. IsAncestorOf. Any ideas or thoughts on how I could write this? Maybe a recursive CTE?
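
    One approach along those lines, sketched under the assumption that the paths live in a hierarchyid column named fallout_step in a table fallout(user_id, fallout_step): join each row to a small numbers table and use GetLevel() to credit every step along the path, then count distinct users per depth.

        WITH levels AS (
            SELECT number AS lvl
            FROM master..spt_values                       -- any numbers table works here
            WHERE type = 'P' AND number BETWEEN 1 AND 20  -- max expected path depth
        )
        SELECT l.lvl AS step,
               COUNT(DISTINCT f.user_id) AS unique_users
        FROM fallout f
        JOIN levels l
          ON l.lvl <= f.fallout_step.GetLevel()   -- user reached at least this depth
        GROUP BY l.lvl
        ORDER BY l.lvl;

    Run against the five sample paths this yields 5, 5, 4, 2, 1 for steps 1-5. With branching, GROUP BY the ancestor path itself, f.fallout_step.GetAncestor(f.fallout_step.GetLevel() - l.lvl), instead of the bare depth, then join that back to the dictionary table for step names.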


  • How to make write operation idempotent?

    - by Morgan Cheng
    I'm reading an article about the recently released Gizzard sharding framework from Twitter (http://engineering.twitter.com/2010/04/introducing-gizzard-framework-for.html). It mentions that all write operations must be idempotent to ensure high reliability. According to Wikipedia, "Idempotent operations are operations that can be applied multiple times without changing the result." But, IMHO, in Gizzard's case, write operations should also be ones whose sequence doesn't matter (strictly speaking, that property is commutativity rather than idempotence).

    Now, my question is: how do you make a write operation idempotent?

    The only thing I can imagine is to have a version number attached to each write. For example, in a blog system, each blog has a $blog_id and $content. At the application level, we always write blog content like this: write($blog_id, $content, $version), where $version is guaranteed unique at the application level. So, if the application first tries to set one blog to "Hello world" and later wants it to be "Goodbye", we have these two write operations:

        write($blog_id, "Hello world", 1);
        write($blog_id, "Goodbye", 2);

    These two operations are supposed to change two different records in the DB. So, no matter how many times and in what sequence these two operations are executed, the results are the same.

    This is just my understanding. Please correct me if I'm wrong.
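
    That versioning intuition is the standard trick. A concrete sketch in SQL, with hypothetical table and column names, of a write that is safe to apply any number of times and in any order (last-writer-wins by version):

        -- Re-applying the same write is a no-op (version is not < itself),
        -- and a stale write can never clobber a newer one.
        UPDATE blog
        SET    content = :content,
               version = :version
        WHERE  blog_id = :blog_id
          AND  version < :version;

    The variant described in the question, inserting one row per (blog_id, version) and reading back the highest version, achieves the same effect by never updating in place at all.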


  • Dependent Dropdowns (Linked Selects) with single class (no other selectors)

    - by AJ
    I am trying to create multiple dependent dropdowns (selects) in a unique way. I want to restrict the use of selectors and achieve this with a single class selector on all SELECTs, by figuring out which SELECT changed from its index. Hence, the SELECT[i] that changed will change SELECT[i+1] only (and not previous ones like SELECT[i-1]). The HTML looks like this:

        <select class="someclass">
            <option value="volvo">Volvo</option>
            <option value="saab">Saab</option>
            <option value="mercedes">Mercedes</option>
            <option value="audi">Audi</option>
        </select>
        <select class="someclass">
        </select>
        <select class="someclass">
        </select>

    where the SELECTs other than the first one will get their options via AJAX. I see that the following JavaScript gives me the correct value of the correct SELECT, and the value of i also corresponds to the correct SELECT:

        $(function() {
            $(".someclass").each(function(i) {
                $(this).change(function(x) {
                    alert($(this).val() + i);
                });
            });
        });

    Please note that I really want the minimum-selector approach. I just cannot wrap my head around how to approach this. Should I use arrays? Like store all the SELECTs in an array first and then select those? I believe that since the above code already passes the index i, there should be a way without arrays too.

    Thanks a bunch. AJ
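
    Since the handler already closes over i, the "next" dropdown can be addressed with .eq(i + 1) on the same cached class selection, so no extra selectors or hand-built arrays are needed. A sketch, with the AJAX endpoint left hypothetical:

        $(function () {
            var $selects = $(".someclass");   // queried once, reused by index
            $selects.each(function (i) {
                $(this).change(function () {
                    var $next = $selects.eq(i + 1);
                    if (!$next.length) return;   // last dropdown: nothing to cascade
                    // Hypothetical endpoint; replace with the real AJAX call.
                    $.get("/options", { parent: $(this).val() }, function (html) {
                        $next.html(html);
                    });
                });
            });
        });

    A jQuery object is already array-like, so $selects is effectively the array the question contemplates, without managing one by hand.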


  • Improve Efficiency in Array comparison in Ruby

    - by user2985025
    Hi, I am working in Ruby/Cucumber and have a requirement to develop a comparison module to compare two files. The requirements:

    - The project is a migration project; data from one application is moved to another.
    - We need to compare the data from the existing application against the new one.

    Solution: I have developed a comparison engine in Ruby for the above requirement.

    a) Get the data, deduplicated and sorted, from both DBs
    b) Put the data in a text file with "||" as the delimiter
    c) Use the key columns (numbers) that uniquely identify a record in the DB to compare the two files

    For example, file1 has 1,2,3,4,5,6 and file2 has 1,2,3,4,5,7, and columns 1-5 are key columns. I use these key columns and compare 6 and 7, which results in a fail.

    Issue: the major issue we are facing is that if the mismatches exceed roughly 70% for 100,000 records or more, the comparison time is large; below about 40% mismatches, the comparison time is fine. Diff and diff-LCS will not work in this case because we need key columns to arrive at an accurate data comparison between the two applications.

    Is there any other method to efficiently reduce the time when the mismatches are more than 70% for 100,000 records or more? Thanks
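
    If the current engine re-scans one file per unmatched record, the mismatch rate would explain the blow-up: hash lookups make the cost one pass per file regardless of how many records differ. A minimal sketch, assuming each line is the key columns followed by the payload, all joined by "||" (the file names and the key count of 5 are illustrative):

        # Build key => payload maps, then compare in O(n) total time.
        def index_file(path, key_count)
          File.foreach(path).each_with_object({}) do |line, map|
            cols = line.chomp.split("||")
            map[cols.take(key_count)] = cols.drop(key_count)
          end
        end

        old_map = index_file("app1_export.txt", 5)
        new_map = index_file("app2_export.txt", 5)

        mismatches   = old_map.reject { |key, payload| new_map[key] == payload }
        missing_keys = new_map.keys - old_map.keys

    Arrays work as hash keys in Ruby, so the five key columns can be used directly without concatenating them into a string.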


  • MySQL help, counting information on last records

    - by ee12csvt
    I need some advice. I have two tables: one holds unique serial numbers of items (items) and the other holds status changes and other information for these items (details). The tables are set up as follows:

        item:    itemID, itemName, itemDate
        details: detID, itemID, modlvl, detDate, status

    All items have at least one record in the details table, but over time the status or the modification level has changed (both of these are identified by numbers held in other appropriate tables), and a new record is created each time the status/modlvl changes.

    I want to display a table on my web page using PHP that breaks the items down by modification level and shows a count of each item's current status.

    EDIT

    Hi Ronnis, this is an example of the data in the tables and what I want to achieve. The current mod levels range from 1 to 3, and the status values represent:

        1  In Use
        2  In Store
        3  Being Repaired
        4  In Transit
        5  For Disposal
        6  Disposed
        7  Lost

    item:

        itemID  OrigMod  created
        1000    1        2009-10-01 22:12:12
        1001    1        2009-10-01 22:12:12
        1002    1        2009-10-01 22:12:12
        1003    1        2009-10-01 22:12:12
        1004    1        2009-10-01 22:12:12
        1005    1        2009-10-01 22:12:12
        1006    1        2009-10-01 22:12:12
        1007    1        2009-10-01 22:12:12
        1008    1        2009-10-01 22:12:12
        1009    1        2009-10-01 22:12:12
        1010    1        2009-10-01 22:12:12

    details:

        detID  itemID  modlvl  detDate     status
        1      1000    1       2009-10-01  1
        2      1001    1       2009-10-01  1
        3      1002    1       2009-10-01  1
        4      1003    1       2009-10-01  1
        5      1004    1       2009-10-01  1
        6      1005    1       2009-10-01  1
        7      1006    1       2009-10-01  1
        8      1007    1       2009-10-01  1
        9      1008    1       2009-10-01  1
        10     1009    1       2009-10-01  1
        11     1010    1       2009-10-01  1
        12     1001    1       2010-02-01  2
        13     1001    1       2010-02-03  4
        14     1001    1       2010-03-01  3
        15     1000    1       2010-03-14  2
        16     1001    2       2010-04-01  4
        17     1006    1       2010-04-01  2
        18     1001    2       2010-04-03  2
        19     1006    1       2010-04-14  4
        20     1006    1       2010-05-01  5
        21     1002    1       2010-05-02  2
        22     1003    1       2010-05-10  2
        23     1010    1       2010-06-01  2
        24     1006    1       2010-06-18  6
        25     1010    1       2010-07-01  7
        26     1007    1       2010-07-02  2
        27     1007    1       2010-07-04  4
        28     1003    1       2010-07-10  2
        29     1007    1       2010-07-11  3
        30     1007    2       2010-07-12  4
        31     1007    2       2010-07-15  2
        32     1001    2       2010-08-31  1
        33     1001    2       2010-09-10  2
        34     1001    2       2010-10-01  4
        35     1008    1       2010-10-01  2
        36     1001    2       2010-10-05  3
        37     1008    1       2010-10-05  4
        38     1008    1       2010-10-10  3
        39     1001    3       2010-10-20  4
        40     1001    3       2010-10-25  2

    Using the tables above, I want to get this result:

        MoLvl  Use  Store  Repd  Transit  Displ  Dispd  Lost  Total
        1      3    3      1     0        0      1      1     9
        2      0    1      0     0        0      0      0     1
        3      0    1      0     0        0      0      0     1
        Total  3    5      1     0        0      1      1     11
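
    A sketch of the classic groupwise-maximum approach in MySQL: pick each item's latest detail row, then pivot the statuses with SUM(condition). This assumes detID increases with detDate (true in the sample data), so MAX(detID) identifies the current row and also breaks same-day ties:

        SELECT d.modlvl           AS MoLvl,
               SUM(d.status = 1)  AS `Use`,
               SUM(d.status = 2)  AS Store,
               SUM(d.status = 3)  AS Repd,
               SUM(d.status = 4)  AS Transit,
               SUM(d.status = 5)  AS Displ,
               SUM(d.status = 6)  AS Dispd,
               SUM(d.status = 7)  AS Lost,
               COUNT(*)           AS Total
        FROM details d
        JOIN (SELECT itemID, MAX(detID) AS detID
              FROM details
              GROUP BY itemID) latest USING (itemID, detID)
        GROUP BY d.modlvl WITH ROLLUP;

    Against the sample data this produces exactly the desired table (9/1/1 items across mod levels 1-3, with the ROLLUP row supplying the totals). MySQL treats boolean expressions as 0/1, which is what makes the SUM() pivot work.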


  • In mysql, is "explain ..." always safe?

    - by tye
    If I allow a group of users to submit "explain $whatever" to MySQL (via Perl's DBI using DBD::mysql), is there anything that a user could put into $whatever that would make any database changes, leak non-trivial information, or even cause significant database load? If so, how?

    I know that via "explain $whatever" one can figure out what tables/columns exist (you have to guess names, though) and roughly how many records are in a table or how many records have a particular value for an indexed field. I don't expect one to be able to get any information about the contents of unindexed fields. DBD::mysql should not allow multiple statements, so I don't expect it to be possible to run an arbitrary query (just explain one query). Even subqueries should not be executed, just explained. But I'm not a MySQL expert and there are surely features of MySQL that I'm not even aware of.

    In trying to come up with a query plan, might the optimizer actually execute an expression in order to come up with the value that an indexed field is going to be compared against? Consider:

        EXPLAIN SELECT * FROM atable WHERE class = somefunction(...)

    where atable.class is indexed and not unique, and class = 'unused' would find no records but class = 'common' would find a million records. Might EXPLAIN evaluate somefunction(...)? And then could somefunction(...) be written such that it modifies data?


  • Does Microsoft Access use the PK fields for anything?

    - by chrismay
    Ok, this is going to sound strange, but I have inherited an app that is an Access front end with a SQL Server back end. I am in the process of writing a new front end for it, but we need to continue using the Access front end for a while even after we deploy my new front end, for reasons I won't go into. So both the existing Access app and my new app will need to be able to access and work with the data.

    The problem is that the database design is a nightmare. For example, some simple parent-child table relationships have 4- and 5-part composite primary keys. I would really like to remove these PKs and replace them with unique constraints or whatever, and add a new identity column called ID to each of these tables.

    If I change the PKs and FKs on these tables to more manageable fields, will the Access app have problems? What I mean is, does Access use the metadata from the tables (PK and FK info) in such a way that changing it would break the app?


  • Spring Security ACL: NotFoundException from JDBCMutableAclService.createAcl

    - by user340202
    Hello, I've been working on this task for too long to abandon the idea of using Spring Security to achieve it, but I hope the community can provide some support that will reduce the regret I have for choosing Spring Security. Enough ranting; let's get to the point.

    I'm trying to create an ACL via JdbcMutableAclService.createAcl as follows:

        public void addPermission(IWFArtifact securedObject, Sid recipient,
                                  Permission permission, Class clazz) {
            ObjectIdentity oid = new ObjectIdentityImpl(clazz.getCanonicalName(),
                    securedObject.getId());
            this.addPermission(oid, recipient, permission);
        }

        @Override
        @Transactional(propagation = Propagation.REQUIRED,
                       isolation = Isolation.READ_UNCOMMITTED, readOnly = false)
        public void addPermission(ObjectIdentity oid, Sid recipient,
                                  Permission permission) {
            SpringSecurityUtils.assureThreadLocalAuthSet();
            MutableAcl acl;
            try {
                acl = this.mutableAclService.createAcl(oid);
            } catch (AlreadyExistsException e) {
                acl = (MutableAcl) this.mutableAclService.readAclById(oid);
            }
            // try {
            //     acl = (MutableAcl) this.mutableAclService.readAclById(oid);
            // } catch (NotFoundException nfe) {
            //     acl = this.mutableAclService.createAcl(oid);
            // }
            acl.insertAce(acl.getEntries().length, permission, recipient, true);
            this.mutableAclService.updateAcl(acl);
        }

    The call throws a NotFoundException from this line inside createAcl:

        // Retrieve the ACL via superclass (ensures cache registration, proper retrieval etc)
        Acl acl = readAclById(objectIdentity);

    I believe this is caused by something related to @Transactional, which is why I have tested many TransactionDefinition attributes. I have also doubted the annotation and tried declarative transaction definition, but still no luck. One important point: I took the statement used to insert the oid into the database earlier in the method and ran it directly against the database, where it worked, and it also threw a unique-constraint exception at me when the method later tried to insert it again.

    I'm using Spring Security 2.0.8 and IceFaces 1.8 (which doesn't support Spring 3.0 but definitely supports 2.0.x, especially since I keep calling SpringSecurityUtils.assureThreadLocalAuthSet()). My app server is Tomcat 6.0 and my DB server is MySQL 6.0. I hope to get a reply soon, because I need to get this task out of the way.


  • ColdFusion Session issue - multiple users behind one proxy IP -- cftoken and cfid seems to be shared

    - by smoothoperator
    Hi everyone, I have an application that uses ColdFusion's session management (instead of J2EE session management). We have one client whose company traffic has recently been switched to come to us via a proxy server on their network. So, to our ColdFusion server, all traffic for all of that company's accounts appears to come from one IP address.

    Of the session variables, part 1 is kept under a cflock, and part 2 is kept in editable session variables. I may be misunderstanding, but we have done it this way because we modify some values as needed throughout the application's usage.

    We are now running into an issue of this client having their session variables mixed up(?). We have one case where we set a timestamp, and when it comes time to look it up, it's empty. From the looks of it, this is happening because another user is on the same token.

    My initial thought is to modify our existing session management to somehow generate a unique CFTOKEN/CFID, or to start using JSESSIONID, if that would solve the problem at all. I have done some basic research on this issue and couldn't find anything similar, so I thought I'd ask here. Thanks!


  • Random forests for short texts

    - by Jasie
    Hi all, I've been reading about Random Forests (1, 2) because I think it'd be really cool to be able to classify a set of 1,000 sentences into pre-defined categories. I'm wondering if someone can explain the algorithm to me better; I think the papers are a bit dense. Here's the gist from (1):

    Overview: We assume that the user knows about the construction of single classification trees. Random Forests grows many classification trees. To classify a new object from an input vector, put the input vector down each of the trees in the forest. Each tree gives a classification, and we say the tree "votes" for that class. The forest chooses the classification having the most votes (over all the trees in the forest). Each tree is grown as follows:

    - If the number of cases in the training set is N, sample N cases at random - but with replacement - from the original data. This sample will be the training set for growing the tree.
    - If there are M input variables, a number m << M is specified such that at each node, m variables are selected at random out of the M and the best split on these m is used to split the node. The value of m is held constant during the forest growing.
    - Each tree is grown to the largest extent possible. There is no pruning.

    So, does this look right? I'd have N = 1,000 training cases (sentences) and M = 100 variables (let's say there are only 100 unique words across all sentences), so the input vector is a bit vector of length 100 corresponding to each word. I randomly sample N = 1,000 cases (with replacement) to build each tree from. I pick some small number of input variables m << M, let's say 10, to build a tree off of. Do I build tree nodes randomly, using all m input variables? How many classification trees do I build? Thanks for the help!
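
    The reading of the setup is right, with one common point of confusion: the m variables are re-drawn at random at every node, not fixed once per tree (only the value of m stays constant). For scale, forests typically use hundreds of trees. A sketch of the whole pipeline using scikit-learn, which is an assumption on my part (the toy sentences and labels are made up):

        # Random forest over bag-of-words bit vectors, as the question describes.
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_extraction.text import CountVectorizer

        sentences = ["the cat sat on the mat", "stocks fell sharply today"]
        labels    = ["pets", "finance"]

        vec = CountVectorizer(binary=True)      # bit vector: word present or not
        X = vec.fit_transform(sentences)

        clf = RandomForestClassifier(
            n_estimators=100,      # number of trees in the forest
            max_features="sqrt",   # m << M, re-drawn at every split
            bootstrap=True,        # sample N cases with replacement per tree
        )
        clf.fit(X, labels)

        print(clf.predict(vec.transform(["my cat is hungry"])))   # likely ['pets']

    With only two training sentences the prediction is a toy, but the parameters map one-to-one onto the steps quoted from the paper.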


  • Mysql query in drupal database - groupwise maximum with duplicate data

    - by nselikoff
    I'm working on a MySQL query in a Drupal database that pulls together users and two different CCK content types. I know people ask for help with groupwise-maximum queries all the time... I've done my best, but I need help. This is what I have so far:

        # the artists
        SELECT users.uid, users.name AS username, n1.title AS artist_name
        FROM users
        LEFT JOIN users_roles ur ON users.uid = ur.uid
        INNER JOIN role r ON ur.rid = r.rid AND r.name = 'artist'
        LEFT JOIN node n1 ON n1.uid = users.uid AND n1.type = 'submission'
        WHERE users.status = 1
        ORDER BY users.name;

    This gives me data that looks like:

        uid  username  artist_name
        1    foo       Joe the Plumber
        2    bar       Jane Doe
        3    baz       The Tooth Fairy

    Also, I've got this query:

        # artwork
        SELECT n.nid, n.uid, a.field_order_value
        FROM node n
        LEFT JOIN content_type_artwork a ON n.nid = a.nid
        WHERE n.type = 'artwork'
        ORDER BY n.uid, a.field_order_value;

    Which gives me data like this:

        nid  uid  field_order_value
        1    1    1
        2    1    3
        3    1    2
        4    2    NULL
        5    3    1
        6    3    1

    Additional relevant info:

    - nid is the primary key for an Artwork
    - every Artist has one or more Artworks
    - valid data for field_order_value is NULL, 1, 2, 3, or 4
    - field_order_value is not necessarily unique per Artist - an Artist could have 4 Artworks all with field_order_value = 1

    What I want is the row with the minimum field_order_value from my second query, joined with the artist information from the first query. In cases where the field_order_value is not valuable information (either because the Artist has used duplicate values among their Artworks or left that field NULL), I would like the row with the minimum nid from the second query.
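
    A sketch of the double groupwise minimum, assuming the schema above: order each artist's artworks by field_order_value (NULLs pushed last via COALESCE), break ties and NULL-only cases by the lowest nid, and keep one row per uid.

        # one artwork per artist: lowest field_order_value, then lowest nid
        SELECT n.uid, n.nid, a.field_order_value
        FROM node n
        JOIN content_type_artwork a ON a.nid = n.nid
        WHERE n.type = 'artwork'
          AND n.nid = (
              SELECT n2.nid
              FROM node n2
              JOIN content_type_artwork a2 ON a2.nid = n2.nid
              WHERE n2.type = 'artwork' AND n2.uid = n.uid
              ORDER BY COALESCE(a2.field_order_value, 999), n2.nid
              LIMIT 1
          );

    Joining that on uid to the artist query gives the combined result. On the sample data it returns nid 1 for uid 1, nid 4 for uid 2 (the NULL case), and nid 5 for uid 3 (the duplicate case).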


  • Using entity framework to connect to multiple similar tables in .net MVC.

    - by Dite
    A relative newcomer to .NET MVC2 and the Entity Framework, I am working on a project which requires a single web application (C# .NET 4) to connect to multiple different databases depending on the route of access (i.e. subdomain). No problem with this in principle, and all the logic is written to transform the subdomain into an entity connection and pass this through to the entity model.

    The problem comes from the fact that the different databases, whilst largely similar in structure, each contain 3 or 4 unique tables bespoke to that instance. To my mind there are two ways to solve this issue, and I am not sure either will be possible:

    1. Use a separate entity model for each database. Attempts down this route have thrown up conflicts where table/SP names are the same across different databases, or implicit conversion errors when I try to put the different models in different namespaces.

    2. Override the classes which refer to the changeable database objects based on the value of a base controller property. I have found nothing to suggest I can even do this.

    My question is whether either of these routes can work in principle, or whether I should just give up on the EF and connect to the databases directly using ADO. Perhaps there is another way to solve this problem I haven't thought of? Thanks for any help...


  • Django Aggregation Across Reverse Relationship

    - by Tom
    Given these two models:

        class Profile(models.Model):
            user = models.ForeignKey(User, unique=True, verbose_name=_('user'))
            about = models.TextField(_('about'), blank=True)
            zip = models.CharField(max_length=10, verbose_name='zip code', blank=True)
            website = models.URLField(_('website'), blank=True, verify_exists=False)

        class ProfileView(models.Model):
            profile = models.ForeignKey(Profile)
            viewer = models.ForeignKey(User, blank=True, null=True)
            created = models.DateTimeField(auto_now_add=True)

    I want to get all profiles sorted by total views. I can get a list of profile ids sorted by total views with:

        ProfileView.objects.values('profile').annotate(Count('profile')).order_by('-profile__count')

    But that's just a list of dictionaries holding profile ids, which means I then have to loop over it and put together a list of profile objects. That costs a number of additional queries and still doesn't result in a QuerySet. At that point, I might as well drop to raw SQL.

    Before I do, is there a way to do this from the Profile model? ProfileViews are related via a ForeignKey field, but it's not as though the Profile model knows that, so I'm not sure how to tie the two together.

    As an aside, I realize I could just store views as a property on the Profile model, and that may turn out to be what I do here, but I'm still interested in learning how to better use the aggregation functions.
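
    The Profile model does know about the relation: Django exposes the reverse side of the ForeignKey for aggregation under the lowercased model name (profileview here, since no related_name is set). A sketch:

        from django.db.models import Count

        profiles = (Profile.objects
                    .annotate(view_count=Count('profileview'))
                    .order_by('-view_count'))

        for p in profiles:
            print p.user, p.view_count   # real Profile instances, one query

    This returns a proper QuerySet of Profile objects with the count attached, so no second pass is needed. (print is written Python 2 style to match the Django 1.1-era code in the question.)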


  • Easiest way to remove Keys from a 2D Array?

    - by dbemerlin
    Hi, I have an array that looks like this:

        array(
            0 => array(
                'key1' => 'a',
                'key2' => 'b',
                'key3' => 'c'
            ),
            1 => array(
                'key1' => 'c',
                'key2' => 'b',
                'key3' => 'a'
            ),
            ...
        )

    I need a function to get an array containing just a (variable) number of keys, i.e. reduce_array(array('key1', 'key3')); should return:

        array(
            0 => array(
                'key1' => 'a',
                'key3' => 'c'
            ),
            1 => array(
                'key1' => 'c',
                'key3' => 'a'
            ),
            ...
        )

    What is the easiest way to do this? If possible without any additional helper function like array_filter or array_map, as my coworkers already complain about me using too many functions. The source array will always have the given keys, so it's not required to check for existence. Bonus points if the values are unique (the keys will always be related to each other, meaning that if key1 has value a then the other key(s) will always have value b).

    My current solution, which works but is quite clumsy (even the name is horrible, but I can't find a better one):

        function get_unique_values_from_array_by_keys(array $array, array $keys)
        {
            $result = array();
            $found = array();
            if (count($keys) > 0) {
                foreach ($array as $item) {
                    if (in_array($item[$keys[0]], $found)) continue;
                    array_push($found, $item[$keys[0]]);
                    $result_item = array();
                    foreach ($keys as $key) {
                        $result_item[$key] = $item[$key];
                    }
                    array_push($result, $result_item);
                }
            }
            return $result;
        }

    Addition: the PHP version is 5.1.6.
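
    A sketch of a shorter version: array_intersect_key does the per-row key filtering in one call (no callbacks involved, and it exists in PHP 5.1.6), and because the keys are correlated, deduplicating on the first requested key keeps the uniqueness behaviour of the original:

        function reduce_array(array $array, array $keys)
        {
            $wanted = array_flip($keys);      // e.g. array('key1' => 0, 'key3' => 1)
            $result = array();
            $found  = array();
            foreach ($array as $item) {
                $value = $item[$keys[0]];
                if (isset($found[$value])) continue;   // duplicate row, skip
                $found[$value] = true;                 // hash lookup, not an in_array scan
                $result[] = array_intersect_key($item, $wanted);
            }
            return $result;
        }

    Using $found as a set (isset) instead of in_array also drops the linear scan per row that the original does.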


  • Database cache that I'm not aware of?

    - by Martin
    I'm using ASP.NET MVC, LINQ to SQL, IIS7 and SQL Server Express 2008. I get these intermittent server errors: primary key conflicts on insertion. I'm using a different setup on my development computer, so I can't debug. After a while they go away, and restarting IIS helps. I'm getting the feeling there is a cache somewhere that I'm not aware of. Can somebody help me sort out these errors?

        Cannot insert duplicate key row in object 'dbo.EnquiryType' with unique index 'IX_EnquiryType'.

    Edits regarding Venemo's answer:

    - Is it possible that another application is also accessing the same database simultaneously? Yes there is, but not this particular table, and no inserts or updates. There is one other table with which I experience the same problem, but it has to do with a different part of the model.
    - How often and in what context do you create a new DataContext instance? Only once, using the singleton pattern.
    - Are the primary keys generated by the database or by the application? The database.
    - Which version of ASP.NET MVC and which version of .NET are you using? RC2 and 3.5.
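
    The singleton DataContext is the likeliest culprit: a LINQ to SQL DataContext is not thread-safe and keeps an identity map of every row it has ever materialized, which behaves exactly like an invisible cache until IIS restarts. The usual fix is one short-lived context per request or unit of work. A sketch, where MyDataContext and the Name property are stand-ins for the real names:

        public class EnquiryTypeRepository
        {
            public void Add(string name)
            {
                // New context per unit of work; disposed immediately after.
                using (var db = new MyDataContext())
                {
                    db.EnquiryTypes.InsertOnSubmit(new EnquiryType { Name = name });
                    db.SubmitChanges();
                }
            }
        }

    Contexts are cheap to construct (the connection pool does the heavy lifting), so this costs little and removes both the stale-cache and the concurrent-insert symptoms.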


  • Insert or Update using Oracle and PL/SQL

    - by Shane
    I have a PL/SQL function that performs an update/insert on an Oracle database to maintain a target total, and returns the difference between the existing value and the new value. Here is the code I have so far:

        FUNCTION calcTargetTotal(accountId VARCHAR2, newTotal NUMERIC) RETURN NUMBER IS
            oldTotal   NUMERIC(20,6);
            difference NUMERIC(20,6);
        BEGIN
            difference := 0;
            BEGIN
                SELECT value INTO oldTotal
                FROM target_total
                WHERE account_id = accountId
                FOR UPDATE OF value;

                IF (oldTotal != newTotal) THEN
                    UPDATE target_total SET value = newTotal
                    WHERE account_id = accountId;
                    difference := newTotal - oldTotal;
                END IF;
            EXCEPTION
                WHEN NO_DATA_FOUND THEN
                    BEGIN
                        difference := newTotal;
                        INSERT INTO target_total (account_id, value)
                        VALUES (accountId, newTotal);
                    -- sometimes a race condition occurs and this stmt fails;
                    -- in those cases try to update again
                    EXCEPTION
                        WHEN DUP_VAL_ON_INDEX THEN
                            BEGIN
                                difference := 0;
                                SELECT value INTO oldTotal
                                FROM target_total
                                WHERE account_id = accountId
                                FOR UPDATE OF value;

                                IF (oldTotal != newTotal) THEN
                                    UPDATE target_total SET value = newTotal
                                    WHERE account_id = accountId;
                                    difference := newTotal - oldTotal;
                                END IF;
                            END;
                    END;
            END;
            RETURN difference;
        END calcTargetTotal;

    This works as expected in unit tests, with multiple threads never failing. However, when loaded on a live system we have seen it fail with a stack trace looking like this:

        ORA-01403: no data found
        ORA-00001: unique constraint () violated
        ORA-01403: no data found

    The line numbers (which I have removed since they are meaningless out of context) verify that the first SELECT fails due to no data, the INSERT fails due to uniqueness, and the second SELECT fails with no data, which should be impossible. From what I have read in other threads, a MERGE statement is also not atomic and could suffer similar problems. Does anyone have any ideas how to prevent this from occurring?
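
    The sequence is explainable if the competing transaction rolls back, or if rows can also be deleted: the second SELECT runs with a fresh read-consistent view, so the row the unique index saw at INSERT time can be gone again by then. A defensive pattern, sketched below, is to retry the whole read-or-insert in a loop, so whichever exception the race produces, the next iteration sees the settled state:

        FUNCTION calcTargetTotal(accountId VARCHAR2, newTotal NUMERIC) RETURN NUMBER IS
            oldTotal   NUMERIC(20,6);
            difference NUMERIC(20,6) := 0;
        BEGIN
            LOOP
                BEGIN
                    SELECT value INTO oldTotal
                    FROM target_total
                    WHERE account_id = accountId
                    FOR UPDATE OF value;

                    IF oldTotal != newTotal THEN
                        UPDATE target_total SET value = newTotal
                        WHERE account_id = accountId;
                        difference := newTotal - oldTotal;
                    END IF;
                    RETURN difference;          -- row existed and is locked: done
                EXCEPTION
                    WHEN NO_DATA_FOUND THEN
                        BEGIN
                            INSERT INTO target_total (account_id, value)
                            VALUES (accountId, newTotal);
                            RETURN newTotal;    -- we created the row: done
                        EXCEPTION
                            WHEN DUP_VAL_ON_INDEX THEN
                                NULL;           -- lost the race: loop and re-select
                        END;
                END;
            END LOOP;
        END calcTargetTotal;

    The loop converges because each iteration either finds and locks the row or inserts it; only a concurrent insert-then-rollback (or a delete) forces another pass.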


  • Building a system that allows users to see a video only once

    - by Bart van Heukelom
    My client wants to distribute a video to some people, specifically car dealers, but he doesn't want the video to end up on YouTube or something like that. Therefore he wants the recipients of the video to be able to see it only once. My idea for implementing this is:

    1. Generate a unique key per viewer.
    2. Send each viewer a link to a page with a Flash-based video player, with their key in the URL.
    3. Have Flash get the video from the server. On the server the key is checked and the file sent (using PHP's readfile or something equivalent). Then the key is invalidated.

    I was thinking this wouldn't take more than a day to build. I know that if you want somebody to be able to play something, you implicitly give them the power to record it as well, but the client just wants me to make it as hard as possible.

    Do you think this is secure enough for the intended audience? Anything else I can do to add some security without exceeding the development time of 1 day? I'm also interested in ready-made solutions, if they exist.
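
    The server-side check is only a few lines. A sketch with a hypothetical keys table, where the key is marked spent atomically in the same statement that tests it (so two simultaneous requests can't both stream):

        <?php
        // Hypothetical schema: video_keys(key_value CHAR(32) PRIMARY KEY, used TINYINT)
        $pdo = new PDO('mysql:host=localhost;dbname=videos', 'user', 'pass');

        $stmt = $pdo->prepare(
            'UPDATE video_keys SET used = 1 WHERE key_value = ? AND used = 0');
        $stmt->execute(array($_GET['key']));

        if ($stmt->rowCount() === 1) {       // exactly one row flipped: key was fresh
            header('Content-Type: video/x-flv');
            readfile('/path/outside/webroot/video.flv');
        } else {
            header('HTTP/1.1 403 Forbidden');
        }

    Keeping the file outside the web root matters as much as the key check; otherwise the player's direct URL can simply be shared.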


  • Storing a jpa entity where only the timestamp changes results in updates rather than inserts (desire

    - by David Schlenk
    I have a JPA entity that stores an FK id, a boolean, and a timestamp:

        @Entity
        public class ChannelInUse implements Serializable {

            @Id
            @GeneratedValue
            private Long id;

            @ManyToOne
            @JoinColumn(nullable = false)
            private Channel channel;

            private boolean inUse = false;

            @Temporal(TemporalType.TIMESTAMP)
            private Date inUseAt = new Date();

            ...
        }

    I want every new instance of this entity to result in a new row in the table. For whatever reason, no matter what I do, it always results in the existing row getting updated with a new timestamp value rather than a new row being created. I even tried to just use a native query to run an INSERT, but the channel's ID wasn't populated yet, so I gave up on that.

    I've tried using an embedded id class consisting of channel.getId() and inUseAt. My equals and hashCode methods are:

        public boolean equals(Object obj) {
            if (this == obj)
                return true;
            if (!(obj instanceof ChannelInUse))
                return false;
            ChannelInUse ciu = (ChannelInUse) obj;
            return ((this.inUseAt == null ? ciu.inUseAt == null : this.inUseAt.equals(ciu.inUseAt))
                    && (this.inUse == ciu.inUse)
                    && (this.channel == null ? ciu.channel == null : this.channel.equals(ciu.channel)));
        }

        /**
         * hashCode generated from the at, channel and inUse properties.
         */
        public int hashCode() {
            int hash = 1;
            hash = hash * 31 + (this.inUseAt == null ? 0 : this.inUseAt.hashCode());
            hash = hash * 31 + (this.channel == null ? 0 : this.channel.hashCode());
            if (inUse)
                hash = hash * 31 + 1;
            else
                hash = hash * 31 + 0;
            return hash;
        }

    I've tried using Hibernate's Entity annotation with mutable = false. I'm probably just not understanding what makes an entity unique or something. I hit Google pretty hard but can't figure this one out.
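
    With a @GeneratedValue Long id, "new row" is decided purely by the id field: persist() on an instance whose id is null INSERTs, while merge() (or re-saving an instance that already has an id) UPDATEs. So the usual cause of this symptom is reusing one entity instance and just changing its timestamp. A sketch of the insert-per-event shape, with hypothetical setter names:

        // Each usage event is a brand-new instance: id is null,
        // so persist() has to INSERT rather than UPDATE.
        public void recordUsage(EntityManager em, Channel channel) {
            ChannelInUse ciu = new ChannelInUse();
            ciu.setChannel(channel);
            ciu.setInUse(true);
            ciu.setInUseAt(new Date());
            em.persist(ciu);        // never em.merge(previousInstance)
        }

    The equals/hashCode pair doesn't influence INSERT vs UPDATE at all; it only matters for collections and detached-instance comparisons, so it can stay keyed on the business fields.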


  • Is this an acceptable use of "ASCII arithmetic"?

    - by jmgant
    I've got a string value of the form 10123X123456, where 10 is the year, 123 is the day number within the year, and the rest is unique system-generated stuff. Under certain circumstances, I need to add 400 to the day number, so that the number above, for example, would become 10523X123456.

    My first idea was to substring those three characters, convert them to an integer, add 400 to it, convert it back to a string, and then call replace on the original string. That works. But then it occurred to me that the only character I actually need to change is the third one, and that its original value would always be 0-3, so there would never be any "carrying" problems. It further occurred to me that the code points for the digits are consecutive, so adding the number 4 to the character '0', for example, would result in '4', and so forth. So that's what I ended up doing.

    My question is: is there any reason that won't always work? I generally avoid "ASCII arithmetic" on the grounds that it's not cross-platform or internationalization friendly. But it seems reasonable to assume that the code points for digits will always be sequential, i.e., '4' will always be 1 more than '3'. Anybody see any problem with this reasoning? Here's the code:

        string input = "10123X123456";
        input[2] += 4;   // output should be 10523X123456
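
    For digits specifically, this is safe by the letter of the language standards: both C and C++ require '0' through '9' to have consecutive values in the execution character set on every conforming platform (no such guarantee exists for letters, as EBCDIC's non-contiguous alphabet shows). A sketch making the contract explicit, assuming the snippet above is C++ (std::string allows the in-place element arithmetic shown):

        #include <cassert>
        #include <string>

        int main() {
            std::string input = "10123X123456";
            // The day-hundreds digit is 0-3 by the format's contract, so +4 can't pass '9'.
            assert(input[2] >= '0' && input[2] <= '3');
            input[2] += 4;                 // '1' -> '5': adds 400 to the day number
            assert(input == "10523X123456");
        }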

