Search Results

Search found 886 results on 36 pages for 'duplicates'.

Page 10/36 | < Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • How to check duplicates for 5 textfields with PHP?

    - by jl
    Hi, I have 5 textfields for user input in a form, named username1, username2, username3, username4 and username5. I would like to know how to write my PHP code so that I can check for duplicates among the 5 textfields during POST. I can only think of comparing each pair (username1 !== username2, and so on), but there should be a simpler way to do it, right? Thank you very much.
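
    A minimal sketch of the usual shortcut, assuming the five fields arrive in $_POST under the names above: collect the values into one array and compare its length against the length of its unique values.

        <?php
        // Collect the five submitted values into one array.
        $names = array();
        for ($i = 1; $i <= 5; $i++) {
            $names[] = isset($_POST["username$i"]) ? trim($_POST["username$i"]) : '';
        }

        // If removing duplicates shrinks the array, at least two fields matched.
        if (count($names) !== count(array_unique($names))) {
            echo 'Duplicate usernames found.';
        } else {
            echo 'All usernames are unique.';
        }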

    Read the article

  • Convert filenames to their checksum before saving to prevent duplicates. Is it a smart thing to do?

    - by Xananax
    TL;DR: what the title says. I am developing a sort of image board in PHP. I was thinking of changing each image's filename to its checksum prior to saving it. This way, I might be able to prevent duplicates. I know this wouldn't work for two images that are the same but differ in size or level of compression, but this method would allow for an early check. What bugs me is that I have never seen this method implemented anywhere, so I was wondering if there is a catch to it. Maybe it is just more efficient to keep the original filename and store the hash in the DB? Maybe the whole method is just not useful and my question is moot? What do you think? On a side note, I don't really understand how hashes are calculated, so I was wondering, if my first question checks out, whether it would be possible to estimate the likelihood that two images are similar by comparing their hashes (Levenshtein distance or something of the sort).
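
    A minimal sketch of the idea, assuming uploads arrive through PHP's standard $_FILES array (the field name 'image' and the images/ directory are hypothetical): the hash of the file's contents becomes its name, so a byte-identical re-upload is caught before saving.

        <?php
        // Hash the uploaded file's contents, not its name.
        $tmp  = $_FILES['image']['tmp_name'];
        $ext  = strtolower(pathinfo($_FILES['image']['name'], PATHINFO_EXTENSION));
        $hash = sha1_file($tmp);

        $dest = "images/$hash.$ext";

        if (file_exists($dest)) {
            echo 'Exact duplicate detected; skipping.';
        } else {
            move_uploaded_file($tmp, $dest);
        }

    As for the side note: cryptographic hashes are designed so that nearly identical inputs produce completely different digests, so Levenshtein distance between hashes says nothing about image similarity; perceptual hashing is the usual tool for that.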

    Read the article

  • How to stop Visual Studio from returning duplicate search results?

    - by Franck Mesirard
    In my team, we put all our projects (only 7 large ones) in the same solution, and since some code is common between projects, we tend to include the same file in each project. This is fine and compiles/runs well. But when I do a global search in my solution, VS does a "stupid" search and goes through all the files in each project without checking whether a file has already been searched. This leads to longer searches whose results contain duplicates. Does anyone know a fix for this issue?

    Read the article

  • Oracle: Insertion on an indexed table, avoiding duplicates. Looking for tips and advice.

    - by Tom
    Hi everyone, I'm looking for the best solution (performance-wise) to achieve this. I have to insert records into a table while avoiding duplicates. For example, take table A:

        INSERT INTO A (
          SELECT DISTINCT [FIELDS]
          FROM B, C, D ...
          WHERE (JOIN CONDITIONS ON B, C, D ...)
          AND NOT EXISTS (
            SELECT * FROM A ATMP WHERE ATMP.SOMEKEY = A.SOMEKEY
          )
        );

    I have an index on A.SOMEKEY, just to optimize the NOT EXISTS subquery, but I realize that inserting into an indexed table will be a performance hit. So I was thinking of duplicating table A into a global temporary table, where I would keep the index, then removing the index from table A and executing a modified query:

        INSERT INTO A (
          SELECT DISTINCT [FIELDS]
          FROM B, C, D ...
          WHERE (JOIN CONDITIONS ON B, C, D ...)
          AND NOT EXISTS (
            SELECT * FROM GLOBAL_TEMPORARY_TABLE_A ATMP
            WHERE ATMP.SOMEKEY = A.SOMEKEY
          )
        );

    This would solve the "inserting into an indexed table" problem, but I would have to update the global temporary copy of A with each insertion I make. I'm kind of lost here. Is there a better way to achieve this? Thanks in advance.

    Read the article

  • Picasa duplicates everything into a `$My Pictures` folder.

    - by Barend
    My parents have a Windows XP laptop running Picasa, Dutch versions of both. It's configured to put imported photos in C:\Documents and Settings\user\Mijn documenten\Mijn afbeeldingen, which is Windows XP's default My Pictures folder localized to Dutch. A while back I noticed disk space was disappearing much faster than it should. It turned out there was a Documents and Settings\user\Mijn documenten\$My Pictures folder, pretty much the same size as the original one. This was well over a year ago. I figured it was an internationalization bug; the $My Pictures name looks like a placeholder that didn't get resolved. I figured it would get fixed soon. It didn't. I threw out Picasa, tediously cleaned up the mess it left behind, and replaced it with Windows Live Photo Gallery. My parents found WLPG unworkable and asked to get Picasa back. I reinstalled it; by now it's at 3.8.0 (build 117.29, 0). Wouldn't you know it, the mystery $My Pictures folder is back, 25 GB of disk space has evaporated, and it's the same mess it used to be. What's going on here? How do I stop it from doing this?

    Read the article

  • Getting duplicate count when executing INSERT IGNORE via JDBC

    - by Nickolay Komar
    Is it possible to get the duplicate count when executing a MySQL "INSERT IGNORE" statement via JDBC? For example, when I execute an INSERT IGNORE statement on the mysql command line and there are duplicates, I get something like:

        Query OK, 0 rows affected (0.02 sec)
        Records: 1  Duplicates: 1  Warnings: 0

    Note where it says "Duplicates: 1", indicating that there were duplicates that were ignored. Is it possible to get the same information when executing the query via JDBC? Thanks.

    Read the article

  • WPF DataGrid duplicates new row when new item is attached to the source collection.

    - by Shimmy
        <Page>
          <Page.Resources>
            <data:Quote x:Key="Quote"/>
          </Page.Resources>
          <tk:DataGrid DataContext="{Binding Quote}" ItemsSource="{Binding Rooms}" />
        </Page>

    Code:

        Private Sub InitializingNewItem(sender As DataGrid, ByVal e As InitializingNewItemEventArgs) _
                Handles dgRooms.InitializingNewItem

            Dim room = DirectCast(e.NewItem, Room) 'Room is a subclass of EntityObject
            Dim state = room.EntityState           'Detached
            Dim quote = Resources("Quote")
            state = quote.EntityState              'Unchanged

            'Either one of these lines causes the new row to be duplicated:
            quote.Rooms.Add(room)
            room.Quote = quote

            'I tried:
            'sender.Items.Refresh()
            'I also tried removing the detached entity from the DataGrid and creating a
            'new item, but that throws exceptions saying the Items collection is untouchable.
        End Sub

    Read the article

  • How can I 'transpose' my data using SQL and remove duplicates at the same time?

    - by Remnant
    I have the following data structure in my database:

        LastName  FirstName  CourseName
        John      Day        Pricing
        John      Day        Marketing
        John      Day        Finance
        Lisa      Smith      Marketing
        Lisa      Smith      Finance
        etc...

    The data shows employees within a business and the courses they have shown a preference to attend. The number of courses per employee varies (e.g., above, John has 3 courses and Lisa 2). I need to take this data from the database and pass it to a webpage view (ASP.NET MVC). I would like the data that comes out of my database to match the view as closely as possible, and want to transform the data using SQL so that it looks like the following:

        LastName  FirstName  Course1    Course2    Course3
        John      Day        Pricing    Marketing  Finance
        Lisa      Smith      Marketing  Finance

    Any thoughts on how this may be achieved? Note: one of the reasons I am trying this approach is that the original data structure does not easily lend itself to being iterated over with the typical MVC syntax:

        <% foreach (var item in Model.courseData) { %>

    Because of the duplication of names in the original data, I would end up with lots of conditionals in my view, which I would like to avoid. I have tried transforming the data using C# in my ViewModel, but found it tough going and feel I could lighten the workload by leveraging SQL before I return the data. Thanks.
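
    The reshaping can also be done after the query, by grouping rows per person so the view iterates over one entry per employee. A sketch of that grouping, written in PHP purely for illustration (the same loop translates directly to a C# ViewModel; $rows is a hypothetical result set from the original query):

        <?php
        // $rows: one record per (person, course), as returned by the flat query.
        $people = array();
        foreach ($rows as $r) {
            $key = $r['LastName'] . '|' . $r['FirstName'];
            if (!isset($people[$key])) {
                $people[$key] = array(
                    'LastName'  => $r['LastName'],
                    'FirstName' => $r['FirstName'],
                    'courses'   => array(),
                );
            }
            $people[$key]['courses'][] = $r['CourseName'];
        }
        // Each entry now carries a variable-length course list, so the view
        // needs a single loop and no duplicate-name conditionals.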

    Read the article

  • How many times can you randomly generate a GUID before you risk duplicates? (.NET)

    - by SLC
    Mathematically, I suppose it's possible that even two random GUIDs generated using the built-in method in the .NET Framework are identical, but roughly how likely are they to clash if you generate hundreds or thousands? If you generated one for every copy of Windows in the world, would they clash? The reason I ask is that I have a program that creates a lot of objects, and destroys some too, and I am wondering about the likelihood of any of those objects (including the destroyed ones) having identical GUIDs.
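
    For a rough sense of scale: .NET's Guid.NewGuid() produces version-4 GUIDs, which carry 122 random bits, and the birthday approximation puts the collision probability for n of them at about n^2 / (2 * 2^122). A quick sanity check of those orders of magnitude (a sketch of the arithmetic only, not .NET code):

        <?php
        // Birthday bound: p ≈ n^2 / (2 * 2^122) for n random 122-bit values.
        function guidCollisionProbability($n) {
            return ($n * $n) / (2 * pow(2, 122));
        }

        printf("n = 1e3: p ≈ %.1e\n", guidCollisionProbability(1e3));  // ~9.4e-32
        printf("n = 1e9: p ≈ %.1e\n", guidCollisionProbability(1e9));  // ~9.4e-20

    So thousands of GUIDs are nowhere near risky, and even a billion of them leaves the odds around 10^-19.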

    Read the article

  • Is it valid for Hibernate list() to return duplicates?

    - by skaffman
    Is anyone aware of whether it is valid for Hibernate's Criteria.list() and Query.list() methods to return multiple occurrences of the same entity? Occasionally I find, when using the Criteria API, that changing the default fetch strategy in my class mapping definition (from "select" to "join") can affect how many references to the same entity appear in the resulting output of list(), and I'm unsure whether to treat this as a bug. The javadoc does not define the behaviour; it simply says "The list of matched query results" (thanks, guys). If this is expected, normal behaviour, then I can de-dup the list myself, that's not a problem; but if it's a bug, then I would prefer to avoid it rather than de-dup the results and try to ignore it. Anyone got any experience of this?

    Read the article

  • MS Access: mark duplicates in order of appearance using the function RankOfDup: (SELECT Count(*) ...)

    - by veska stoyanova
    I'm trying to create a ranking that shows the sequence of agreements for the two fields Customer and Agreement. The number for an agreement must be unique, whereas customers can repeat. The formula

        RankOfDup: (SELECT Count(*) FROM Data a
                    WHERE a.customer = Data.customer And a.agreement >= Data.agreement)

    works beautifully, but after this query, with columns Agreement, Customer and RankOfDup, I need to create a crosstab that transposes the RankOfDup. It works when I make a table first and then create the crosstab query, but my data is too large, so I'm trying to put the select query with the ranking directly into the crosstab query. However, when I try to do this, Access gives an error message that the Microsoft Jet engine ... doesn't recognise Data.customer. Any ideas how I can fix this?

    Read the article

  • What is the fastest way to find duplicates in multiple BIG txt files?

    - by user2950750
    I am really in deep water here and I need a lifeline. I have 10 txt files. Each file has up to 100,000,000 lines of data. Each line is simply a number representing something else. Numbers go up to 9 digits. I need to (somehow) scan these 10 files and find the numbers that appear in all 10 files. And here comes the tricky part: I have to do it in less than 2 seconds.

    I am not a developer, so I need an explanation for dummies. I have done enough research to learn that hash tables and map-reduce might be something I can make use of. But can they really be used to make it this fast, or do I need more advanced solutions?

    I have also been thinking about cutting the files up into smaller files, so that 1 file with 100,000,000 lines is transformed into 100 files with 1,000,000 lines each. But I do not know what is best: 10 files with 100 million lines, or 1000 files with 1 million lines? When I try to open the 100-million-line file, it takes forever, so I think maybe it is just too big to be used. But I don't know if you can write code that will scan it without opening it.

    Speed is the most important factor here, and I need to know whether it can be done as fast as I need, or whether I have to store my data another way, for example in a database like MySQL. Thank you in advance to anybody who can give some good feedback.
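
    To show the shape of the usual answer, here is a sketch of the single-pass set-intersection idea in PHP. This only illustrates the algorithm: PHP arrays holding ~100 million keys will not meet a 2-second budget, so a real version would apply the same logic with a bit array (the numbers fit in 9 digits, i.e. under 10^9 slots) in a compiled language. The data/ directory layout is hypothetical.

        <?php
        // Start with the numbers in the first file, then strike out anything
        // missing from each later file; the candidate set only ever shrinks.
        $files = glob('data/*.txt');

        $common = array();
        $fh = fopen($files[0], 'r');
        while (($line = fgets($fh)) !== false) {
            $common[trim($line)] = true;
        }
        fclose($fh);

        for ($i = 1; $i < count($files); $i++) {
            $seen = array();
            $fh = fopen($files[$i], 'r');
            while (($line = fgets($fh)) !== false) {
                $n = trim($line);
                if (isset($common[$n])) {   // only numbers still in the running
                    $seen[$n] = true;
                }
            }
            fclose($fh);
            $common = $seen;
        }

        // $common now holds exactly the numbers present in all files.

    Each file is read exactly once, so the total work is linear in the data size; at this scale the bottleneck is disk throughput, not the algorithm.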

    Read the article

  • How to search an array for duplicates using a single array?

    - by patrick
    I am checking a list of 10 spots, each spot with 3 top users, to see if a user is already in an array before echoing and storing:

        foreach ($top_10['top_10'] as $top10) { // loop over each spot
            $getuser = ltrim($top10['url'], " users/"); // strip url
            if ($usercount < 3) {
                if ((array_search($getuser, $array) !== true)) {
                    echo $getuser;
                    $array[$c++] = $getuser;
                } else {
                    echo "duplicate <br /><br />";
                }
            }
        }

    The problem I am having is that on every loop it creates a separate array for some reason, which only allows array_search to search the current array and not all of them combined. I want to store everything in the same $array. This is what I see after a print_r($array):

        Array ( [0] => 120728 [1] => 205247 )
        Array ( [0] => 232123 [1] => 091928 )
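
    A sketch of the usual fix, guessing at the surrounding code (which isn't shown): array_search() returns the matching key or false, never true, so the !== true test always passes; and if $array is declared inside an outer loop, it is re-created on every pass, which is what the two separate print_r dumps suggest.

        <?php
        $array = array();   // declare once, outside every loop

        foreach ($top_10['top_10'] as $top10) {
            $getuser = ltrim($top10['url'], " users/");
            if ($usercount < 3) {
                // in_array() returns a clean true/false for membership tests.
                if (!in_array($getuser, $array, true)) {
                    echo $getuser;
                    $array[] = $getuser;   // append without a manual counter
                } else {
                    echo "duplicate <br /><br />";
                }
            }
        }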

    Read the article

  • Why does Xcode keep downloading old deleted profiles and duplicates of the same profile?

    - by Piepants
    If I refresh the profiles in Xcode, it:

    a) pulls down profiles that no longer exist, i.e. ones that have been deleted from the portal; and

    b) pulls down multiple copies of the same profile: if I add a new device and then update the profiles to include it, Xcode pulls down the new updated profile, but also the same profile with an older date (even though the portal shows only one, the latest).

    If I delete them in Xcode, they re-appear. I'm having problems getting push notifications to work with an ad-hoc distribution, so I want to ensure I am building with the latest profiles. This behaviour of Xcode is irritating at best, and possibly the source of my problems at worst.

    Read the article

  • How do I filter or retain duplicates in Perl?

    - by manu
    I have a text string containing duplicate characters (e.g. FFGGHHJKL). These can be made unique using a positive lookahead:

        $ perl -pe 's/(.)(?=.*?\1)//g'

    For example, with "FFEEDDCCGG" the output is "FEDCG". My question is how to make it work on whole numbers rather than single characters (e.g. for

        212 212 43 43 5689 6689 5689 71 81

    the output should be 212 43 5689 6689 71 81). Also, if we want only the duplicated records to be given as output from a file having n rows, e.g.

        212 212 43 43 5689 6689 5689 71 81 66 66 67 68 69 69 69 71 71 52 ..

    the output should be:

        212 212 43 43 5689 5689 66 66 69 69 69 71 71

    How can I do this?

    Read the article
