Search Results

Search found 61631 results on 2466 pages for 'duplicate data'.

  • NOT LIKE not working on comparison to a column

    - by rodling
    The data is fairly large and takes a few minutes to run every time, so debugging this problem is taking a lot of time. When I run like concat('%',T.item,'%') on smaller data it seems to identify items properly. However, when I run it on the main DB (the code shown below), it still shows many (maybe even all) of the exceptions. EDIT: it seems that when I add NOT it stops identifying items.

        select distinct T.comment
        from (select comment, source, item
              from data, non_informative
              where ticker != "O" and source != 7 and source != 6) as T
        where T.comment not like concat('%', T.item, '%')
        order by T.comment;

    comment and source are in data; item is in non_informative. Some items from T.item: 'Stock Analysis -', '#InsideTrades', 'IIROC Trade'. An example comment which should be removed: '#InsideTrades #4 | MACNAB CRAIG (Director,Officer,Chief Executive Officer): Filed Form 4 for $NNN (NATIONAL RETA'. I can't seem to figure out why it shows all the items.

  • Combine MD5 hashes of multiple files

    - by user685869
    I have 7 files that I'm generating MD5 hashes for. The hashes are used to ensure that a remote copy of the data store is identical to the local copy. Unfortunately, the link between these two copies of the data is mind numbingly slow. Changes to the data are very rare but I have a requirement that the data be synchronized at all times (or as soon as possible). Rather than passing 7 different MD5 hashes across my (extremely slow) communications link, I'd like to generate the hash for each file and then combine these hashes into a single hash which I can then transfer and then re-calculate/use for comparison on the remote side. If the "combined hash" differs, then I'd start sending the 7 individual hashes to determine exactly which file(s) have been changed. For example, here are the MD5 hashes for the 7 files as of last week:

        0709d609d69385255c496436eb50402c
        709465a74411bd596595c7b9b158ae6a
        4ab657320ef33e3d5eb498e4c13d41b7
        3b49c6ab199994fd776bb63761414e72
        0fc28c5a010fc3c06c0c930c88e31a15
        c4ecd214662cac5aae0e53f6f252bf0e
        8b086431e43148a2c2d943ba30d31cc6

    I'd like to combine these hashes together such that I get a single unique value (perhaps another MD5 hash?) that I can then send to the remote system. On the remote system, I'd then perform the same calculation to determine if the data as a whole has been changed. If it has, then I'd start sending the individual hashes, etc. The most important factor is that my "combined hash" be short enough so that it uses less bandwidth than just sending all 7 hashes in the first place. I thought of writing the 7 MD5 hashes to a file and then hashing that file but is there a better way?
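
    One straightforward way to build such a "combined hash" is to hash the per-file digests themselves, in a fixed file order, so both sides derive the same 32-character value. A minimal Python sketch of the idea (the file names are placeholders):

        import hashlib

        # Hypothetical file list; the order must be identical on both sides,
        # otherwise the combined hash differs even when the files match.
        files = ["a.dat", "b.dat", "c.dat", "d.dat", "e.dat", "f.dat", "g.dat"]

        def file_md5(path):
            h = hashlib.md5()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(8192), b""):
                    h.update(chunk)
            return h.digest()

        # Combined hash: feed each per-file digest into one more MD5 pass.
        combined = hashlib.md5()
        for path in files:
            combined.update(file_md5(path))
        print(combined.hexdigest())   # single 32-character value to send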

  • How to create unique user key

    - by Grayson Mitchell
    Scenario: I have a fairly generic table (Data) that has an identity column. The data in this table is grouped (let's say by city). The users need an identifier for printing on paper forms, etc. The users can only access their own city's data, so if they use the identity column for this purpose they will see odd numbers (e.g. a 'New York' user might see 1, 37, 2028... as the listed keys). Ideally they would see 1, 2, 3... (or something similar). The problem of course is concurrency; this being a web application, you can't just have something like:

        UserId = Select Count(*)+1 from Data Where City='New York'

    Has anyone come up with any cunning ways around this problem?
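
    One option, rather than storing a second key, is to keep the identity column as the real key and derive a per-city display number at read time. A rough Python sketch with an in-memory SQLite table standing in for the Data table (table and column names are illustrative only):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE Data (Id INTEGER PRIMARY KEY, City TEXT, Payload TEXT)")
        conn.executemany("INSERT INTO Data VALUES (?, ?, ?)",
                         [(1, "New York", "a"), (37, "New York", "b"),
                          (5, "Boston", "c"), (2028, "New York", "d")])

        # Display-time numbering: rank the rows within the city instead of
        # persisting a second, per-city counter (which is where the concurrency
        # pain comes from).
        cur = conn.execute("SELECT Id, Payload FROM Data WHERE City = ? ORDER BY Id",
                           ("New York",))
        for display_no, (real_id, payload) in enumerate(cur, start=1):
            print(display_no, real_id, payload)   # prints 1, 2, 3... for ids 1, 37, 2028

    The trade-off is that the printed number is positional, so it can shift if earlier rows are ever deleted; if the number must be permanent, it has to be allocated and stored once per city instead.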

  • SQL Server float datatype

    - by Martin Smith
    The documentation for SQL Server float says:

        Approximate-number data types for use with floating point numeric data. Floating point data is approximate; therefore, not all values in the data type range can be represented exactly.

    Which is what I expected it to say. If that is the case though, why does the following return 'Yes' in SQL Server

        DECLARE @D float
        DECLARE @E float
        set @D = 0.1
        set @E = 0.5
        IF ((@D + @D + @D + @D + @D) = @E)
        BEGIN
            PRINT 'YES'
        END
        ELSE
        BEGIN
            PRINT 'NO'
        END

    but the equivalent C++ program returns "No"?

        #include <iostream>
        using namespace std;

        int main()
        {
            float d = 0.1F;
            float e = 0.5F;
            if ((d + d + d + d + d) == e) {
                cout << "Yes";
            } else {
                cout << "No";
            }
        }
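
    A quick way to see the "approximate" part in action from Python (included here for comparison only): the decimal value 0.1 is not exactly representable in binary floating point, so whether a chain of additions compares equal to a constant depends on how each intermediate result is rounded at the precision in use. Note also that SQL Server's float defaults to 53-bit (double) precision while the C++ snippet uses single-precision float, so the two programs are not evaluating the same arithmetic.

        from decimal import Decimal

        # The exact value a 64-bit double stores for the literal 0.1:
        print(Decimal(0.1))
        # 0.1000000000000000055511151231257827021181583404541015625

        # With doubles, some sums of inexact values still round back to the
        # "expected" result and others don't, purely as an artifact of
        # intermediate rounding:
        print(0.1 + 0.1 + 0.1 + 0.1 + 0.1 == 0.5)   # True: this sum rounds to exactly 0.5
        print(0.1 + 0.1 + 0.1 == 0.3)               # False with IEEE 754 doubles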

  • Best Practices For Secure APIs?

    - by Ferrett Steinmetz
    Let's say I have a website that has a lot of information on our products. I'd like some of our customers (including us!) to be able to look up our products in various ways, including:

    1) Pulling data from AJAX calls that return data in cool, JavaScripty ways
    2) Creating iPhone applications that use that data
    3) Having other web applications use that data for their own ends

    Normally, I'd just create an API and be done with it. However, this data is in fact mildly confidential - which is to say that we don't want our competitors to be able to look up all our products every morning and then automatically set their prices to undercut us. And we also want to be able to look at who might be abusing the system, so if someone's making ten million complex calls to our API a day and bogging down our server, we can cut them off.

    My next logical step would be to create a developers' key to restrict access - which would work fine for web apps, but not so much for any AJAX calls. (As I see it, they'd need to provide the key in the JavaScript, which is in plaintext and easily seen, and hence there's actually no security at all. Particularly if we'd be using our own developers' keys on our site to make these AJAX calls.)

    So my question: after looking around at OAuth and OpenID for some time, I'm not sure there is a solution that would handle all three of the above. Is there some sort of canonical "best practices" for developers' keys, or can OAuth and OpenID handle AJAX calls easily in some fashion I have yet to grok, or am I missing something entirely?
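
    For the server-to-server and native-app cases, one common pattern (sketched below, not a complete solution) is to issue each consumer an identifier plus a secret, have them sign every request with an HMAC and a timestamp, and rate-limit or revoke by key on the server. The names and header layout are made up for illustration. It does not solve the in-browser AJAX case, since anything embedded in page JavaScript is visible; those calls are usually kept same-origin behind the site's own session rather than routed through the public key-based API.

        import hashlib, hmac, time

        API_KEY = "client-123"            # public identifier handed to the consumer (made up)
        API_SECRET = b"shared-secret"     # known only to the consumer and the server

        def sign_request(method, path, body=b""):
            # Sign method + path + timestamp + body with the shared secret; the
            # server recomputes the HMAC, checks the timestamp window to limit
            # replay, and rate-limits or revokes per API_KEY.
            timestamp = str(int(time.time()))
            message = "\n".join([method, path, timestamp]).encode() + b"\n" + body
            signature = hmac.new(API_SECRET, message, hashlib.sha256).hexdigest()
            return {"X-Api-Key": API_KEY, "X-Timestamp": timestamp, "X-Signature": signature}

        print(sign_request("GET", "/products?id=42"))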

  • Clientside Javascript --> Serverside Java --> user is served a .doc

    - by ignorantslut
    I am helping someone out with a JavaScript-based web app (even though I know next to nothing about web development) and we are unsure about the best way to implement a feature we'd like to have. Basically, the user will be using our tool to view all kinds of boring data in tables, columns, etc. via JavaScript. We want to implement a feature where the user can click a button or link that then allows the user to download the displayed data in a .doc file. Our basic idea so far is something like:

    - call a Java function on the server with the desired data passed in as a String when the link is clicked
    - generate the .doc file on the server
    - automatically "open" a link to the file in the client's browser to initiate the download

    Is this possible? If so, is it feasible? Or can you recommend a better solution?

    Edit: the data does not reside on the server; rather, it is queried from a SQL database.
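
    The server-side half of that idea mostly comes down to responding with the generated bytes plus headers that tell the browser to download rather than render. A bare-bones WSGI sketch in Python, used here only to illustrate the HTTP side - the same Content-Type/Content-Disposition headers apply when the response comes from a Java servlet, and generate_doc is a hypothetical placeholder:

        from wsgiref.simple_server import make_server

        def generate_doc(raw_request_body):
            # Placeholder: in the real app this would render the user's table
            # data into a Word-readable document on the server side.
            return b"stub document contents"

        def app(environ, start_response):
            size = int(environ.get("CONTENT_LENGTH") or 0)
            doc_bytes = generate_doc(environ["wsgi.input"].read(size))
            # These two headers are what make the browser treat the response as
            # a file download instead of rendering it in the page.
            start_response("200 OK", [
                ("Content-Type", "application/msword"),
                ("Content-Disposition", 'attachment; filename="report.doc"'),
                ("Content-Length", str(len(doc_bytes))),
            ])
            return [doc_bytes]

        if __name__ == "__main__":
            make_server("", 8000, app).serve_forever()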

  • Concurrent Threads in C# using BackgroundWorker

    - by Jim Fell
    My C# application is such that a background worker is being used to wait for the acknowledgement of some transmitted data. Here is some pseudocode demonstrating what I'm trying to do:

        UI_thread {
            TransmitData() {
                // load data for tx
                // fire off TX background worker
            }
            RxSerialData() {
                // if received data is ack, set ack received flag
            }
        }

        TX_thread {
            // transmit data
            // set ack wait timeout
            // fire off ACK background worker
            // wait for ACK background worker to complete
            // evaluate status of ACK background worker as completed, failed, etc.
        }

        ACK_thread {
            // wait for ack received flag to be set
        }

    What happens is that the ACK BackgroundWorker times out, and the acknowledgement is never received. I'm fairly certain that it is being transmitted by the remote device, because that device has not changed at all and the C# application is transmitting. I have changed the ack thread from this (when it was working)...

        for (i = 0; (i < waitTimeoutVar) && (!bAckRxd); i++)
        {
            System.Threading.Thread.Sleep(1);
        }

    ...to this...

        DateTime dtThen = DateTime.Now;
        DateTime dtNow;
        TimeSpan stTime;
        do
        {
            dtNow = DateTime.Now;
            stTime = dtNow - dtThen;
        } while ((stTime.TotalMilliseconds < waitTimeoutVar) && (!bAckRxd));

    The latter generates a very accurate wait time compared to the former. However, I am wondering if removal of the Sleep function is interfering with the ability to receive serial data. Does C# only allow one thread to run at a time, that is, do I have to put threads to sleep at some point to allow other threads to run? Any thoughts or suggestions you may have would be appreciated. I am using Microsoft Visual C# 2008 Express Edition. Thanks.
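
    A common alternative to spinning on a flag (with or without Sleep) is to block on a synchronization object with a timeout, so the waiting worker consumes no CPU and the receive thread keeps getting scheduled; in .NET the analogous primitive would be something like a ManualResetEvent waited on with a timeout. A rough Python sketch of the shape of it, purely as an illustration of the pattern:

        import threading, time

        ack_received = threading.Event()

        def rx_path():
            # Stands in for the serial receive handler: when the ACK frame is
            # recognised, wake whoever is waiting on it.
            time.sleep(0.5)                      # simulated transmission delay
            ack_received.set()

        def wait_for_ack(timeout_s=2.0):
            # Blocks until the ACK arrives or the timeout expires - no busy loop.
            ok = ack_received.wait(timeout=timeout_s)
            ack_received.clear()
            return ok                            # True = acked, False = timed out

        threading.Thread(target=rx_path, daemon=True).start()
        print("ACK received" if wait_for_ack() else "ACK timed out")

    The boolean returned by the timed wait maps naturally onto the completed/failed status the TX worker needs to report.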

  • Update existing columns and rows within csv file using Python

    - by wilbev
    I've been attempting to use the csv module in Python to add data to existing rows and columns, but only specific columns of each row. For example, let's say my existing csv file has the following:

        id, name, city, age
        1, Ed,, 34
        2, Pat,, 23

    So basically the city of each person is missing, and I would like to update each row with that person's city. However, the writerow method only seems to replace the existing data within the csv file, and changing the open file to append mode just adds the data to a new row. Is there any way to skip the existing data and only add the city to each row? Thanks
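
    The usual approach with the csv module is read-modify-write: load all the rows, fill in the missing column, and write the whole file back out (or to a temporary file that replaces the original). A minimal sketch, where the city lookup is a placeholder for wherever the real values come from:

        import csv

        # Placeholder mapping of id -> city for the sake of the example.
        cities = {"1": "New York", "2": "Boston"}

        with open("people.csv", newline="") as f:
            rows = list(csv.DictReader(f, skipinitialspace=True))

        for row in rows:
            if not row["city"]:
                row["city"] = cities.get(row["id"], "")

        with open("people.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["id", "name", "city", "age"])
            writer.writeheader()
            writer.writerows(rows)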

  • How can I make this jQuery plugin chainable after all image load events have completed?

    - by BumbleB2na
    [UPDATE] Solution I decided on: Decided that passing in a callback to the plugin will take care of firing an event once all images have completed loading. Chaining is also still possible. (Updated Fiddle)

    I am building a chainable jQuery plugin that can load images dynamically. (View the following code as a JSFiddle.)

    Html:

        <img data-img-src="http://www.lubasf.com/blog/wp-content/uploads/2009/03/gnome.jpg" style="display: none" />
        <img data-img-src="http://buffered.io/uploads/2008/10/gnome.jpg" style="display: none" />

    Instead of adding in a src attribute, I give these images a data-img-src attribute. My plugin uses the value of that to fill the src. Also, these images are hidden to begin with.

    jQuery plugin:

        (function(jQuery) {
            jQuery.fn.loadImages = function() {
                var numToLoad = jQuery(this).length;
                var numLoaded = 0;
                jQuery(this).each(function() {
                    if (jQuery(this).attr('src') == undefined) {
                        return jQuery(this).load(function() {
                            numLoaded++;
                            if (numToLoad == numLoaded) return this; // attempt at making this plugin
                                                                     // chainable, after all .load()
                                                                     // events have completed.
                        }).attr('src', jQuery(this).attr('data-img-src'));
                    } else {
                        numLoaded++;
                        if (numToLoad == numLoaded) return this; // attempt at making this plugin
                                                                 // chainable, after all .load()
                                                                 // events have completed.
                    }
                });
                // this works if uncommented, but returns before all .load() events have completed
                //return this;
            };
        })(jQuery);

        // I want to chain a .fadeIn() after all images have completed loading
        $('img[data-img-src]').loadImages().fadeIn();

    Is there a way to make this plugin chainable, and have my fadeIn() happen after all images have loaded?

  • How to tell if JSON object is empty in jQuery

    - by GrantU
    I have the following JSON:

        {
            "meta": {
                "limit": 20,
                "next": null,
                "offset": 0,
                "previous": null,
                "total_count": 0
            },
            "objects": []
        }

    I'm interested in objects: I want to know if objects is empty and show an alert. Something like this:

        success: function (data) {
            $.each(data.objects, function () {
                if data.objects == None
                    alert(0)
                else
                    alert(1)
            });

  • show() doesn't redraw anymore

    - by Abruzzo Forte e Gentile
    Hi all. I am working in Linux and I don't know why, using Python and matplotlib commands, the chart I want is only drawn once. The first time I call show() the plot is drawn without any problem, but not the second time and the following. I close the window showing the chart between the two calls. Do you know why this happens and how to fix it? Thanks, AFG

        from numpy import *
        from pylab import *

        data = array([1, 2, 3, 4, 5])
        plot(data)
        [<matplotlib.lines.Line2D object at 0x90c98ac>]
        show()   # this call shows me a plot
        # ..now I close the window...
        data = array([1, 2, 3, 4, 5, 6])
        plot(data)
        [<matplotlib.lines.Line2D object at 0x92dafec>]
        show()   # this one doesn't show me anything
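
    One thing worth trying (behaviour depends on the matplotlib version and backend, so treat this as a suggestion rather than a definitive fix): in an interactive session, turn interactive mode on so figures are drawn as soon as they are created, and start a new figure after closing the old window instead of calling show() repeatedly. A sketch:

        import numpy as np
        import matplotlib.pyplot as plt

        plt.ion()                      # interactive mode: draw as soon as commands are issued

        plt.plot(np.array([1, 2, 3, 4, 5]))
        plt.draw()
        plt.pause(2)                   # keeps the window up when run as a script; not needed in a live session

        plt.figure()                   # start a fresh figure rather than reusing the closed one
        plt.plot(np.array([1, 2, 3, 4, 5, 6]))
        plt.draw()
        plt.pause(2)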

  • Table sorter causing slow script

    - by Sarah
    Hello guys, I have a problem I wish I could find a solution for. I created an instant chat system that works fine, and I use setTimeout to retrieve data periodically. When the messages exceeded a certain amount, a "slow script" alert started to appear. When I tracked down the problem, I found that the data takes longer to retrieve than the setTimeout interval, and since I use the jQuery table sorter, it doesn't recognize the data as a table until it has all been retrieved, so continuous slow script alerts are displayed until the whole table is retrieved. One more thing: I retrieve the data using an ajax request. Note: the number of rows to be retrieved is 257. How could I solve this problem? Any ideas?

  • How to read an XML format file into memory in C#?

    - by Nano HE
    // .NET 2.0 and VS2005 used.
    I found some code below. I am not sure whether I can extend the sample code or not? Thank you.

        if (radioButton.Checked)
        {
            MemoryStream ms = new MemoryStream();
            byte[] data = ASCIIEncoding.ASCII.GetBytes(textBox1.Text);
            ms.Write(data, 0, data.Length);
            reader = new XmlTextReader(ms);
            // some processing code
            ms.Close();
            reader.Close();
        }

    BTW, could you please help me to do some dissection of the line below?

        byte[] data = ASCIIEncoding.ASCII.GetBytes(textBox1.Text);
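
    For what it's worth, the pattern in that snippet is generic - encode the text to bytes, wrap the bytes in an in-memory stream, and hand the stream to an XML reader - so it can be extended with whatever processing is needed where the reader is used. The same idea expressed in Python, purely as an illustration of what each line corresponds to:

        import io
        import xml.etree.ElementTree as ET

        xml_text = "<root><item>42</item></root>"   # stands in for textBox1.Text

        # Equivalent of ASCIIEncoding.ASCII.GetBytes(...): turn the string into raw bytes.
        data = xml_text.encode("ascii")

        # Equivalent of wrapping the bytes in a MemoryStream and handing it to an XML reader.
        stream = io.BytesIO(data)
        tree = ET.parse(stream)
        print(tree.getroot().find("item").text)   # -> 42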

  • Page_Load after Modal Popup

    - by n0chi
    I have a page with a user control which gets some data updated via a modal popup. Upon clicking "OK" on the modal popup, the new data is written to the database, but the base page doesn't "reload" to show the updated data. How do I get that to happen?

  • How to convert a byte array of 19200 bytes in size where each byte represents 4 pixels (2 bits per pixel) to a bitmap

    - by Klinger
    I am communicating with an instrument (remote controlling it) and one of the things I need to do is to draw the instrument screen. In order to get the screen I issue a command and the instrument replies with an array of bytes that represents the screen. Below is what the instrument manual has to say about converting the response to the actual screen: "The command retrieves the framebuffer data used for the display. It is 19200 bytes in size, 2-bits per pixel, 4 pixels per byte arranged as 320x240 characters. The data is sent in RLE encoded form. To convert this data into a BMP for use in Windows, it needs to be turned into a 4BPP. Also note that BMP files are upside down relative to this data, i.e. the top display line is the last line in the BMP." I managed to unpack the data, but now I am stuck on how to actually go from the unpacked byte array to a bitmap. My background on this is pretty close to zero and my searches have not revealed much either. I am looking for directions and/or articles I could use to help me understand how to get this done. Any code or even pseudo code would also help. :-) So, just to summarize it all: how to convert a byte array of 19200 bytes in size, where each byte represents 4 pixels (2 bits per pixel), to a bitmap arranged as 320x240 characters. Thanks in advance.
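
    As a starting point for the unpacking step (after the RLE decode), the sketch below expands each byte into four pixels and writes an ordinary 8-bit grayscale image. The bit order within a byte and the 0-3 to grayscale mapping are guesses that may need adjusting for this particular instrument, and Pillow is used only for convenience:

        from PIL import Image

        WIDTH, HEIGHT = 320, 240

        def framebuffer_to_image(raw):                 # raw: the 19200 bytes after RLE decoding
            assert len(raw) == WIDTH * HEIGHT // 4
            pixels = bytearray()
            for byte in raw:
                # Assumed bit order: most significant 2-bit pair first; if the
                # output looks scrambled in groups of 4 pixels, reverse the shifts.
                for shift in (6, 4, 2, 0):
                    level = (byte >> shift) & 0b11     # 0..3
                    pixels.append(level * 85)          # spread 0..3 over 0..255 grayscale
            rows = [bytes(pixels[y * WIDTH:(y + 1) * WIDTH]) for y in range(HEIGHT)]
            # If the saved image comes out upside down, the source rows are in
            # BMP (bottom-up) order; reverse them here with rows.reverse().
            return Image.frombytes("L", (WIDTH, HEIGHT), b"".join(rows))

        framebuffer_to_image(bytes(19200)).save("screen.bmp")   # all-zero screen as a placeholder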

  • Detecting scroll in jQuery Mobile

    - by noway
    I can't detect whether the page has been scrolled or not with jQuery Mobile. scrollTop always returns 0 in every case.

        <script>
            var interval = setInterval(function() {
                alert($("#articlecontent").scrollTop());
                //alert($(window).scrollTop());
                //alert($("#maindiv").scrollTop());
            }, 3000);
        </script>

        <div data-role="page" id="maindiv">
            <div class="ui-bar ui-bar-b">
            </div>
            <div id='articlecontent' data-role="content" data-iscroll>
                sldfjlkjsl lksjd kls df hjks djkh sdjfkh sjkf jksd jkhsdf jkhsd hjwiuhhfg skd jkshd fkj fkjsg kjhsdkjf
                sldfjlkjsl lksjd kls df hjks djkh sdjfkh sjkf jksd jkhsdf jkhsd hjwiuhhfg skd jkshd fkj fkjsg kjhsdkjf
                sldfjlkjsl lksjd kls df hjks djkh sdjfkh sjkf jksd jkhsdf jkhsd hjwiuhhfg skd jkshd fkj fkjsg kjhsdkjf
            </div>
            <div data-role="footer" data-id="foo1">
            </div>
        </div>

  • Architecture for analysing search result impressions/clicks to improve future searches

    - by Hais
    We have a large database of items (10m+) stored in MySQL and intend to implement search on metadata on these items, taking advantage of something like Sphinx. The dataset will be changing slightly on a daily basis so Sphinx will be re-indexing daily. However we want the algorithm to self-learn and improve search results by analysing impression and click data so that we provide better results for our customers on that search term, and possibly other similar search terms too. I've been reading up on Hadoop and it seems like it has the potential to crunch all this data, although I'm still unsure how to approach it. Amazon has tutorials for compiling impression vs click data using MapReduce but I can't see how to get this data in a useable format. My idea is that when a search term comes in I query Sphinx to get all the matching items from the dataset, then query the analytics (compiled on an hourly basis or similar) so that we know the most popular items for that search term, then cache the final results using something like Memcached, Membase or similar. Am I along the right lines here?
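
    The re-ranking step described here can be as simple as blending each candidate's base relevance score from Sphinx with a smoothed click-through rate looked up from the hourly aggregates, then caching the reordered list per search term. A toy sketch of that blending (the data shapes, smoothing and weight are invented for illustration):

        def rerank(search_results, click_stats, weight=0.3):
            # search_results: [(item_id, base_score), ...] straight from the search engine.
            # click_stats: {item_id: {"impressions": n, "clicks": n}} from the hourly aggregates.
            reranked = []
            for item_id, base_score in search_results:
                stats = click_stats.get(item_id, {"impressions": 0, "clicks": 0})
                # Smoothed click-through rate so items with few impressions aren't over-boosted.
                ctr = (stats["clicks"] + 1) / (stats["impressions"] + 10)
                reranked.append((item_id, (1 - weight) * base_score + weight * ctr))
            return sorted(reranked, key=lambda pair: pair[1], reverse=True)

        print(rerank([(101, 0.9), (102, 0.8)],
                     {102: {"impressions": 500, "clicks": 200}}))
        # item 102 can overtake 101 once enough clicks accumulate for that query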

  • Zend Framework Handle One to Many

    - by user192344
    I have 2 tables, "user" and "contact"; the relation between the two tables is that one user has many contacts.

        Table member
        m_id
        name
        ------------
        Table Contact
        c_id
        c_m_id
        value

    In the Zend model classes, I do it this way:

        /* Member.php */
        class Default_Model_DbTable_Member extends Zend_Db_Table_Abstract
        {
            protected $_name = 'member';
            protected $_dependentTables = array('Default_Model_DbTable_Contact');
        }

        /* Contact.php */
        class Default_Model_DbTable_Contact extends Zend_Db_Table_Abstract
        {
            protected $_name = 'contact';
            protected $_referenceMap = array(
                'Member' => array(
                    'columns'       => array('c_id'),
                    'refTableClass' => 'Default_Model_DbTable_Member',
                    'refColumns'    => array('c_m_id')
                )
            );
        }

        /* IndexController.php */
        class IndexController extends Zend_Controller_Action
        {
            public function indexAction()
            {
                $m = new Default_Model_DbTable_Member();
                $row = $m->find(1);
                $data = $row->current();
                $data = $data->findDependentRowset('Default_Model_DbTable_Contact');
                print_r($data->toArray());
            }
        }

    But I just get "Invalid parameter number: no parameters were bound". My goal is to fetch a member's detail record that also contains an array storing all the contact info (I could use a join to do that, but I just want to try this Zend feature).

  • C++ calculation evaluated to 0

    - by Alexander Stolz
    I'm parsing a file and trying to decode coordinates to the right unit. What happens is that this code evaluates to 0. If I type it into gdb, the result is correct.

        int pLat = (int)( (argv[6].data() == "plus" ? 1 : -1)
                        * ( atoi(argv[7].data())
                          + atoi(argv[8].data()) / 60.
                          + atoi(argv[9].data()) / 36000.)
                        * 2.145767 * 0.0001);

    P.S. If someone knows a better title for this question, please change it.

  • jQuery::Ajax success never occurs

    - by Legend
    I have an ajax call in the head section of my index.html:

        $.ajax({
            method: 'get',
            url: 'php/getRecord.php?color=red',
            dataType: "json",
            success: function (data) {
                alert(data);
            }
        });

    For some reason, that alert never gets called. Am I doing something wrong? The PHP file does give me data when testing it directly.

  • Persisting and applying Linq Query to collection

    - by MattiasK
    I have a scenario where I would want a plugin to construct a LINQ (to Objects) query, send it across an appdomain, and then apply and run it against a collection of my choosing. Is this possible, and how? If not, perhaps I could send a whole method (interface) across the appdomain and run it against the data on the app side? The main thing is that I want the data to reside in the CurrentDomain and the logic for operating on it in the plugin, so I don't have to send the data across the boundary...

  • Lookups in Multi-Tenant Database

    - by Huthaifa Afanah
    I am developing a SaaS application and I am looking for the best way to design lookup tables, taking into consideration that:

    - The lookup tables will have predefined data shared among all the tenants
    - Each tenant must have the ability to extend the lookup table with his own data, e.g. adding a car class not already defined

    I am thinking about adding a TenantID column to each lookup table and inserting the predefined data with that column set to some value which represents the "Super Tenant" that belongs to the system itself.
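
    A quick sketch of how that design tends to be queried: the predefined rows carry a reserved TenantID (0 here, purely as an assumed convention) and every lookup unions them with the calling tenant's own rows, with a uniqueness constraint scoped to the tenant:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""
            CREATE TABLE car_class (
                id        INTEGER PRIMARY KEY,
                tenant_id INTEGER NOT NULL,      -- 0 = predefined/system rows (assumed convention)
                name      TEXT NOT NULL,
                UNIQUE (tenant_id, name)
            )""")
        conn.executemany("INSERT INTO car_class (tenant_id, name) VALUES (?, ?)",
                         [(0, "Economy"), (0, "Compact"), (42, "Armored Limo")])

        # A tenant sees the shared rows plus anything it added itself.
        tenant_id = 42
        rows = conn.execute(
            "SELECT name FROM car_class WHERE tenant_id IN (0, ?) ORDER BY name",
            (tenant_id,)).fetchall()
        print([name for (name,) in rows])   # ['Armored Limo', 'Compact', 'Economy']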

  • Retrieving a row of a grid

    - by madhu
    I have a data grid which gets its data from a database. After getting the data, I have to show the entire row's information in an alert box. Can anyone help me? Thanks in advance. My function code is:

        private function fetch(event:Event):void
        {
            var selectedRow:Object = event.currentTarget.selectedItem;
            Alert.show("" + selectedRow.Details);
        }

    I am calling this method on the click event of the grid.

  • @Resource annotation is null at run-time.

    - by Andrew
    I'm using GlassFish v3. The following field is declared in a class:

        @Resource
        private javax.sql.DataSource _data_source;

    The following is declared in web.xml:

        <data-source>
            <name>java:app/env/data</name>
            <class-name>com.mysql.jdbc.Driver</class-name>
            <server-name>localhost</server-name>
            <port-number>3306</port-number>
            <user>myUser</user>
            <password>myPass</password>
        </data-source>

    At run-time _data_source is null. What am I doing wrong?

  • Symmetric Encryption: Performance Questions

    - by cam
    Does the performance of a symmetric encryption algorithm depend on the amount of data being encrypted? Suppose I have about 1000 bytes I need to send over the network rapidly, is it better to encrypt 50 bytes of data 20 times, or 1000 bytes at once? Which will be faster? Does it depend on the algorithm used? If so, what's the highest performing, most secure algorithm for amounts of data under 512 bytes?
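
    A small benchmark makes the trade-off concrete: each message carries fixed per-call overhead (cipher setup, and with an authenticated mode a fresh nonce and a 16-byte tag), so twenty 50-byte encryptions generally cost more than one 1000-byte encryption. The sketch below assumes the third-party cryptography package is available and uses AES-GCM simply as a representative modern authenticated cipher, not as a recommendation tied to this question:

        import os, timeit
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        key = AESGCM.generate_key(bit_length=128)
        aead = AESGCM(key)
        payload = os.urandom(1000)
        chunks = [payload[i:i + 50] for i in range(0, 1000, 50)]   # 20 x 50 bytes

        def one_big():
            aead.encrypt(os.urandom(12), payload, None)

        def twenty_small():
            for chunk in chunks:
                aead.encrypt(os.urandom(12), chunk, None)          # separate nonce + 16-byte tag each

        print("1 x 1000 bytes :", timeit.timeit(one_big, number=2000))
        print("20 x 50 bytes  :", timeit.timeit(twenty_small, number=2000))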
