Search Results

Search found 59643 results on 2386 pages for 'data migration'.

Page 1177/2386 | < Previous Page | 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184  | Next Page >

  • Preon library problem

    - by Kamahire
    I am using the Preon library to parse binary data that contains short and int fields. The structure is as follows:

        @BoundNumber(size="32", byteOrder=ByteOrder.BigEndian)
        public int time;
        @BoundString(size="2")
        public String alphaChar;     // 2-byte array
        @BoundNumber(size="16", byteOrder=ByteOrder.BigEndian)
        public int code1;            // short
        @BoundNumber(size="16", byteOrder=ByteOrder.BigEndian)
        public int code2;            // short
        @BoundNumber(size="16", byteOrder=ByteOrder.BigEndian)
        public int code3;            // short
        @BoundString(size="8")
        public String firstName;     // 8-byte array
        @BoundString(size="8")
        public String middleName;    // 8-byte array
        @BoundString(size="8")
        public String lastName;      // 8-byte array
        @BoundNumber(size="16", byteOrder=ByteOrder.BigEndian)
        public int code4;            // short

    I am getting correct values for code1, code2 and code3, but for code4 it does not give me the correct value: it is always 0 (zero). When I check the same position in the byte array directly, I see the correct value. Is some kind of padding required?
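    A quick way to cross-check the expected layout is to decode one record outside Preon, for example with Python's struct module; the format string below is only an assumption derived from the annotations above (big-endian int32, 2-byte string, three int16s, three 8-byte strings, int16), so treat it as an offset sanity check rather than a fix:

        import struct

        # Assumed layout: >i 2s 3h 8s 8s 8s h  ->  38 bytes per record
        RECORD = struct.Struct(">i2s3h8s8s8sh")

        def dump_record(raw):
            """Unpack one record from raw bytes and print every field."""
            (time_, alpha, code1, code2, code3,
             first, middle, last, code4) = RECORD.unpack_from(raw, 0)
            print(time_, alpha, code1, code2, code3, first, middle, last, code4)

    If code4 comes out correctly here but not in Preon, the preceding fields are probably consuming a different number of bits than intended.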

    Read the article

  • Class basic operators

    - by swan
    Hi, is it necessary to have a copy constructor, destructor and operator= in a class that has only plain (non-pointer) data members?

        class myClass {
            int dm;
        public:
            myClass() { dm = 1; }
            ~myClass() { }                               // Is this line useful?
            myClass(const myClass& myObj) {              // and that operator?
                this->dm = myObj.dm;
            }
            myClass& operator=(const myClass& myObj) {   // and that one?
                if (this != &myObj) {
                    this->dm = myObj.dm;
                }
                return *this;
            }
        };

    I read that the compiler builds these for us, so it is better not to write them ourselves (when we add a data member, we would otherwise have to update each of them).

    Read the article

  • Continuously reading from a stream in C#?

    - by Damien Wildfire
    I have a Stream object that occasionally gets some data on it, but at unpredictable intervals. Messages that appear on the Stream are well-defined and declare the size of their payload in advance (the size is a 16-bit integer contained in the first two bytes of each message). I'd like to have a StreamWatcher class which detects when the Stream has some data on it. Once it does, I'd like an event to be raised so that a subscribed StreamProcessor instance can process the new message. Can this be done with C# events without using Threads directly? It seems like it should be straightforward, but I can't quite get my head around the right way to design this.
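    The framing step itself, reading a two-byte length prefix and waiting until that many payload bytes have arrived before handing the message to a subscriber, is independent of how the C# events are wired up. A minimal sketch of that buffering logic, written in Python for illustration (big-endian length is an assumption, since the question does not say):

        import struct

        class MessageFramer:
            """Accumulates raw bytes and extracts complete length-prefixed messages."""
            def __init__(self):
                self._buffer = bytearray()

            def feed(self, chunk):
                """Add newly received bytes; return any complete messages now available."""
                self._buffer.extend(chunk)
                messages = []
                while len(self._buffer) >= 2:
                    (length,) = struct.unpack_from(">H", self._buffer, 0)
                    if len(self._buffer) < 2 + length:
                        break  # wait for the rest of the payload
                    messages.append(bytes(self._buffer[2:2 + length]))
                    del self._buffer[:2 + length]
                return messages

    Each message returned by feed is the point at which the C# version would raise its event for the StreamProcessor.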

    Read the article

  • Looping through a SimpleXML object

    - by Aditya
    I have a SimpleXML object and want to read the data from it. I am new to PHP and don't quite know how to do this. I need to read [description] and [hours] from each entry. Thank you. The object details are as follows (print_r output):

        SimpleXMLElement Object (
            [@attributes] => Array ( [type] => array )
            [time-entry] => Array (
                [0] => SimpleXMLElement Object (
                    [date] => 2010-01-26
                    [description] => TCDM1 data management: sort & upload NFP SubProducers list
                    [hours] => 1.0
                    [id] => 21753865
                    [person-id] => 350501
                    [project-id] => 4287373
                    [todo-item-id] => SimpleXMLElement Object (
                        [@attributes] => Array ( [type] => integer [nil] => true )
                    )
                )
                [1] => SimpleXMLElement Object (
                    [date] => 2010-01-27
                    [description] => PDCH1: HTML
                    [hours] => 0.25
                    [id] => 21782012
                    [person-id] => 1828493
                    [project-id] => 4249185
                    [todo-item-id] => SimpleXMLElement Object (
                        [@attributes] => Array ( [type] => integer [nil] => true )
                    )
                )
            )
        )

    Please help me. I have tried a lot of things, but I can't get the syntax right.
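    For what it's worth, the dump suggests the underlying XML is a list of repeated time-entry elements, each with description and hours children. The iteration pattern itself, independent of PHP's SimpleXML API, looks like this sketch in Python with ElementTree (element names are assumed from the dump above):

        import xml.etree.ElementTree as ET

        def read_entries(xml_text):
            """Print description and hours for each time-entry element."""
            root = ET.fromstring(xml_text)
            for entry in root.findall("time-entry"):
                description = entry.findtext("description")
                hours = float(entry.findtext("hours"))
                print(description, hours)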

    Read the article

  • To use an api or store a large dataset in a rails app?

    - by Dave
    Hi all, I am working on a site that has the potential to need a LOT of space. Basically we hope to have every video game ever created stored in a database, along with an image of the cover. There are some APIs out there that might be able to help, like GiantBomb's (www.giantbomb.com). We are trying to decide whether to store the data locally (and if so, where to find that comprehensive a list), or to make calls to the API on demand. The problems with the latter are likely latency and downtime. Assuming we want to store it locally, here are the questions: 1) Where can we find this kind of data? (Yes, I looked on Google, and no, I couldn't find anything.) 2) What is the most efficient way to encode and store the images? Thanks!

    Read the article

  • Database table schema design - varchar(n). Suitable choice of N

    - by morpheous
    Coming from a C background, I may be getting too anal about this and worrying unnecessarily about bits and bytes here. Still, I can't help thinking about how the data is actually stored, and that if I choose an N which is easily factorizable into a power of 2, the database will be more efficient in how it packs data etc. Using this "logic", I have a string field in a table which has a variable length of up to 21 chars. I am tempted to use 32 instead of 21, for the reason given above; however, now I am thinking that I am wasting disk space, because there will be space allocated for 11 extra chars that are guaranteed never to be used. Since I envisage storing several tens of thousands of rows a day, it all adds up. Question: mindful of all of the above, should I declare varchar(21) or varchar(32), and why?

    Read the article

  • How to design a database schema for storing text in multiple languages?

    - by stach
    We have a PostgreSQL database, and we have several tables which need to keep certain data in several languages (the list of possible languages is thankfully defined system-wide). For example, let's start with:

        create table blah (id serial, foo text, bar text);

    Now, let's make it multilingual. How about:

        create table blah (id serial,
            foo_en text, foo_de text, foo_jp text,
            bar_en text, bar_de text, bar_jp text);

    That would be good for full-text search in Postgres: just add a tsvector column for each language. But is it optimal? Maybe we should use another table to keep the translations, like:

        create table texts (id serial, colspec text, obj_id int, language text, data text);

    Maybe, just maybe, we should use something else, something outside the SQL world? Any help is appreciated.

    Read the article

  • Large amount of constants in Java

    - by Lars D
    I need to include about 1 MByte of data in a Java application, for very fast and easy access from the rest of the source code. My main background is not Java, so my initial idea was to convert the data directly to Java source code, defining 1 MByte of constant arrays, classes (instead of C++ structs) etc., something like this:

        public final/immutable/const MyClass MyList[] = {
            { 23012,  22, "Hamburger" },
            { 28375, 123, "Kieler"    }
        };

    However, it seems that Java does not support such constructs. Is this correct? If yes, what is the best solution to this problem?

    Read the article

  • Javascript: How to escape Unicode Chars

    - by user293006
    JSON string:

        { "id":31896, "name":"Zickey attitude - McKinley, La Rosi\u00e8re, 21 ao\u00fbt 2006", ... }

    This causes an unterminated string in JavaScript. My attempts at a solution focus on:

        data.replace(/(\S)\1(\1)+/g, '');

    or

        data.replace(/\u([0-9A-Z])/, '');

    Any ideas/solutions? Example: http://api.jamendo.com/get2/id+name+url+stream+album_name+album_url+album_id+artist_id+artist_name/track/jsonpretty/track_album+album_artist/?n=13&order=ratingmonth_desc&tag_idstr=jazz (the last node is the problem, FYI). (/\u([0-9A-Z])/, '\1');
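    For reference, \uXXXX sequences are valid JSON escapes, and a standards-compliant JSON parser decodes them on its own, so a regex workaround should not be needed for them. A quick illustration of the escapes being decoded, shown in Python:

        import json

        # "\u00e8" is the standard JSON escape for the character 'è'
        record = json.loads('{"name": "La Rosi\\u00e8re, 21 ao\\u00fbt 2006"}')
        print(record["name"])   # -> La Rosière, 21 août 2006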

    Read the article

  • Selecting a portion of a JSON array and applying variables in javascript or jquery

    - by user1644609
    I am retrieving a JSON file that returns its results like what you see below. The JSON has 365 days worth of data. I would like to create "views" of this JSON using JavaScript, one which pulls the last 10 days, then 1 month, 6 months, etc. After the getJSON function I am doing this to get a string as JSON, then turn it into an object, which will then be graphed. So I would like each "view" to be an object for the specified timeframe (using the one JSON). The obj_10days, obj_1month, etc. variables would then be plotted.

        var $graph = data;
        var obj = $.parseJSON($graph);

    JSON:

        [
          { "Low": 8.63, "Volume": 14211900, "Date": "2012-10-26", "High": 8.79, "Close": 8.65, "Adj Close": 8.65, "Open": 8.7 },
          { "Low": 8.65, "Volume": 12167500, "Date": "2012-10-25", "High": 8.81, "Close": 8.73, "Adj Close": 8.73, "Open": 8.76 },
          { "Low": 8.68, "Volume": 20239700, "Date": "2012-10-24", "High": 8.92, "Close": 8.7, "Adj Close": 8.7, "Open": 8.85 },

    Any help is appreciated, thank you!
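    The selection step itself, keeping only the records whose Date falls inside a trailing window, is the same in any language; here is a minimal sketch of that logic in Python (the field names come from the JSON above, and raw_json stands in for the fetched text):

        import json
        from datetime import date, timedelta

        def window(records, days):
            """Return only the records from the last `days` days."""
            cutoff = date.today() - timedelta(days=days)
            return [r for r in records if date.fromisoformat(r["Date"]) >= cutoff]

        records = json.loads(raw_json)     # raw_json: the JSON text shown above
        obj_10days = window(records, 10)
        obj_1month = window(records, 30)
        obj_6months = window(records, 182)

    Each "view" is then just a filtered copy of the same parsed array, ready to be handed to the plotting code.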

    Read the article

  • Merge two excel files (with the condition)

    - by chennai
    I have a form in Access with two text boxes that accept two Excel files via a button click. When I click the Generate button, an output Excel file has to be created based on the following conditions. In one Excel file I have this data:

        id     code    country   count
        t100   gb123   india     3123
        t100   gh125   UK        1258
        t123   ytr15   USA       1111
        t123   gb123   Germany   100
        t145   gh575   india     99
        t458   yt777   USA       90

    In the other Excel file I have this data:

        country   location
        India     delhi
        UK        london
        USA       wallstreet
        Germany   frankfurt

    There can be more rows than what I mentioned here. Now I want to merge them according to the country: in the Book1 Excel file, wherever the country is india, the location delhi has to be inserted right beside the country field, and the same has to be done for every country mentioned in the Book2 Excel file. Finally, the output file has to be sorted by count. For example, the output file should look like this:

        id     code    country   count   location
        t100   gb123   india     3123    delhi
        t100   gh125   UK        1258    london
        t123   ytr15   USA       1111    wallstreet
        t123   gb123   Germany   100     frankfurt
        t145   gh575   india     99      delhi
        t458   yt777   USA       90      wallstreet
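    The merge itself is a plain lookup join: build a country-to-location map from the second workbook, then append the location to every row of the first and sort by count. A sketch of just that logic in Python (the Access form and the Excel reading/writing are left out, and the row layout is assumed from the tables above):

        # country -> location, as read from the second workbook
        locations = {"india": "delhi", "uk": "london",
                     "usa": "wallstreet", "germany": "frankfurt"}

        def merge(rows):
            """rows: list of (id, code, country, count) tuples from the first workbook."""
            merged = [(rid, code, country, count, locations.get(country.lower(), ""))
                      for (rid, code, country, count) in rows]
            # sort by count, highest first, as in the expected output
            return sorted(merged, key=lambda row: row[3], reverse=True)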

    Read the article

  • TCP Scanner Python MultiThreaded

    - by user1473508
    I'm trying to build a small TCP scanner for a netmask. The code is as follows:

        import socket, sys, re, struct
        from socket import *

        host = sys.argv[1]

        def RunScanner(host):
            s = socket(AF_INET, SOCK_STREAM)
            s.connect((host, 80))      # blocks with the default timeout
            s.settimeout(0.1)          # note: the timeout is only set after connect()
            String = "GET / HTTP/1.0"
            s.send(String)
            data = s.recv(1024)
            if data:
                print "host: %s have port 80 open" % (host)

        Slash = re.search("/", str(host))
        if Slash:
            netR, _, Wholemask = host.partition('/')
            Wholemask = int(Wholemask)
            netR = struct.unpack("!L", inet_aton(netR))[0]
            for host in (inet_ntoa(struct.pack("!L", netR + n))
                         for n in range(0, 1 << (32 - Wholemask))):
                try:
                    print "Doing host", host
                    RunScanner(host)
                except:
                    pass
        else:
            RunScanner(host)

    To launch it: python script.py 10.50.23.0/24. The problem I'm having is that even with a ridiculously low settimeout value, it takes ages to cover the 255 IP addresses, since most of them are not assigned to a machine. How can I make a much faster scanner that won't get stuck when the port is closed? Multithreading? Thanks!
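    One common approach is to set the timeout before connecting and to probe many addresses in parallel with a thread pool; a minimal sketch of that idea (Python 3, using connect_ex so a closed or unreachable port does not raise an exception):

        import socket
        from concurrent.futures import ThreadPoolExecutor

        def probe(host, port=80, timeout=0.5):
            """Return the host if the TCP port accepts a connection, else None."""
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)                 # applied before connecting
                if s.connect_ex((host, port)) == 0:   # 0 means the connect succeeded
                    return host
            return None

        def scan(hosts):
            """Probe all hosts concurrently and return the ones with the port open."""
            with ThreadPoolExecutor(max_workers=64) as pool:
                return [h for h in pool.map(probe, hosts) if h]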

    Read the article

  • alternative to lag SQL command

    - by mahen
    I have a table like this:

        Month   Book_Type   sold_in_Dollars
        Jan     A           100
        Jan     B           120
        Feb     A           50
        Mar     A           60
        Mar     B           30

    and so on. I have to calculate the expected sales for each month and book type based on the last two months' sales. So for March and type A it would be (100+50)/2 = 75; for March and type B it is 120/1, since there is no data for Feb. I was trying to use the lag function, but it wouldn't work since data is missing in a few rows. Any ideas on this?
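    Outside of SQL, the calculation is just an average over whichever of the two previous months actually have a row; a small sketch of that logic in Python (the data is hard-coded from the table above purely for illustration):

        sales = {("Jan", "A"): 100, ("Jan", "B"): 120,
                 ("Feb", "A"): 50,
                 ("Mar", "A"): 60, ("Mar", "B"): 30}
        months = ["Jan", "Feb", "Mar"]

        def expected(month, book_type):
            """Average sales over the previous two months, skipping missing rows."""
            idx = months.index(month)
            prev = [sales[(m, book_type)] for m in months[max(0, idx - 2):idx]
                    if (m, book_type) in sales]
            return sum(prev) / len(prev) if prev else None

        print(expected("Mar", "A"))   # (100 + 50) / 2 = 75.0
        print(expected("Mar", "B"))   # 120 / 1 = 120.0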

    Read the article

  • Input Sanitation Best Practices

    - by Adam Driscoll
    Our team has recently been working on a logic and data layer for our database. We were not approved to use Entity Framework or LINQ to SQL for the data layer, so it was primarily built by hand, and a lot of the SQL is auto-generated. An obvious downfall of this is the need to sanitize inputs prior to retrieval and insertion. What are the best methods for doing this? Searching for terms like insert, delete, etc. seems like a poor way to accomplish it. Is there a better alternative?
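    The usual alternative to scanning input for SQL keywords is to keep user values out of the SQL text entirely and pass them as parameters, so they are never interpreted as SQL. A minimal illustration of the idea in Python with sqlite3 (used here only because it needs no setup; the same pattern exists as parameterized commands in ADO.NET):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT)")

        user_input = "Robert'); DROP TABLE users;--"
        # The value travels separately from the SQL text, so no keyword
        # scanning or escaping of the input is needed.
        conn.execute("INSERT INTO users (name) VALUES (?)", (user_input,))
        print(conn.execute("SELECT name FROM users").fetchall())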

    Read the article

  • Several modules in a package importing one common module

    - by morpheous
    I am writing a Python package. I am using the concept of plugins, where each plugin is a specialization of a Worker class. Each plugin is written as a module (script?) and spawned in a separate process. Because of the commonality between the plugins (e.g. all extend a base class 'Worker'), a plugin module generally looks like this:

        import commonfuncs

        def do_work(data):
            # do customised work for the plugin
            print 'child1 does work with %s' % data

    In C/C++ we have include guards, which prevent a header from being included more than once. Do I need something like that in Python, and if yes, how can I make sure that commonfuncs is not 'included' more than once?
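    For what it's worth, Python caches imported modules in sys.modules, so within a single process a second import of the same module reuses the already-loaded module object instead of executing it again; a small illustration (the json module is just an example):

        import sys
        import json          # first import: the module is executed and cached
        import json          # second import: found in sys.modules, not re-executed

        print("json" in sys.modules)          # True
        print(sys.modules["json"] is json)    # True: the very same module object

    Separate processes, of course, each perform their own imports.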

    Read the article

  • Creating a JSONRepresentation of my NSDictionary messes up the order?

    - by Lewion
    Hi all, to pass data to my web service, I create an NSDictionary with the objects and keys I need, and use JSONRepresentation to format it nicely so I can post it to the service. It all worked fine with the previous version, where only two parameters were required: an array with list items, and a UDID. Now I also need to pass a version number, because we need to provide more data for people with the application at this new version. The only problem is that when I create my JSONRepresentation now, the order of things is all messed up.

        NSMutableDictionary *rowDict = [[NSMutableDictionary alloc]
            initWithObjectsAndKeys:arrayDict, @"basketListV2",
                                   sharedData.udid, @"UDID",
                                   @"1.4", @"version", nil];

    It prints out version first, then UDID and then basketListV2. Does anyone know what I can do to maintain the order of my NSDictionary? I tried both NSDictionary and NSMutableDictionary (it probably has nothing to do with it, but for testing purposes I had to try). Thanks in advance. Lewion

    Read the article

  • check if lookup yields any valid rows for insertion before clearing table using ssis

    - by Chris
    SSIS ignoramus needing help! The situation: a temp table is populated from an Excel file that is owned by a different group and has been known to change format at random times. A lookup needs to be performed on the temp table, tableA, to populate tableB with valid data. If the lookup returns 0 rows, an email should be sent and the existing data in tableB should remain untouched. If the lookup returns a number of valid rows greater than 0, all rows in tableB should be deleted and the new records from the lookup on tableA inserted. Question: what would be the best way to check whether there are any valid rows and perform the appropriate action(s), depending on the result? Thanks!

    Read the article

  • How to break a jquery variable dynamically based on condition

    - by Adi
    I have a jQuery variable whose value shows up in the console as:

        ["INCOMING", 0, "INETCALL", 0, "ISD", 31.8, "LOCAL", 197.92, "STD", 73.2]

    Now, as per my need, I have to break these values up and make them look like this:

        ["INCOMING", 0], ["INETCALL", 0], ["ISD", 31.8], ["LOCAL", 197.92], ["STD", 73.2]

    But I need to produce the required format dynamically, as these values are received from the database. Here is my ajax call to get the values from the server side:

        var dbdata = "";
        $(document).ready(function() {
            $.ajax({
                type: 'GET',
                url: 'getPieChartdata',
                async: false,
                dataType: "text",
                success: function(data) {
                    dbdata = JSON.parse(data);
                }
            });
            console.log(dbdata);
        });

    Please help me, guys. Thanks in advance.
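    Grouping a flat alternating list into label/value pairs is a small, language-independent step; a sketch of the idea in Python (the data is the example list shown above):

        flat = ["INCOMING", 0, "INETCALL", 0, "ISD", 31.8, "LOCAL", 197.92, "STD", 73.2]

        # Take the items two at a time: [label, value], [label, value], ...
        pairs = [[flat[i], flat[i + 1]] for i in range(0, len(flat), 2)]
        print(pairs)
        # [['INCOMING', 0], ['INETCALL', 0], ['ISD', 31.8], ['LOCAL', 197.92], ['STD', 73.2]]

    The same two-at-a-time loop can be written directly over dbdata in the jQuery success callback.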

    Read the article

  • Is it good to use JQuery's validation plugin?

    - by kwokwai
    Hi all, I am learning jQuery, and I have found that jQuery has a validation plugin: http://docs.jquery.com/Plugins/Validation#Validate_forms_like_you.27ve_never_been_validating_before.21 To use it, users have to include another script file in the Head tag of the HTML. I am wondering whether this will cause any code collisions with the code in the validation plugin as more and more JavaScript files are included. Should I use jQuery to write my own custom functions for checking data input from users, or use the jQuery validation plugin? Please advise.

    Read the article

  • 32/64 Bit Question

    - by user48408
    Here's my question: what is the best way to determine what bit architecture your app is running on? What I am looking to do: on a 64-bit server I want my app to read 64-bit datasources (stored in the registry key Software\Wow6432Node\ODBC\ODBC.INI\ODBC Data Sources), and if it's a 32-bit server I want to read 32-bit datasources (i.e. read from Software\ODBC\ODBC.INI\ODBC Data Sources). I might be missing the point, but I don't want to care what mode my app is running in; I simply want to know whether the OS is 32 or 64 bit. System.Environment.OSVersion.Platform doesn't seem to be cutting it for me: it returns Win32NT on my local XP machine and on a Win2k8 64-bit server (even when all my projects are set to target 'Any CPU').

    Read the article

  • Rspec > testing database views

    - by Sean McCleary
    How can database views be tested in RSpec? Every scenario is wrapped in a transaction, and the data does not look like it is being persisted to the database (MySQL in my case). My view returns an empty result set because none of the records are persisted within the transaction. I am confirming that the records are not being stored by setting a debug point in my spec and checking the data with a database client while the spec is being debugged. The only way I can think of to make my view work would be to commit the transaction before the end of the scenario and then clear the database after the scenario is complete. Does anyone know how to accomplish this, or is there a better way? Thanks

    Read the article
