Search Results

Search found 67143 results on 2686 pages for 'complex data types'.


  • How to read an XML file into memory in C#?

    - by Nano HE
    I am using .NET 2.0 and VS2005. I found the code below, and I am not sure whether I can extend the sample code or not. Thank you.

        if (radioButton.Checked)
        {
            MemoryStream ms = new MemoryStream();
            byte[] data = ASCIIEncoding.ASCII.GetBytes(textBox1.Text);
            ms.Write(data, 0, data.Length);
            reader = new XmlTextReader(ms);
            // some processing code
            ms.Close();
            reader.Close();
        }

    By the way, could you please help me dissect the line below?

        byte[] data = ASCIIEncoding.ASCII.GetBytes(textBox1.Text);

    Read the article

  • Should every class have its own namespace?

    - by thehouse
    Something that has been troubling me for a while: the current wisdom is that types should be kept in a namespace that contains only the functions which are part of the type's non-member interface (see C++ Coding Standards by Sutter and Alexandrescu) to prevent ADL from pulling in unrelated definitions. Does this imply that all classes must have a namespace of their own? If we assume that a class may be augmented in the future by the addition of non-member functions, then it can never be safe to put two types in the same namespace, as either one of them may introduce non-member functions that could interfere with the other. The reason I ask is that namespaces are becoming cumbersome for me. I'm writing a header-only library and I find myself using class names such as project::component::class_name::class_name. Their implementations call helper functions, but as these can't be in the same namespace they also have to be fully qualified!

    Read the article

  • Loading specific files from arbitrary directories?

    - by Haydn V. Harach
    I want to load foo.txt. foo.txt might exist in the data/bar/ directory, or it might exist in the data/New Folder/ directory. There might be a different foo.txt in both of these directories, in which case I would want either to load one and ignore the other according to some order I've sorted the directories by (perhaps manually, perhaps by date of creation), or else to load them both and combine the results somehow. The latter (combining the results of both/all foo.txt files) is circumstantial and beyond the scope of this question, but something I want to be able to do in the future. I'm using SDL and boost::filesystem. I want to keep my list of dependencies as small as possible, and as cross-platform as possible. I'm guessing that my best bet would be to get a list of every directory (within the data/ folder), sort/filter this list, and then, when I go to load foo.txt, search for it in each potential directory. This sounds like it would be very inefficient if I have dozens of potential directories to search through every time. What's the best way to go about accomplishing this? Bonus: what if I want some of the directories to be archives? I.e. treating both data/foo/ and data/bar.zip as valid, and pulling foobar.txt from either one without caring.

    Read the article

  • Setting initial state of a JSF component to invalid

    - by user359391
    Hi there. I have a small JSF application where the user is required to enter some data about themselves. For each component on the page that has required="true" I want to show an icon depending on whether there is data in the field or not. My problem is that when the page is initially shown, all fields are valid, even if they do not contain any data. So my question is: how can I set a component to be invalid based on whether there is data in the field or not? After a submit of the page (or after the component loses focus) the icon is shown properly; it is only on the initial page load (i.e. there is no post data) that I have a problem. Here is my xhtml for a component that needs to be validated:

        <s:decorate id="employeeIdDecoration" template="/general/util/errorStyle.xhtml">
            <ui:define name="label">#{messages['userdetails.employeeId']}</ui:define>
            <h:inputText value="#{authenticator.user.employeeId}" required="true">
                <a4j:support event="onblur" reRender="employeeIdDecoration" bypassUpdates="true"/>
            </h:inputText>

    The template:

        <s:label styleClass="#{invalid?'error':''}">
            <ui:insert name="label"/>
            <s:span styleClass="required" rendered="#{required}">*</s:span>
        </s:label>
        <span class="#{invalid?'error':''}">
            <s:validateAll>
                <ui:insert/>
            </s:validateAll>
            <h:graphicImage value="/resources/redx.png" rendered="#{invalid}" height="16" width="16" style="vertical-align:middle;"/>
            <h:graphicImage value="/resources/Checkmark.png" rendered="#{!invalid}" height="16" width="16" style="vertical-align:middle;"/>
        </span>

    Any help will be appreciated.

    Read the article

  • Unable to send an associative array in JSON format in Zend to client

    - by Anorflame
    Hi, in one of my actions in a controller I'm using the json view helper to send back a response to an ajax request. On the client side I alert the data that is passed to the success callback function. It works fine as long as the response is a number or an array with default keys. Once I try to send an associative array, it alerts [object Object]. Server code:

        $childArray = array('key'=>'value');
        $this->_helper->json($childArray);

    JavaScript:

        function displayChildren(data){
            alert(data);
        }
        ...
        $.ajax({
            url: "/po/add",
            dataType: "json",
            data: {format: "json"},
            success: displayChildren
        });

    I have no idea what I am doing wrong here, so any help would be appreciated...

    Read the article

  • How to convert a byte array of 19200 bytes in size, where each byte represents 4 pixels (2 bits per pixel), to a bitmap?

    - by Klinger
    I am communicating with an instrument (remote controlling it) and one of the things I need to do is to draw the instrument screen. In order to get the screen I issue a command and the instrument replies with an array of bytes that represents the screen. Below is what the instrument manual has to say about converting the response to the actual screen: The command retrieves the framebuffer data used for the display. It is 19200 bytes in size, 2 bits per pixel, 4 pixels per byte, arranged as 320x240 characters. The data is sent in RLE encoded form. To convert this data into a BMP for use in Windows, it needs to be turned into 4BPP. Also note that BMP files are upside down relative to this data, i.e. the top display line is the last line in the BMP. I managed to unpack the data, but now I am stuck on how to actually go from the unpacked byte array to a bitmap. My background on this is pretty close to zero and my searches have not revealed much either. I am looking for directions and/or articles I could use to help me understand how to get this done. Any code or even pseudo code would also help. :-) So, just to summarize it all: how to convert a byte array of 19200 bytes in size, where each byte represents 4 pixels (2 bits per pixel), to a bitmap arranged as 320x240 characters. Thanks in advance.

    Read the article

  • Client-side JavaScript --> Server-side Java --> user is served a .doc

    - by ignorantslut
    I am helping someone out with a javascript-based web app (even though I know next to nothing about web development) and we are unsure about the best way to implement a feature we'd like to have. Basically, the user will be using our tool to view all kinds of boring data in tables, columns, etc. via javascript. We want to implement a feature where the user can click a button or link that then allows the user to download the displayed data in a .doc file. Our basic idea so far is something like: call a Java function on the server with the desired data passed in as a String when the link is clicked, generate the .doc file on the server, and automatically "open" a link to the file in the client's browser to initiate the download. Is this possible? If so, is it feasible? Or can you recommend a better solution? Edit: the data does not reside on the server; rather, it is queried from a SQL database.
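
    For the server side of that idea, a minimal sketch is shown below (hedged: the class name, the "data" parameter, the URL mapping and the buildDocument helper are all made up for illustration, and a real implementation might use a library such as Apache POI). The JavaScript side then only has to POST the displayed data to this servlet's URL; the Content-Disposition header makes the browser start a download instead of rendering the response.

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;

        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Hypothetical servlet: streams a generated .doc so the browser offers a download.
        public class DocExportServlet extends HttpServlet {
            @Override
            protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                String data = req.getParameter("data");   // the displayed data posted from the JavaScript side
                byte[] doc = buildDocument(data);          // assumed helper that renders the .doc bytes

                resp.setContentType("application/msword");
                resp.setHeader("Content-Disposition", "attachment; filename=\"report.doc\"");
                resp.setContentLength(doc.length);
                resp.getOutputStream().write(doc);
            }

            // Placeholder only; swap in real document generation here.
            private byte[] buildDocument(String data) {
                return data.getBytes(StandardCharsets.UTF_8);
            }
        }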

    Read the article

  • Looking for suggestions about the architecture of a multithreaded app.

    - by Dimitri
    Hello everyone. I am looking to develop a multithreaded application that will run in an unconditional loop and process a high volume of data. High volume means 2000+ records per minute. Processing involves data retrieval, calculations and data updates. I need the application to perform so that there is virtually no backlog, meaning I need to be able to finish all of the 2000 points in one minute or even faster. Our current implementation features a multithreaded application that is spawned multiple times (from 10 to 20), and we are noticing that it is not handling the data as expected; I even feel that the instances of the application compete with each other for processor time and, if not slowing each other down, are certainly not benefiting each other. I would like to know what would be the best approach: have a single instance running but maximize the number of threads that can run simultaneously? Or are there other ways I don't know about? I'm open to suggestions. Thank you in advance.
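
    For the single-instance option, a common shape in Java is sketched below (a sketch only: the Record type and the process() body are placeholders, and it assumes the 2000+ records per minute are largely independent of each other). One shared, fixed-size thread pool fed with small tasks avoids separate processes competing for the same cores.

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class RecordProcessor {
            // One process, one pool sized to the machine, instead of many competing instances.
            private final ExecutorService pool =
                    Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

            // 'Record' and 'process' stand in for the real data type and the
            // retrieve/calculate/update work described above.
            public void submit(final Record record) {
                pool.execute(new Runnable() {
                    public void run() {
                        process(record);
                    }
                });
            }

            private void process(Record record) {
                // data retrieval, calculations, data updates
            }

            static class Record { }
        }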

    Read the article

  • Page_Load after Modal Popup

    - by n0chi
    I have a page with a user control which gets some data updated via a modal popup. Upon clicking "ok" on the modal popup, the new data is written to the database, but the base page doesn't "reload" to show the updated data. How do I get that to happen?

    Read the article

  • Objective C - Parse NSData

    - by EZFrag
    I have the following data inside an NSData object: <00000000 6f2d840e 31504159 2e535953 2e444446 3031a51b 8801015f 2d02656e 9f110101 bf0c0cc5 0affff3f 00000003 ffff03 I'm having issues parsing this data. The data contains information which is marked by tags: Tag 1 is from byte value 0x84 to 0xa5, Tag 2 is from byte value 0xa5 to 0x88, Tag 3 is from byte value 0x88 to 0x5f0x2d, Tag 4 is from byte value 0x5f0x2d to 0x9f0x11. How would I go about getting those values from the NSData object? Regards, EZFrag

    Read the article

  • Trying to packetize TCP with non-blocking IO is hard! Am I doing something wrong?

    - by Ricket
    Oh how I wish TCP were packet-based like UDP is! But alas, that's not the case, so I'm trying to implement my own packet layer. Here's the chain of events so far (ignoring writing packets). Oh, and my Packets are very simply structured: two unsigned bytes for the length, and then byte[length] of data. (I can't imagine if they were any more complex; I'd be up to my ears in if statements!) The server is in an infinite loop, accepting connections and adding them to a list of Connections. PacketGatherer (another thread) uses a Selector to figure out which Connection.SocketChannels are ready for reading. It loops over the results and tells each Connection to read(). Each Connection has a partial IncomingPacket and a list of Packets which have been fully read and are waiting to be processed. On read(): tell the partial IncomingPacket to read more data (IncomingPacket.readData below). If it's done reading (IncomingPacket.complete()), make a Packet from it, stick the Packet into the list waiting to be processed, and then replace it with a new IncomingPacket. There are a couple of problems with this. First, only one packet is being read at a time. If the IncomingPacket needs only one more byte, then only one byte is read this pass. This can of course be fixed with a loop, but it starts to get sort of complicated and I wonder if there is a better overall way. Second, the logic in IncomingPacket is a little bit crazy, to be able to read the two bytes for the length and then read the actual data. Here is the code, boiled down for quick and easy reading:

        int readBytes;            // number of total bytes read so far
        byte length1, length2;    // each byte of an unsigned short int (see getLength())

        public int getLength() {
            // will be inaccurate if readBytes < 2
            return (int) (length1 << 8 | length2);
        }

        public void readData(SocketChannel c) {
            if (readBytes < 2) {
                // we don't yet know the length of the actual data
                ByteBuffer lengthBuffer = ByteBuffer.allocate(2 - readBytes);
                numBytesRead = c.read(lengthBuffer);
                if (readBytes == 0) {
                    if (numBytesRead >= 1) length1 = lengthBuffer.get();
                    if (numBytesRead == 2) length2 = lengthBuffer.get();
                } else if (readBytes == 1) {
                    if (numBytesRead == 1) length2 = lengthBuffer.get();
                }
                readBytes += numBytesRead;
            }
            if (readBytes >= 2) {
                // then we know we have the entire length variable
                // lazily instantiate data buffers based on getLength()
                // read into data buffers, increment readBytes
                // (does not read more than the amount of this packet, so it does not
                //  need to handle overflow into the next packet's data)
            }
        }

        public boolean complete() {
            return (readBytes > 2 && readBytes == getLength() + 2);
        }

    Basically I need feedback on my code. Please suggest any improvements. Even overhauling my entire system would be okay, if you have suggestions for how better to implement the whole thing. Book recommendations are welcome too; I love books. I just get the feeling that something isn't quite right.
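
    For comparison, one common way to frame length-prefixed messages over non-blocking TCP is sketched below (a sketch only, not a drop-in replacement for the classes above; it assumes a single message never exceeds the 64 KB buffer). A single ByteBuffer per connection accumulates whatever each read delivers, and complete frames are carved out of it afterwards, which removes the byte-by-byte length bookkeeping.

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.channels.SocketChannel;
        import java.util.ArrayList;
        import java.util.List;

        class FrameReader {
            // Accumulates bytes across reads; 2-byte unsigned length prefix, then payload.
            private final ByteBuffer buffer = ByteBuffer.allocate(64 * 1024);

            List<byte[]> read(SocketChannel channel) throws IOException {
                List<byte[]> frames = new ArrayList<byte[]>();
                if (channel.read(buffer) == -1) throw new IOException("connection closed");

                buffer.flip();                               // read back what has arrived so far
                while (buffer.remaining() >= 2) {
                    buffer.mark();
                    int length = buffer.getShort() & 0xFFFF; // unsigned 16-bit length
                    if (buffer.remaining() < length) {       // payload not fully here yet
                        buffer.reset();                      // put the length bytes back
                        break;
                    }
                    byte[] payload = new byte[length];
                    buffer.get(payload);
                    frames.add(payload);                     // one complete packet
                }
                buffer.compact();                            // keep any partial frame for next time
                return frames;
            }
        }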

    Read the article

  • Can using non-primitive Integer/Long data types too frequently in the application hurt performance?

    - by Marcos
    I am using Long/Integer data types very frequently in my application to build generic data types. I fear that using these wrapper objects instead of primitive data types may hurt performance, since each use needs to create objects, which is an expensive operation. But it also seems that I have no other choice (when I have to use primitives with generics) than to just use them. However, it would still be great if you could suggest anything I could do to make it better, or any way I could avoid it. Also, what may be the downsides of this? Suggestions welcome!
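
    As a rough illustration of what the boxing concern looks like in code, here is a sketch (the sizes are arbitrary; small boxed values are cached by the JVM, and whether any of this matters should be decided by profiling). Primitive-collection libraries (Trove, fastutil and similar) exist for exactly the hot paths where the wrapper allocation shows up.

        import java.util.ArrayList;
        import java.util.List;

        public class BoxingSketch {
            public static void main(String[] args) {
                // A generic collection must box: each add() may allocate a new Long object.
                List<Long> boxed = new ArrayList<Long>();
                for (long i = 0; i < 1000000; i++) {
                    boxed.add(i);                    // autoboxing: long -> Long
                }

                // A primitive array holds the same numbers with no per-element objects.
                long[] primitive = new long[1000000];
                for (int i = 0; i < primitive.length; i++) {
                    primitive[i] = i;
                }

                System.out.println(boxed.size() + " boxed, " + primitive.length + " primitive");
            }
        }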

    Read the article

  • [Java 6] Same source code: Eclipse build succeeds but Maven (javac) fails

    - by EnToutCas
    I keep getting this error when compiling with Maven: type parameters of <X>X cannot be determined; no unique maximal instance exists for type variable X with upper bounds int,java.lang.Object. Generics type inference cannot be applied to primitive types, but I thought that since Java 5 the boxing/unboxing mechanism works seamlessly between primitive types and wrapper classes. In any case, the strange thing is that Eclipse doesn't report any errors and happily compiles. I'm using JDK 1.6.0_12. What could possibly be the problem here?
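
    This particular message usually appears when the result of a generic method is assigned directly to a primitive; Eclipse's compiler (ecj) and javac 6 resolve that case differently. Below is a minimal sketch of the pattern and the usual workarounds; all names are hypothetical, not taken from the failing project.

        public class InferenceSketch {
            // A generic method whose type argument is inferred from the assignment target.
            @SuppressWarnings("unchecked")
            static <X> X firstValue(Object source) {
                return (X) source;
            }

            public static void main(String[] args) {
                Object source = 42;

                // javac 6 typically rejects the next line with "no unique maximal instance
                // exists for type variable X with upper bounds int,java.lang.Object",
                // while Eclipse accepts it:
                // int broken = firstValue(source);

                // Workarounds: supply the type argument explicitly, or assign to the wrapper first.
                int fixed = InferenceSketch.<Integer>firstValue(source);
                Integer boxed = firstValue(source);
                System.out.println(fixed + boxed);
            }
        }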

    Read the article

  • Defining relationship in value objects using Hibernate

    - by kate
    Hi, we have three tables and we need to get data from them based on particular conditions, e.g. if TableA.column = TableB.column = TableC.column then get the data. We are using value objects to map objects to relations. The question is how to maintain these relationships in the value objects, and how to retrieve data through them. We have one value object per table.
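
    If the value objects are Hibernate entities, the relationship is usually declared once on the classes rather than re-stated in every query. A sketch with made-up entity and column names is below (the real join columns depend on the schema); once mapped, the related rows are reached through the bRows field or an HQL join instead of hand-written column comparisons.

        import java.util.List;
        import javax.persistence.Entity;
        import javax.persistence.Id;
        import javax.persistence.JoinColumn;
        import javax.persistence.ManyToOne;
        import javax.persistence.OneToMany;
        import javax.persistence.Table;

        @Entity
        @Table(name = "TableA")
        class TableA {
            @Id
            private Long id;

            // One TableA row relates to many TableB rows via a shared key.
            @OneToMany(mappedBy = "tableA")
            private List<TableB> bRows;
        }

        @Entity
        @Table(name = "TableB")
        class TableB {
            @Id
            private Long id;

            // The owning side holds the foreign key back to TableA.
            @ManyToOne
            @JoinColumn(name = "a_id")
            private TableA tableA;
        }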

    Read the article

  • jQuery jqXHR - cancel chained calls, trigger error chain

    - by m0sa
    I am creating an ajax utility for interfacing with my server methods. I would like to leverage the jQuery 1.5+ deferred methods on the object returned from the jQuery.ajax() call. The situation is as follows. The server-side method always returns a JSON object:

        { success: true|false, data: ... }

    The client-side utility initiates the ajax call like this:

        var jqxhr = $.ajax({ ... });

    And the problem area:

        jqxhr.success(function(data, textStatus, xhr) {
            if (!data || !data.success) {
                ???? // abort processing, trigger error
            }
        });
        return jqxhr; // return to caller so he can attach his own handlers

    So the question is how to cancel invocation of all the caller's appended success callbacks and trigger his error handler at the place marked with ????. The documentation says the deferred function invocation lists are FIFO, so my success handler is definitely the first one.

    Read the article

  • How should I synchronize lists in WCF?

    - by mafutrct
    I've got a WCF service that offers multiple lists of items of different types. The lists can be changed on the server. Every change has to be published to all clients to make sure every client has an up-to-date copy of each server list. Currently I'm using this strategy: On login, every client receives the current status of every list. On every change, the added or removed item is sent to all clients. The drawback is that I have to create a callback for every list, since the items are of different types. Is there a pattern I could apply? Or do I really have to duplicate code for each of the lists?

    Read the article

  • Zend Framework: handling a one-to-many relationship

    - by user192344
    I have 2 tables, "member" and "contact"; the relation between the two tables is that one member has many contacts.

        Table member: m_id, name
        Table contact: c_id, c_m_id, value

    In the Zend model classes I do it this way:

        /* Member.php */
        class Default_Model_DbTable_Member extends Zend_Db_Table_Abstract
        {
            protected $_name = 'member';
            protected $_dependentTables = array('Default_Model_DbTable_Contact');
        }

        /* Contact.php */
        class Default_Model_DbTable_Contact extends Zend_Db_Table_Abstract
        {
            protected $_name = 'contact';
            protected $_referenceMap = array(
                'Member' => array(
                    'columns'       => array('c_id'),
                    'refTableClass' => 'Default_Model_DbTable_Member',
                    'refColumns'    => array('c_m_id')
                )
            );
        }

        /* IndexController.php */
        class IndexController extends Zend_Controller_Action
        {
            public function indexAction()
            {
                $m = new Default_Model_DbTable_Member();
                $row = $m->find(1);
                $data = $row->current();
                $data = $data->findDependentRowset('Default_Model_DbTable_Contact');
                print_r($data->toArray());
            }
        }

    But I just get "Invalid parameter number: no parameters were bound". My goal is to fetch a member's detail record that also contains an array storing all of the contact info (I could use a join to do that, but I want to try this Zend feature).

    Read the article

  • Concurrent Threads in C# using BackgroundWorker

    - by Jim Fell
    My C# application is such that a background worker is being used to wait for the acknowledgement of some transmitted data. Here is some pseudo code demonstrating what I'm trying to do:

        UI_thread {
            TransmitData() {
                // load data for tx
                // fire off TX background worker
            }
            RxSerialData() {
                // if received data is ack, set ack received flag
            }
        }

        TX_thread {
            // transmit data
            // set ack wait timeout
            // fire off ACK background worker
            // wait for ACK background worker to complete
            // evaluate status of ACK background worker as completed, failed, etc.
        }

        ACK_thread {
            // wait for ack received flag to be set
        }

    What happens is that the ACK BackgroundWorker times out and the acknowledgement is never received. I'm fairly certain that it is being transmitted by the remote device, because that device has not changed at all and the C# application is transmitting. I have changed the ack thread from this (when it was working)...

        for (i = 0; (i < waitTimeoutVar) && (!bAckRxd); i++)
        {
            System.Threading.Thread.Sleep(1);
        }

    ...to this...

        DateTime dtThen = DateTime.Now;
        DateTime dtNow;
        TimeSpan stTime;
        do
        {
            dtNow = DateTime.Now;
            stTime = dtNow - dtThen;
        } while ((stTime.TotalMilliseconds < waitTimeoutVar) && (!bAckRxd));

    The latter generates a very accurate wait time compared to the former. However, I am wondering if removal of the Sleep call is interfering with the ability to receive serial data. Does C# only allow one thread to run at a time, that is, do I have to put threads to sleep at some point to allow other threads to run? Any thoughts or suggestions you may have would be appreciated. I am using Microsoft Visual C# 2008 Express Edition. Thanks.

    Read the article

  • @Resource annotation is null at run-time.

    - by Andrew
    I'm using GlassFish v3. The following field is declared in a class:

        @Resource
        private javax.sql.DataSource _data_source;

    The following is declared in web.xml:

        <data-source>
            <name>java:app/env/data</name>
            <class-name>com.mysql.jdbc.Driver</class-name>
            <server-name>localhost</server-name>
            <port-number>3306</port-number>
            <user>myUser</user>
            <password>myPass</password>
        </data-source>

    At run-time _data_source is null. What am I doing wrong?
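
    One common cause is sketched below (hedged: this is the usual fix, not a confirmed diagnosis of this deployment). A bare @Resource on the field gives the container nothing to tie the injection to, and injection only happens in container-managed classes (servlets, filters, listeners, EJBs, managed beans) in the first place; on Java EE 6 / GlassFish v3 the field can reference the web.xml data-source definition explicitly:

        import java.sql.Connection;
        import java.sql.SQLException;
        import javax.annotation.Resource;
        import javax.sql.DataSource;

        public class DataHolder {
            // Point the injection at the <data-source> declared in web.xml.
            @Resource(lookup = "java:app/env/data")
            private DataSource dataSource;

            public Connection open() throws SQLException {
                return dataSource.getConnection();
            }
        }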

    Read the article

  • NOT LIKE not working on comparison to a column

    - by rodling
    The data is fairly large and takes a few minutes to run every time, so debugging this problem is taking a lot of time. When I run LIKE concat('%',T.item,'%') on smaller data it seems to identify items properly. However, when I run it on the main DB (the code shown), it still shows many (maybe even all) of the exceptions. EDIT: it seems that when I add NOT it stops identifying items.

        select distinct T.comment
        from (select comment, source, item
              from data, non_informative
              where ticker != "O" and source != 7 and source != 6) as T
        where T.comment not like concat('%', T.item, '%')
        order by T.comment;

    comment and source are in data; item is in non_informative. Some items from T.item: 'Stock Analysis -', '#InsideTrades', 'IIROC Trade'. An example comment which should be removed: '#InsideTrades #4 | MACNAB CRAIG (Director,Officer,Chief Executive Officer): Filed Form 4 for $NNN (NATIONAL RETA'. I can't seem to figure out why it still shows all the items.

    Read the article

  • How to limit the number of post values on UpdatePanel?

    - by Aristos
    I have noticed that the UpdatePanel posts every field included in the form on every trigger. But in most of my cases I use 2-3 UpdatePanels on the same page, and each one is independent. When I click to update one panel, my page receives all the input data of the page (OK, this is logical), but I want to read only that UpdatePanel's data and act accordingly, not the other panels' data. So a lot of traffic happens this way. Is there a way to tell one UpdatePanel to send only its own input data, and not everything found on the page? Thank you in advance.

    Read the article

  • PreparedStatement and 'null' value in WHERE clause

    - by bond
    I am using PreparedStatement and batch updates to execute an UPDATE query. In a for loop I build up the batch; at the end of the loop I execute it. The logic works fine as long as the SQL query used in the PreparedStatement does not have null values in the WHERE clause; the update fails if there is a null value in the WHERE clause. My code looks something like this:

        connection = getConnection();
        PreparedStatement ps = connection.prepareStatement(
            "UPDATE TEST_TABLE SET Col1 = true WHERE Col2 = ? AND Col3 = ?");
        for (Data aa : InComingData) {
            if (null == aa.getCol2()) {
                ps.setNull(1, java.sql.Types.INTEGER);
            } else {
                ps.setInt(1, aa.getCol2());
            }
            if (null == aa.getCol3()) {
                ps.setNull(2, java.sql.Types.INTEGER);
            } else {
                ps.setInt(2, aa.getCol3());
            }
            ps.addBatch();
        }
        ps.executeBatch();
        connection.commit();

    Any help would be appreciated.
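
    In SQL, Col2 = NULL is never true, so the rows in the batch that carry nulls match nothing. A sketch of the usual workaround is below: the comparison is written so that NULL on both sides counts as a match (table and column names are taken from the question; on MySQL the null-safe <=> operator is a shorter alternative).

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.sql.Types;

        public class NullSafeUpdate {
            static void update(Connection connection, Integer col2, Integer col3) throws SQLException {
                // Each value is bound twice: once for '=', once for the IS NULL branch.
                String sql = "UPDATE TEST_TABLE SET Col1 = true "
                           + "WHERE (Col2 = ? OR (Col2 IS NULL AND ? IS NULL)) "
                           + "AND   (Col3 = ? OR (Col3 IS NULL AND ? IS NULL))";
                PreparedStatement ps = connection.prepareStatement(sql);
                setNullableInt(ps, 1, col2);
                setNullableInt(ps, 2, col2);
                setNullableInt(ps, 3, col3);
                setNullableInt(ps, 4, col3);
                ps.executeUpdate();
            }

            static void setNullableInt(PreparedStatement ps, int index, Integer value) throws SQLException {
                if (value == null) {
                    ps.setNull(index, Types.INTEGER);
                } else {
                    ps.setInt(index, value);
                }
            }
        }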

    Read the article

  • Yet another ON DUPLICATE KEY UPDATE query

    - by Andy Gee
    I've been reading all the questions on here but I still don't get it. I have two identical tables of considerable size. I would like to update table packages_sorted with data from packages_sorted_temp without destroying the existing data in packages_sorted. Table packages_sorted_temp contains data in only 2 columns, db_id and quality_rank. Table packages_sorted contains data in all 35 columns, but quality_rank is 0. The primary key on each table is db_id, and this is what I want to trigger the ON DUPLICATE KEY UPDATE with. In essence, how do I merge these two tables, changing packages_sorted.quality_rank from 0 to the quality_rank stored in packages_sorted_temp under the same primary key? Here's what's not working:

        INSERT INTO `packages_sorted` (`db_id`, `quality_rank`)
        SELECT `db_id`, `quality_rank`
        FROM `packages_sorted_temp`
        ON DUPLICATE KEY UPDATE `packages_sorted`.`db_id` = `packages_sorted`.`db_id`
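
    The ON DUPLICATE KEY UPDATE clause above rewrites db_id with itself, so quality_rank never changes. A sketch of the query shape that is usually wanted is below, updating quality_rank from the incoming row on a db_id collision (wrapped in JDBC only to keep the example runnable; the connection details are placeholders and the interesting part is the SQL string).

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class MergeRanks {
            public static void main(String[] args) throws SQLException {
                try (Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/db", "user", "pass");
                     Statement stmt = conn.createStatement()) {
                    // On a db_id collision, copy the incoming quality_rank over the existing 0.
                    stmt.executeUpdate(
                        "INSERT INTO packages_sorted (db_id, quality_rank) "
                      + "SELECT db_id, quality_rank FROM packages_sorted_temp "
                      + "ON DUPLICATE KEY UPDATE quality_rank = VALUES(quality_rank)");
                }
            }
        }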

    Read the article
