Search Results

  • How to append increasing numbers to filenames in C?

    - by zaplec
    Hi, I have a little problem. I need to do some small operations on quite a few files in one little program. So far I have decided to handle them in a single loop where I just change the number in the name. The files are all named TFxx.txt, where xx is an increasing number from 1 to 80. So how can I open them all in a single loop, one after another? I have tried this: for(i=0; i<=80; i++) { char name[8] = "TF"+i+".txt"; FILE = open(name, r); /* Do something */ } As you can see, the second line would work in Python but not in C. I have tried to do similar running numbering in C for this program, but I haven't figured out how yet. The format doesn't need to be exactly as it is on the second line; I'd just like some advice on how to solve this problem. All I need is to be able to open many files and perform the same operations on them.
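
    A minimal sketch of the usual C approach (assuming the files should be opened read-only) is to build each name with snprintf and open it with fopen:

        #include <stdio.h>

        int main(void) {
            char name[16];                                  /* room for "TF80.txt" plus the terminator */
            for (int i = 1; i <= 80; i++) {
                snprintf(name, sizeof name, "TF%d.txt", i); /* builds "TF1.txt" ... "TF80.txt" */
                FILE *fp = fopen(name, "r");
                if (fp == NULL)
                    continue;                               /* skip files that cannot be opened */
                /* Do something with fp */
                fclose(fp);
            }
            return 0;
        }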

  • Spring Integration 1.0 RC2: Streaming file content?

    - by gdm
    I've been trying to find information on this, but due to the immaturity of the Spring Integration framework I haven't had much luck. Here is my desired work flow: New files are placed in an 'Incoming' directory Files are picked up using a file:inbound-channel-adapter The file content is streamed, N lines at a time, to a 'Stage 1' channel, which parses the line into an intermediary (shared) representation. This parsed line is routed to multiple 'Stage 2' channels. Each 'Stage 2' channel does its own processing on the N available lines to convert them to a final representation. This channel must have a queue which ensures no Stage 2 channel is overwhelmed in the event that one channel processes significantly slower than the others. The final representation of the N lines is written to a file. There will be as many output files as there were routing destinations in step 4. *'N' above stands for any reasonable number of lines to read at a time, from [1, whatever I can fit into memory reasonably], but is guaranteed to always be less than the number of lines in the full file. How can I accomplish streaming (steps 3, 4, 5) in Spring Integration? It's fairly easy to do without streaming the files, but my files are large enough that I cannot read the entire file into memory. As a side note, I have a working implementation of this work flow without Spring Integration, but since we're using Spring Integration in other places in our project, I'd like to try it here to see how it performs and how the resulting code compares for length and clarity.

  • Can't see *all* databases in a remote SQL Server instance

    - by George
    Yesterday I posted a related question on StackOverflow. That problem involved not being able to see a SQL Server 2008 instance on another PC. I am not sure why adding the port number enabled me to see a SQL Server that I could not otherwise see, since the port number that I specified was, after all, the default port. Now I notice that I have another problem. While I can connect to the remote SQL Server 2008 instance, I cannot see all the databases in the instance. I am trying to connect to the 2008 instance from another PC using SQL Server 2008 Management Studio. I am connecting from a Windows 7 Ultimate PC to a Windows XP Pro PC. I suspect that my problem has something to do with not all databases in the remote instance having the same version. For example, I "upgraded" a SQL 2005 database to 2008 by doing a backup from 2005 and importing it into 2008. When I realized that this was not one of the databases that I could see from my other PC, I noticed that the compatibility level of the imported database was still 2005, so I changed it to 2008. Still I could not see the database. I am sure that this is relevant: I just noticed that on my remote server, the instance node, named "sql2008", says "version 10" when I am on the remote server, but when I connect to the sql2008 remote instance from my local PC, the connection is shown locally as being a "SQL Server version 8.0" instance. I suspect that locally, I am only being shown databases that are somehow in the remote 2008 instance but have not been upgraded. I guess I don't know what constitutes an upgraded database, and I do not know how to connect so that I can see all the databases, even if this requires multiple connections from the source PC.

  • start-stop-daemon quoted arguments misinterpreted

    - by Martin Westin
    Hi, I have been trying to make an init script using start-stop-daemon. I am stuck on the arguments to the daemon. I want to keep these in a variable at the top of the script, but I can't get the quoting to filter down correctly. I'll use ls here so we don't have to look at binaries and arguments that most people won't know or care about. The end result I am looking for is for start-stop-daemon to run ls -la "/folder with space/" DAEMON=/usr/bin/ls DAEMON_OPTS='-la "/folder with space/"' start-stop-daemon --start --make-pidfile --pidfile $PID --exec $DAEMON -- $DAEMON_OPTS Double-escaping the options and trying innumerable variations of quoting do not help... by the time they reach the daemon they are always mangled. Enclosing $DAEMON_OPTS in quotes changes things... then they are treated as one single quoted argument... never the right number though :) Echoing the command line (start-stop...) prints exactly the right thing to screen, but the daemon (the real one, not ls) complains about the wrong number of arguments. How do I specify a variable so that quotes inside it are carried through to the daemon correctly? Thanks, Martin
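
    One common fix (a sketch of my own, not from the post, and it assumes the script runs under bash rather than plain sh) is to hold the options in an array so each argument keeps its embedded spaces:

        DAEMON=/usr/bin/ls
        DAEMON_OPTS=(-la "/folder with space/")   # each array element is exactly one argument

        # "${DAEMON_OPTS[@]}" expands back to the original words with their quoting intact
        start-stop-daemon --start --make-pidfile --pidfile "$PID" \
            --exec "$DAEMON" -- "${DAEMON_OPTS[@]}"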

  • Indexing and Searching Over Word Level Annotation Layers in Lucene

    - by dmcer
    I have a data set with multiple layers of annotation over the underlying text, such as part-of-speech tags, chunks from a shallow parser, named entities, and others from various natural language processing (NLP) tools. For a sentence like The man went to the store, the annotations might look like:

        Word   POS  Chunk  NER
        ====   ===  =====  ========
        The    DT   NP     Person
        man    NN   NP     Person
        went   VBD  VP     -
        to     TO   PP     -
        the    DT   NP     Location
        store  NN   NP     Location

    I'd like to index a bunch of documents with annotations like these using Lucene and then perform searches across the different layers. An example of a simple query would be to retrieve all documents where Washington is tagged as a person. While I'm not absolutely committed to the notation, syntactically end users might enter the query as follows: Query: Word=Washington,NER=Person I'd also like to do more complex queries involving the sequential order of annotations across different layers, e.g. find all the documents where there's a word tagged Person followed by the words arrived at followed by a word tagged Location. Such a query might look like: Query: "NER=Person Word=arrived Word=at NER=Location" What's a good way to approach this with Lucene? Is there any way to index and search over document fields that contain structured tokens?

  • Bulletin board - Database optimisation

    - by andrew
    This question is a follow-on from this question.

    The project and problem: The project I am currently working on is a bulletin board for a large non-profit organisation. The bulletin board will be used for inter-office communication within the organisation. I am building the application and have been having trouble extracting the results that I need from my database, because I don't think it is properly normalized and because of limitations in my knowledge of relational database theory and MySQL. I would appreciate input into the design of the board in general and, in particular, ways that the database structure can be improved to facilitate efficient queries and help me develop this application and future applications faster.

    Business logic: The bulletin board will be used in the following way. Posting bulletins and replies to bulletins: employees, or 'users', in offices around the country will be able to post messages to the bulletin board. Bulletins must be posted to a location and categorised; I'll call these "bulletins". Users will be able to post any number of replies to any one bulletin, and users will be able to reply to their own bulletins; I'll call these 'replies'. Rating bulletins and replies: users will be able to either 'like' or 'dislike' a bulletin or a reply, and the total number of likes or dislikes will be shown for each bulletin or reply. Viewing the bulletin board and replies: bulletins can be displayed chronologically. Users can sort bulletins chronologically, or by the date of the latest reply to each bulletin (let me know if you need more explanation). When a particular bulletin is selected, replies to that bulletin will be displayed chronologically. A sketch of the tables implied by these rules is shown below.

    @PerformanceDBA - edited 10:34 EST 28/12/10: I have begun implementing the data model. I assume that the 6th data model is the physical model because it contains the associative tables. I am going to post any questions that I have below. I will put up a database dump once I am done. I will then put up a list of all the queries that I need to run on the database and begin writing them. I hope you had a good Christmas. I'm in Canada and there's snow! Implementation of the physical model.
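
    Purely as an illustration of the business rules above (table and column names here are my assumptions, not the poster's data model), the core structure might look like:

        -- Hypothetical sketch of the core tables implied by the business rules above.
        CREATE TABLE bulletin (
            bulletin_id  INT PRIMARY KEY,
            user_id      INT NOT NULL,        -- posting employee
            location_id  INT NOT NULL,        -- bulletins are posted to a location
            category_id  INT NOT NULL,        -- and categorised
            body         TEXT NOT NULL,
            posted_at    DATETIME NOT NULL
        );

        CREATE TABLE reply (
            reply_id     INT PRIMARY KEY,
            bulletin_id  INT NOT NULL,
            user_id      INT NOT NULL,
            body         TEXT NOT NULL,
            posted_at    DATETIME NOT NULL,
            FOREIGN KEY (bulletin_id) REFERENCES bulletin (bulletin_id)
        );

        -- One row per user per bulletin; rating is +1 (like) or -1 (dislike), so the
        -- totals are a SUM/COUNT per bulletin. A reply_rating table would mirror this.
        CREATE TABLE bulletin_rating (
            bulletin_id  INT NOT NULL,
            user_id      INT NOT NULL,
            rating       TINYINT NOT NULL,
            PRIMARY KEY (bulletin_id, user_id),
            FOREIGN KEY (bulletin_id) REFERENCES bulletin (bulletin_id)
        );

    With this shape, "sort by latest reply" becomes an ORDER BY on MAX(reply.posted_at) grouped per bulletin, which is one of the queries worth checking against the final model.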

  • How to add a new Stage to my default stage?

    - by Raigomaru
    I want to add a new Stage called field to the default stage (I need to place different elements on it later), and then I want to add a bitmap called myBitmap to the field. But nothing happens, and I don't understand what I should do... var field:Stage = new Stage(); field.x = 200; field.y = 200; field.width = 300; field.height = 300; stage.addChild(field); var bdWidth:Number = 100; var bdHeight:Number = 100; var bdTransparent:Boolean = true; var bdFillColorARGB:uint = 0xFF007090; var myBitmapData:BitmapData = new BitmapData(bdWidth, bdHeight, bdTransparent, bdFillColorARGB); var myBitmap:Bitmap = new Bitmap(myBitmapData); myBitmap.x = 10; myBitmap.y = 10; field.addChild(myBitmap);
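
    ActionScript 3 has a single Stage that cannot be constructed with new, so a rough sketch of the usual alternative is to use a Sprite as the container (names kept from the question):

        // Use a Sprite (a DisplayObjectContainer) instead of a second Stage.
        var field:Sprite = new Sprite();
        field.x = 200;
        field.y = 200;
        stage.addChild(field);

        var myBitmapData:BitmapData = new BitmapData(100, 100, true, 0xFF007090);
        var myBitmap:Bitmap = new Bitmap(myBitmapData);
        myBitmap.x = 10;
        myBitmap.y = 10;
        field.addChild(myBitmap);

    Note that an empty Sprite has no size of its own, so setting width and height before adding children has no effect; it takes its size from what you add to it or draw into its graphics.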

  • Django - foreignkey problem

    - by realshadow
    Hey, imagine you have this model: class Category(models.Model): node_id = models.IntegerField(primary_key = True) type_id = models.IntegerField(max_length = 20) parent_id = models.IntegerField(max_length = 20) sort_order = models.IntegerField(max_length = 20) name = models.CharField(max_length = 45) lft = models.IntegerField(max_length = 20) rgt = models.IntegerField(max_length = 20) depth = models.IntegerField(max_length = 20) added_on = models.DateTimeField(auto_now = True) updated_on = models.DateTimeField(auto_now = True) status = models.IntegerField(max_length = 20) node = models.ForeignKey(Category_info, verbose_name = 'Category_info', to_field = 'node_id') The important part is the ForeignKey. When I try: Category.objects.filter(type_id = type_g.type_id, parent_id = offset, status = 1) I get an error that get() returned more than one Category, which is fine, because it is supposed to return more than one. But I want to filter the results through another field, which would be the type_id from the second model. Here it is: class Category_info(models.Model): objtree_label_id = models.AutoField(primary_key = True) node_id = models.IntegerField(unique = True) language_id = models.IntegerField() label = models.CharField(max_length = 255) type_id = models.IntegerField() The type_id can be any number from 1 - 5. I am desperately trying to get only one result, where the type_id would be number 1. Here is what I want in SQL: SELECT n.*, l.* FROM objtree_nodes n JOIN objtree_labels l ON (n.node_id = l.node_id) WHERE n.type_id = 15 AND n.parent_id = 50 AND l.type_id = 1 Any help is GREATLY appreciated. Regards
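
    A minimal sketch of the usual Django answer (assuming the models above, with node as the ForeignKey to Category_info) is to span the relation inside the filter using a double-underscore lookup:

        # node__type_id follows the node ForeignKey into Category_info.type_id,
        # mirroring the JOIN ... WHERE l.type_id = 1 in the SQL above.
        categories = Category.objects.filter(
            type_id=type_g.type_id,
            parent_id=offset,
            status=1,
            node__type_id=1,
        )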

  • Producer consumer with qualifications

    - by tgguy
    I am new to Clojure and am trying to understand how to properly use its concurrency features, so any critique/suggestions are appreciated. I am trying to write a small test program in Clojure that works as follows: there are 5 producers and 2 consumers; a producer waits for a random time and then pushes a number onto a shared queue; a consumer should pull a number off the queue as soon as the queue is nonempty and then sleep for a short time to simulate doing work; the consumers should block when the queue is empty; producers should block when the queue has more than 4 items in it to prevent it from growing huge. Here is my plan for each step above: the producers and consumers will be agents that don't really care about their state (just nil values or something); I just use the agents to send-off a "consumer" or "producer" function to run at some time. Then the shared queue will be (def queue (ref [])). Perhaps this should be an atom though? In the "producer" agent function, simply (Thread/sleep (rand-int 1000)) and then (dosync (alter queue conj (rand-int 100))) to push onto the queue. I am thinking of making the consumer agents watch the queue for changes with add-watcher. Not sure about this though... it will wake up the consumers on any change, even if the change came from a consumer pulling something off (possibly making it empty). Perhaps checking for this in the watcher function is sufficient. Another problem I see is that if all consumers are busy, then what happens when a producer adds something new to the queue? Does the watched event get queued up on some consumer agent or does it disappear? See above; I really don't know how to do this. I heard that Clojure's seque may be useful, but I couldn't find enough doc on how to use it and my initial testing didn't seem to work (sorry, I don't have the code on me anymore).
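
    One option worth knowing about (a sketch under my own assumptions, not the poster's agent-based plan) is to let java.util.concurrent.LinkedBlockingQueue supply both the blocking and the size bound, so no watchers are needed:

        (import 'java.util.concurrent.LinkedBlockingQueue)

        ;; Capacity 4: producers block in .put when the queue is full,
        ;; consumers block in .take when it is empty.
        (def queue (LinkedBlockingQueue. 4))

        (defn producer [id]
          (future
            (dotimes [_ 10]
              (Thread/sleep (rand-int 1000))
              (.put queue (rand-int 100)))))      ; blocks while 4 items are already waiting

        (defn consumer [id]
          (future
            (loop []
              (let [item (.take queue)]           ; blocks while the queue is empty
                (Thread/sleep 200)                ; simulate doing work
                (println "consumer" id "processed" item)
                (recur)))))

        (def producers (doall (map producer (range 5))))
        (def consumers (doall (map consumer (range 2))))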

  • Sort Grid Columns of mixed type in EXTJS Grid

    - by Amit
    Hello, I want to sort the ExtJS grid columns. I have the column type as float, and from the server side I am getting values which can contain a "-" value; what happens now is that the grid displays NaN instead of "-", and the sort no longer works. My requirement is to create a custom sort which sorts first based on number and then based on string. Please suggest something, as a renderer also does not work for me. My JSON string is: {metaData:{"totalProperty":"total", "root":"records","fields":[{"header":"Part Number##false","name":"XJE010^VT-007!0","type":"string"},{"header":"Marketing Status##false","name":"STP716^VT-007!0","type":"string"},{"header":"Package##false","name":"XJE016^VT-007!0","type":"string"},{"header":"Automotive Grade##false","name":"STP472^VT-007!0","type":"string"},{"header":"VDSS##false","name":"XJG810^VT-007!0","type":"float"},{"header":"Drain Current (Dc)(I_D) % (A)##false","name":"XJG273^VT-006!0","type":"float"},{"header":"RDS(on) (@VGS=10V) % (Ω)##false","name":"XJG640^VT-006!3","type":"float"},{"header":"Features##false","name":"GNP023^VT-007!0","type":"string"},{"header":"RDS(on) (@4.5 or 5V) % (Ω)##false","name":"XJG640^VT-006!6","type":"float"},{"header":"RDS(on) (@2.7V) % (Ω)##false","name":"XJG640^VT-006!7","type":"float"},{"header":"RDS(on) (@1.8V) % (Ω)##false","name":"XJG640^VT-006!8","type":"float"},{"header":"Free Samples##false","name":"STP0881^VT-007!0","type":"string"},{"header":"Total Gate Charge(Qg) typ ()##true","name":"STP049^VT-002!0","type":"float"},{"header":"Total Power Dissipation(PD) % (W)##true","name":"XJG820^VT-006!0","type":"float"}]},"success":"true", "total":13,"records":[{"XJE010^VT-007!0":"STB80PF55$$/cn/analog/product/67164.jsp","STP716^VT-007!0":"Active","XJE016^VT-007!0":"D2PAK","STP472^VT-007!0":"_","XJG810^VT-007!0":"-55","XJG273^VT-006!0":"80","XJG640^VT-006!3":".018","GNP023^VT-007!0":"-","XJG640^VT-006!6":"-","XJG640^VT-006!7":"-","XJG640^VT-006!8":"-","STP0881^VT-007!0":"No","STP049^VT-002!0":"190","XJG820^VT-006!0":"300"},{"XJE010^VT-007!0":"STD10PF06$$/cn/analog/product/64543.jsp","STP716^VT-007!0":"Active","XJE016^VT-007!0":"IPAK TO-251 TO 252 DPAK","STP472^VT-007!0":"_","XJG810^VT-007!0":"-60","XJG273^VT-006!0":"-10","XJG640^VT-006!3":".2","GNP023^VT-007!0":"-","XJG640^VT-006!6":"-","XJG640^VT-006!7":"-","XJG640^VT-006!8":"-","STP0881^VT-007!0":"No ... Regards, Amit
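
    A hedged sketch of one way to handle this in ExtJS (assuming the field definitions can be adjusted on the client): give the affected fields a sortType function so "-" rows sort together at one end instead of becoming NaN:

        // Hypothetical field config: parse the value only for sorting; rows whose
        // value is "-" get -Infinity so they group together below the real numbers.
        {
            name: 'XJG810^VT-007!0',
            sortType: function (value) {
                var n = parseFloat(value);
                return isNaN(n) ? -Infinity : n;
            }
        }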

  • How to use R's ellipsis feature when writing your own function?

    - by Ryan Thompson
    The R language has a nifty feature for defining functions that can take a variable number of arguments. For example, the function data.frame takes any number of arguments, and each argument becomes the data for a column in the resulting data table. Example usage: > data.frame(letters=c("a", "b", "c"), numbers=c(1,2,3), notes=c("do", "re", "mi")) letters numbers notes 1 a 1 do 2 b 2 re 3 c 3 mi The function's signature includes an ellipsis, like this: function (..., row.names = NULL, check.rows = FALSE, check.names = TRUE, stringsAsFactors = default.stringsAsFactors()) { [FUNCTION DEFINITION HERE] } I would like to write a function that does something similar, taking multiple values and consolidating them into a single return value (as well as doing some other processing). In order to do this, I need to figure out how to "unpack" the ... from the function's arguments within the function. I don't know how to do this. The relevant line in the function definition of data.frame is object <- as.list(substitute(list(...)))[-1L], which I can't make any sense of. So how can I convert the ellipsis from the function's signature into, for example, a list? To be more specific, how can I write get_list_from_ellipsis in the code below? my_ellipsis_function <- function(...) { input_list <- get_list_from_ellipsis(...) output_list <- lapply(X=input_list, FUN=do_something_interesting) return(output_list) } my_ellipsis_function(a=1:10,b=11:20,c=21:30)
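
    For reference, a minimal sketch of the answer being asked for: list(...) captures the dots directly (do_something_interesting is the placeholder from the question, not a real function):

        get_list_from_ellipsis <- function(...) {
          list(...)   # evaluates each argument and returns them as a (named) list
        }

        my_ellipsis_function <- function(...) {
          input_list <- get_list_from_ellipsis(...)
          output_list <- lapply(X = input_list, FUN = do_something_interesting)
          return(output_list)
        }

        my_ellipsis_function(a = 1:10, b = 11:20, c = 21:30)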

  • How do I display java.lang.* object allocations in Eclipse profiler?

    - by Martin Wickman
    I am profiling an application using the Eclipse profiler. I am particularly interested in the number of allocated object instances of classes from java.lang (for instance java.lang.String or java.util.HashMap). I also want to know stuff like the number of calls to String.equals() etc. I use the "Object Allocations" tab, and it shows all classes in my application and a count. It also shows all int[], byte[], long[] etc., but there is no mention of any standard Java classes. For instance, this silly code: public static void main(String[] args) { Object obj[] = new Object[1000]; for (int i = 0; i < 1000; i++) { obj[i] = new StringBuffer("foo" + i); } System.out.println (obj[30]); } Shows up in the Object Allocations tab as 7 byte[]s, 4 char[]s and 2 int[]s. It doesn't matter whether I use 1000 iterations or 1. It seems the profiler simply ignores everything that is in any of the java.* packages. The same applies to Execution Statistics as well. Any idea how to display instances of java.* in the Eclipse Profiler?

  • Advice on a simple Windows Form

    - by Austin Hyde
    I have a VERY simple Windows form that the user uses to manage "Stores". Each store has a name and number, and is kept in a corresponding DB table. The form has a listbox of stores, an add button that creates a new store, a delete button, and an edit button. Beside those I have text boxes for the name and number, and save/cancel buttons. When the user chooses a store from the list box and clicks 'edit', the textboxes become populated and save/cancel become active. When the user clicks 'add', I create a new Store, add it to the listbox, activate the textboxes and save/cancel buttons, then commit it to the database when the user clicks 'save', or discard it when the user clicks 'cancel'. Right now, my event system looks like this (in pseudo-code; it's just shorter that way.) add->click: store = new Store() listbox.add(store) populateAndEdit(store) delete->click: store = listbox.selectedItem db.deleteOnSubmit(store) listbox.remove(store) db.submit() edit->click: populateAndEdit(listbox.selectedItem) save->click: parseAndSave(listbox.selectedItem) db.submit() disableTexts() cancel->click: disableTexts() The problem is in how I determine if we are inserting a new Store or updating an existing one. The obvious solution to me would be to make it a "modal" process - that is, when I click edit, I go into edit mode, and the save button does things differently than if I were in add mode. I know I could make this more MVC-like, but I don't really think this simple form merits the added complexity. I'm not very experienced with WinForms, so I'm not sure if I even have the right idea for how to tackle this. Is there a better way to do this? I would like to keep it simple, but usable.
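
    As an illustrative sketch only (it assumes a LINQ to SQL style data context like the pseudo-code hints at, and a Store.Id of 0 meaning "not yet saved"; neither detail comes from the post), the save handler can decide between insert and update without a separate add/edit mode flag:

        private void saveButton_Click(object sender, EventArgs e)
        {
            var store = (Store)storeListBox.SelectedItem;
            store.Name = nameTextBox.Text;
            store.Number = int.Parse(numberTextBox.Text);

            if (store.Id == 0)                    // assumed convention: 0 means "new, not in the DB yet"
                db.Stores.InsertOnSubmit(store);  // LINQ to SQL; adjust to the real data layer

            db.SubmitChanges();
            DisableTexts();
        }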

  • Database localization

    - by Don
    Hi, I have a number of database tables that contain name and description columns which need to be localized. My initial attempt at designing a DB schema that would support this was something like: product ------- id name description local_product ------- id product_id local_name local_description locale_id locale ------ id locale However, this solution requires a new local_ table for every table that contains name and description columns that require localization. In an attempt to avoid this overhead I redesigned the schema so that only a single localization table is needed product ------- id localization_id localization ------- id local_name local_description locale_id locale ------ id locale Here's an example of the data which would be stored in this schema when there are 2 tables (product and country) requiring localization: country id, localization_id ----------------------- 1, 5 product id, localization_id ----------------------- 1, 2 localization id, local_name, local_description, locale_id ------------------------------------------------------ 2, apple, a delicious fruit, 2 2, pomme, un fruit délicieux, 3 2, apfel, ein köstliches Obst, 4 5, ireland, a small country, 2 5, irlande, un petite pay, 3 locale id, locale -------------- 2, en 3, fr 4, de Notice that the compound primary key of the localization table is (id, locale_id), but the foreign key in the product table only refers to the first element of this compound PK. This seems like 'a bad thing' from the POV of normalization. Is there any way I can fix this problem, or alternatively, is there a completely different schema that supports localization without creating a separate table for each localizable table? Update: A number of respondents have proposed a solution that requires creating a separate table for each localizable table. However, this is precisely what I'm trying to avoid. The schema I've proposed above almost solves the problem to my satisfaction, but I'm unhappy about the fact that the localization_id foreign keys only refer to part of the corresponding primary key in the localization table. Thanks, Don
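
    One way to keep the single shared table while giving the foreign key a whole primary key to point at (a sketch of my own, not from the question) is to add a small "localization set" table and move locale_id out of the referenced key:

        -- Hypothetical sketch: product references localization_set.id (a complete PK),
        -- and the per-locale rows hang off that set.
        CREATE TABLE localization_set (
            id INT PRIMARY KEY
        );

        CREATE TABLE localization (
            set_id            INT NOT NULL,
            locale_id         INT NOT NULL,
            local_name        VARCHAR(255),
            local_description TEXT,
            PRIMARY KEY (set_id, locale_id),
            FOREIGN KEY (set_id) REFERENCES localization_set (id),
            FOREIGN KEY (locale_id) REFERENCES locale (id)
        );

        CREATE TABLE product (
            id              INT PRIMARY KEY,
            name            VARCHAR(255),
            localization_id INT NOT NULL,
            FOREIGN KEY (localization_id) REFERENCES localization_set (id)
        );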

  • How to reference other documents in a CouchDB view (join-like functionality)

    - by Surfrdan
    We have a CouchDB representation of an XML database which we use to power a JavaScript-based front end for manipulating the XML documents. The basic structure is a simple 3-level hierarchy, i.e. A - B - C A: Parent document (type A) B: any number of child documents of parent type A C: any number of child documents of parent type B We represent these 3 document types in CouchDB with a 'type' attribute: e.g. { "_id":"llgc-id:433", "_rev":"1-3760f3e01d7752a7508b047e0d094301", "type":"A", "label":"Top Level A document", "logicalMap":{ "issues":{ "1":{ "URL":"http://hdl.handle.net/10107/434-0", "FILE":"llgc-id:434" }, "2":{ "URL":"http://hdl.handle.net/10107/467-0", "FILE":"llgc-id:467" etc... } } } } { "_id":"llgc-id:433", "_rev":"1-3760f3e01d7752a7508b047e0d094301", "type":"B", "label":"a B document", } What I want to do is produce a view which returns documents just like the A type but includes the label attribute from the B document within the logicalMap list e.g. { "_id":"llgc-id:433", "_rev":"1-3760f3e01d7752a7508b047e0d094301", "type":"A", "label":"Top Level A document", "logicalMap":{ "issues":{ "1":{ "URL":"http://hdl.handle.net/10107/434-0", "FILE":"llgc-id:434", "LABEL":"a B document" }, "2":{ "URL":"http://hdl.handle.net/10107/467-0", "FILE":"llgc-id:467", "LABEL":"another B document" etc... } } } } I'm struggling to get my head around the best way to perform this. It looks like it should be fairly simple though!
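
    A hedged sketch of CouchDB's "linked documents" trick, which gets close to a join: emit a value containing the B document's _id, then query the view with include_docs=true so each row comes back with the referenced B document (and its label) attached. Field names are assumed from the samples above:

        // Map function: one row per issue of each type A document.
        function (doc) {
          if (doc.type === "A" && doc.logicalMap && doc.logicalMap.issues) {
            for (var key in doc.logicalMap.issues) {
              var issue = doc.logicalMap.issues[key];
              // The special _id in the emitted value makes include_docs=true
              // return the B document named by FILE instead of this A document.
              emit([doc._id, key], { _id: issue.FILE, URL: issue.URL });
            }
          }
        }

    The caller then reassembles the A-shaped structure from the rows; a view cannot literally rewrite logicalMap, so the merging happens on the client side.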

  • Credit Card storage solution

    - by jtnire
    Hi Everyone, I'm developing a solution that is designed to store membership details, as well as credit card details. I'm trying to comply with PCI DSS as much as I can. Here is my design so far: PAN = Primary account number == long number on credit card Server A is a remote server. It stores all membership details (names, addresses etc.) and provides individual Key A's for each PAN stored. Server B is a local server, and actually holds the encrypted PANs, as well as Key B, and does the decryption. To get a PAN, the client has to authenticate with BOTH servers, ask Server A for the respective Key A, then give Key A to Server B, which will return the PAN to the client (provided authentication was successful). Server A will only ever encrypt Key A with Server B's public key, as it will have it beforehand. Server B will probably have to send a salt first, though I don't think that has to be encrypted. I haven't really thought about any implementation (i.e. coding) specifics yet regarding the above; however, the solution is using Java's Cajo framework (a wrapper for RMI), so that is how the servers will communicate with each other (currently, membership details are transferred in this way). The reason why I want Server B to do the decryption, and not the client, is that I am afraid of decryption keys going into the client's RAM, even though it's probably just as bad on the server... Can anyone see anything wrong with the above design? It doesn't matter if the above has to be changed. Thanks jtnire

  • Dynamically generate client-side HTML form control using JavaScript and server-side Python code in Google App Engine

    - by gisc
    I have the following client-side front-end HTML using the Jinja2 template engine: {% for record in result %} <textarea name="remark">{{ record.remark }}</textarea> <input type="submit" name="approve" value="Approve" /> {% endfor %} Thus the HTML may show more than one textarea/submit-button pair. The back-end Python code retrieves a variable number of records from a gql query using the model, and passes this to the Jinja2 template in result. When a submit button is clicked, it triggers the post method to update the record: def post(self): if self.request.get('approve'): updated_remark = self.request.get('remark') record.remark = db.Text(updated_remark) record.put() However, in some instances, the record updated is NOT the one that corresponds to the submit button clicked (e.g. if a user clicks on record 1's submit, record 2's remark gets updated, but not record 1's). I gather that this is due to the duplicate attribute name remark. I can possibly use JavaScript/jQuery to generate different attribute names. The question is, how do I code the back-end Python to get the (variable number of) names generated by the JavaScript? Thanks.
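
    A sketch of one common alternative that avoids generating names in JavaScript at all (the form markup and the Record.get_by_id model call are my assumptions, not the original code): give each record its own form with a hidden id, and read that id in post:

        {# Jinja2: one form per record, so only that record's fields are submitted #}
        {% for record in result %}
        <form method="post">
          <input type="hidden" name="record_id" value="{{ record.key().id() }}" />
          <textarea name="remark">{{ record.remark }}</textarea>
          <input type="submit" name="approve" value="Approve" />
        </form>
        {% endfor %}

        # Python handler (webapp-style), looking the record up by the submitted id
        def post(self):
            if self.request.get('approve'):
                record = Record.get_by_id(int(self.request.get('record_id')))
                record.remark = db.Text(self.request.get('remark'))
                record.put()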

  • How does DateTime.Now affect query plan caching in SQL Server?

    - by Bill Paetzke
    Question: Does passing DateTime.Now as a parameter to a proc prevent SQL Server from caching the query plan? If so, then is the web app missing out on huge performance gains? Possible Solution: I thought DateTime.Today.AddDays(1) would be a possible solution. It would pass the same end date to the SQL proc (per day), and the user would still get the latest data. Please speak to this as well. Given Example: Let's say we have a stored procedure. It reports data back to a user on a webpage. The user can set a date range. If the user sets today's date as the "end date," which includes today's data, the web app passes DateTime.Now to the SQL proc. Let's say that one user runs a report--5/1/2010 to now--over and over several times. On the webpage, the user sees 5/1/2010 to 5/4/2010. But the web app passes DateTime.Now to the SQL proc as the end date. So, the end date in the proc will always be different, although the user is querying a similar date range. Assume the number of records in the table and the number of users are large, so any performance gains matter. Hence the importance of the question. Example proc and execution (if that helps to understand): CREATE PROCEDURE GetFooData @StartDate datetime, @EndDate datetime AS SELECT * FROM Foo WHERE LogDate >= @StartDate AND LogDate < @EndDate Here's a sample execution using DateTime.Now: EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27' -- passed in DateTime.Now Here's a sample execution using DateTime.Today.AddDays(1): EXEC GetFooData '2010-05-01', '2010-05-05' -- passed in DateTime.Today.AddDays(1) The same data is returned for both executions, since the current time is: 2010-05-04 15:41:27.

  • Use PermGen space or roll-my-own intern method?

    - by Adamski
    I am writing a Codec to process messages sent over TCP using a bespoke wire protocol. During the decode process I create a number of Strings, BigDecimals and dates. The client-server access patterns mean that it is common for the client to issue a request and then decode thousands of response messages, which results in a large number of duplicate Strings, BigDecimals, etc. Therefore I have created an InternPool<T> class allowing me to intern each class of object. Internally, the pool uses a WeakHashMap<T, WeakReference<T>>. For example: InternPool<BigDecimal> pool = new InternPool<BigDecimal>(); ... // Read BigDecimal from in buffer and then intern. BigDecimal quantity = pool.intern(readBigDecimal(in)); My question: I am using InternPool for BigDecimal, but should I consider also using it for String instead of String's intern() method, which I believe uses PermGen space? What is the advantage of using PermGen space?
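
    For concreteness, a minimal sketch of what such a pool might look like (an assumed shape, not the author's actual class):

        import java.lang.ref.WeakReference;
        import java.util.WeakHashMap;

        public class InternPool<T> {
            // Weak keys and weak values: entries vanish once nothing else references the object.
            private final WeakHashMap<T, WeakReference<T>> pool =
                new WeakHashMap<T, WeakReference<T>>();

            public synchronized T intern(T object) {
                WeakReference<T> ref = pool.get(object);
                T cached = (ref == null) ? null : ref.get();
                if (cached != null) {
                    return cached;                       // reuse the existing equal instance
                }
                pool.put(object, new WeakReference<T>(object));
                return object;
            }
        }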

  • What's the best way to transfer a large dataset over a .NET web service?

    - by Malvineous
    I've inherited a C# .NET application which talks to a web service, and the web service talks to an Oracle database. I need to add an export function to the UI, to produce an Excel spreadsheet of some of the data. I have created a web service function to run a database query, load the data into a DataTable and then return it, which works fine for a small number of rows. However there is enough data in the full run that the client application locks up for a few minutes and then returns a timeout error. Obviously this isn't the best way to retrieve such a large dataset. Before I go ahead and come up with some dodgy way of splitting the call, I'm wondering if there is already something in place that can handle this. At the moment I'm thinking of a startExport function then repeatedly calling a next50Rows function until there is no data left, but because web services are stateless this means I'm going to have to keep some sort of ID number around and deal with the associated permissions. It would mean that I don't have to load the entire data set into the web server's memory though, which is one good thing. So if anyone knows a better way to retrieve a large amount of data (in a table format) over a .NET web service, please let me know!

  • how do I deconstruct COUNT()?

    - by user151841
    I have a view with some joins in it. I'm doing a select from that view with COUNT(*) as one of the columns of the select, and I'm surprised by the number it's returning. Note that there is no GROUP BY nor aggregate column statement in the source view that the query is drawing from. How can I take it apart to see how it arrives at this number? I have three columns in the GROUP BY clause: SELECT column1, column2, column3, COUNT(*) FROM View GROUP BY column1, column2, column3 I get a result like

        +---------+---------+---------+----------+
        | column1 | column2 | column3 | COUNT(*) |
        +---------+---------+---------+----------+
        | value1  | valueA  | value_a |      103 |
        +---------+---------+---------+----------+
        | value2  | valueB  | value_b |       56 |
        +---------+---------+---------+----------+
        etc.

    I'd like to see how it arrives at that 103, 56, etc. In other words, I want to run a query that returns 103 rows of something, so that I know that I've expressed the query properly. I'm double-checking my work. I'm not saying that I think COUNT(*) doesn't work (I know that "SELECT is not broken"); what I want to double-check is exactly what I'm expressing in my query, because I think I've expressed the wrong thing, which would be why I'm getting unexpected values. I need to see more of what I'm actually directing MySQL to count. So should I take them one by one, and try out each value in a WHERE clause? In other words, should I do SELECT column1 FROM View WHERE column1 = 'first_grouped_value' SELECT column1 FROM View WHERE column1 = 'second_grouped_value' SELECT column2 FROM View WHERE column1 = 'first_grouped_value' SELECT column2 FROM View WHERE column1 = 'second_grouped_value' and see whether the row count returned matches the COUNT(*) value in the grouped results? Because of confidentiality, I won't be able to post any of the query or database structure. All I'm asking for is a general technique to see what COUNT(*) is actually counting.
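
    To drill into one group, the general technique is to put all three grouped values into the WHERE clause at once (the values below are the placeholders from the result above); the query should then return exactly the rows behind that group:

        SELECT *
        FROM View
        WHERE column1 = 'value1'
          AND column2 = 'valueA'
          AND column3 = 'value_a';
        -- the row count here should equal that group's COUNT(*) of 103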

  • Delphi: Center Specific Line in TRichEdit by Scrolling

    - by Anagoge
    I have a Delphi 2007 TRichEdit with several lines in it. I want to scroll the richedit vertically such that a specific line number is approximately centered in the visible/display area of the richedit. For example, I want to write the code for CenterLineInRichEdit in this example: procedure CenterLineInRichEdit(Edit: TRichEdit; LineNum: Integer); begin ... Edit.ScrollTo(...); end; procedure TForm1.FormCreate(Sender: TObject); var REdit: TRichEdit; i: Integer; begin REdit := TRichEdit.Create(Self); REdit.Parent := Self; Redit.ScrollBars := ssVertical; REdit.SetBounds(10, 10, 200, 150); for i := 1 to 25 do REdit.Lines.Add('This is line number ' + IntToStr(i)); CenterLineInRichEdit(REdit, 13); end; I looked into using the WM_VSCROLL message, and it allows scrolling up/down one line, etc., but not scrolling to center a specific line. I assume I would need to calculate the line height, the display area height, etc.? Or is there an easier way?
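
    A rough sketch of one possible approach (my own assumption, using the standard EM_GETFIRSTVISIBLELINE and EM_LINESCROLL messages; the visible-line count derived from the font height is only an estimate):

        procedure CenterLineInRichEdit(Edit: TRichEdit; LineNum: Integer);
        var
          FirstVisible, VisibleLines: Integer;
        begin
          { Topmost line currently shown (0-based). }
          FirstVisible := SendMessage(Edit.Handle, EM_GETFIRSTVISIBLELINE, 0, 0);
          { Rough estimate of how many lines fit in the client area. }
          VisibleLines := Edit.ClientHeight div Abs(Edit.Font.Height);
          { Scroll vertically by the difference so LineNum lands near the middle. }
          SendMessage(Edit.Handle, EM_LINESCROLL, 0,
            LineNum - (VisibleLines div 2) - FirstVisible);
        end;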

  • Creating a method for wrapping a loaded image in a UIImageView

    - by eco_bach
    I have the following in my applicationDidFinishLaunching method UIImage *image2 = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"image2.jpg" ofType:nil]]; view2 = [[UIImageView alloc] initWithImage:image2]; view2.hidden = YES; [containerView addSubview:view2]; I'm simply adding an image to a view. But because I need to add 30-40 images, I need to wrap the above in a function (which returns a UIImageView) and then call it from a loop. This is my first attempt at creating the function -(UIImageView)wrapImageInView:(NSString *)imagePath { UIImage *image = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:imagePath ofType:nil]]; UIImageView *view = [[UIImageView alloc] initWithImage:image]; view.hidden = YES; } And then to invoke it I have the following so far; for simplicity I am only wrapping 3 images //Create an array and add elements to it NSMutableArray *anArray = [[NSMutableArray alloc] init]; [anArray addObject:@"image1.jpg"]; [anArray addObject:@"image2.jpg"]; [anArray addObject:@"image3.jpg"]; //Use a for each loop to iterate through the array for (NSString *s in anArray) { UIImageView *wrappedImgInView=[self wrapImage:s]; [containerView addSubview:wrappedImgInView]; NSLog(s); } //Release the array [anArray release]; I have 2 general questions. 1. Is my approach correct, i.e. following best practices, for what I am trying to do (load multiple images (jpgs, pngs, etc.) and add them to a container view)? 2. For this to work properly with a large number of images, do I need to keep my array creation separate from my method invocation? Any other suggestions welcome!
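
    For reference, a hedged sketch of the helper with the issues in the attempt addressed (pointer return type, an actual return statement, and a selector name that matches the call site; memory management assumes pre-ARC, as the question's release calls suggest):

        - (UIImageView *)wrapImageInView:(NSString *)imagePath {
            UIImage *image = [UIImage imageWithContentsOfFile:
                [[NSBundle mainBundle] pathForResource:imagePath ofType:nil]];
            UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
            imageView.hidden = YES;
            return [imageView autorelease];   // caller adds it to containerView
        }

        // Call it with the matching selector name:
        UIImageView *wrappedImgInView = [self wrapImageInView:s];
        [containerView addSubview:wrappedImgInView];
        NSLog(@"%@", s);                      // use a format string rather than NSLog(s)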

  • How do I make this nested for loop, testing sums of cubes, more efficient?

    - by Brian J. Fink
    I'm trying to iterate through all the combinations of pairs of positive long integers in Java and test the sum of their cubes to discover if it's a Fibonacci number. I'm currently doing this by using the value of the outer loop variable as the inner loop's upper limit, with the effect being that the outer loop runs a little slower each time. Initially it appeared to run very quickly--I was up to 10 digits within minutes. But now after 2 full days of continuous execution, I'm only somewhere in the middle range of 15 digits. At this rate it may end up taking a whole year just to finish running this program. The code for the program is below: import java.lang.*; import java.math.*; public class FindFib { public static void main(String args[]) { long uLimit=9223372036854775807L; //long maximum value BigDecimal PHI=new BigDecimal(1D+Math.sqrt(5D)/2D); //Golden Ratio for(long a=1;a<=uLimit;a++) //Outer Loop, 1 to maximum for(long b=1;b<=a;b++) //Inner Loop, 1 to current outer { //Cube the numbers and add BigDecimal c=BigDecimal.valueOf(a).pow(3).add(BigDecimal.valueOf(b).pow(3)); System.out.print(c+" "); //Output result //Upper and lower limits of interval for Mobius test: [c*PHI-1/c,c*PHI+1/c] BigDecimal d=c.multiply(PHI).subtract(BigDecimal.ONE.divide(c,BigDecimal.ROUND_HALF_UP)), e=c.multiply(PHI).add(BigDecimal.ONE.divide(c,BigDecimal.ROUND_HALF_UP)); //Mobius test: if integer in interval (floor values unequal) Fibonacci number! if (d.toBigInteger().compareTo(e.toBigInteger())!=0) System.out.println(); //Line feed else System.out.print("\r"); //Carriage return instead } //Display final message System.out.println("\rDone. "); } } Now the use of BigDecimal and BigInteger was deliberate; I need them to get the necessary precision. Is there anything other than my variable types that I could change to gain better efficiency?
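
    One thing worth noting, offered only as a sketch independent of the loop structure: in Java, 1D+Math.sqrt(5D)/2D evaluates to 1 + sqrt(5)/2 rather than (1 + sqrt(5))/2, and the golden-ratio interval can be avoided entirely with the exact test that n is a Fibonacci number iff 5n^2 + 4 or 5n^2 - 4 is a perfect square:

        import java.math.BigInteger;

        // Exact Fibonacci test on arbitrary-precision integers.
        static boolean isFibonacci(BigInteger n) {
            BigInteger fiveNSquared = BigInteger.valueOf(5).multiply(n).multiply(n);
            return isPerfectSquare(fiveNSquared.add(BigInteger.valueOf(4)))
                || isPerfectSquare(fiveNSquared.subtract(BigInteger.valueOf(4)));
        }

        // Newton's method for the integer square root, then check it squares back to n.
        static boolean isPerfectSquare(BigInteger n) {
            if (n.signum() < 0) return false;
            if (n.signum() == 0) return true;
            BigInteger x = BigInteger.ONE.shiftLeft(n.bitLength() / 2 + 1); // always > sqrt(n)
            while (true) {
                BigInteger y = x.add(n.divide(x)).shiftRight(1);
                if (y.compareTo(x) >= 0) break;
                x = y;
            }
            return x.multiply(x).equals(n);
        }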

  • Sending buffered images between Java client and Twisted Python socket server

    - by PattimusPrime
    I have a server-side function that draws an image with the Python Imaging Library. The Java client requests an image, which is returned via socket and converted to a BufferedImage. I prefix the data with the size of the image to be sent, followed by a CR. I then read this number of bytes from the socket input stream and attempt to use ImageIO to convert to a BufferedImage. In abbreviated code for the client: public String writeAndReadSocket(String request) { // Write text to the socket BufferedWriter bufferedWriter = new BufferedWriter(new OutputStreamWriter(socket.getOutputStream())); bufferedWriter.write(request); bufferedWriter.flush(); // Read text from the socket BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(socket.getInputStream())); // Read the prefixed size int size = Integer.parseInt(bufferedReader.readLine()); // Get that many bytes from the stream char[] buf = new char[size]; bufferedReader.read(buf, 0, size); return new String(buf); } public BufferedImage stringToBufferedImage(String imageBytes) { return ImageIO.read(new ByteArrayInputStream(s.getBytes())); } and the server: # Twisted server code here # The analog of the following method is called with the proper client # request and the result is written to the socket. def worker_thread(): img = draw_function() buf = StringIO.StringIO() img.save(buf, format="PNG") img_string = buf.getvalue() return "%i\r%s" % (sys.getsizeof(img_string), img_string) This works for sending and receiving Strings, but image conversion (usually) fails. I'm trying to understand why the images are not being read properly. My best guess is that the client is not reading the proper number of bytes, but I honestly don't know why that would be the case. Side notes: I realize that the char[]-to-String-to-bytes-to-BufferedImage Java logic is roundabout, but reading the bytestream directly produces the same errors. I have a version of this working where the client socket isn't persistent, ie. the request is processed and the connection is dropped. That version works fine, as I don't need to care about the image size, but I want to learn why the proposed approach doesn't work.
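
    One likely culprit worth checking (this sketch is my own, with illustrative names, not the original code): Readers and Strings decode bytes into characters and will corrupt binary PNG data, so read the length prefix and then the raw bytes straight from the InputStream. Also, on the Python side, len(img_string) gives the byte count, whereas sys.getsizeof includes Python object overhead.

        import java.awt.image.BufferedImage;
        import java.io.ByteArrayInputStream;
        import java.io.DataInputStream;
        import java.net.Socket;
        import javax.imageio.ImageIO;

        public static BufferedImage readImage(Socket socket) throws java.io.IOException {
            DataInputStream in = new DataInputStream(socket.getInputStream());

            // Read the ASCII size prefix up to the '\r' separator, byte by byte.
            StringBuilder sizeText = new StringBuilder();
            int b;
            while ((b = in.read()) != -1 && b != '\r') {
                sizeText.append((char) b);
            }
            int size = Integer.parseInt(sizeText.toString().trim());

            // Read exactly 'size' raw bytes; readFully blocks until they all arrive.
            byte[] imageBytes = new byte[size];
            in.readFully(imageBytes);

            return ImageIO.read(new ByteArrayInputStream(imageBytes));
        }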
