Search Results

Search found 60903 results on 2437 pages for 'data mapping'.


  • Loading datasets from the datastore and merging them into a single dictionary: resource problem

    - by fredrik
    Hi, I have a product database that contains products, parts, and labels for each part based on language codes. The problem I'm having, and haven't gotten around, is the huge amount of resources used to fetch the different datasets and merge them into a dict that suits my needs. The products in the database are built from a number of parts, each of a certain type (i.e. color, size), and each part has a label for each language. I created four models for this: Products, ProductParts, ProductPartTypes and ProductPartLabels.

    I've narrowed it down to about 10 lines of code that seem to generate the problem. Currently I have 3 products, 3 types, 3 parts per type, and 2 languages, and the request takes a whopping 5500 ms to generate.

        for product in productData:
            productDict = {}
            typeDict = {}
            productDict['productName'] = product.name

            cache_key = 'productparts_%s' % (slugify(product.key()))
            partData = memcache.get(cache_key)

            if not partData:
                for type in typeData:
                    typeDict[type.typeId] = { 'default' : '', 'optional' : [] }

                ## Start of problem lines ##
                for defaultPart in product.defaultPartsData:
                    for label in labelsForLangCode:
                        if label.key() in defaultPart.partLabelList:
                            typeDict[defaultPart.type.typeId]['default'] = label.partLangLabel

                for optionalPart in product.optionalPartsData:
                    for label in labelsForLangCode:
                        if label.key() in optionalPart.partLabelList:
                            typeDict[optionalPart.type.typeId]['optional'].append(label.partLangLabel)
                ## end problem lines ##

                memcache.add(cache_key, typeDict, 500)
                partData = memcache.get(cache_key)

            productDict['parts'] = partData
            productList.append(productDict)

    I guess the problem is that the number of for loops is too high, and I have to iterate over the same data over and over again. labelsForLangCode holds all labels from ProductPartLabels that match the current langCode. All parts for a product are stored in a db.ListProperty(db.Key), and the same goes for all labels of a part. The reason I need the somewhat complex dict is that I want to display all data for a product with its default parts, and show a selector for the optional ones.

    defaultPartsData and optionalPartsData are properties in the Product model that look like this:

        @property
        def defaultPartsData(self):
            return ProductParts.gql('WHERE __key__ IN :key', key = self.defaultParts)

        @property
        def optionalPartsData(self):
            return ProductParts.gql('WHERE __key__ IN :key', key = self.optionalParts)

    When the completed dict is in the memcache it works smoothly, but isn't the memcache reset if the application goes into hibernation? I would also like to show the page to first-time users (memcache empty) without the enormous delay. And as I said above, this is only a small number of parts/products; what will the result be with 30 products and 100 parts? Is one solution to create a scheduled task that caches it in the memcache every hour? Is that efficient?

    I know this is a lot to take in, but I'm stuck. I've been at this for about 12 hours straight and can't figure out a solution. ..fredrik
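    One way to shrink the problem loops (a sketch, not the author's code; it assumes only the names already shown in the question, i.e. labelsForLangCode, partLabelList and partLangLabel) is to index the labels by datastore key once per request, so each part does constant-time lookups instead of scanning every label:

        # Build the key -> label text map once per request.
        labelsByKey = dict((label.key(), label.partLangLabel)
                           for label in labelsForLangCode)

        for defaultPart in product.defaultPartsData:
            for labelKey in defaultPart.partLabelList:
                if labelKey in labelsByKey:
                    typeDict[defaultPart.type.typeId]['default'] = labelsByKey[labelKey]

        for optionalPart in product.optionalPartsData:
            for labelKey in optionalPart.partLabelList:
                if labelKey in labelsByKey:
                    typeDict[optionalPart.type.typeId]['optional'].append(labelsByKey[labelKey])

    Note also that defaultPart.type.typeId may trigger an extra datastore fetch per part if type is a ReferenceProperty, which could account for much of the 5500 ms.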

    Read the article

  • What can I use MySQL for?

    - by ilhan
    I know how to store data in MySQL. In short, I know the basics: design, and storing strings, integers, and dates. Is there anything else that can be done or achieved with MySQL, like some kind of functions, temporary tables, and so on? I don't know. (I know PHP.)
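    As one example beyond plain storage (a minimal sketch; the function and the users table are made up for illustration), MySQL supports stored functions, stored procedures, triggers, views, and temporary tables:

        -- A stored function, callable from any query once created:
        CREATE FUNCTION full_name(first VARCHAR(50), last VARCHAR(50))
          RETURNS VARCHAR(101) DETERMINISTIC
          RETURN CONCAT(first, ' ', last);

        SELECT full_name(first_name, last_name) FROM users;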

    Read the article

  • How to setup Lucene/Solr for a B2B web app?

    - by Bill Paetzke
    Given:

    - 1 database per client (business customer)
    - 5000 clients
    - Clients have between 2 and 2000 users (avg is ~100 users/client)
    - 100k to 10 million records per database
    - Users need to search those records often (it's the best way to navigate their data)

    Possibly relevant info:

    - Several new clients each week (any time during business hours)
    - Multiple web servers and database servers (users can log in via any web server)
    - Let's stay agnostic of language or SQL brand, since Lucene (and Solr) have a breadth of support

    For example: Joel Spolsky said in Podcast #11 that his hosted web app product, FogBugz On-Demand, uses Lucene. He has thousands of on-demand clients, and each client gets their own database. They use an index per client and store it in the client's database. I'm not sure on the details, and I'm not sure if this is a serious mod to Lucene.

    The question: how would you set up Lucene search so that each client can only search within its own database?

    - How would you set up the index(es)?
    - Where do you store the index(es)?
    - Would you need to add a filter to all search queries?
    - If a client cancelled, how would you delete their (part of the) index? (This may be trivial, not sure yet.)

    Possible solutions (a sketch of the filter idea follows below):

    1. Make an index for each client (database). Pro: search is faster (than the one-index-for-all method), and indices are relative to the size of the client's data. Con: I'm not sure what this entails, nor do I know if it is beyond Lucene's scope.
    2. Have a single, gigantic index with a database_name field, and always include database_name as a filter. Pro: not sure; maybe good for tech support or the billing dept to search all databases for info. Con: search is slower (than the index-per-client method), and security is flawed if the query filter is removed.

    One last thing: I would also accept an answer that uses Solr (the extension of Lucene). Perhaps it's better suited for this problem. Not sure.
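    For the single-index option, SolrJ makes the mandatory filter easy to apply server-side (a sketch; the database_name field comes from the question, while the host, query and client id are made-up values):

        import org.apache.solr.client.solrj.SolrQuery;
        import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
        import org.apache.solr.client.solrj.response.QueryResponse;

        public class ClientScopedSearch {
            public static void main(String[] args) throws Exception {
                // One shared index; every query is fenced to one client's data.
                CommonsHttpSolrServer solr =
                    new CommonsHttpSolrServer("http://search-host:8983/solr");
                SolrQuery query = new SolrQuery("invoice 2010");
                query.addFilterQuery("database_name:client_4711"); // never built from raw user input
                QueryResponse rsp = solr.query(query);
                System.out.println(rsp.getResults().getNumFound() + " hits");
            }
        }

    Deleting a cancelled client under this scheme would be a deleteByQuery on database_name:their_id.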

    Read the article

  • apache httpclient and spawning a browser that will share the session

    - by Nick
    I have a Java program that uses the Apache HttpClient API to log in to and communicate with a webapp. Once logged in, there's a situation in which the program spawns Firefox via an exec call so the user can see the data in the browser. Since the Java program is already logged in, is there a way to share its current session (PHPSESSID) so that the spawned Firefox is already logged in and working in that same session?
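    One possible handoff (a sketch; it only works if the webapp also accepts the session id as a request parameter, e.g. PHP's session.use_trans_sid, which is an assumption here) is to read the cookie out of HttpClient's store and pass it on the URL:

        import java.io.IOException;
        import org.apache.http.cookie.Cookie;
        import org.apache.http.impl.client.DefaultHttpClient;

        public class BrowserHandoff {
            // Pull the webapp's session cookie out of the logged-in client
            // and launch Firefox with it appended to the URL.
            static void openBrowser(DefaultHttpClient client, String appUrl) throws IOException {
                for (Cookie c : client.getCookieStore().getCookies()) {
                    if ("PHPSESSID".equals(c.getName())) {
                        Runtime.getRuntime().exec(new String[] {
                            "firefox", appUrl + "?PHPSESSID=" + c.getValue() });
                        return;
                    }
                }
            }
        }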

    Read the article

  • Better alternative to autonumber primary keys

    - by Comrad_Durandal
    I am looking for a better primary key than the autonumber data type, namely because it's limited to a long integer, when I really need the field to hold a number or text string that will never, ever repeat, no matter HOW many records are added to or deleted from the table. The problem is that I'm not sure how to implement something like turning the current date and time into a hexadecimal string and using that as a unique field I can use as a primary key. Am I just being too paranoid about running out of space?
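    A date-and-time string collides as soon as two rows are inserted in the same clock tick, so a GUID/UUID is the usual choice when uniqueness matters more than ordering. A sketch in MySQL (the question doesn't name a database, and the table is made up for illustration):

        CREATE TABLE widgets (
          id   CHAR(36) NOT NULL PRIMARY KEY,  -- holds a UUID string
          name VARCHAR(100)
        );

        INSERT INTO widgets (id, name) VALUES (UUID(), 'first widget');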

    Read the article

  • Max value amongst 4 columns in a row.

    - by KandadaBoggu
    I have a test_scores table with the following fields:

    Table schema:
    - id (number)
    - score1 (number)
    - score2 (number)
    - score3 (number)
    - score4 (number)

    Sample data:

        id  score1  score2  score3  score4
        1   10      05      30      50
        2   05      15      10      00
        3   25      10      05      15

    Expected result set:

        id  col_name  col_value
        1   score4    50
        2   score2    15
        3   score1    25

    What is a good SQL for this? (I am using MySQL.)
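    One way to get that shape (a sketch; ties resolve to the first matching column in the CASE):

        SELECT id,
               CASE GREATEST(score1, score2, score3, score4)
                 WHEN score1 THEN 'score1'
                 WHEN score2 THEN 'score2'
                 WHEN score3 THEN 'score3'
                 ELSE 'score4'
               END AS col_name,
               GREATEST(score1, score2, score3, score4) AS col_value
        FROM test_scores;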

    Read the article

  • Open-sourcing a website's code

    - by pygabriel
    Hi! I'm writing a little website (webapp) in PHP + CodeIgniter, and I'd really like to make it open source (to attract collaborators and to get free VCS hosting). Is that good practice? Does this undermine security? And what are the best tools for scrubbing important data before uploading (like config files with the DB names and passwords used for testing, etc.)?
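    A common low-tech answer is to commit a placeholder config and keep the real one out of version control (a sketch; the file names are illustrative, though the array layout mirrors CodeIgniter's database config):

        <?php
        // config/database-sample.php: committed to the repository.
        // Copy to config/database.php (listed in .gitignore) and fill in real values.
        $db['default']['hostname'] = 'localhost';
        $db['default']['username'] = 'CHANGE_ME';
        $db['default']['password'] = 'CHANGE_ME';
        $db['default']['database'] = 'CHANGE_ME';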

    Read the article

  • Set Hudson build number from a script

    - by Joe Schneider
    Is there a way to set the next build number in Hudson from a script? I have the nextBuildNumber plug-in installed, and attempted to use wget with --post-data, but that page appears to require login. I have two steps of a chained build and I want to keep the build numbers in sync.
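    If the instance is secured with basic auth, wget can send credentials along with the POST (a sketch; the exact submit URL and field name are guesses based on the form the nextBuildNumber plug-in adds, so check that page's source):

        wget --auth-no-challenge \
             --http-user=myuser --http-password=mypassword \
             --post-data="nextBuildNumber=42" \
             http://hudson-host/job/my-job/nextbuildnumber/submit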

    Read the article

  • PHP object exchange between servers

    - by bensiu
    I have a script that reads from a database and manipulates the data, so at the end I have a $results array. Is it possible to serialize this array on one server and pass it to a script on a second server, so that the $results array is available there too? On the first server I have:

        return serialize ( $results );

    and on the second:

        $data = unserialize ( file_get_contents ( 'http://www.......com/reader.php' ) );

    ...but there is no communication between them. What am I doing wrong? Bensiu
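    A sketch of the shape that does work (the key difference: the producing script has to echo the serialized string, because a plain return sends nothing over HTTP; load_results_from_db() and the example.com URL are hypothetical stand-ins):

        <?php
        // reader.php on the first server: output, don't return.
        $results = load_results_from_db();  // hypothetical helper
        echo serialize($results);

    and on the consuming server:

        <?php
        $data = unserialize(file_get_contents('http://www.example.com/reader.php'));
        var_dump($data);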

    Read the article

  • Do single or double quotes matter in str_ireplace in PHP?

    - by Richards
    Hi, I have to replace newlines (\n) with & in a string so that the received data can be parsed into an array with parse_str(). The thing is, when I put \n in single quotes, what it should replace somehow ends up as a space:

        str_ireplace(array('&', '+', '\n'), array('', '', '&'), $response)
        "id=1 name=name gender=gender age=age friends=friends"

    But when I put \n in double quotes it works just fine:

        str_ireplace(array('&', '+', "\n"), array('', '', '&'), $response)
        "id=1&name=name&gender=gender&age=age&friends=friends"

    Why is that so?
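    The short answer is PHP's quoting rules: escape sequences like \n are only interpreted inside double quotes, so '\n' is a literal backslash followed by an n and never matches a real newline. A quick demonstration (a sketch with a made-up $response):

        <?php
        var_dump('\n');  // string(2) "\n" -- backslash + n, two characters
        var_dump("\n");  // string(1) -- one actual newline character

        $response = "id=1\nname=name";
        // '\n' never matches the real newline, so nothing changes:
        var_dump(str_ireplace('\n', '&', $response) === $response);  // bool(true)
        // "\n" matches and is replaced:
        var_dump(str_ireplace("\n", '&', $response));  // string(14) "id=1&name=name"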

    Read the article

  • Detecting suspicious behaviour in a web application - what to look for?

    - by Sosh
    I would like to ask the proactive (or paranoid ;) among us: what are you looking for, and how? I'm thinking mainly about things that can be watched for programmatically, rather than by manually inspecting logs. For example:

    - Manual/automated hack attempts
    - Data skimming
    - Bot registrations (that have evaded CAPTCHA etc.)
    - Other unwanted behaviour

    Just wondering what most people would consider practical and effective.

    Read the article

  • Techniques for querying a set of object in-memory in a Java application

    - by Edd Grant
    Hi all,

    We have a system which performs a 'coarse search' by invoking an interface on another system, which returns a set of Java objects. Once we have received the search results I need to be able to further filter the resulting Java objects based on criteria describing the state of their attributes (e.g. from the initial objects, return all objects where x.y > z && a.b == c).

    The criteria used to filter the set of objects each time are partially user-configurable; by this I mean that users will be able to select the values and ranges to match on, but the attributes they can pick from will be a fixed set. The data sets are likely to contain <= 10,000 objects for each search. The search will be executed manually by the application user base, probably no more than 2000 times a day (approx). It's probably worth mentioning that all the objects in the result set are known domain object classes which have Hibernate and JPA annotations describing their structure and relationships.

    Off the top of my head I can think of three ways of doing this:

    1. For each search, persist the initial result-set objects in our database, then use Hibernate to re-query them using the finer-grained criteria.
    2. Use an in-memory database (such as HSQLDB?) to query and refine the initial result set.
    3. Write some custom code which iterates the initial result set and pulls out the desired records.

    Option 1 seems to involve a lot of toing and froing across a network to a physical database (Oracle 10g), which might result in a lot of network and disk activity. It would also require the results from each search to be isolated from other result sets, to ensure that different searches don't interfere with each other.

    Option 2 seems like a good idea in principle, as it would allow me to do the finer query in memory and would not require the persistence of result data which would only be discarded after the search is complete. Gut feeling is that this could be pretty performant too, but might result in larger memory overheads (which is fine, as we can be pretty flexible on the amount of memory our JVM gets).

    Option 3 could be very performant, but is something I would like to avoid, as any code we write would require such careful testing that the time taken to achieve something flexible and robust enough would probably be prohibitive.

    I don't have time to prototype all three ideas, so I am looking for comments people may have on the three options above, plus any further ideas I have not considered, to help me decide which might be most suitable. I'm currently leaning toward option 2 (the in-memory database, sketched below), so I would be keen to hear from people with experience of querying POJOs in memory too. Hopefully I have described the situation in enough detail, but don't hesitate to ask if any further information is required to better understand the scenario. Cheers, Edd
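    For reference, a minimal sketch of option 2 with HSQLDB's in-memory mode (the table, rows and filter are made up; jdbc:hsqldb:mem: is the standard in-process URL, hsqldb.jar must be on the classpath, and older versions also need Class.forName("org.hsqldb.jdbcDriver")):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class InMemoryRefine {
            public static void main(String[] args) throws Exception {
                // Lives entirely in the JVM heap; nothing touches disk.
                Connection c = DriverManager.getConnection("jdbc:hsqldb:mem:search", "SA", "");
                Statement st = c.createStatement();
                st.execute("CREATE TABLE result (id INT, y INT, b VARCHAR(20))");

                // Load the coarse search results (here: two hand-made rows).
                PreparedStatement ins = c.prepareStatement("INSERT INTO result VALUES (?, ?, ?)");
                ins.setInt(1, 1); ins.setInt(2, 5);  ins.setString(3, "c"); ins.execute();
                ins.setInt(1, 2); ins.setInt(2, 50); ins.setString(3, "c"); ins.execute();

                // The finer-grained, user-driven criteria expressed as plain SQL.
                ResultSet rs = st.executeQuery("SELECT id FROM result WHERE y > 10 AND b = 'c'");
                while (rs.next()) {
                    System.out.println("match: " + rs.getInt("id"));
                }
                c.close();
            }
        }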

    Read the article

  • WPF DataGrid Get All SelectedRows

    - by anvarbek raupov
    I have to use the free WPF DataGrid (I thought the Infragistics libraries were bad; I take it back after this) for this project of mine. It looks like DataGrid doesn't have a clean, MVVM-friendly way of getting the list of selected rows? I can data-bind SelectedItem="{Binding SelectedSourceFile}", but this only exposes the first selected row. I need to be able to get all selected rows. Any hints on doing it cleanly via MVVM?
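    One MVVM-flavoured workaround (a sketch; SelectedItems has no settable binding on DataGrid, so this passes the grid's live SelectedItems collection to the view model through a command parameter; the command and property names are made up):

        <DataGrid x:Name="sourceGrid"
                  ItemsSource="{Binding SourceFiles}"
                  SelectionMode="Extended" />

        <!-- The view model's command receives an IList of the selected rows. -->
        <Button Content="Process selected"
                Command="{Binding ProcessSelectedCommand}"
                CommandParameter="{Binding SelectedItems, ElementName=sourceGrid}" />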

    Read the article

  • How can I list table names in a database, and a table's column names?

    - by Phsika
    How can I list the table names in a database, and the column names of any table? I tried this for the columns:

        SELECT Col.COLUMN_NAME, Col.DATA_TYPE
        FROM INFORMATION_SCHEMA.COLUMNS AS Col
        LEFT OUTER JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE AS Usg
          ON Col.TABLE_NAME = Usg.TABLE_NAME AND Col.COLUMN_NAME = Usg.COLUMN_NAME
        LEFT OUTER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS Con
          ON Usg.CONSTRAINT_NAME = Con.CONSTRAINT_NAME
        WHERE Col.TABLE_NAME = 'Addresses_Temp' AND Con.Constraint_TYPE = 'PRIMARY KEY'

    But it returns empty data to me :(
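    For plain listing, INFORMATION_SCHEMA can be queried directly (a sketch; note also that the WHERE clause on Con.Constraint_TYPE turns the question's LEFT JOINs into an inner join, so that query returns nothing whenever the table has no primary key):

        -- All tables in the current database:
        SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES;

        -- All columns (with types) of one table:
        SELECT COLUMN_NAME, DATA_TYPE
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = 'Addresses_Temp';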

    Read the article

  • Is there a way that I can force mod_perl to re-use buffer memory?

    - by Pavel Georgiev
    Hi, I have a Perl script running in mod_perl that needs to write a large amount of data to the client, possibly over a long period. The behavior that I observe is that once I print and flush something, the buffer memory is not reclaimed even though I rflush (I know this can't be reclaimed back by the OS). Is that how mod_perl operates and is there a way that I can force it to periodically free the buffer memory, so that I can use that for new buffers instead of taking more from the OS?

    Read the article

  • Stateless NHibernate for querying

    - by JontyMC
    We have a database that is updated via a background process. We are using NHibernate to query the data for display on the web UI, so we don't need change tracking or lazy-loading. If we mark all the mappings as mutable="false", is this the same as using a stateless session?
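    They are related but not the same mechanism: mutable="false" tells NHibernate the entities never change, while a stateless session skips the first-level cache, change tracking and lazy-loading altogether. For comparison, a minimal sketch of the stateless API (sessionFactory is the application's ISessionFactory; the entity and query are made up):

        // No first-level cache, no change tracking, no lazy-loading proxies.
        using (IStatelessSession session = sessionFactory.OpenStatelessSession())
        {
            var rows = session
                .CreateQuery("from DisplayRow r where r.UpdatedOn >= :since")
                .SetDateTime("since", DateTime.Today)
                .List<DisplayRow>();
            // render rows on the web UI...
        }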

    Read the article

  • Filter Facebook Stream by Post privacy?

    - by fabian
    Hi there, I query some wall data within my Facebook tab. I was wondering how to filter the data (query) to show only posts which are visible to a certain country.

        $query = "SELECT post_id, created_time, attachment, action_links, privacy
                  FROM stream
                  WHERE source_id = ".$page_id."
                    AND viewer_id = ".$user_id."
                    AND actor_id = ".$actor_id."
                  LIMIT 50";

    The output already shows the country (Austria), but how do I filter for Austria only?

        Array
        (
            [posts] => Array
            (
                [0] => Array
                (
                    [post_id] => 123
                    [viewer_id] => 123
                    [source_id] => 123
                    [type] => 46
                    [app_id] =>
                    [attribution] =>
                    [actor_id] => 123
                    [target_id] =>
                    [message] => Only for Austria
                    [attachment] => Array
                    (
                        [description] =>
                    )
                    [app_data] =>
                    [action_links] =>
                    [comments] => Array
                    (
                        [can_remove] => 1
                        [can_post] => 1
                        [count] => 0
                        [comment_list] =>
                    )
                    [likes] => Array
                    (
                        [href] => http://www.facebook.com/social_graph.php?node_id=118229678189906&class=LikeManager
                        [count] => 0
                        [sample] =>
                        [friends] =>
                        [user_likes] => 0
                        [can_like] => 1
                    )
                    [privacy] => Array
                    (
                        [description] => Austria
                        [value] => CUSTOM
                        [friends] =>
                        [networks] =>
                        [allow] =>
                        [deny] =>
                    )
                    [updated_time] => 1271520716
                    [created_time] => 1271520716
                    [tagged_ids] =>
                    [is_hidden] => 0
                    [filter_key] =>
                    [permalink] => http://www.facebook.com/pages/
                )
            )
        )
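    One approach (a sketch; as far as I know FQL's WHERE clause cannot reach into the privacy sub-fields, so this filters in PHP after the fetch, using the array shape shown above):

        <?php
        // $result holds the decoded FQL response from the question.
        $austrian_posts = array();
        foreach ($result['posts'] as $post) {
            if (isset($post['privacy']['description'])
                    && $post['privacy']['description'] === 'Austria') {
                $austrian_posts[] = $post;
            }
        }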

    Read the article

  • PHP thread pool?

    - by embedded
    I have scheduled a cron job to run every 4 hours which needs to gather user account information. Now I want to speed things up by splitting the work between several processes, and use one process to update the MySQL DB with the data retrieved by the others. In Java I know there is a thread pool to which I can hand some work. How do I do that in PHP? Any advice is welcome. Thanks
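    A stock PHP build has no threads, but CLI scripts can fork worker processes via the pcntl extension (a sketch; the worker and update functions are hypothetical placeholders):

        <?php
        // Requires the pcntl extension (CLI only).
        $accountIds = range(1, 1000);  // placeholder for the real account list
        $chunks = array_chunk($accountIds, (int) ceil(count($accountIds) / 4));
        $pids = array();

        foreach ($chunks as $chunk) {
            $pid = pcntl_fork();
            if ($pid === 0) {                 // child process
                gather_accounts($chunk);      // hypothetical: writes results to a temp file
                exit(0);
            }
            $pids[] = $pid;                   // parent keeps forking
        }

        foreach ($pids as $pid) {
            pcntl_waitpid($pid, $status);     // wait for every worker
        }
        update_mysql_from_temp_files();       // hypothetical single-writer step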

    Read the article

  • jQuery in Ajax-loaded content

    - by Kim Gysen
    My application is supposed to be a single-page application, and I have the following code that works fine.

    home.php:

        <div id="container"> </div>

    accordion.php:

        //Click functions: load content
        $('#parents').click(function(){
            //Load parent in container
            $('#container').load('http://www.blabla.com/entities/parents/parents.php');
        });

    parents.php:

        <div class="entity_wrapper">
            Some divs and selectors
        </div>
        <script type="text/javascript">
            $(document).ready(function(){
                //Some jQuery / javascript
            });
        </script>

    So the content loads fine, and the dynamically loaded scripts execute fine as well. I apply this system repetitively and it continues to work smoothly. I've seen that there are a lot of frameworks available for SPAs (such as Backbone.js), but I don't understand why I need them if this works fine. From the Backbone.js website:

        When working on a web application that involves a lot of JavaScript, one of the first things you learn is to stop tying your data to the DOM. It's all too easy to create JavaScript applications that end up as tangled piles of jQuery selectors and callbacks, all trying frantically to keep data in sync between the HTML UI, your JavaScript logic, and the database on your server. For rich client-side applications, a more structured approach is often helpful.

    Well, I totally don't have the feeling that I'm going through the stuff they mention. Adding the JavaScript per page works really well for me. The pages are HTML containers with clear scope, and the JavaScript is related just to that part. Moreover, the front end doesn't do that much; most of the logic is handled by Ajax calls to external PHP scripts. Sometimes the JS gets a bit more extensive for some functionality, but everything still loads smoothly in less than a second.

    If you think this is bad coding, please tell me why I cannot do this and, more importantly, what alternative I should apply. At the moment I really don't see a reason to change this approach, as it just works too well. I'm kind of stuck on this question because it worries me sick; it seems too easy to be true. Why would people go through hard times if it were as easy as this...

    Read the article

  • Strange Java Socket Behavior (Connects, but Doesn't Send)

    - by Donald Campbell
    I have a fairly complex project that boils down to a simple client/server communicating through object streams. Everything works flawlessly for two consecutive connections (I connect once, work, disconnect, then connect again, work, and disconnect). The client connects, does its business, and then closes. The server successfully closes both the object output stream and the socket, with no IO errors.

    When I try to connect a third time, the connection appears to go through (the ServerSocket.accept() method goes through and an ObjectOutputStream is successfully created). No data is passed, however; the inputStream.readUnshared() method simply blocks.

    I have taken the following memory precautions:

    - When it comes time to close the sockets, all running threads are stopped and all objects are nulled out.
    - After every writeUnshared() method call, the ObjectOutputBuffer is flushed and reset.

    Has anyone encountered a similar problem, or does anyone have any suggestions? I'm afraid my project is rather large, and so copying code is problematic. The project boils down to this:

    SERVER MAIN

        ServerSocket serverSocket = new ServerSocket(port);
        while (true) {
            new WorkThread(serverSocket.accept()).start();
        }

    WORK THREAD (SERVER)

        public void run() {
            ObjectInputBuffer inputBuffer = new ObjectInputBuffer(new BufferedInputStream(socket.getInputStream()));
            while (running) {
                try {
                    Object myObject = inputBuffer.readUnshared();
                    // do work is not specified in this sample
                    doWork(myObject);
                } catch (IOException e) {
                    running = false;
                }
            }
            try {
                inputBuffer.close();
                socket.close();
            } catch (Exception e) {
                System.out.println("Could not close.");
            }
        }

    CLIENT

        public Client() {
            Object myObject;
            Socket mySocket = new Socket(address, port);
            try {
                ObjectOutputBuffer output = new ObjectOutputBuffer(new BufferedOutputStream(mySocket.getOutputStream()));
                output.reset();
                output.flush();
            } catch (Exception e) {
                System.out.println("Could not get an input.");
                mySocket.close();
                return;
            }
            // get object data is not specified in this sample. it simply returns a serializable object
            myObject = getObjectData();
            while (myObject != null) {
                try {
                    output.writeUnshared(myObject);
                    output.reset();
                    output.flush();
                } catch (Exception e) {
                    e.printStackTrace();
                    break;
                } // catch
            } // while
            try {
                output.close();
                socket.close();
            } catch (Exception e) {
                System.out.println("Could not close.");
            }
        }

    Thank you to everyone who may be able to help!

    Read the article
