Search Results

Search found 14354 results on 575 pages for 'existing records'.

Page 192 of 575

  • stxxl Assertion `it != root_node_.end()' failed

    - by Fabrizio Silvestri
    I am receiving this "assertion failed" error when trying to insert an element into an stxxl map. The entire assertion error is the following:

        resCache: /usr/include/stxxl/bits/containers/btree/btree.h:470: std::pair , bool stxxl::btree::btree::insert(const value_type&) [with KeyType = e_my_key, DataType = unsigned int, CompareType = comp_type, unsigned int RawNodeSize = 16384u, unsigned int RawLeafSize = 131072u, PDAllocStrategy = stxxl::SR, stxxl::btree::btree::value_type = std::pair]: Assertion `it != root_node_.end()' failed. Aborted

    Any idea?

    Edit: here is the code fragment:

        void request_handler::handle_request(my_key& query, reply& rep)
        {
            c_++;
            strip(query.content);
            std::cout << "Received query " << query.content << " by thread "
                      << boost::this_thread::get_id() << ". It is number " << c_ << "\n";
            strcpy(element.first.content, query.content);
            element.second = c_;
            testcache_.insert(element);
            STXXL_MSG("Records in map: " << testcache_.size());
        }

    Edit 2: here are more details (constants such as MAX_QUERY_LEN omitted):

        struct comp_type : std::binary_function<my_key, my_key, bool>
        {
            bool operator () (const my_key & a, const my_key & b) const
            {
                return strncmp(a.content, b.content, MAX_QUERY_LEN) < 0;
            }
            static my_key max_value() { return max_key; }
            static my_key min_value() { return min_key; }
        };

        typedef stxxl::map<my_key, my_data, comp_type> cacheType;
        cacheType testcache_;

        request_handler::request_handler()
            : testcache_(NODE_CACHE_SIZE, LEAF_CACHE_SIZE)
        {
            c_ = 0;
            memset(max_key.content, (std::numeric_limits<unsigned char>::max)(), MAX_QUERY_LEN);
            memset(min_key.content, (std::numeric_limits<unsigned char>::min)(), MAX_QUERY_LEN);
            testcache_.enable_prefetching();
            STXXL_MSG("Records in map: " << testcache_.size());
        }

    Read the article

  • Loading from pickled data causes database error with new saves

    - by hibbie
    In order to save time moving data, I pickled some models and dumped them to a file. I then reloaded them into another database using the exact same model. The save worked fine and the objects kept their old IDs, which is what I wanted. However, when saving new objects I run into nextval errors. Not being very adept with Postgres, I'm not sure how to fix this so I can keep old records with their existing IDs while still being able to add new data. Thanks, Thomas
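
    The nextval errors usually mean the tables' sequences were never advanced past the preloaded IDs, so Postgres hands out values that already exist. A common fix is to bump each sequence to the current MAX(id) after the bulk load; the sketch below assumes psycopg2 and hypothetical table/connection names (Django's manage.py sqlsequencereset <app> prints equivalent statements).

        # Sketch: reset Postgres sequences after inserting rows with explicit IDs.
        # Table names and connection settings are placeholders.
        import psycopg2

        TABLES = ["myapp_person", "myapp_order"]  # hypothetical table names

        conn = psycopg2.connect("dbname=mydb user=me password=secret host=localhost")
        try:
            with conn, conn.cursor() as cur:
                for table in TABLES:
                    # setval(seq, max(id)) so the next nextval() returns max(id) + 1
                    cur.execute(
                        "SELECT setval(pg_get_serial_sequence(%s, 'id'), "
                        "COALESCE((SELECT MAX(id) FROM {}), 1))".format(table),
                        [table],
                    )
        finally:
            conn.close()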

    Read the article

  • MySQL: count rows and group them by month

    - by user2661296
    I have a table called cc_calls that holds many call records. I want to count them and group the counts by month, limited to the last 12 months. There is a timestamp column called starttime that I can use to extract the month. The result should look like this:

        Month      Count
        January    768768
        February   876786
        March      987979
        April      765765
        May        898797
        June       876876
        July       786575
        August     765765
        September  689787
        October    765879
        November   897989
        December   876876

    Can anyone guide me or show me the MySQL query I need to get this result?
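
    A sketch of one way to get this, assuming starttime is a DATETIME/TIMESTAMP column; the connection details are placeholders and the mysql-connector-python package is assumed:

        # Sketch: count cc_calls per month for the last 12 months.
        import mysql.connector

        QUERY = """
            SELECT MONTHNAME(starttime) AS month, COUNT(*) AS cnt
            FROM cc_calls
            WHERE starttime >= DATE_SUB(CURDATE(), INTERVAL 12 MONTH)
            GROUP BY YEAR(starttime), MONTHNAME(starttime)
            ORDER BY MIN(starttime)
        """

        conn = mysql.connector.connect(
            host="localhost", user="me", password="secret", database="calls_db"
        )
        try:
            cur = conn.cursor()
            cur.execute(QUERY)
            for month, cnt in cur.fetchall():
                print(f"{month:<10} {cnt}")
        finally:
            conn.close()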

    Read the article

  • How do I create a safe local development environment?

    - by docgnome
    I'm currently doing web development with another developer on a centralized development server. In the past this has worked all right, as we have two separate projects we are working on and rarely conflict. Now, however, we are adding a third (possible) developer into the mix. This is clearly going to create problems with other developers' changes affecting my work and vice versa. To solve this problem, I'm thinking the best solution would be to create a virtual machine to distribute between the developers for local use.

    The problem I have is when it comes to the database. Given that we all develop on laptops, simply keeping a local copy of the live data is plain stupid. I've considered sanitizing the data, but I can't really figure out how to replace the real data with data that would be representative of what people actually enter, without repeating the same information over and over again, e.g. everyone's address becomes 123 Testing Lane, Test Town, WA, 99999 or something. Is this really something to be concerned about? Are there tools to help with this sort of thing? I'm using MySQL. Ideally, if I sanitized the db it should be done from a script that I can run regularly. If I do this I'd also need a way to reduce the size of the db itself. (I figure I could select all the records created after X, whack them and all the records in corresponding tables, so that isn't really a big deal.)

    The second solution I've thought of is to encrypt the hard drive of the VM, but I'm unsure how practical this is in terms of speed and also in the event of a lost/stolen laptop. If I do this, should the VM hard drive file itself be encrypted, or should encryption happen inside the VM? (I'm assuming the latter, as it would be portable and doesn't require the devs to have any sort of encryption capability on their OS of choice.)

    The third is to create a copy of the database for each developer on our development server, which they are then responsible for keeping in sync with the canonical db's schema by means of migration scripts or what have you. This solution seems to be the simplest but doesn't really scale as more developers are added. How do you deal with this problem?
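
    On the sanitizing question: a small script can overwrite the personally identifiable columns with generated values while leaving everything else intact, which keeps the data varied without exposing anything real. A minimal sketch, assuming the Faker and mysql-connector-python packages and a hypothetical users table:

        # Sketch: overwrite PII columns so the local copy is safe to distribute.
        # Table and column names here are hypothetical.
        import mysql.connector
        from faker import Faker

        fake = Faker()
        conn = mysql.connector.connect(
            host="localhost", user="me", password="secret", database="dev_copy"
        )
        cur = conn.cursor()
        cur.execute("SELECT id FROM users")
        ids = [row[0] for row in cur.fetchall()]

        for user_id in ids:
            # A fresh fake value per row keeps the data realistic and non-repetitive.
            cur.execute(
                "UPDATE users SET name = %s, email = %s, address = %s WHERE id = %s",
                (fake.name(), fake.email(), fake.address(), user_id),
            )

        conn.commit()
        cur.close()
        conn.close()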

    Read the article

  • jQuery UI slider: setting minimum and maximum range values from the DB

    - by alexBrand
    I have a jQuery UI slider on my website that filters products by price range. I have a products table in MySQL with various entries. I am using the slider to filter the results, but I need to set the slider's minimum and maximum prices from the records in my database. Should I just generate (with PHP) hidden fields in my HTML that contain the minimum and maximum and then use jQuery to read them? Or is there a better way of achieving this, maybe using AJAX? Thanks
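
    Either approach works, and the data itself is a single aggregate query. A sketch of the server side, written in Python purely for illustration (the PHP version runs the same query); the connection details and products table are placeholders:

        # Sketch: fetch the price bounds the slider needs, then expose them as
        # JSON for an AJAX call or inject them into hidden fields when rendering.
        import json
        import mysql.connector

        conn = mysql.connector.connect(
            host="localhost", user="me", password="secret", database="shop"
        )
        cur = conn.cursor()
        cur.execute("SELECT MIN(price), MAX(price) FROM products")
        min_price, max_price = cur.fetchone()
        cur.close()
        conn.close()

        print(json.dumps({"min": float(min_price), "max": float(max_price)}))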

    Read the article

  • Managing the expiration of lists and documents in SharePoint

    - by vizcaynot
    Hello. Can I do the following with SharePoint 2007?

    1) Create lists (records) and document libraries whose content is in force for a certain number of days and automatically expires after that? How?
    2) When searching for words in a SharePoint site that contains such lists and documents, SharePoint displays results similar to Google; would it be possible for SharePoint to tell me which documents are no longer current? How?

    Thank you very much.

    Read the article

  • How to create an INSERT script using C#

    - by karthik
    From my C# code-behind, I pass a query to a MySQL database and get the data in a DataTable. Now I want to use the data in the DataTable to write INSERT queries to a script file [.sql]. The objective is that whatever records I select from MySQL should be written to a script file as a backup; that is why I need the INSERT statements. How? Any other idea is appreciated.
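
    A sketch of the idea, shown in Python only to keep the example self-contained; the C# version is the same loop over the DataTable's columns and rows, writing one INSERT per row. The table name and connection details are placeholders, and the literal-escaping here is deliberately minimal:

        # Sketch: dump query results as INSERT statements into a .sql file.
        import mysql.connector

        TABLE = "customers"  # hypothetical table name

        def sql_literal(value):
            """Render a value as a SQL literal (sketch-level escaping only)."""
            if value is None:
                return "NULL"
            if isinstance(value, (int, float)):
                return str(value)
            return "'" + str(value).replace("'", "''") + "'"

        conn = mysql.connector.connect(
            host="localhost", user="me", password="secret", database="shop"
        )
        cur = conn.cursor()
        cur.execute(f"SELECT * FROM {TABLE}")
        columns = ", ".join(desc[0] for desc in cur.description)

        with open("backup.sql", "w") as script:
            for row in cur.fetchall():
                values = ", ".join(sql_literal(v) for v in row)
                script.write(f"INSERT INTO {TABLE} ({columns}) VALUES ({values});\n")

        cur.close()
        conn.close()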

    Read the article

  • SQL query - choosing 'last updated' record in a group, better db design?

    - by Jimmy
    Hi, let's say I have a MySQL database with 3 tables:

        Table 1: Persons, with one column: ID (int)
        Table 2: Newsletters, with one column: ID (int)
        Table 3: Subscriptions, with columns: Person_ID (int), Newsletter_ID (int), Subscribed (bool), Updated (datetime)

    Subscriptions.Person_ID points to a Person, and Subscriptions.Newsletter_ID points to a Newsletter. Thus, each person may have 0 or more subscriptions to 0 or more newsletters at once. The Subscriptions table also stores the entire history of each person's subscriptions to each newsletter. If a particular Person_ID-Newsletter_ID pair doesn't have a row in the Subscriptions table, that pair has an implicit subscription status of 'false'. Here is a sample dataset:

        Persons:        ID = 1, 2, 3
        Newsletters:    ID = 4, 5, 6

        Subscriptions
        Person_ID  Newsletter_ID  Subscribed  Updated
        2          4              true        2010-05-01
        3          4              true        2010-05-01
        3          5              true        2010-05-10
        3          4              false       2010-05-15

    Thus, as of 2010-05-16, Person 1 has no subscriptions, Person 2 has a subscription to Newsletter 4, and Person 3 has a subscription to Newsletter 5. Person 3 had a subscription to Newsletter 4 for a while, but not anymore. I'm trying to do two kinds of query:

    1. A query that shows everyone's active subscriptions as of query time (we can assume Updated will never be in the future). This means returning the record with the latest Updated value for each Person_ID-Newsletter_ID pair, as long as Subscribed is true; if the latest record for a pair has Subscribed = false, I don't want that record returned.
    2. A query that returns all active subscriptions for a specific newsletter, with the same qualification as in 1. regarding records with 'false' in the Subscribed column.

    I don't use SQL/databases often enough to tell whether this design is good, or whether the queries needed would be slow on a database with, say, 1M records in the Subscriptions table. I was using the visual query builder in Visual Studio 2010, but I can't even get the query to return the latest Updated record for each Person_ID-Newsletter_ID pair. Is it possible to come up with SQL queries that don't involve subqueries (presumably because they would become too slow with a larger data set)? If not, would it be a better design to have a separate Subscriptions_History table, and every time a subscription status for a Person_ID-Newsletter_ID pair is added to Subscriptions, move any existing record for that pair to Subscriptions_History (that way Subscriptions only ever contains the latest status for any pair)? I'm using .NET on Windows, so would it be easier (or the same, or harder) to do this kind of query using LINQ? Entity Framework? Thanks!
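
    For query 1, the usual greatest-n-per-group pattern avoids a history table: join each row to the latest Updated value per (Person_ID, Newsletter_ID) pair, then keep only the rows where Subscribed is true. A self-contained sketch using SQLite in place of MySQL (the query shape is the same; booleans become 1/0):

        # Sketch: "latest row per (Person_ID, Newsletter_ID), but only if still
        # subscribed", demonstrated in memory with the question's sample data.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE Subscriptions (
                Person_ID INTEGER, Newsletter_ID INTEGER,
                Subscribed INTEGER, Updated TEXT);
            INSERT INTO Subscriptions VALUES
                (2, 4, 1, '2010-05-01'),
                (3, 4, 1, '2010-05-01'),
                (3, 5, 1, '2010-05-10'),
                (3, 4, 0, '2010-05-15');
        """)

        ACTIVE = """
            SELECT s.Person_ID, s.Newsletter_ID, s.Updated
            FROM Subscriptions s
            JOIN (SELECT Person_ID, Newsletter_ID, MAX(Updated) AS latest
                  FROM Subscriptions
                  GROUP BY Person_ID, Newsletter_ID) latest_per_pair
              ON latest_per_pair.Person_ID = s.Person_ID
             AND latest_per_pair.Newsletter_ID = s.Newsletter_ID
             AND latest_per_pair.latest = s.Updated
            WHERE s.Subscribed = 1
        """

        print(conn.execute(ACTIVE).fetchall())
        # Expected: Person 2 -> Newsletter 4 and Person 3 -> Newsletter 5 only.
        # Query 2 is the same with "AND s.Newsletter_ID = ?" appended.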

    Read the article

  • Table for each region in MySQL

    - by King Wu
    There are four regions with more than one million records in total. Should I create one table with a region column, or a table for each region and combine them to get the top ranks? If I combine all four regions, none of my columns will be unique, so I will need to add an id column for my primary key; otherwise (name, accountId, characterId) would be a candidate key. Or should I just add an id column anyway? The table looks like this:

        ----------------------------------------------------------------
        | name | accountId | iconId | level | characterId | updateDate |
        ----------------------------------------------------------------
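
    A sketch of the single-table design, with a region column, a surrogate id, and a "top N per region" query. Column types are assumptions based on the column list above; the window-function query needs MySQL 8+ (SQLite 3.25+ is used here so the example runs standalone):

        # Sketch: one table for all regions plus an index that serves
        # per-region ranking queries. Sample data is made up.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE characters (
                id INTEGER PRIMARY KEY,
                region TEXT NOT NULL,
                name TEXT, accountId INTEGER, iconId INTEGER,
                level INTEGER, characterId INTEGER, updateDate TEXT,
                UNIQUE (region, name, accountId, characterId));
            CREATE INDEX idx_region_level ON characters (region, level DESC);
            INSERT INTO characters (region, name, accountId, level, characterId) VALUES
                ('na', 'alice', 1, 70, 11), ('na', 'bob', 2, 55, 12),
                ('eu', 'carol', 3, 80, 13), ('eu', 'dave', 4, 60, 14);
        """)

        TOP_PER_REGION = """
            SELECT region, name, level
            FROM (SELECT region, name, level,
                         ROW_NUMBER() OVER (PARTITION BY region ORDER BY level DESC) AS rnk
                  FROM characters) AS ranked
            WHERE rnk <= 1
        """
        print(conn.execute(TOP_PER_REGION).fetchall())
        # Expected: the highest-level character per region.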

    Read the article

  • How to fetch data from multiple tables in MySQL

    - by faisal
    Hi all, I am looking for some help with PHP + MySQL + jQuery. I have 2 tables:

        table1: 4 columns (id, title, desc, thumb_img)
        table2: 3 columns (id, table1id, img)

    I just want to query the two tables using the value of $_GET['QS'] and show the records from both (title, desc, img). Looking forward to your help. :)
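
    A sketch of the query, assuming $_GET['QS'] holds the table1 id (desc is a reserved word in MySQL, so it is safest to qualify or backquote it). Shown in Python with placeholder connection details; the PHP version runs the same SQL through a prepared statement:

        # Sketch: fetch matching rows from both tables in one JOIN.
        import mysql.connector

        qs = 3  # in PHP this comes from $_GET['QS'], validated first

        conn = mysql.connector.connect(
            host="localhost", user="me", password="secret", database="mydb"
        )
        cur = conn.cursor()
        cur.execute(
            """
            SELECT t1.title, t1.`desc`, t2.img
            FROM table1 AS t1
            JOIN table2 AS t2 ON t2.table1id = t1.id
            WHERE t1.id = %s
            """,
            (qs,),
        )
        for title, desc, img in cur.fetchall():
            print(title, desc, img)
        cur.close()
        conn.close()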

    Read the article

  • SEF Service Map generation problem for the SOBI2 component

    - by raghu-pandiri
    Hi, we are using the SEF Service Map component and installed the SEF SM SOBI2 Integrator for Joomla 1.5.x. If I enable the integrator, the service map is not generated and I get a blank page; if I disable the integrator, the map is generated properly for the other content on the site. We have 35k+ items in SOBI2. Does it not support that many records, or could something else be wrong?

    Read the article

  • datagrid speed issue

    - by girish
    I am binding a GridView in an ASP.NET application, and I am about to bind a few thousand records to it. In the RowDataBound event of the grid I need to check whether the logged-in user is authorised to view the particular record, so I need to send a database request; likewise, two or three other operations also require a request to the database. In total, about three to four requests are sent to the database during each row bind of the GridView. Does this affect the speed of the grid?
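
    Per-row round trips like this are usually what makes a large grid slow. The common fix is to fetch the authorisation data once before binding and do an in-memory lookup inside RowDataBound. A language-agnostic sketch of the pattern (Python and made-up table names here; in ASP.NET the set would be built before DataBind and consulted per row):

        # Sketch of the "fetch once, look up per row" pattern that replaces
        # the per-row database calls. Demonstrated with an in-memory table.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE permissions (user_id INTEGER, record_id INTEGER);
            INSERT INTO permissions VALUES (7, 1), (7, 3);
        """)

        def load_authorised_record_ids(user_id):
            # One query up front instead of one query per grid row.
            rows = db.execute(
                "SELECT record_id FROM permissions WHERE user_id = ?", (user_id,)
            ).fetchall()
            return {record_id for (record_id,) in rows}

        authorised = load_authorised_record_ids(user_id=7)
        for record_id in [1, 2, 3]:                    # stand-in for the bound rows
            print(record_id, record_id in authorised)  # O(1) set lookup, no DB call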

    Read the article

  • Better alternative to autonumber primary keys

    - by Comrad_Durandal
    I am looking for a better primary key than the autonumber data type, mainly because it's limited to a long integer, when I really just need the field to hold a number or text string that will never, ever repeat, no matter how many records are added to or deleted from the table. The problem is I am not sure how to implement something like turning the current date and time into a hexadecimal string and using that as a unique field I can use as a primary key. Am I just being too paranoid about running out of space?
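
    A timestamp alone can collide if two rows are created in the same tick; the usual answer to "never repeats, no matter what" is a GUID/UUID, which most databases support natively (in Access, an AutoNumber field can be set to Replication ID for the same effect). A minimal sketch, with Python's uuid module standing in for whatever your environment provides:

        # Sketch: collision-free keys without an autonumber column. uuid4 is
        # 122 random bits, so duplicates are practically impossible regardless
        # of how many records are added or deleted.
        import uuid

        new_key = uuid.uuid4()
        print(new_key)      # a random 36-character value, different every run
        print(new_key.hex)  # the same value as a 32-character hex string

        # If keys that roughly sort by creation time matter more than
        # unpredictability, uuid1 mixes in the timestamp instead:
        print(uuid.uuid1())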

    Read the article

  • Reasons to store users' data in LDAP instead of RDBMS

    - by Ancymon
    It is often said that using LDAP is a good way to store data about users. That's because a user "directory" is hierarchical and changes rarely. But in my opinion that doesn't exclude using an RDBMS. What might be the reasons to use LDAP? I guess that storing multi-valued fields or adding custom fields might be easier in LDAP, but it can be done in a database too (unless you have many records).

    Read the article

  • Find and replace braced tags within a MySQL table

    - by Cy
    I have about 40,000 records in a table containing plain text, and within the plain text there are tags whose only identifying characteristic is that they are enclosed in square brackets, for example:

        [caption id="attachment_2948" align="alignnone" width="480" caption="the caption goes here"]

    How could I remove those (i.e. replace them with nothing)? I could also run a PHP program to do the cleanup if necessary.
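
    A regular expression per row handles this; older MySQL has no REGEXP_REPLACE (it arrived in 8.0), so a small script is the practical route. A self-contained sketch of the pattern against the sample text (the same expression works with preg_replace in PHP):

        # Sketch: strip [caption ...] style shortcodes from text. The pattern only
        # matches brackets that start with the known tag name, so it won't eat
        # unrelated bracketed text.
        import re

        SHORTCODE = re.compile(r"\[/?caption[^\]]*\]")  # opening and closing tags

        sample = ('Some text [caption id="attachment_2948" align="alignnone" '
                  'width="480" caption="the caption goes here"]an image[/caption] more text')

        print(SHORTCODE.sub("", sample))
        # -> Some text an image more text

        # Against the real table: SELECT the rows, apply SHORTCODE.sub to the text
        # column, and UPDATE each row that changed.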

    Read the article

  • How to setup Lucene/Solr for a B2B web app?

    - by Bill Paetzke
    Given:

        1 database per client (business customer)
        5000 clients
        Clients have between 2 and 2000 users (avg is ~100 users/client)
        100k to 10 million records per database
        Users need to search those records often (it's the best way to navigate their data)

    Possibly relevant info:

        Several new clients each week (any time during business hours)
        Multiple web servers and database servers (users can log in via any web server)
        Let's stay agnostic of language or SQL brand, since Lucene (and Solr) have a breadth of support

    For example: Joel Spolsky said in podcast #11 that his hosted web app product, FogBugz On-Demand, uses Lucene. He has thousands of on-demand clients, and each client gets their own database. They use an index per client and store it in the client's database. I'm not sure of the details, and I'm not sure whether this is a serious mod to Lucene.

    The question: how would you set up Lucene search so that each client can only search within its own database?

        How would you set up the index(es)?
        Where do you store the index(es)?
        Would you need to add a filter to all search queries?
        If a client cancelled, how would you delete their (part of the) index? (this may be trivial--not sure yet)

    Possible solutions:

        1. Make an index for each client (database). Pro: search is faster than the one-index-for-all method, and each index is sized relative to the client's data. Con: I'm not sure what this entails, nor do I know if it is beyond Lucene's scope.
        2. Have a single, gigantic index with a database_name field, and always include database_name as a filter. Pro: not sure; maybe good for tech support or the billing dept to search all databases for info. Con: search is slower than the index-per-client method, and security is flawed if the query filter is removed.

    One last thing: I would also accept an answer that uses Solr (the extension of Lucene). Perhaps it's better suited to this problem; not sure.
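
    On the filter question: however the indexes are laid out, Solr expresses the per-client restriction as a filter query (fq) that the application appends server-side, so it can never be removed by the user. A sketch over plain HTTP against a hypothetical Solr core (the URL and the database_name field are assumptions for illustration):

        # Sketch: restrict every search to one client's documents via fq.
        import requests

        SOLR_SELECT = "http://localhost:8983/solr/clients/select"  # hypothetical core

        def search_for_client(client_db, user_query, rows=10):
            params = {
                "q": user_query,                       # what the user typed
                "fq": f'database_name:"{client_db}"',  # tenant restriction, added server-side
                "rows": rows,
                "wt": "json",
            }
            resp = requests.get(SOLR_SELECT, params=params, timeout=10)
            resp.raise_for_status()
            return resp.json()["response"]["docs"]

        # Deleting a cancelled client's documents is a delete-by-query on the
        # same field, e.g. POST to /solr/clients/update?commit=true with
        # {"delete": {"query": "database_name:\"acme_corp\""}}.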

    Read the article

  • How to copy a text array to a series of cells in Excel

    - by aSystemOverload
    I am dynamically creating a report, where I create a worksheet and bring in the records afresh. How can I easily type the field names and copy them to the cells without writing one line per cell? There are ~20 columns. I tried:

        dim fieldNames as variant
        fieldNames = ("'DS Date', 'A', 'B', 'A','S ASD', 'S','D S','D S', 'S','D S', 'SD', 'S','D'")
        Sheets("DATA").Range("C14:W14").Value = Application.WorksheetFunction.Transpose(fieldNames)

    But it just posts the whole thing in each cell? Any ideas?

    Read the article

  • Most suitable collection for multithreaded requests?

    - by Raj Aththanayake
    I have ~10,000 records I would like to keep in an in-memory collection and execute LINQ queries against. This collection should be available to all the users in the application domain and be accessible concurrently. I'm looking for a .NET collection that supports multithreaded access and can be queried asynchronously and efficiently without any threading issues. Any suggestions on choosing a collection for this?

    Read the article

  • Incorrect syntax near the keyword 'select' while executing query

    - by sam
    I am getting "Incorrect syntax near the keyword 'select'" after executing the following code:

        declare @c int
        SELECT @c = COUNT(*)
        FROM (select id, max(date_stored)
              from table B
              INNER JOIN table P ON B.id = P.id
              where id = 3)
        select @c

    I want to select the total number of records having the max stored dates in the database. Can anyone please tell me what I am doing wrong?
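
    A few things combine here: a derived table must be given an alias, every non-aggregated column in its SELECT needs a GROUP BY (or should be dropped), and the aggregate needs a column alias if you want to reference it. A sketch of a corrected version, assuming the two tables are really named B and P (the word "table" in the query reads like a placeholder); shown here run from Python via pyodbc with a made-up connection string:

        # Sketch: count the rows that carry the latest date_stored per id.
        import pyodbc

        SQL = """
            SELECT COUNT(*)
            FROM (SELECT B.id, MAX(P.date_stored) AS last_stored
                  FROM B
                  INNER JOIN P ON B.id = P.id
                  WHERE B.id = 3
                  GROUP BY B.id) AS latest;
        """

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
            "DATABASE=mydb;Trusted_Connection=yes;"
        )
        count = conn.cursor().execute(SQL).fetchone()[0]
        print(count)
        conn.close()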

    Read the article

  • Obtaining ActiveRecords if NOT nil

    - by user275729
    I would like to be able to gather all records in a table where the user_id is not null. This is what I have, but it doesn't seem to be working (even though I've had it working in a separate project):

        named_scope :all_registered, :conditions => ["user_id != ?", nil]
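
    The likely culprit is the SQL this produces: binding nil gives something like user_id != NULL, and in SQL any comparison with NULL is unknown, so no rows match. The usual fix is a plain IS NOT NULL condition, e.g. named_scope :all_registered, :conditions => "user_id IS NOT NULL". A tiny sketch of the underlying SQL behaviour (SQLite here; the three-valued NULL logic is the same in the database behind ActiveRecord):

        # Sketch: why "user_id != NULL" matches nothing while IS NOT NULL works.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE things (id INTEGER PRIMARY KEY, user_id INTEGER);
            INSERT INTO things (user_id) VALUES (1), (NULL), (2);
        """)

        print(db.execute("SELECT COUNT(*) FROM things WHERE user_id != NULL").fetchone())
        # -> (0,)  comparison with NULL is never true
        print(db.execute("SELECT COUNT(*) FROM things WHERE user_id IS NOT NULL").fetchone())
        # -> (2,)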

    Read the article
