Search Results

Search found 61580 results on 2464 pages for 'document based database'.

Page 211/2464 | < Previous Page | 207 208 209 210 211 212 213 214 215 216 217 218  | Next Page >

  • Weird TPS peaks on SQL 2005 replicated database.

    - by SuperCoolMoss
    When monitoring Transactions/sec with perfmon on one of my SQL 2005 replicated databases, I'm seeing TPS spike to 1000 and then immediately drop back down again; this happens every 5 seconds. I'm not sure what's causing it - is it something to do with replication? We also have asynchronous statistics updates enabled on this particular database. I've tried profiling when no users are connected, but nothing is writing to the database.
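
    One way to cross-check what perfmon is showing is to sample the underlying counter directly: sys.dm_os_performance_counters exposes the cumulative Transactions/sec value, so sampling it twice and taking the difference gives the rate over that interval, which can then be logged against the replication agents' schedules. A minimal sketch in Python with pyodbc (the driver, server name, and database name are placeholder assumptions):

        import time
        import pyodbc

        # Placeholder connection string - adjust driver/server/auth for your environment.
        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=myserver;DATABASE=master;Trusted_Connection=yes"
        )

        COUNTER_QUERY = """
        SELECT cntr_value
        FROM sys.dm_os_performance_counters
        WHERE counter_name = 'Transactions/sec'
          AND instance_name = ?
        """

        def transactions_per_sec(db_name, interval=1.0):
            cur = conn.cursor()
            first = cur.execute(COUNTER_QUERY, [db_name]).fetchone()[0]   # cumulative count
            time.sleep(interval)
            second = cur.execute(COUNTER_QUERY, [db_name]).fetchone()[0]
            return (second - first) / interval                            # delta = rate

        for _ in range(30):   # watch long enough to catch a few of the 5-second spikes
            print(transactions_per_sec("MyReplicatedDb"))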

    Read the article

  • Wordpress database query running slow - one of the columns doesn't exist!

    - by Pavel
    Hi there. I'm having some problems with a query that WordPress runs. This is the one:

        SELECT DISTINCT ID, post_title, post_date, post_content,
               MATCH(post_title, post_content) AGAINST ('S') AS score
        FROM wp_posts
        WHERE MATCH (post_title, post_content) AGAINST ('S')
          AND post_date <= 'S'
          AND post_status = 'S'
          AND id != N
          AND post_type = 'S'
        ORDER BY score DESC

    When I run this query in phpMyAdmin it says that column N doesn't exist, so the clause "AND id != N" is not making any sense. I ran the query again without this clause and the db behaved like a fully optimized one. Can someone please give me a hint on this? My questions are: what is this clause used for, what is WordPress trying to find by running it, and can I modify core WordPress files to get rid of this clause? Any response or help is greatly appreciated!

    Read the article

  • What's the best way to "shuffle" a table of database records?

    - by Darth
    Say that I have a table with a bunch of records which I want to present to users in random order. I also want users to be able to paginate back and forth, so I have to preserve some sort of order, at least for a while. The application is basically all AJAX and it caches already-visited pages, so even if I always served random results, when the user goes back he will get the previous page, because it will load from the local cache. The problem is that if I return only random results, there might be some duplicates. Each page contains 6 results, so to prevent this I'd have to do something like WHERE id NOT IN (1,2,3,4 ...), where I'd put all the previously loaded IDs. A huge downside of that solution is that it won't be possible to cache anything on the server side, as every user will request different data. An alternate solution might be to create another column for ordering the records and shuffle it every [insert time unit here]. The problem there is that I'd need to assign a random number out of a sequence to every record in the table, which would take as many queries as there are records. I'm using Rails and MySQL, if that's of any relevance.
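
    One approach that keeps pagination stable and still allows server-side caching is to order by RAND() with a seed: MySQL's RAND(N) produces a repeatable ordering for a given seed, so every page request that reuses the same seed sees the same shuffle, and plain LIMIT/OFFSET pagination works. A minimal sketch in Python (table and column names are placeholders; in Rails this would be the equivalent find_by_sql):

        import random
        import MySQLdb  # assumption: the MySQLdb/mysqlclient driver is available

        conn = MySQLdb.connect(user="app", passwd="secret", db="app_db")
        PAGE_SIZE = 6

        def shuffled_page(seed, page):
            """Return one page of a shuffle that is stable for a given seed."""
            cur = conn.cursor()
            cur.execute(
                """SELECT id, title
                   FROM records
                   ORDER BY RAND(%s)      -- same seed => same order on every request
                   LIMIT %s OFFSET %s""",
                (seed, PAGE_SIZE, page * PAGE_SIZE),
            )
            return cur.fetchall()

        # Pick a seed once per user session (or per time window) and reuse it while paginating.
        seed = random.randint(0, 2**31 - 1)
        first_page = shuffled_page(seed, 0)
        second_page = shuffled_page(seed, 1)

    Note that ORDER BY RAND() still sorts the whole table on every request, so for a large table the precomputed shuffle column mentioned above scales better; it can be refreshed with a single statement such as UPDATE records SET sort_key = RAND(), rather than one query per record.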

    Read the article

  • Spring validation has error in XML document from ServletContext resource

    - by user1441404
    I applied Spring validation in my registration page, but the following error is shown in the server log of my App Engine server:

        javax.servlet.UnavailableException: org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException:
        Line 22 in XML document from ServletContext resource [/WEB-INF/spring/appServlet/servlet-context.xml] is invalid;
        nested exception is org.xml.sax.SAXParseException; lineNumber: 22; columnNumber: 30; cvc-complex-type.2.4.c:
        The matching wildcard is strict, but no declaration can be found for element 'property'.

    My code is given below:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans:beans xmlns="http://www.springframework.org/schema/mvc"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns:beans="http://www.springframework.org/schema/beans"
            xmlns:context="http://www.springframework.org/schema/context"
            xsi:schemaLocation="http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd
                http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
                http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd
                http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee-3.0.xsd">

            <beans:bean name="/register" class="com.my.registration.NewUserRegistration">
                <property name="validator">
                    <bean class="com.my.validation.UserValidator" />
                </property>
                <beans:property name="formView" value="newuser"></beans:property>
                <beans:property name="successView" value="home"></beans:property>
            </beans:bean>

            <beans:bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
                <beans:property name="prefix" value="/WEB-INF/views/" />
                <beans:property name="suffix" value=".jsp" />
            </beans:bean>
        </beans:beans>

    Read the article

  • Which database engines support IP addresses as a native type?

    - by Matt McClellan
    I'm trying to find databases with support for IP addresses as a native type (as opposed to storing as a string, or an unsigned integer, which at least one commenter has already pointed out won't work for IPv6). The primary reason I'm looking for this is ease of development. For example, sorting on a "native" IP address column would be correct (as opposed to when it's stored as a string). I would assume support for such a type would also include useful operations such as determining if an IP address is inside a specified network for use in WHERE clauses. The only one I'm aware of so far is PostgreSQL with its inet class. Does anyone have any others?
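
    For reference, this is roughly what the PostgreSQL inet type gives you; a small sketch using psycopg2 (the table, data, and network values are made up for illustration):

        import psycopg2  # assumption: psycopg2 driver and a reachable PostgreSQL instance

        conn = psycopg2.connect("dbname=netdb user=app")
        cur = conn.cursor()

        # inet is a first-class column type, covering both IPv4 and IPv6.
        cur.execute("CREATE TABLE IF NOT EXISTS hosts (name text, addr inet)")
        cur.execute("INSERT INTO hosts VALUES (%s, %s), (%s, %s)",
                    ("a", "10.0.0.20", "b", "10.0.0.3"))

        # Sorting is numeric, unlike sorting the same addresses stored as strings.
        cur.execute("SELECT name, addr FROM hosts ORDER BY addr")
        print(cur.fetchall())

        # The << operator tests "is contained within this network" directly in a WHERE clause.
        cur.execute("SELECT name FROM hosts WHERE addr << %s::inet", ("10.0.0.0/28",))
        print(cur.fetchall())

        conn.commit()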

    Read the article

  • Getting document.getElementsByName from another page PHP/javascript

    - by DarkN3ss
    I have been looking around for how to do this, but with no success. I'm trying to get the value of the element named test from an external website:

        <input type="hidden" name="test" value="ThisIsAValue" />

    So far I have only found how to get the value when the element has an ID:

        <input type="hidden" id="test" name="test" value="ThisIsAValue" autocomplete="off" />

    but my problem is that I need to find it without an ID. This is an example of how to get it by ID:

        <?php
        $doc = new DomDocument;
        $doc->validateOnParse = true;
        $doc->loadHtml(file_get_contents('http://example.com/bla.php'));
        var_dump($doc->getElementById('test'));
        ?>

    And I have found how to get it by name (and NOT by ID) on the same page:

        <script>
        function getElements() {
            var test = document.getElementsByName("test")[0].value;
            alert(test);
        }
        </script>

    But again, I don't know how to get the value by name from an external page, e.g. "http://example.com/bla.php". Any help? Thanks
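
    The general idea, whatever the language, is to fetch the external page and then select the element by its name attribute instead of its id (in PHP that would typically mean running an XPath query such as //input[@name="test"] through DOMXPath, since getElementById only works on id attributes). A minimal Python sketch of the approach, using only the standard library and the example URL from the question:

        from html.parser import HTMLParser
        from urllib.request import urlopen

        class NamedInputFinder(HTMLParser):
            """Collect the value of every <input> whose name attribute matches the target."""
            def __init__(self, target_name):
                super().__init__()
                self.target_name = target_name
                self.values = []

            def handle_starttag(self, tag, attrs):
                attrs = dict(attrs)
                if tag == "input" and attrs.get("name") == self.target_name:
                    self.values.append(attrs.get("value"))

        html = urlopen("http://example.com/bla.php").read().decode("utf-8", "replace")
        finder = NamedInputFinder("test")
        finder.feed(html)
        print(finder.values)  # e.g. ['ThisIsAValue']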

    Read the article

  • best web database solution for scala for a high traffic site?

    - by egervari
    I am in charge of rebuilding a website that gets about 250,000 visitors a day. We'd like to use Scala, but it does not work very well with Spring (in some minor cases) and Hibernate (there is a major and very annoying mismatch here if you want to use Scala collections, which we do). The application itself is going to have about 40-50 tables. Other than Hibernate, is there an ORM that works great with Scala and is as performant and reliable as Hibernate? Does it have the same capabilities, or are we going to run into leaky abstractions if we don't use Hibernate? It would be a big risk for us to go with a framework that is newer and doesn't seem to have a lot of industry backing... and at the same time, Hibernate is a real pain to program against when using Scala:

    1) The Java collection <-> Scala collection conversion is absolutely painful. There is a lot more boilerplate and crap to write.
    2) The IDE doesn't import JavaConversions and Java interfaces automatically, so this needs to be done manually. Optimizing imports in IDEA is going to destroy all the manual work.
    3) There is also a performance cost to converting back and forth all the time in your domain objects and your DAO classes.
    4) Not to mention there needs to be a lot of casting, which produces code ugly as sin.

    I actually would love to write my own ORM that is 100% tailored to Scala, but obviously this is really outside the scope of our project for now. So what is the best approach?

    Read the article

  • How to prevent a database from being restored?

    - by André
    Is there a way to prevent a database from being restored, with a DDL trigger or something? The background is that I would like to prevent a colleague from restoring a database onto a test server. So far I have had a look at DDL triggers, but I didn't find the right event to react to the restore action.

    Read the article

  • Quickly or concisely determine the longest string per column in a row-based data collection

    - by ccornet
    Judging from the failure of my last inquiry, I need to calculate and preset the widths of a set of columns in a table that is being made into an Excel file. Unfortunately, the string data is stored in a row-based format, but the widths must be calculated in a column-based format. The data for the spreadsheets is generated from the following two collections:

        var dictFiles = l.Items.Cast<SPListItem>().GroupBy(foo => foo.GetSafeSPValue("Category")).ToDictionary(bar => bar.Key);
        StringDictionary dictCols = GetColumnsForItem(l.Title);

    where l is an SPList whose title determines which columns are used. Each SPListItem corresponds to a row of data, and the rows are sorted into separate worksheets based on Category (hence the dictionary). The second line is just a simple StringDictionary that has the column name (A, B, C, etc.) as a key and the corresponding SPListItem field display name as the corresponding value. So for each Category, I enumerate through dictFiles[somekey] to get all the rows in that sheet, and get the particular cell data using SPListItem.Fields[dictCols[colName]]. What I am asking is: is there a quick or concise method, for any one dictFiles[somekey], to retrieve a readout of the longest string in each column provided by dictCols? If it is impossible to get both quickness and conciseness, I can settle for either (since I always have the O(n*m) route of just enumerating the collection and updating an array whenever strCurrent.Length > strLongest.Length). For example, if the goal table was the following...

        Item#  Field1     Field2      Field3
        1      Oarfish    Atmosphere  Pretty
        2      Raven      Radiation   Adorable
        3      Sunflower  Flowers     Cute

    I'd like a function which could cleanly take the collection of items 1, 2, and 3 and output, in the correct order:

        Sunflower, Atmosphere, Adorable

    Using .NET 3.5 and C# 3.0.
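
    For what it's worth, the column-wise reduction itself is simple once each row is viewed as a tuple in column order; here is a Python sketch of the idea (the C#/LINQ version would do the same aggregation over the columns given by dictCols):

        rows = [
            ("Oarfish",   "Atmosphere", "Pretty"),
            ("Raven",     "Radiation",  "Adorable"),
            ("Sunflower", "Flowers",    "Cute"),
        ]

        # zip(*rows) transposes the row-based data into columns;
        # max(..., key=len) then picks the longest string in each column.
        longest_per_column = [max(column, key=len) for column in zip(*rows)]
        print(longest_per_column)  # ['Sunflower', 'Atmosphere', 'Adorable']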

    Read the article

  • Simulate stochastic bipartite network based on trait values of species - in R

    - by Scott Chamberlain
    I would like to create bipartite networks in R. For example, if you have a data.frame of two types of species (that can only interact across species, not within species), and each species has a trait value (e.g., mouth size in the predator determines who gets to eat which prey species), how do we simulate a network based on the traits of the species (that is, two species can only interact if their trait values overlap, for instance)? UPDATE: Here is a minimal example of what I am trying to do: 1) create phylogenetic trees; 2) simulate traits on the phylogenies; 3) create networks based on species trait values.

        # packages
        install.packages(c("ape", "phytools"))
        library(ape); library(phytools)

        # Make phylogenetic trees
        tree_predator <- rcoal(10)
        tree_prey <- rcoal(10)

        # Simulate traits on each tree
        trait_predator <- fastBM(tree_predator)
        trait_prey <- fastBM(tree_prey)

        # Create network of predator and prey
        ## This is the part I can't do yet. I want to create bipartite networks, where
        ## predator and prey interact based on certain criteria. For example, predator
        ## species A and prey species B only interact if their body size ratio is
        ## greater than X.
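
    The missing step is essentially a thresholded outer comparison of the two trait vectors. A small Python/NumPy sketch of that logic (in R the analogous one-liner would build the same matrix with outer(); the trait values and the threshold X here are made-up stand-ins for the simulated traits):

        import numpy as np

        rng = np.random.default_rng(0)

        # Stand-ins for the traits simulated on the two phylogenies.
        trait_predator = rng.lognormal(size=10)   # e.g. predator mouth size
        trait_prey = rng.lognormal(size=10)       # e.g. prey body size

        X = 1.5  # hypothetical threshold: predator/prey size ratio required for an interaction

        # adjacency[i, j] is True when predator i can eat prey j.
        ratio = trait_predator[:, None] / trait_prey[None, :]
        adjacency = ratio > X

        print(adjacency.astype(int))  # 10 x 10 bipartite incidence matrix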

    Read the article

  • Efficient way to combine results of two database queries.

    - by ensnare
    I have two tables on different servers, and I'd like some help finding an efficient way to combine and match the datasets. Here's an example. From server 1, which holds our stories, I perform a query like:

        query = """SELECT author_id, title, text
                   FROM stories
                   ORDER BY timestamp_created DESC
                   LIMIT 10"""
        results = DB.getAll(query)

        for i in range(len(results)):
            # Build a string of author_ids, e.g. '1314,4134,2624,2342'

    But I'd like to fetch some info about each author_id from server 2:

        query = """SELECT id, avatar_url
                   FROM members
                   WHERE id IN (%s)"""
        values = (uid_list)
        results = DB.getAll(query, values)

    Now I need some way to combine these two queries so I have a dict that has the story as well as avatar_url and member_id. If this data were on one server, it would be a simple join that would look like:

        SELECT * FROM members, stories WHERE members.id = stories.author_id

    But since we store the data on multiple servers, this is not possible. What is the most efficient way to do this? Thanks.
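
    A common way to do the "join" in application code is to index the second result set by id and then decorate each story with the member fields. A sketch following the question's own conventions (DB.getAll and the column names come from the question; the dict-of-dicts shape and the placeholder handling are assumptions):

        # Rows from server 1: (author_id, title, text)
        stories = DB.getAll(
            "SELECT author_id, title, text "
            "FROM stories ORDER BY timestamp_created DESC LIMIT 10"
        )
        author_ids = sorted({row[0] for row in stories})

        # Rows from server 2: (id, avatar_url), fetched with a single IN (...) query.
        placeholders = ",".join(["%s"] * len(author_ids))
        members = DB.getAll(
            "SELECT id, avatar_url FROM members WHERE id IN (%s)" % placeholders,
            author_ids,
        )
        avatar_by_id = {member_id: avatar for member_id, avatar in members}

        # Merge: each story becomes a dict that also carries the author's avatar.
        combined = [
            {
                "member_id": author_id,
                "title": title,
                "text": text,
                "avatar_url": avatar_by_id.get(author_id),
            }
            for author_id, title, text in stories
        ]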

    Read the article

  • Run multiple MySQL queries based on a series of ifs

    - by OldWest
    I am just getting started on this complex query I need to write and was hoping for any suggestions or feedback regarding the table structure and the actual query itself. I've already created my tables and populated test data, and now I'm just trying to sort out how and what is possible within MySQL. Here is an outline of the problem.

    End result: a listing of rates based on specific queried criteria (see below):

        Age: [ 27 ]   Spouse Age: [ 25 ]   Num of Children: [ 3 ]   Zip Code: [ 97128 ]

    The problem I am running into is that each company that provides rates has a unique way of dealing with the rate, and I am looking for the best approach for multiple queries based on the company (one query with results for each company, more or less all combined into one result set). Here are some facts:

    - Each company deals with zip code ranges, which assist in the query result.
    - Each company has a different method of calculating the rate based on the applicant, spouse, and number of children. For example:
      a) Company A determines the rate by: Applicant + Spouse + Child(ren) = rate (age is pertinent to the applicant within a range).
      b) Company B determines the rate by the total number of applicants: 1, 2, 3, 4, 5, 6+ = rate (and age is ignored).

    First off, what would I call this type of query? A multiple nested query? And should I intertwine PHP within it to determine the if()s? I apologize if this thread lacks sufficient data, so please tell me anything you would like to see.
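
    One workable shape for this, rather than a single giant SQL statement, is to keep a plain rate table per company and dispatch the calculation in application code. A rough Python sketch of that idea (the table layouts, function names, and the two company rules are illustrative assumptions based on the examples above; the real project is PHP, where the same structure applies):

        def rate_company_a(age, spouse_age, num_children, zip_code, db):
            """Company A: applicant + spouse + children; the applicant's age picks the band."""
            cur = db.cursor()
            cur.execute(
                """SELECT applicant_rate, spouse_rate, child_rate
                   FROM company_a_rates
                   WHERE %s BETWEEN age_min AND age_max
                     AND %s BETWEEN zip_low AND zip_high""",
                (age, zip_code),
            )
            applicant, spouse, child = cur.fetchone()
            return applicant + (spouse if spouse_age else 0) + child * num_children

        def rate_company_b(age, spouse_age, num_children, zip_code, db):
            """Company B: rate keyed only by the total number of applicants (age ignored)."""
            total = 1 + (1 if spouse_age else 0) + num_children
            cur = db.cursor()
            cur.execute(
                """SELECT rate
                   FROM company_b_rates
                   WHERE num_applicants = LEAST(%s, 6)
                     AND %s BETWEEN zip_low AND zip_high""",
                (total, zip_code),
            )
            return cur.fetchone()[0]

        RATE_RULES = {"Company A": rate_company_a, "Company B": rate_company_b}

        def all_rates(age, spouse_age, num_children, zip_code, db):
            """Listing of rates for every company for one set of criteria."""
            return {name: rule(age, spouse_age, num_children, zip_code, db)
                    for name, rule in RATE_RULES.items()}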

    Read the article

  • Best suited tool to document message processing done in a program written in C

    - by user3494614
    I am relatively new to UML and it seems to be very vast. I have a small program which basically receives messages on a socket and then, depending upon the message ID embedded as the first byte of the message, processes the buffer. There are around 5 different message IDs which it processes, it communicates on another socket, and it has around 8 major functions. In short, the program looks like this (I am not pasting the entire .c file or main function, just some bits and pieces to give an idea of the program flow):

        int main(int argc, char** argv)
        {
            register_shared_mem();
            listen();
            while (get_next_message(buffer)) {
                switch ((msg)(buffer)->id) {
                    case TYPE1:
                        process1();
                        answer();
                    .....
                }
            }
        }

    I want to document this in a pictorial way, e.g. for message type 1 it calls this function, which calls another, which calls another. Please let me know of any open source tool which will allow me to quickly draw such a UML or sequence diagram and will also allow me to write a brief description of what each function does. Thanks in advance.

    Read the article

  • mysql database design: thread and reply of a reply?

    - by ajsie
    In my forum I have threads and replies. One thread has multiple replies, but a reply can also be a reply to another reply (like Google Wave). Because of that, a reply has to have a column "reply_id" so it can point to the parent reply. But then the "top-level" replies (the replies directly under the thread) will have no parent reply. How can I fix this? How should the columns be laid out in the reply table (and the thread table)? At the moment it looks like this:

        threads:
          id
          title
          body

        replies:
          id
          thread_id (all replies will belong to a thread)
          reply_id  (here lies the problem: the top-level replies won't have a parent reply)
          body

    What could a smart design look like to enable replying to a reply?
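
    One common answer is the adjacency-list pattern: keep reply_id, but make it nullable, with NULL meaning "directly under the thread". A minimal sketch (shown in Python with MySQLdb purely for consistency with the other snippets; the table and column names follow the question, the driver is an assumption):

        import MySQLdb  # assumption: the MySQLdb/mysqlclient driver is available

        conn = MySQLdb.connect(user="forum", passwd="secret", db="forum")
        cur = conn.cursor()

        cur.execute("""
            CREATE TABLE IF NOT EXISTS threads (
                id    INT AUTO_INCREMENT PRIMARY KEY,
                title VARCHAR(255),
                body  TEXT
            ) ENGINE=InnoDB
        """)

        # Adjacency list: reply_id is nullable; NULL = top-level reply under the thread.
        cur.execute("""
            CREATE TABLE IF NOT EXISTS replies (
                id        INT AUTO_INCREMENT PRIMARY KEY,
                thread_id INT NOT NULL,   -- every reply belongs to a thread
                reply_id  INT NULL,       -- parent reply; NULL for top-level replies
                body      TEXT,
                FOREIGN KEY (thread_id) REFERENCES threads(id),
                FOREIGN KEY (reply_id)  REFERENCES replies(id)
            ) ENGINE=InnoDB
        """)

        # Top-level replies of a thread:
        cur.execute("SELECT id, body FROM replies WHERE thread_id = %s AND reply_id IS NULL", (42,))
        # Children of one reply:
        cur.execute("SELECT id, body FROM replies WHERE reply_id = %s", (7,))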

    Read the article

  • Booking logic and architecture, database sync: Hotels, tennis courts reservation system ...

    - by coulix
    Hello Stackers, imagine that you want to design a tennis booking system. You have 5 tennis clubs as partners, with no online API allowing you to check on their side whether a court is booked or not; you have to build this part as well. Every time a booking is done on their side you want it to be known by our system, probably using a POST request from the tennis partner to our server. Every time a booking is done on our website, we want to push the booking to their system. The difficulty is that their system needs to be online and accessible from outside; the IP may change, so we have to use a DNS updater. In case their system is not available, we still accept the booking and fall back to an async email with an 'I confirm booking / reject booking' link sent to the club. I find the whole process quite complex and was wondering how online hotel booking systems and hotels handle this. Do they all have their data open and online? The good thing is that the data will grow large and fits nicely with some NoSQL ;) like CouchDB.
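
    A rough Python sketch of the push-with-fallback flow described above (the club endpoint URL, the payload shape, and the mail setup are all made-up placeholders; only the overall shape, try the POST and otherwise fall back to the confirm/reject email, comes from the question):

        import smtplib
        from email.message import EmailMessage

        import requests  # assumption: the requests library is available

        def push_booking_to_club(club, booking):
            """Try to push a booking made on our site to the club's system; fall back to email."""
            try:
                resp = requests.post(
                    "http://%s/bookings" % club["hostname"],  # hostname kept current by the DNS updater
                    json=booking,
                    timeout=5,
                )
                resp.raise_for_status()
                return "pushed"
            except requests.RequestException:
                # Club system unreachable: accept the booking anyway and ask by email.
                send_confirmation_email(club, booking)
                return "pending_email_confirmation"

        def send_confirmation_email(club, booking):
            msg = EmailMessage()
            msg["To"] = club["email"]
            msg["From"] = "bookings@example.com"
            msg["Subject"] = "Please confirm booking %s" % booking["id"]
            msg.set_content(
                "Confirm: https://example.com/bookings/%s/confirm\n"
                "Reject:  https://example.com/bookings/%s/reject" % (booking["id"], booking["id"])
            )
            with smtplib.SMTP("localhost") as smtp:
                smtp.send_message(msg)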

    Read the article

  • What's the best way to design this database scenario?

    - by ankimal
    I want to set up 2 MySQL databases which differ in schema, in that one is normalized and the other is flat, for quicker reads and writes. The information being stored in both DBs is the same, but the representation is obviously different owing to the different design approaches. I need to find a robust solution to sync information in real time from the normalized version to the flatter version.

    Read the article

  • database design: table with large amount of columns (50+) or many sub tables with small amount of columns?

    - by Guillaume
    In our project we already have a lot of tables (100+). Some of them contain a lot of columns (50-100), and we are facing the need to add more columns from time to time. What do you think is best, from a maintenance and performance point of view: to split these huge tables into smaller entities, or to keep the tables the way they are? We are using an ORM tool, so we don't need to write custom requests.

    Read the article

  • Scalably processing large amounts of complicated database data in PHP, many times a day.

    - by Eph
    I'm soon to be working on a project that poses a problem for me. It's going to require, at regular intervals throughout the day, processing tens of thousands of records, potentially over a million. Processing is going to involve several (potentially complicated) formulas, the generation of several random factors, writing some new data to a separate table, and updating the original records with some results. Ideally, this needs to occur for all records every three hours. Each new user to the site will be adding between 50 and 500 records that need to be processed in such a fashion, so the number will not be steady. The code hasn't been written yet, as I'm still in the design process, mostly because of this issue. I know I'm going to need to use cron jobs, but I'm concerned that processing this many records may cause the site to freeze up, perform slowly, or just piss off my hosting company every three hours. I'd like to know if anyone has any experience or tips on similar subjects. I've never worked at this magnitude before, and for all I know, this will be trivial for the server and not pose much of an issue. As long as ALL records are processed before the next three-hour period begins, I don't care if they aren't processed simultaneously (though, ideally, all records belonging to a specific user should be processed in the same batch), so I've been wondering if I should process in batches every 5 minutes, 15 minutes, hour, whatever works, and how best to approach this (and make it scalable in a way that is fair to all users).
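
    A common pattern for this kind of job is to have cron fire a worker frequently and let each run claim a bounded chunk of not-yet-processed records, so no single invocation can monopolize the server while the whole backlog still gets through within the three-hour cycle. A rough Python sketch of the idea (the table and column names are invented, the per-record formulas are stubbed out, and the real project would do the same thing in PHP):

        import MySQLdb  # assumption: the MySQLdb/mysqlclient driver is available

        CHUNK_SIZE = 2000   # tune so one run finishes well inside the cron interval

        def run_formulas(payload):
            return 0        # placeholder for the complicated per-record calculation

        def process_chunk(conn):
            cur = conn.cursor()
            # Claim records whose last run is older than the 3-hour cycle, oldest first,
            # ordered by user so one user's records tend to land in the same batch.
            cur.execute(
                """SELECT id, user_id, payload
                   FROM records
                   WHERE processed_at IS NULL OR processed_at < NOW() - INTERVAL 3 HOUR
                   ORDER BY user_id, processed_at
                   LIMIT %s""",
                (CHUNK_SIZE,),
            )
            rows = cur.fetchall()
            for record_id, user_id, payload in rows:
                result = run_formulas(payload)
                cur.execute("INSERT INTO results (record_id, value) VALUES (%s, %s)",
                            (record_id, result))
                cur.execute("UPDATE records SET processed_at = NOW() WHERE id = %s",
                            (record_id,))
            conn.commit()
            return len(rows)   # 0 means this cycle's backlog is done

        if __name__ == "__main__":
            conn = MySQLdb.connect(user="app", passwd="secret", db="app_db")
            process_chunk(conn)   # cron runs this script every few minutes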

    Read the article

  • Database for Large number of 1kB data chunks (MySQL?)

    - by The Unknown
    I have a very large dataset, each item in the dataset being roughly 1kB in size. The data needs to be queried rapidly by many applications distributed over a network. The dataset has more than a million items (so 500 million+ 1kB data chunks). What would be the best method of storing this dataset (it needs to allow adding more items and reading them rapidly, but already-added data is never modified)? Would a MySQL DB using the binary BLOB type be appropriate? Or should each of these be stored as files on a file system? Edit: the number is 1 million items now, but it needs to be able to scale to well over 500 million items easily.

    Read the article

  • Is Cassandra database row size limited by available memory?

    - by Adam Hollidge
    I'm working with very long time series -- hundreds of millions of data points in one series -- and am considering Cassandra as a data store. In this question, one of the Cassandra committers (the über helpful jbellis) says that Cassandra rows can be very large, and that column slicing operations are faster than row slices, hence my question: Is the row size still limited by available memory?

    Read the article

  • Don't know how to encrypt database using SQLCipher

    - by Armaan
    I have included SQLCipher in my project exactly as explained in this link: http://sqlcipher.net/ios-tutorial/ But I am not sure how to encrypt the database; I have read the description at the above link but I'm not getting it. What I am doing is: if the application is opening for the first time, it copies the database (i.e. without encryption) to the documents directory. One more thing: my database is blank when copied from the bundle to the documents directory. I have tried to use the sqlite3_key function after opening the database, but nothing is encrypted, and I haven't found anything on how to encrypt the database when copying it from the bundle to the documents directory. I am planning to use FMDB, so it would be better to reply according to that. Please guide me on how to do this, or point me in the right direction if there is a tutorial for it. Also, please suggest what the standard approach should be.
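
    For reference, the standard SQLCipher recipe for turning an existing plaintext database into an encrypted copy is the ATTACH / sqlcipher_export() sequence; calling sqlite3_key on an already-populated plaintext file does not encrypt it, it only supplies the key for a database that is (or will be created) encrypted. The statements are listed below in a small Python sketch purely so they are runnable as-is; in the iOS app they would be executed in order, through a SQLCipher-enabled connection such as FMDB, against the plaintext copy in the documents directory (the file name and passphrase are placeholders):

        ENCRYPTED_DB = "encrypted.db"    # placeholder path for the new encrypted copy
        PASSPHRASE = "correct horse"     # placeholder key

        ENCRYPT_STEPS = [
            # 1. Create the new, encrypted database file alongside the plaintext one.
            "ATTACH DATABASE '%s' AS encrypted KEY '%s';" % (ENCRYPTED_DB, PASSPHRASE),
            # 2. Copy every table, index and row into the attached (encrypted) database.
            "SELECT sqlcipher_export('encrypted');",
            # 3. Detach; the new file is now a fully encrypted copy of the original.
            "DETACH DATABASE encrypted;",
        ]

        for statement in ENCRYPT_STEPS:
            print(statement)  # in the app, execute these through FMDB instead of printing

    After the export, the app would delete the plaintext copy and open the encrypted file with the key (via sqlite3_key, or FMDB's key support) from then on.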

    Read the article
