Search Results

Search found 41135 results on 1646 pages for 'non relational database'.


  • Finding gaps (missing records) in database records using SQL

    - by Tony_Henrich
    I have a table with a record for every consecutive hour, and each hour has some value. I want a T-SQL query that retrieves the missing records (the missing hours, i.e. the gaps). So for the DDL below, I should get a record for the missing hour 04/01/2010 02:00 AM (assuming the date range runs between the first and last record). Using SQL Server 2005; a set-based query is preferred. DDL:

        CREATE TABLE [Readings](
            [StartDate] [datetime] NOT NULL,
            [SomeValue] [int] NOT NULL
        )

        INSERT INTO [Readings]([StartDate], [SomeValue])
        SELECT '20100401 00:00:00.000', 2 UNION ALL
        SELECT '20100401 01:00:00.000', 3 UNION ALL
        SELECT '20100401 03:00:00.000', 45
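
    One set-based approach (an editorial sketch, not from the original post): SQL Server 2005 supports recursive CTEs, so you can generate every hour between the first and last reading and keep the hours that have no matching row:

        -- The recursive CTE enumerates each hour in the observed range;
        -- the LEFT JOIN keeps only the hours with no Readings row.
        WITH Bounds AS (
            SELECT MIN(StartDate) AS MinDate, MAX(StartDate) AS MaxDate
            FROM Readings
        ),
        Hours AS (
            SELECT MinDate AS HourStart, MaxDate FROM Bounds
            UNION ALL
            SELECT DATEADD(hour, 1, HourStart), MaxDate
            FROM Hours
            WHERE HourStart < MaxDate
        )
        SELECT h.HourStart AS MissingHour
        FROM Hours h
        LEFT JOIN Readings r ON r.StartDate = h.HourStart
        WHERE r.StartDate IS NULL
        OPTION (MAXRECURSION 0);  -- lift the default 100-level recursion cap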

    Read the article

  • Help choosing a NoSQL database for a project

    - by potapuff
    There is a table: doc_id(integer)-value(integer), with approximately 100k doc_ids and 27?? rows. The majority of queries on this table search for documents similar to the current document: select the 10 documents maximizing (count of values in common with the current document)/(count of values in the document). We currently use PostgreSQL; the table weighs ~1.5 GB (with index) and the average query time is ~0.5 s. Should I move all this to a NoSQL database, and if so, which one?

    Read the article

  • Surgical slave reads for Ruby on Rails, multiple databases

    - by Daniel
    Greetings, I'm currently working on a multiple-database Rails application. I want to offload the SELECT queries to the slave databases for only SOME of the databases or specific models. The issue is that in places we swap out the current database connection and put in a different one for a short time, to load fixtures or to handle sharding. Does anyone have recommendations on a Ruby gem that 1. splits selects from writes with a considerable amount of control (we want to handle just some models and are looking for a neat, surgical fix), 2. does not monkey-patch ActiveRecord, and 3. is still being maintained? TIA -daniel

    Read the article

  • I need some help optimizing my database schema

    - by Steffan
    Here's a layout of my data: five headings (Heading 1 through Heading 5), each with five sub headings. These sub headings need to have a 'Completion Status' boolean value which gets linked to a user ID. Currently, this is how my table looks:

        id | userID | field_1 | field_2 | field_3 | field_4 | etc...
        ------------------------------------------------------------
        1  | 1      | 0       | 0       | 1       | 0       |
        2  | 2      | 1       | 0       | 1       | 1       |

    Each field represents one Sub Heading. Having this many columns in my table looks awfully inefficient... How can I go about optimizing this? I can't think of any way to neaten it up :/
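
    A common normalization for this shape (a sketch with assumed table and column names, not from the original post) is to store one row per user per completed sub heading instead of one column per sub heading:

        -- Headings and sub headings become rows, so adding a sixth
        -- heading is an INSERT, not a schema change.
        CREATE TABLE headings (
            id INT NOT NULL PRIMARY KEY,
            title VARCHAR(100) NOT NULL
        );

        CREATE TABLE sub_headings (
            id INT NOT NULL PRIMARY KEY,
            heading_id INT NOT NULL REFERENCES headings(id),
            title VARCHAR(100) NOT NULL
        );

        -- A row here means "this user completed this sub heading";
        -- the absence of a row means not completed, so no boolean column is needed.
        CREATE TABLE user_completions (
            user_id INT NOT NULL,
            sub_heading_id INT NOT NULL REFERENCES sub_headings(id),
            PRIMARY KEY (user_id, sub_heading_id)
        );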

    Read the article

  • Acquiring Table Lock in Database - Interview Question

    - by harigm
    One of my interview questions: if multiple users across the world are accessing an application that uses a table whose primary key is an auto-increment field, how can you prevent one user from getting the same primary key while another user's insert is executing? My answer was that I would obtain a lock on the table and make the next user wait until the first user's primary key is released. But then the follow-up questions: how do you acquire the table lock programmatically and implement this? And if 1000 users come to the application every minute and you explicitly hold a lock on the table, won't the application become slower? How do you manage this? Please suggest possible answers to the above question.
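
    For context (an editorial note, not part of the original post): databases generate auto-increment values atomically, so concurrent inserts never receive the same key and no explicit table lock is needed. A minimal SQL Server sketch:

        -- Two sessions can run this INSERT concurrently; the IDENTITY
        -- column guarantees each one receives a distinct id.
        CREATE TABLE orders (
            id INT IDENTITY(1,1) PRIMARY KEY,
            item VARCHAR(100) NOT NULL
        );

        INSERT INTO orders (item) VALUES ('book');
        SELECT SCOPE_IDENTITY() AS new_id;  -- the key generated in this session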

    Read the article

  • Implementing a "special" Access database in Expression Web

    - by Newbie
    I have just got an answer to my question about combining the results of a SQL query; this was done with "ConcatRelated". Now I want to implement this in Expression Web 3. The SQL I used in Access:

        SELECT land.id, land.official_name, vaksiner.vaksiner
        FROM land
        INNER JOIN (vaksiner INNER JOIN land_sykdom ON vaksiner.id = land_sykdom.sykdom)
            ON land.kort = land_sykdom.land
        ORDER BY land.official_name;

    and

        SELECT DISTINCT id, official_name,
            ConcatRelated("vaksiner", "qryVaksinerRaw", "id = " & [id]) AS vaksiner
        FROM qryVaksinerRaw;

    The last one is saved as vaksine_query. This is the SQL that I want to add in Expression Web:

        SELECT vaksine_query.id, vaksine_query.official_name, vaksine_query.vaksiner
        FROM vaksine_query
        WHERE vaksine_query.id = "?";

    Expression Web gives me the error message "Undefined function 'ContactRelated' in expression."

    Read the article

  • Data Access Layer, Best Practices

    - by labratmatt
    I'm looking for input on the best way to refactor the data access layer (DAL) in my PHP-based web app. I follow an MVC pattern: PHP/HTML/CSS/etc. views on the front end, PHP controllers/services in the middle, and a PHP DAL sitting on top of a relational database in the model. Pretty standard stuff. Things are working fine, but my DAL is getting large (codesmell?) and becoming a bit unwieldy. My DAL contains almost all of the logic to interface with my database and is full of functions that look like this (fleshed out here with ordinary PDO calls; the original used a placeholder comment, and the $pdo connection is assumed to be created elsewhere):

        function getUser($user_id) {
            global $pdo;  // PDO connection, created elsewhere
            $stmt = $pdo->prepare("SELECT id, name FROM users WHERE user_id = :user_id");
            $stmt->execute(array(':user_id' => $user_id));
            // PDO builds the statement and fetches the results as an array
            return $stmt->fetchAll(PDO::FETCH_ASSOC);
        }

    Notes: the logic in my controllers only interacts with the model using functions like the above in the DAL. I am not using a framework (I'm of the opinion that PHP is a templating language and there's no need to inject complexity via a framework). I generally use PHP as a procedural language and tend to shy away from its OOP approach (I enjoy OOP development but prefer to keep that complexity out of PHP). What approaches have you taken when your DAL has reached this point? Do I have a fundamental design problem? Do I simply need to chop my DAL into a number of smaller files (logically divide it)? Thanks.

    Read the article

  • Perform Grouping of Resultsets in Code, Not at the Database Level

    - by NinjaBomb
    Stackoverflowers, I have a resultset from a SQL query in the form of:

        Category | Column2 | Column3
        A        | 2       | 3.50
        A        | 3       | 2
        B        | 3       | 2
        B        | 1       | 5
        ...

    I need to group the resultset on the Category column and sum the values of Column2 and Column3. I have to do the grouping in code because I cannot perform it in the SQL query that gets the data, due to the complexity of the query (long story). The grouped data will then be displayed in a table. I have it working for a specific set of values in the Category column, but I would like a solution that handles any possible values that appear there. I know there has to be a straightforward, efficient way to do it, but I cannot wrap my head around it right now. How would you accomplish it?

    EDIT: I have attempted to group the result in SQL using the exact grouping query suggested by Thomas Levesque, and both times our entire RDBMS crashed trying to process it. Also, I was under the impression that LINQ was not available until .NET 3.5, and this is a .NET 2.0 web application, so I did not think it was an option. Am I wrong in thinking that?

    EDIT: Starting a bounty because I believe this would be a good technique to have in the toolbox no matter where the different resultsets are coming from. I believe knowing the most concise way to group any two somewhat similar sets of data in code (without LINQ) would benefit more people than just me.

    Read the article

  • Using JSON Data to Populate a Google Map with Database Objects

    - by MikeH
    I'm revising this question after reading the resources mentioned in the original answers and working through implementing them. I'm using the Google Maps API to integrate a map into my Rails site. I have a markets model with the following columns: id, name, address, lat, lng. On my markets/index view, I want to populate a map with all the markets in my markets table. I'm trying to output @markets as JSON data, and that's where I'm running into problems. I have the basic map displaying, but right now it's just a blank map. I'm following the tutorials very closely, but I can't get the markers to generate dynamically from the JSON. Any help is much appreciated! Here's my setup:

    Markets controller:

        def index
          @markets = Market.filter_city(params[:filter])
          respond_to do |format|
            format.html  # index.html.erb
            format.json { render :json => @markets }  # was @market, a typo that renders nothing
            format.xml  { render :xml => @markets }   # likewise
          end
        end

    Markets/index view:

        <head>
          <script type="text/javascript"
              src="http://www.google.com/jsapi?key=GOOGLE KEY REDACTED, BUT IT'S THERE">
          </script>
          <script type="text/javascript">
            var markets = <%= @markets.to_json %>;
          </script>
          <script type="text/javascript" charset="utf-8">
            google.load("maps", "2.x");
            google.load("jquery", "1.3.2");
          </script>
        </head>
        <body>
          <div id="map" style="width:400px; height:300px;"></div>
        </body>

    Public/javascripts/application.js:

        function initialize() {
          if (GBrowserIsCompatible() && typeof markets != 'undefined') {
            var map = new GMap2(document.getElementById("map"));
            map.setCenter(new GLatLng(40.7371, -73.9903), 13);
            map.addControl(new GLargeMapControl());

            function createMarker(latlng, market) {
              var marker = new GMarker(latlng);
              var html = "<strong>" + market.name + "</strong><br />" + market.address;
              GEvent.addListener(marker, "click", function() {
                map.openInfoWindowHtml(latlng, html);
              });
              return marker;
            }

            var bounds = new GLatLngBounds;
            for (var i = 0; i < markets.length; i++) {
              var latlng = new GLatLng(markets[i].lat, markets[i].lng);
              bounds.extend(latlng);
              map.addOverlay(createMarker(latlng, markets[i]));
            }
          }
        }
        window.onload = initialize;
        window.onunload = GUnload;

    Read the article

  • Is there an online user agent database?

    - by Gary Richardson
    How do you parse your user agent strings? I'm looking to get browser, browser version, OS, and OS version from a user agent string. My app is written in Perl and was previously using HTTP::BrowserDetect, which is a bit dated and no longer maintained. I'm in no way tied to using Perl for the actual lookup. I've come to the conclusion that automagic parsing is a lost cause. I was thinking of writing a CRUD-type app to show me a list of unclassified UAs so I can manually keep them up to date. Does such a resource already exist that I can tap into? It would be awesome if I could make an HTTP call to look up the user agent info. Thanks!

    Read the article

  • I am using relational division with EAV, but I need to find results in EAV that have some of the criteria

    - by NewToDB
    I have two tables:

        CREATE TABLE EAV (
            subscriber_id INT(1) NOT NULL DEFAULT '0',
            attribute_id CHAR(62) NOT NULL DEFAULT '',
            attribute_value CHAR(62) NOT NULL DEFAULT '',
            PRIMARY KEY (subscriber_id, attribute_id)
        )

        INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (1,'color','red')
        INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (1,'size','xl')
        INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (1,'garment','shirt')
        INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (2,'color','red')
        INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (2,'size','xl')
        INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (2,'garment','pants')
        INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (3,'garment','pants')

        CREATE TABLE CRITERIA (
            attribute_id CHAR(62) NOT NULL DEFAULT '',
            attribute_value CHAR(62) NOT NULL DEFAULT ''
        )

        INSERT INTO CRITERIA (attribute_id, attribute_value) VALUES ('color', 'red')
        INSERT INTO CRITERIA (attribute_id, attribute_value) VALUES ('size', 'xl')

    To find all subscribers in the EAV that match my criteria, I use relational division:

        SELECT DISTINCT subscriber_id
        FROM EAV
        WHERE subscriber_id IN (
            SELECT E.subscriber_id
            FROM EAV AS E
            JOIN CRITERIA AS CR
                ON E.attribute_id = CR.attribute_id
                AND E.attribute_value = CR.attribute_value
            GROUP BY E.subscriber_id
            HAVING COUNT(*) = (SELECT COUNT(*) FROM CRITERIA)
        )

    This gives me a unique list of subscribers who have all the criteria. That means I get back subscribers 1 and 2, since they are looking for the color red and size xl, and that's exactly my criteria. But what if I also want to get subscriber 3, who didn't specifically say what color or size they want (i.e. there is no entry for the 'color' or 'size' attribute in the EAV table for subscriber 3)? Given my current design, is there a way to extend my query to include subscribers that have zero or more of the attributes defined, where any attribute that is defined must match the criteria? Or is there a better way to design the table to aid in querying?
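
    One way to extend the query (an editorial sketch, not from the original post): treat a subscriber as matching unless some criterion's attribute is defined for that subscriber with a non-matching value:

        -- A subscriber qualifies when no criterion attribute they have
        -- defined conflicts with the criterion's required value; attributes
        -- the subscriber never defined (like subscriber 3's color and size)
        -- no longer disqualify them.
        SELECT DISTINCT s.subscriber_id
        FROM EAV AS s
        WHERE NOT EXISTS (
            SELECT 1
            FROM CRITERIA AS c
            JOIN EAV AS e
                ON e.subscriber_id = s.subscriber_id
                AND e.attribute_id = c.attribute_id
            WHERE e.attribute_value <> c.attribute_value
        )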

    Read the article

  • Search Multiple Tables of a MySQL Database

    - by DogPooOnYourShoe
    I have the following code:

        $query = "select * from customer
                  where Surname like \"%$trimmed%\"
                     or TitleName like \"%$trimmed%\"
                     or PostCode like \"%$trimmed%\"
                  order by Surname";

    However, I have another table which I want to search with the same parameters (variables) as this one. I know that something like "select * from customer, othertable" might not be possible. Is there a way to do it?
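
    One common approach (an editorial sketch; othertable and its columns are hypothetical) is to UNION the per-table searches into one resultset:

        -- Each SELECT must return the same number and types of columns;
        -- the literal 'source' column records which table a row came from.
        SELECT 'customer' AS source, Surname AS name, PostCode
        FROM customer
        WHERE Surname LIKE '%abc%'
           OR TitleName LIKE '%abc%'
           OR PostCode LIKE '%abc%'
        UNION ALL
        SELECT 'othertable' AS source, SomeName AS name, SomePostCode
        FROM othertable
        WHERE SomeName LIKE '%abc%'
           OR SomePostCode LIKE '%abc%'
        ORDER BY name;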

    Read the article

  • Database hosting options for a PostgreSQL project

    - by AJ
    PostgreSQL has announced an Android app contest (http://wiki.postgresql.org/wiki/AndroidAppContest). I wanted to try something out, but the only hosting I have does not provide PostgreSQL. Do I have any economical (read: cheap :D) options? Is there a free host that anyone knows of? Thanks in advance. --AJ

    Read the article

  • Splitting MS Access Database - Front End Part Location

    - by kristof
    One of the best practices specified by Microsoft for Access development is splitting an Access application into two parts: a Front End that holds all the objects except tables, and a Back End that holds the tables. The MSDN page links to the article Splitting Microsoft Access Databases to Improve Performance and Simplify Maintainability, which describes the process in detail. It is recommended that in a multi-user environment the Back End is stored on a server/shared folder while the Front End is distributed to each user, which implies that each time the Front End changes it must be deployed to every user's machine. My question is: assuming that the users themselves do not have rights to modify the Front End part of the application, what would be the drawbacks/dangers of leaving it on the server as well, next to the Back End copy? I can see the performance issues here, but are there any dangers, like possible corruption? Thank you.

    EDIT: Just to clarify, the scenario in the question assumes one Front End stored on the server and shared by users. I understand that the recommendation is to deploy the FE to each user machine, but my question is about the dangers if that is not done, e.g. when you are given an existing solution that keeps both FE and BE on the server. Assuming the performance is acceptable and the customer is reluctant to change the approach, would you still push the change? And why exactly? The danger of possible data corruption would definitely be a strong enough argument, but is that the case? This is a follow-up to my previous question, From SQL Server to MS Access 2007.

    Read the article

  • best way to statistically detect anomalies in data

    - by reinier
    Hi, our webapp collects a huge amount of data about user actions, network business, database load, etc. All data is stored in warehouses and we have quite a lot of interesting views on this data. If something odd happens, chances are it shows up somewhere in the data. However, to manually detect whether something out of the ordinary is going on, one has to continually look through this data for oddities. My question: what is the best way to detect changes in dynamic data which can be seen as 'out of the ordinary'? Are Bayesian filters (I've seen these mentioned when reading about spam detection) the way to go? Any pointers would be great! EDIT: To clarify, the data shows, for example, a daily curve of database load. This curve typically looks similar to yesterday's curve, and over time it may change slowly. It would be nice if a warning could go off when the day-to-day change in the curve exceeds some perimeter. R
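
    One simple statistical baseline (an editorial sketch with a hypothetical schema, not from the original post) is a three-sigma rule computed in the warehouse itself: flag a reading when it falls outside the mean plus or minus three standard deviations for the same hour over recent history:

        -- Hypothetical table: load_readings(reading_date, hour_of_day, load_value).
        -- PostgreSQL-flavored SQL; AVG and STDDEV are standard aggregates.
        SELECT t.hour_of_day, t.load_value
        FROM load_readings AS t
        JOIN (
            SELECT hour_of_day,
                   AVG(load_value) AS mean_load,
                   STDDEV(load_value) AS sd_load
            FROM load_readings
            WHERE reading_date >= CURRENT_DATE - INTERVAL '30 days'
            GROUP BY hour_of_day
        ) AS h ON h.hour_of_day = t.hour_of_day
        WHERE t.reading_date = CURRENT_DATE
          AND t.load_value NOT BETWEEN h.mean_load - 3 * h.sd_load
                                   AND h.mean_load + 3 * h.sd_load;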

    Read the article

  • How to avoid duplicate entries via a form into the database

    - by DAFFODIL
    <?php
    $con = mysql_connect("localhost", "root", "");
    if (!$con) {
        die('Could not connect: ' . mysql_error());
    }
    mysql_select_db("form", $con);

    // The original line here ended without a semicolon, which caused the
    // parse error below; the query string also has to be executed with
    // mysql_query() before mysql_result() can read the count from it.
    $reb = mysql_query("select count(*) from customer where name = '$name'");

    if (mysql_result($reb, 0) > 0) {
        echo "Item Already Added!<br>";
    } else {
        // add the item
    }

    mysql_close($con);
    header('Location: http://localhost/cus.php');
    ?>

    The error I get: Parse error: parse error in C:\wamp\www\c.php on line 10

    Read the article

  • LINQ to SQL data classes in dbml

    - by Simon
    I am a bit curious about dbml... Should I create one dbml file for the whole database, or separate it into different parts, e.g. a User dbml (only tables related to users), etc.? When I do the latter I have a bit of a problem: assume the User dbml has a User table; if the Order dbml has a User table as well, this won't be allowed if the entity namespaces are the same. If I set a different entity namespace for each dbml it works, but this gives me different entities for the User table, and when a single result returns to the business logic layer it is difficult to know which namespace's User entity to use. Also, if I build one dbml file instead of separate ones, will the single dbml be slower than the separated version when fetching data from the database?

    Read the article

  • How to automate database updates on the web server

    - by user221919
    Hi, I am developing an online bidding system using ASP.NET, where I need to close an auction when its time expires without any bid, as on the following web site: http://www.bidrivals.com/us/. Please help me resolve this problem. Waiting for your valuable thoughts. Thanking you.
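
    A common approach (an editorial sketch with a hypothetical schema, not from the original post) is a scheduled job, e.g. a SQL Server Agent job or a timer in the application, that runs every minute and closes expired, bid-less auctions in one set-based statement:

        -- Hypothetical schema: auctions(id, end_time, status) and bids(auction_id, ...).
        UPDATE auctions
        SET status = 'closed'
        WHERE status = 'open'
          AND end_time < GETDATE()
          AND NOT EXISTS (SELECT 1 FROM bids WHERE bids.auction_id = auctions.id);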

    Read the article

  • The CHOICE: Firebird or H2

    - by blow
    Hi, I have to choose a database to use in server mode for a Java desktop application. I think both are great Java databases. In my opinion (I'm NOT well informed): H2 pros: it is Java based; the developers say it is very, very fast; it is easy to install, configure, and use with a Java application. H2 cons: it is a young project; there are doubts about its reliability for commercial purposes. Firebird pros: rock-solid project; well documented; should be fast and well optimized for large data; has a Java driver... Firebird cons: it is not Java based... ? So, I can't choose between these great databases. Can I have a suggestion? Thanks.

    Read the article
