Search Results

Search found 31628 results on 1266 pages for 'legacy database'.


  • How to implement a system to determine if a milestone has been reached

    - by Luc M
    I have a table named stats:

        player_id  team_id  match_date  goal  assist
        1          8        2010-01-01  1     1
        1          8        2010-01-01  2     0
        1          9        2010-01-01  0     5
        ...

    I would like to know when a player reaches a milestone (e.g. 100 goals, 100 assists, 500 goals...), and likewise when a team reaches one. I also want to know which player or team reached 100 goals first, second, third... I thought of using triggers with tables that accumulate the totals. The player_accumulator (and team_accumulator) tables would be:

        player_id  total_goals  total_assists
        1          3            6

        team_id  total_goals  total_assists
        8        3            1
        9        0            5

    Each time a row is inserted into the stats table, a trigger would insert into or update the player_accumulator and team_accumulator tables. The trigger could also check whether the player or team has reached a milestone listed in a milestone table containing the numbers:

        milestone
        100
        500
        1000
        ...

    A player_milestone table would contain the milestones reached by each player:

        player_id  stat    milestone  date
        1          goal    100        2013-04-02
        1          assist  100        2012-11-19

    Is there a better way to implement milestones? Is there an easier way, without triggers? I'm using PostgreSQL.
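
    A minimal sketch of the trigger approach described above, covering only the goal path for players. It assumes PostgreSQL 9.5+ (for INSERT ... ON CONFLICT) and a unique key on player_accumulator.player_id; table and column names are taken from the question:

        CREATE OR REPLACE FUNCTION update_player_accumulator() RETURNS trigger AS $$
        BEGIN
            -- Maintain the running totals for this player.
            INSERT INTO player_accumulator (player_id, total_goals, total_assists)
            VALUES (NEW.player_id, NEW.goal, NEW.assist)
            ON CONFLICT (player_id) DO UPDATE
                SET total_goals   = player_accumulator.total_goals   + NEW.goal,
                    total_assists = player_accumulator.total_assists + NEW.assist;

            -- Record any goal milestone this insert crossed.
            INSERT INTO player_milestone (player_id, stat, milestone, date)
            SELECT NEW.player_id, 'goal', m.milestone, NEW.match_date
            FROM milestone m
            JOIN player_accumulator a ON a.player_id = NEW.player_id
            WHERE a.total_goals >= m.milestone
              AND a.total_goals - NEW.goal < m.milestone;

            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER trg_stats_milestones
            AFTER INSERT ON stats
            FOR EACH ROW EXECUTE PROCEDURE update_player_accumulator();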

    Read the article

  • Passing BLOB/CLOB as parameter to PL/SQL function

    - by Ula Krukar
    I have this procedure in my package:

        PROCEDURE pr_export_blob(
            p_name      IN VARCHAR2,
            p_blob      IN BLOB,
            p_part_size IN NUMBER);

    I would like parameter p_blob to be either a BLOB or a CLOB. When I call this procedure with a BLOB parameter, everything is fine. When I call it with a CLOB parameter, I get a compilation error:

        PLS-00306: wrong number or types of arguments in call to 'pr_export_blob'

    Is there a way to write a procedure that can take either of those types as a parameter? Some kind of a superclass, maybe?
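
    PL/SQL has no common supertype spanning BLOB and CLOB, but a package may declare two overloads under the same name and let the compiler pick one per call site. A minimal sketch (the package name export_pkg is hypothetical):

        CREATE OR REPLACE PACKAGE export_pkg AS
            -- Overload for binary LOBs.
            PROCEDURE pr_export_blob(
                p_name      IN VARCHAR2,
                p_blob      IN BLOB,
                p_part_size IN NUMBER);

            -- Overload for character LOBs; chosen automatically for CLOB arguments.
            PROCEDURE pr_export_blob(
                p_name      IN VARCHAR2,
                p_blob      IN CLOB,
                p_part_size IN NUMBER);
        END export_pkg;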

    Read the article

  • What is a good DBMS for archiving?

    - by Thomas.Winsnes
    I've been stuck in an MS SQL/MySQL world for a few years now, and I've decided to spread my wings a little further. At the moment I'm researching which DBMSs are good at the things archiving requires, e.g. lots of writes and few reads. I've seen the NoSQL crusade, but I have a very RDBMS mindset, so I'm a bit skeptical. Does anyone have any suggestions? Or even pointers to benchmarks for this kind of workload? Thank you :) Thomas

    Read the article

  • Using jQuery to store a basic text string in a MySQL database?

    - by Kenny Bones
    Could someone point me in the right direction here? Basically, I've got this jQuery code snippet:

        $('.bggallery_images').click(function () {
            var newBG = "url('" + $(this).attr('src');
            var fullpath = $(this).attr('src');
            var filename = fullpath.replace('img/Bakgrunner/', '');
            $('#wrapper').css('background-image', newBG);

            // Save to SQL
            $.ajax({
                url: "save_to_db.php", // The url to your function to handle saving to the db
                data: filename,
                dataType: 'Text',
                type: 'POST', // Could also use GET if you prefer
                success: function (data) {
                    // Just for testing purposes.
                    alert('Background changed to: ' + data);
                }
            });
        });

    This is run when I click a certain button, so it's actually within a click handler. If I understand this correctly, the snippet takes the source of the image I just clicked and strips it so I end up with only the filename. If I do an alert(filename), I get the filename only, so this part is working. It then makes an ajax call to a PHP file called "save_to_db.php" and sends data: filename. This is correct, right? Then it runs a callback that does an alert + data. Does this seem correct so far? Because my PHP file looks like this:

        <?php
        require("dbconnect2.php");

        $uploadstring = $_POST['filename'];
        $sessionid = $_SESSION['id'];
        echo ($sessionid);

        mysql_query("UPDATE brukere SET brukerBakgrunn = '$uploadstring' WHERE brukerID=" . $_SESSION['id']);
        mysql_close();
        ?>

    When I click the image, the jQuery snippet fires and I get the results of this PHP file as output for the alert box. I think the variables are somehow empty, because notice the echo($sessionid); line, a variable I created just to test what the session ID is, and it returns nothing. What could be the issue here? Edit: I just tried to echo out the $uploadstring variable as well and it also returns nothing. It's like the jQuery snippet doesn't even pass the variable on to the PHP file?

    Read the article

  • In SQL Server, how to convert a varchar which has this format (nnn:nn:nn)

    - by user1688917
    I have this varchar format as a time accumulation and I want to convert it to an integer to do a SUM and get the total time for a group. The first part, which may be one, two, three, four or even five digits, represents the accumulated hours and is followed by a colon; then comes the accumulation of minutes, and last the accumulation of seconds (2 digits each). How can I convert this to an integer in one query, if possible?
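
    A minimal T-SQL sketch, assuming the column is named duration in a hypothetical table time_log with a hypothetical grouping column group_id, and that the value always contains exactly two colons:

        SELECT group_id,
               SUM(  CAST(LEFT(duration, CHARINDEX(':', duration) - 1) AS INT) * 3600       -- hours
                   + CAST(SUBSTRING(duration, CHARINDEX(':', duration) + 1, 2) AS INT) * 60  -- minutes
                   + CAST(RIGHT(duration, 2) AS INT)                                         -- seconds
               ) AS total_seconds
        FROM time_log
        GROUP BY group_id;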

    Read the article

  • Non-Access or -base QBE tool?

    - by idbin.cath0br0
    Hello. I'm currently looking for a QBE tool that can execute queries on PostgreSQL or MySQL. The OS doesn't really matter. The reason is that we have to do QBE at school, but I don't want to use either Microsoft Access or OpenOffice.org Base (lack of features). Any help would be appreciated.

    Read the article

  • Do non-clustered indexes slow down inserts?

    - by mikeinmadison
    I'm working in SQL Server 2005. I have an event log table that tracks user actions, and I want to make sure that inserts into the table are as fast as possible. Currently the table doesn't have any indexes. Does adding a single non-clustered index slow down inserts at all? Or is it only clustered indexes that slow down inserts? Or should I just add a clustered index and not worry about it?
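
    For reference, a minimal sketch of the kind of index in question (table and column names are hypothetical). Any secondary index adds some per-insert cost, since every insert must also maintain the index's B-tree:

        -- A non-clustered index on an append-only event log.
        CREATE NONCLUSTERED INDEX IX_EventLog_UserId_EventDate
            ON dbo.EventLog (UserId, EventDate);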

    Read the article

  • Web Shop Schema - Document Db

    - by Maxem
    I'd like to evaluate a document db, probably MongoDB, in an ASP.NET MVC web shop. A little reasoning at the beginning: there are about 2 million products, and the product model would be a pretty bad fit for an RDBMS because there'd be many different kinds of products with unique attributes. For example, there'd be books with isbn, authors, title, pages etc., as well as DVDs with play time, directors, artists etc., and quite a few more types. In the end I'd have about 9 different products with a combined column count (counting common columns like title only once) of about 70 to 100, whereas each individual product has 15 columns at most. The three commonly used approaches in an RDBMS would be:

    1. An EAV model, which would have pretty bad performance characteristics and would be either impractical or perform even worse if I'd like to display the author of a book in a list of different products (think start page, recommended products etc.).

    2. Ignore the column count and put it all in the product table: although I deal with somewhat bigger databases (row-wise), I don't have any experience with tables of more than 20 columns as far as performance is concerned, but I guess 100 columns would have some implications.

    3. Create a table for each product type: I personally don't like this approach, as it complicates everything else.

    C# driver / classes: I'd like to use the NoRM driver, and so far I think I'll try to create a product dto that contains all properties (grouped within detail classes like book details, except for those properties that should be displayed on list views etc.). In the app I'll use BookBehavior / DvdBehaviour, which are wrappers around a product dto but only expose the relevant properties. My questions now: Are my performance concerns with the many-columns approach valid? Did I overlook something, and is there a much better way to do it in an RDBMS? Is MongoDB on Windows stable enough? Does my approach with different behaviour wrappers make sense?

    Read the article

  • Is copying /var/lib/mysql a good alternative to mysqldump?

    - by kemp
    Since I'm making a full backup of my entire Debian system, I was wondering whether keeping a copy of the /var/lib/mysql directory is a viable alternative to dumping tables with mysqldump. Is all the needed information contained in that directory? Can single tables be imported into another MySQL instance? Can there be problems when restoring those files on a (probably slightly) different MySQL server version?

    Read the article

  • How do I perform 'WHERE' on groups of rows?

    - by Drew
    I have a table which looks like:

        +-----------+----------+
        | person_id | group_id |
        +-----------+----------+
        |         1 |       10 |
        |         1 |       20 |
        |         1 |       30 |
        |         2 |       10 |
        |         2 |       20 |
        |         3 |       10 |
        +-----------+----------+

    I need a query such that only person_ids with groups 10 AND 20 AND 30 are returned (only person_id: 1). I am not sure how to do this, as from what I can see it would require me to group the rows by person_id and then select the rows which contain all the group_ids. I'm looking for something which will preserve the use of keys without resorting to string operations on group_concat() or such.
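
    A minimal sketch of the usual counting approach (relational division), assuming the table is named person_group; it stays on the indexed integer columns and needs no group_concat():

        SELECT person_id
        FROM person_group
        WHERE group_id IN (10, 20, 30)
        GROUP BY person_id
        HAVING COUNT(DISTINCT group_id) = 3;  -- must have all three groups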

    Read the article

  • SQL SERVER - Detecting columns used in WHERE clauses but not indexed

    - by Vadi
    How do I detect a column that is included in a WHERE clause but not covered by an index? A little background: while a table has few records, things will be okay; once it starts having millions of records, an index should be created for any column used in the WHERE clauses of stored procs, inline queries etc. Since we have hundreds of stored procs and queries that often get changed by the devs, I wanted an automated way of identifying columns that are used in WHERE clauses but have no index created. How can I do that in SQL Server 2008?
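
    One possible starting point: SQL Server 2008's missing-index DMVs record predicates the optimizer wanted an index for, so they surface such columns from actual query plans rather than from parsing procedure source. A minimal sketch:

        SELECT d.statement AS table_name,
               d.equality_columns,    -- columns compared with =
               d.inequality_columns,  -- columns in range predicates
               s.user_seeks           -- how often an index was wanted
        FROM sys.dm_db_missing_index_details d
        JOIN sys.dm_db_missing_index_groups g
            ON g.index_handle = d.index_handle
        JOIN sys.dm_db_missing_index_group_stats s
            ON s.group_handle = g.index_group_handle
        ORDER BY s.user_seeks DESC;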

    Read the article

  • YAHOO QUERY LANGUAGE BUG!

    - by Damiano
    Hello everybody! Today I've started with the Yahoo Query Language. I want to use it to retrieve stock details, so I'm talking about Yahoo Finance. I think there is a bug in this language. This is my query:

        select * from yahoo.finance.quoteslist where symbol='@^GSPC'

    I ALWAYS get 51 results! That's impossible; take a look at http://it.finance.yahoo.com/q/cp?s=^GSPC where there are 500 results! I also tried some paging parameters:

        select * from yahoo.finance.quoteslist(50,30) where symbol='@^GSPC'  (to get from 50 to 80)
        select * from yahoo.finance.quoteslist(100) where symbol='@^GSPC'    (to get the first 100 results)
        select * from yahoo.finance.quoteslist where symbol='@^GSPC' limit 30 offset 50

    but ALWAYS the last stock is:

        <quote symbol="BBY">
            <Symbol>BBY</Symbol>
            <LastTradePriceOnly>41.03</LastTradePriceOnly>
            <LastTradeDate>5/7/2010</LastTradeDate>
            <LastTradeTime>4:00pm</LastTradeTime>
            <Change>-0.48</Change>
            <Open>41.35</Open>
            <DaysHigh>42.35</DaysHigh>
            <DaysLow>39.60</DaysLow>
            <Volume>14129531</Volume>
        </quote>

    Why do I have this kind of problem? Thank you so much for your support! (P.S. I've tested it on the Yahoo YQL console.)

    Read the article

  • Multi-variable indexes in postgres

    - by Jackson Davis
    I'm looking at an application where I will be doing quite a few SELECTs trying to find column_a = x AND column_b = y. Is it correct to create an index something like the following?

        CREATE INDEX index_name ON table (column_a, column_b)
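
    That is the standard form for this workload. As an illustrative sketch (the table name some_table is hypothetical), a Postgres btree index on (column_a, column_b) serves equality on both columns and also lookups on the leading column alone:

        CREATE INDEX idx_some_table_a_b ON some_table (column_a, column_b);

        SELECT * FROM some_table WHERE column_a = 1 AND column_b = 2;  -- full index match
        SELECT * FROM some_table WHERE column_a = 1;                   -- leading column: still indexable
        SELECT * FROM some_table WHERE column_b = 2;                   -- trailing column alone: usually a full scan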

    Read the article

  • Grails multi column indexes

    - by Kimble
    Can someone explain how to define multi-column indexes in Grails? The documentation is at best sparse. This, for example, does not seem to work at all: http://grails.org/GORM+Index+definitions I've had some luck with this, but the results seem random at best. Definitions that work in one domain class do not when applied to another (with different names, of course). http://www.grails.org/doc/1.1/guide/single.html#5.5.2.6%20Database%20Indices Some working examples and explanations would be highly appreciated!

    Read the article

  • What is the question for the query?

    - by Kevinniceguy
    Sorry... I mean, what question would this query answer?

        SELECT SUM(price)
        FROM Room r, Hotel h
        WHERE r.hotelNo = h.hotelNo
          AND hotelName = 'Paris Hilton'
          AND roomNo NOT IN (SELECT roomNo
                             FROM Booking b, Hotel h
                             WHERE (dateFrom <= CURRENT_DATE AND dateTo >= CURRENT_DATE)
                               AND b.hotelNo = h.hotelNo
                               AND hotelName = 'Paris Hilton');

    Read the article

  • What's wrong with this SQL query?

    - by ThinkingInBits
    I have two tables: photographs and photograph_tags. photograph_tags contains a column called photograph_id (id in photographs). You can have many tags for one photograph. I have a photograph row related to three tags: boy, stream, and water. However, running the following query returns 0 rows:

        SELECT p.*
        FROM photographs p, photograph_tags c
        WHERE c.photograph_id = p.id
          AND (c.value IN ('dog', 'water', 'stream'))
        GROUP BY p.id
        HAVING COUNT( p.id ) = 3

    Is something wrong with this query?
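
    Worth noting: the row described is tagged boy/stream/water, while the IN list asks for dog/water/stream, so at most two tags can match and the HAVING COUNT(p.id) = 3 filter rejects the photograph. A sketch with the list corrected and duplicate tags guarded against:

        SELECT p.*
        FROM photographs p
        JOIN photograph_tags c ON c.photograph_id = p.id
        WHERE c.value IN ('boy', 'water', 'stream')
        GROUP BY p.id
        HAVING COUNT(DISTINCT c.value) = 3;  -- all three tags must be present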

    Read the article

  • How to efficiently store and update binary data in Mongodb?

    - by Rocketman
    I am storing a large binary array within a document. I wish to continually add bytes to this array and sometimes change the value of existing bytes. I was looking for some $append_bytes and $replace_bytes type of modifiers, but it appears that the best I can do is $push for arrays. It seems like this would be doable by performing seek-write type operations if I somehow had access to the underlying BSON on disk, but it does not appear to me that there is any way to do this in MongoDB (and probably for good reason). If I were instead to just query this binary array, edit or add to it, and then update the document by rewriting the entire field, how costly would that be? Each binary array will be on the order of 1-2 MB, and updates occur once every 5 minutes and across 1000s of documents. Worse yet, there is no easy way to spread these out in time, and they will usually happen close to one another on the 5-minute intervals. Does anyone have a good feel for how disastrous this would be? It seems like it would be problematic. An alternative would be to store this binary data as separate files on disk, implement a thread pool to efficiently manipulate the files on disk, and reference the filename from my MongoDB document. (I'm using Python and pymongo, so I was looking at pytables.) I'd prefer to avoid this if possible. Is there any other alternative that I am overlooking here? Thanks in advance.

    Read the article

  • How do I add a one-to-one relationship in MYSQL?

    - by alex
    +-------+-------------+------+-----+---------+-------+
        | Field | Type        | Null | Key | Default | Extra |
        +-------+-------------+------+-----+---------+-------+
        | pid   | varchar(99) | YES  |     | NULL    |       |
        +-------+-------------+------+-----+---------+-------+
        1 row in set (0.00 sec)

        +-------+---------------+------+-----+---------+-------+
        | Field | Type          | Null | Key | Default | Extra |
        +-------+---------------+------+-----+---------+-------+
        | pid   | varchar(2000) | YES  |     | NULL    |       |
        | recid | varchar(2000) | YES  |     | NULL    |       |
        +-------+---------------+------+-----+---------+-------+
        2 rows in set (0.00 sec)

    These are my tables. pid is just the id of the user; recid is a recommended song for that user. I hope to have a list of pids, and then the recommended songs for each person. Of course, in my second table (pid, recid) would be a unique key. How do I do a one-to-one query for this?
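
    A minimal sketch of one way to read this back, assuming hypothetical table names users and recommendations; each user row joins to zero or more recommendation rows:

        SELECT u.pid, r.recid
        FROM users u
        LEFT JOIN recommendations r ON r.pid = u.pid  -- keeps users with no recommendations yet
        ORDER BY u.pid;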

    Read the article

  • Rails, search item in different model?

    - by Danny McClelland
    Hi everyone, I have a Kase model which I am using a simple search form in. The problem I am having is that some kases are linked to companies through a company model, and to people through a people model. At the moment my search (in the Kase model) looks like this:

        # SEARCH FACILITY
        def self.search(search)
          search_condition = "%" + search + "%"
          find(:all, :conditions => ['jobno LIKE ? OR casesubject LIKE ? OR transport LIKE ? OR goods LIKE ? OR comments LIKE ? OR invoicenumber LIKE ? OR netamount LIKE ? OR clientref LIKE ? OR kase_status LIKE ? OR lyingatlocationaddresscity LIKE ?',
                search_condition, search_condition, search_condition, search_condition, search_condition,
                search_condition, search_condition, search_condition, search_condition, search_condition])
        end

    What I am trying to work out is what condition I can add to allow a search by company or person to show the cases they are linked to. @kase.company.companyname and company.companyname don't work :( Is this possible? Thanks, Danny

    Read the article

  • Ruby on Rails Mongrel web server stuck when MySQL service is running

    - by Marcos Buarque
    Hi, I am a Ruby on Rails newbie and already have a problem. I have started the Mongrel web server and it works fine when the MySQL service isn't running. But when MySQL is on, Mongrel gets stuck and stops serving pages. So far I have tested the localhost:3000 URL. When MySQL is off, it serves the page; when I click "about application's environment", I get the message (of course) "Can't connect to MySQL server on 'localhost' (10061)". After starting the MySQL service and refreshing, I get no more answers and Mongrel does not serve the web page. It gets stuck with no answer to the browser, and then I have to stop the web server and restart it. I have installed the mysql2 gem with the command gem install mysql2. I was able to create the _test and _development databases with the command rake db:create. I have tested with the MySQL root user with a blank password and also with a superuser I created. No success. Here is the server log:

        Started GET "/rails/info/properties" for 127.0.0.1 at Fri Dec 24 17:41:25 -0200 2010

        Mysql2::Error (Can't connect to MySQL server on 'localhost' (10061)):

        Rendered C:/Ruby187/lib/ruby/gems/1.8/gems/actionpack-3.0.3/lib/action_dispatch/middleware/templates/rescues/_trace.erb (1.0ms)
        Rendered C:/Ruby187/lib/ruby/gems/1.8/gems/actionpack-3.0.3/lib/action_dispatch/middleware/templates/rescues/_request_and_response.erb (5.0ms)
        Rendered C:/Ruby187/lib/ruby/gems/1.8/gems/actionpack-3.0.3/lib/action_dispatch/middleware/templates/rescues/diagnostics.erb within rescues/layout (35.0ms)

    I am running on a Windows 7 environment with the firewall down.

    Read the article

  • Using memcache together with conventional cache

    - by Industrial
    Hi! Here's the deal. We would have taken the completely static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most used data. Here are the two approaches we are considering right now:

    1. Use memcache on all major queries and leave it alone to do what it does best.

    2. Use memcache for the most commonly retrieved data, and combine it with a standard hard-drive-stored cache for further usage.

    The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even with the theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to increase the nodes. Which approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on utilizing memcache, and upgrade the memory as the load increases with the number of users? Thanks a lot!

    Read the article
