Search Results

Search found 32492 results on 1300 pages for 'reporting database'.

Page 334/1300 | < Previous Page | 330 331 332 333 334 335 336 337 338 339 340 341  | Next Page >

  • Using a remote PHP service with Flex (Flash Builder) AIR Application?

    - by Chrisc
    Hello, I'm developing an Adobe AIR application using Flash Builder 4. The app needs to access a PHP service hosted on a remote web server. I'm having trouble figuring out how to add a PHP data service that uses a remote host. I can add the PHP data service in Flash Builder as a service hosted on localhost, but that will not work once the application is deployed. Does anyone know how to connect a Flash Builder (Flex) project to a remote PHP data service? Thanks, Chris

    Read the article

  • MySQL Query order by number of rows?

    - by Clemens
    Hi, I have a MySQL table for votes. There's an id, a project_id and a vote field (which is 1 if a specific project was voted for). Now I want to generate a ranking from those entries. Is there a way to get the number of votes for each project_id and automatically sort the entries by the number of TRUE votes for a project with a single MySQL query? Or do you know a PHP way? E.g.

        ID - Project ID - Vote
        1  - 2          - 1
        2  - 2          - 1
        3  - 1          - 1

    == Project Nr. 2 has 2 votes, Project Nr. 1 has 1 vote. Thanks in advance!
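
    A minimal sketch of a single-query approach, assuming the table is named votes (the actual table name is not given above):

        -- Count only TRUE votes per project and sort by that count, highest first.
        SELECT project_id, COUNT(*) AS vote_count
        FROM votes
        WHERE vote = 1
        GROUP BY project_id
        ORDER BY vote_count DESC;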

    Read the article

  • Does Ruby on Rails "has_many" array provide data on a "need to know" basis?

    - by Jian Lin
    On Ruby on Rails, say the Actor model object is Tom Hanks and the "has_many" fans association holds 20,000 Fan objects; then actor.fans gives an Array with 20,000 elements. Presumably the elements are not pre-populated with values? Otherwise, getting each Actor object from the DB could be extremely time consuming. So is it on a "need to know" basis? Does it pull data when I access actor.fans[500], and pull data again when I access actor.fans[0]? If it jumps from record to record, it won't be able to optimize performance with a sequential read, which can be faster on the hard disk because those records could be in nearby sectors / platter layers -- for example, if the program touches 2 random elements, it will be faster just to read those 2 records, but if it touches all elements in random order, it may be faster to read all records sequentially and then process the random elements. But how will RoR know whether I am touching only a few random elements or all elements in random order?

    Read the article

  • Version of SQLite used in Android?

    - by Eno
    Reason: I'm wondering how to handle schema migrations. Newer SQLite versions support an "ALTER TABLE" SQL command, which would save me having to copy the data, drop the table, recreate the table and re-insert the data.
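
    For reference, a sketch of the kind of in-place migration this refers to, assuming the change is adding a column (table and column names are made up for illustration):

        -- In-place migration using ALTER TABLE ADD COLUMN.
        ALTER TABLE notes ADD COLUMN created_at TEXT;

        -- Without ALTER TABLE support, the fallback is the copy/drop/recreate dance:
        --   CREATE TABLE notes_new (id INTEGER PRIMARY KEY, body TEXT, created_at TEXT);
        --   INSERT INTO notes_new (id, body) SELECT id, body FROM notes;
        --   DROP TABLE notes;
        --   ALTER TABLE notes_new RENAME TO notes;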

    Read the article

  • Cassandra Production ready on Windows?

    - by BlackTea
    Does anyone know of any success stories of Cassandra running on Windows in a production environment? I'm doing some work on Cassandra and trying to find the right platform for it; currently the platform is Windows running MS SQL as the data store. What are the disadvantages, if any, of running Cassandra in a Windows environment?

    Read the article

  • PostgreSQL is incrementing an update by 2?

    - by John Tyler
    I'm migrating our model to PostgreSQL for the FTS and data integrity.

        update myschema.counters set counter_count = (counter_count + 1) where counter_id = ?

    This works as expected in MySQL, however in Postgres it increments by 2 each time. It is a simple int field, I believe; I don't have anything special going on.
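
    One quick way to see what a single execution actually writes (a debugging sketch, not a fix; this symptom often means the statement is being run twice, e.g. by an application callback or a trigger) is to have the UPDATE report the new value:

        -- RETURNING shows the value written by this one statement,
        -- so you can tell whether a single execution adds 1 or 2.
        -- (? is the driver-level placeholder used in the question.)
        UPDATE myschema.counters
           SET counter_count = counter_count + 1
         WHERE counter_id = ?
        RETURNING counter_count;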

    Read the article

  • What is a good DBMS for archiving?

    - by Thomas.Winsnes
    I've been stuck in an MS SQL/MySQL world for a few years now, and I've decided to spread my wings a little further. At the moment I'm researching which DBMS is good at the things needed when archiving data, e.g. lots of writes and few reads. I've seen the NoSQL crusade, but I have a very RDBMS mindset, so I'm a bit skeptical. Does anyone have any suggestions? Or even any pointers to benchmarks etc. for this kind of thing? Thank you :) Thomas

    Read the article

  • SQL query for selecting the firsts in a series by column

    - by SP
    I'm having some trouble coming up with a query for what I am trying to do. I've got a table we'll call 'Movements' with the following columns: RecID (key), Element (f-key), Time (datetime), Room (int). The table holds a history of movements for the Elements. One record contains the element the record is for, the time of the recorded location, and the room it was in at that time. What I would like are all records that indicate that an Element entered a room, that is, the first (by time) record in a run of consecutive movements for that element in the same room. The input is a room number and a time, i.e. I would like all of the records indicating that any Element entered room X after time Y. The closest I came was this:

        Select Element, min(Time) from Movements where Time > Y and Room = X group by Element

    This will only give me one room-entry record per Element though (if the Element has entered room X twice since time Y, I'll only get the first one back). Any ideas? Let me know if I have not explained this clearly. I'm using MS SQL Server 2005.
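
    A sketch of one way this could be done on SQL Server 2005 (no LAG/LEAD), under the assumption that a record counts as "entering" room X when the element's immediately preceding movement was in a different room or does not exist; @X and @Y stand for the room and time parameters:

        -- Keep movements into room @X after time @Y whose immediately preceding
        -- movement (for the same element) was NOT already in room @X.
        SELECT m.RecID, m.Element, m.[Time], m.Room
        FROM Movements m
        WHERE m.Room = @X
          AND m.[Time] > @Y
          AND NOT EXISTS (
                SELECT 1
                FROM Movements prev
                WHERE prev.Element = m.Element
                  AND prev.[Time] = (SELECT MAX(p.[Time])
                                     FROM Movements p
                                     WHERE p.Element = m.Element
                                       AND p.[Time] < m.[Time])
                  AND prev.Room = m.Room      -- previous record was already in the room
              );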

    Read the article

  • BI with Django?

    - by Helmut
    Is there a way to develop BI (Business Intelligence) solutions with Django? For that it should be possible to define models with more than one data source. Has anybody out there done BI with Django? How could it work?

    Read the article

  • Retrieving all objects in code upfront for performance reasons

    - by ming yeow
    How do you folks retrieve all objects in code upfront? I figure you can increase performance if you bundle all the model calls together. This matters even more if your DB cannot keep everything in memory.

        def hitDBSeperately {
          get X users ...code
          get Y users... code
          get Z users... code
        }

    Versus:

        def hitDBInSingleCall {
          get X+Y+Z users
          code for X
          code for Y...
        }
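
    In SQL terms the difference is roughly the one sketched below (a hedged illustration; the users table and user_group column are made up, since the question gives only pseudocode):

        -- "hitDBSeperately": three round trips to the database
        SELECT * FROM users WHERE user_group = 'X';
        SELECT * FROM users WHERE user_group = 'Y';
        SELECT * FROM users WHERE user_group = 'Z';

        -- "hitDBInSingleCall": one round trip, split the rows in application code
        SELECT * FROM users WHERE user_group IN ('X', 'Y', 'Z');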

    Read the article

  • Single Large vs. Multiple Small MySQL tables for storing Options

    - by Prasad
    Hi there, I'm aware of several questions on this forum relating to this, but I'm not talking about splitting tables for the same entity (like user, for example). Suppose I have a huge options table that stores list options like Gender, Marital Status, and many more domain-specific groups with the same structure, which I plan to capture in an OPTIONS table. Another simple option is to have the field set as ENUM, but there are disadvantages to that as well: http://www.brandonsavage.net/why-you-should-replace-enum-with-something-else/

    OPTIONS table:

        option_id <will be referred to instead of the name>
        name
        value
        group

    Query: select .. from options where group = '15'

    - Since this table is expected to be multi-tenant, the number of rows could grow drastically.
    - I believe splitting the tables instead of filtering by the group would be easier to write & faster to execute.
    - Or perhaps partitioning by the group or tenant?

    Please suggest. Thanks
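
    A sketch of the single-table variant with an index covering the common lookup; the column names are illustrative (the group column is renamed option_group because GROUP is a reserved word, and tenant_id is added only to show the multi-tenant case):

        -- Single OPTIONS table; the composite index keeps
        -- "WHERE tenant_id = ? AND option_group = ?" fast as rows grow.
        CREATE TABLE options (
            option_id    INT UNSIGNED NOT NULL AUTO_INCREMENT,
            tenant_id    INT UNSIGNED NOT NULL,
            option_group VARCHAR(50)  NOT NULL,   -- e.g. 'gender', 'marital_status'
            name         VARCHAR(100) NOT NULL,
            value        VARCHAR(100) NOT NULL,
            PRIMARY KEY (option_id),
            KEY idx_tenant_group (tenant_id, option_group)
        ) ENGINE=InnoDB;

        SELECT option_id, name, value
        FROM options
        WHERE tenant_id = 42 AND option_group = 'gender';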

    Read the article

  • Passing BLOB/CLOB as parameter to PL/SQL function

    - by Ula Krukar
    I have this procedure in my package:

        PROCEDURE pr_export_blob(
          p_name      IN VARCHAR2,
          p_blob      IN BLOB,
          p_part_size IN NUMBER);

    I would like parameter p_blob to be either a BLOB or a CLOB. When I call this procedure with a BLOB parameter, everything is fine. When I call it with a CLOB parameter, I get a compilation error:

        PLS-00306: wrong number or types of arguments in call to 'pr_export_blob'

    Is there a way to write a procedure that can take either of those types as a parameter? Some kind of superclass, maybe?
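
    There is no common supertype for BLOB and CLOB in PL/SQL, but packages allow overloading, so one common pattern is two procedures with the same name; a sketch with a hypothetical package name (the bodies are omitted, since the original implementation isn't shown):

        -- Overload the procedure in the package spec; the compiler
        -- picks the right one based on the argument type at the call site.
        CREATE OR REPLACE PACKAGE export_pkg AS
          PROCEDURE pr_export_blob(p_name      IN VARCHAR2,
                                   p_blob      IN BLOB,
                                   p_part_size IN NUMBER);

          PROCEDURE pr_export_blob(p_name      IN VARCHAR2,
                                   p_clob      IN CLOB,
                                   p_part_size IN NUMBER);
        END export_pkg;
        /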

    Read the article

  • Web Shop Schema - Document Db

    - by Maxem
    I'd like to evaluate a document DB, probably MongoDB, in an ASP.NET MVC web shop. A little reasoning at the beginning: there are about 2 million products. The product model would be a pretty bad fit for an RDBMS, as there would be many different kinds of products with unique attributes. For example, there would be books with ISBN, authors, title, pages etc., as well as DVDs with play time, directors, artists etc., and quite a few more types. In the end, I'd have about 9 different products with a combined column count (counting common columns like title only once) of about 70 to 100, whereas each individual product has 15 columns at most. The three commonly used ways in an RDBMS would be:

    - EAV model, which would have pretty bad performance characteristics and would make it either impractical or perform even worse if I'd like to display the author of a book in a list of different products (think start page, recommended products etc.).
    - Ignore the column count and put it all in the product table: although I deal with somewhat bigger databases (row wise), I don't have any experience with tables of more than 20 columns as far as performance is concerned, but I guess 100 columns would have some implications.
    - Create a table for each product type: I personally don't like this approach as it complicates everything else.

    C# driver / classes: I'd like to use the NoRM driver, and so far I think I'll try to create a product DTO that contains all properties (grouped within detail classes like book details, except for those properties that should be displayed on list views etc.). In the app I'll use BookBehavior / DvdBehaviour, which are wrappers around a product DTO but only expose the relevant properties.

    My questions now: Are my performance concerns with the many-columns approach valid? Did I overlook something, and is there a much better way to do it in an RDBMS? Is MongoDB on Windows stable enough? Does my approach with different behaviour wrappers make sense?

    Read the article

  • Cached database

    - by radi
    Hi, in my project I need two tables, each with about 2000 rows. I want my application to be fast, so my DB should be loaded into memory (cached) when the app starts, and before it closes the DB has to be saved to disk. I am using Java and I want to use SQL.

    Read the article

  • Is there an ORM that supports composition w/o Joins

    - by Ken Downs
    EDIT: Changed title from "inheritance" to "composition". Left body of question unchanged. I'm curious if there is an ORM tool that supports inheritance w/o creating separate tables that have to be joined. Simple example. Assume a table of customers, with a Bill-to address, and a table of vendors, with a remit-to address. Keep it simple and assume one address each, not a child table of addresses for each. These addresses will have a handful of values in common: address 1, address 2, city, state/province, postal code. So let's say I'd have a class "addressBlock" and I want the customers and vendors to inherit from this class, and possibly from other classes. But I do not want separate tables that have to be joined, I want the columns in the customer and vendor tables respectively. Is there an ORM that supports this? The closest question I have found on StackOverflow that might be the same question is linked below, but I can't quite figure if the OP is asking what I am asking. He seems to be asking about foregoing inheritance precisely because there will be multiple tables. I'm looking for the case where you can use inheritance w/o generating the multiple tables. Model inheritance approach with Django's ORM

    Read the article

  • How to implement a system to determine if a milestone has been reached

    - by Luc M
    I have a table named stats:

        player_id  team_id  match_date  goal  assist
        1          8        2010-01-01  1     1
        1          8        2010-01-01  2     0
        1          9        2010-01-01  0     5
        ...

    I would like to know when a player reaches a milestone (e.g. 100 goals, 100 assists, 500 goals...). I would also like to know when a team reaches a milestone, and which player or team reached 100 goals first, second, third... I thought of using triggers with tables that accumulate the totals. The player_accumulator (and team_accumulator) tables would be:

        player_id  total_goals  total_assists
        1          3            6

        team_id  total_goals  total_assists
        8        3            1
        9        0            5

    Each time a row is inserted into the stats table, a trigger would insert/update the player_accumulator and team_accumulator tables. This trigger could also verify whether the player or team has reached a milestone listed in a milestone table containing the numbers:

        milestone
        100
        500
        1000
        ...

    A player_milestone table would contain the milestones reached by a player:

        player_id  stat    milestone  date
        1          goal    100        2013-04-02
        1          assist  100        2012-11-19

    Is there a better way to implement "milestones"? Is there an easier way without triggers? I'm using PostgreSQL.
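
    A minimal sketch of the accumulator trigger in PL/pgSQL, under the assumptions above (only the player side is shown; the milestone check and the team table would follow the same pattern):

        -- Keep player totals up to date on every insert into stats.
        CREATE TABLE IF NOT EXISTS player_accumulator (
            player_id     integer PRIMARY KEY,
            total_goals   integer NOT NULL DEFAULT 0,
            total_assists integer NOT NULL DEFAULT 0
        );

        CREATE OR REPLACE FUNCTION update_player_accumulator() RETURNS trigger AS $$
        BEGIN
            UPDATE player_accumulator
               SET total_goals   = total_goals   + NEW.goal,
                   total_assists = total_assists + NEW.assist
             WHERE player_id = NEW.player_id;
            IF NOT FOUND THEN
                INSERT INTO player_accumulator (player_id, total_goals, total_assists)
                VALUES (NEW.player_id, NEW.goal, NEW.assist);
            END IF;
            -- A check against the milestone table could be added here.
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER stats_player_accumulator
        AFTER INSERT ON stats
        FOR EACH ROW EXECUTE PROCEDURE update_player_accumulator();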

    Read the article

  • Non-Access or -Base QBE tool?

    - by idbin.cath0br0
    Hello. I'm currently looking for a QBE tool that can execute queries on PostgreSQL or MySQL. The OS doesn't really matter. The reason is that we've got to do QBE at school, but I don't want to use either Microsoft Access or OpenOffice.org Base (lack of features). Any help would be appreciated.

    Read the article

  • SQL SERVER - Detecting columns used in WHERE clauses but not indexed

    - by Vadi
    How can I detect a column that is included in a WHERE clause but not covered by an index? A little background: while the table has only a few records things will be okay, but once it has millions of records an index should be created for any column used in WHERE clauses in stored procs, inline queries etc. Since we have hundreds of stored procs and queries that often get changed by the devs, I wanted an automated way of identifying those columns that are used in WHERE clauses but for which no index has been created. How can I do that in SQL Server 2008?
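
    One automated signal SQL Server 2008 itself provides is the missing-index DMVs, which record predicates the optimizer wanted an index for; a sketch (note this reflects actual query activity since the last restart, not a static parse of the stored procedures):

        -- Columns the optimizer would have liked indexed, by table.
        -- equality/inequality_columns come from WHERE-clause predicates.
        SELECT d.statement                AS table_name,
               d.equality_columns,
               d.inequality_columns,
               d.included_columns,
               s.user_seeks,
               s.avg_user_impact
        FROM sys.dm_db_missing_index_details d
        JOIN sys.dm_db_missing_index_groups g
             ON g.index_handle = d.index_handle
        JOIN sys.dm_db_missing_index_group_stats s
             ON s.group_handle = g.index_group_handle
        ORDER BY s.user_seeks * s.avg_user_impact DESC;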

    Read the article

  • What is the question for the query?

    - by Kevinniceguy
    Sorry... I mean, what question does this query answer?

        SELECT SUM(price)
        FROM Room r, Hotel h
        WHERE r.hotelNo = h.hotelNo
          AND hotelName = 'Paris Hilton'
          AND roomNo NOT IN (SELECT roomNo
                             FROM Booking b, Hotel h
                             WHERE (dateFrom <= CURRENT_DATE AND dateTo >= CURRENT_DATE)
                               AND b.hotelNo = h.hotelNo
                               AND hotelName = 'Paris Hilton');

    Read the article

  • Using a dummy row with NOT NULL instead of DEFAULT NULL

    - by Tony38
    I know having DEFAULT NULLs is not a good practice, but I have many optional lookup values which are FKs in the system, so to solve this issue here is what I am doing:

    - I use NOT NULL for every FK / lookup column.
    - The first row in every lookup table, with PK id = 1, is a dummy row with just "none" in all the columns.

    This way I can use NOT NULL in my schema and, if needed, reference the "none" row (PK = 1) for FKs which do not have any lookup value. Is this a good design, or are there other workarounds?

    EDIT: I have a Neighborhood table and a Postal table. Every neighborhood has a city, so that FK can be NOT NULL. But not every postal code belongs to a neighborhood; some do, some don't, depending on the country. So if I use NOT NULL for the FK between postal and neighborhood then I am stuck, as some value has to be entered. So what I am doing, in essence, is having a row in every lookup table be a dummy row just to link the FKs. Row one in the neighborhood table would be:

        n_id = 1, name = none, etc...

    In the postal table I can then have:

        postal_code = 3456A3, FK (city) = Moscow, FK (neighborhood_id) = 1 as a NOT NULL.

    If I don't have a dummy row in the neighborhood lookup table then I have to declare the FK (neighborhood_id) as a DEFAULT NULL column and store blanks in the table. This is one example, but there are a huge number of values which would then be blank in many tables.
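
    For comparison, a hedged sketch of the two variants in generic SQL; the table and column names follow the example above, and a city lookup table with a city_id key is assumed for illustration:

        -- Variant 1: dummy row, FK stays NOT NULL.
        INSERT INTO neighborhood (n_id, name) VALUES (1, 'none');
        CREATE TABLE postal (
            postal_code     VARCHAR(10) PRIMARY KEY,
            city_id         INT NOT NULL REFERENCES city (city_id),
            neighborhood_id INT NOT NULL REFERENCES neighborhood (n_id)  -- 1 means "none"
        );

        -- Variant 2: drop the dummy row and allow NULL instead; a NULL FK does not
        -- violate referential integrity, it simply records "no neighborhood":
        --     neighborhood_id INT NULL REFERENCES neighborhood (n_id)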

    Read the article
