Search Results

Search found 14354 results on 575 pages for 'existing records'.

  • Heroku only initializes some of my models.

    - by JayX
    So I ran heroku db:push and it returned:

      Sending schema
      Schema:        100% |==========================================| Time: 00:00:08
      Sending indexes
      schema_migrat: 100% |==========================================| Time: 00:00:00
      projects:      100% |==========================================| Time: 00:00:00
      tasks:         100% |==========================================| Time: 00:00:00
      users:         100% |==========================================| Time: 00:00:00
      Sending data
      8 tables, 70,551 records
      groups:        100% |==========================================| Time: 00:00:00
      schema_migrat: 100% |==========================================| Time: 00:00:00
      projects:      100% |==========================================| Time: 00:00:00
      tasks:         100% |==========================================| Time: 00:00:02
      authenticatio: 100% |==========================================| Time: 00:00:00
      articles:      100% |==========================================| Time: 00:08:27
      users:         100% |==========================================| Time: 00:00:00
      topics:        100% |==========================================| Time: 00:01:22
      Resetting sequences

    When I went to heroku console, this worked:

      >> Task
      => Task(id: integer, topic: string, content: string,

    and this worked:

      >> User
      => User(id: integer, name: string, email: string,

    But the rest only returned something like:

      >> Project
      NameError: uninitialized constant Project
      /home/heroku_rack/lib/console.rb:150
      /home/heroku_rack/lib/console.rb:150:in `call'
      /home/heroku_rack/lib/console.rb:28:in `call'
      >> Authentication
      NameError: uninitialized constant Authentication
      /home/heroku_rack/lib/console.rb:150
      /home/heroku_rack/lib/console.rb:150:in `call'

    Update 1: When I typed

      >> ActiveRecord::Base.connection.tables

    it returned

      => ["projects", "groups", "tasks", "topics", "articles", "schema_migrations", "authentications", "users"]

    Using Heroku's SQL console plugin I got:

      SQL> show tables
      +-------------------+
      | table_name        |
      +-------------------+
      | authentications   |
      | topics            |
      | groups            |
      | projects          |
      | schema_migrations |
      | tasks             |
      | articles          |
      | users             |
      +-------------------+

    So I think the tables already exist in Heroku's database. There is probably something wrong with rake db:migrate.

    Update 2: I ran rake db:migrate locally in both production and development modes and nothing went wrong. But when I ran it on Heroku it only returned:

      $ heroku rake db:migrate
      (in /disk1/home/slugs/389817_1c16250_4bf2-f9c9517b-bdbd-49d9-8e5a-a87111d3558e/mnt)
      $

    Also, I am using sqlite3.

    Update 3: I opened up heroku console and typed in the following command:

      class Authentication < ActiveRecord::Base; end

    Amazingly, I was then able to call the Authentication class, but once I exited, nothing had changed.

  • How to run stored procedures and ad-hoc scripts asynchronously against "loosely" connected SQL Server 2000 instances

    - by sanga
    Is there a way to initiate a script against an instance of SQL Server when it is not connected, and then have it run on the instance the next time it connects? This needs to happen without any intervention from me.

    Background, if you are interested: we have about 120 machines, each with its own instance of SQL Server 2000. Most of them are laptops. We have merge replication set up with each one. From time to time, there is a need to delete "rogue" GUIDs from some tables in some instances that overwrite legitimate records on the main publisher, as well as to perform administrative tasks via stored procedures or ad-hoc SQL statements. The problem is there is no telling when each machine is going to be connected to the network. Some folks turn their machines completely off at the end of the day. Others disconnect their machines and take them on business trips, home for the weekend, etc. Did I mention that about 35 of these machines are in utility trucks and "attempt" to sync over a wireless connection?

    Thanks in advance for any assistance or suggestions. Sanga
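
    Since a commands table could ride along with the existing merge replication, one possible shape for this (a sketch only; all object names are illustrative, and the "run pending work after sync" part is assumed to be a local SQL Server Agent job on each laptop):

      -- Published in the existing merge publication, so it reaches every
      -- instance whenever that instance next syncs.
      CREATE TABLE AdminCommands (
          rowguid  uniqueidentifier ROWGUIDCOL DEFAULT NEWID() NOT NULL,
          command  nvarchar(4000)   NOT NULL,  -- the ad-hoc statement or proc call
          executed datetime         NULL,      -- stamped locally after it runs
          CONSTRAINT PK_AdminCommands PRIMARY KEY (rowguid)
      );

      -- Run by a local job after each sync: execute anything not yet executed.
      DECLARE @id uniqueidentifier, @cmd nvarchar(4000);
      WHILE 1 = 1
      BEGIN
          SET @id = NULL;
          SELECT TOP 1 @id = rowguid, @cmd = command
          FROM AdminCommands WHERE executed IS NULL;
          IF @id IS NULL BREAK;
          EXEC (@cmd);
          UPDATE AdminCommands SET executed = GETDATE() WHERE rowguid = @id;
      END

    In practice you would likely track the executed flag in a separate, unpublished table so it doesn't merge back to the publisher.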

  • Deleting unneeded rows from a table with 2 criteria

    - by stormbreaker
    Hello. I have a many-to-many relations table and I need to DELETE the unneeded rows. The lastviews table's structure is:

      | user (int) | document (int) | time (datetime) |

    This table logs the last users who viewed a document; (user, document) is unique. I show only the last 10 views of a document, and until now I deleted the unneeded rows like this:

      DELETE FROM `lastviews`
      WHERE `document` = ?
        AND `user` NOT IN (
          SELECT * FROM (
            SELECT `user` FROM `lastviews`
            WHERE `document` = ?
            ORDER BY `time` DESC LIMIT 10
          ) AS TAB)

    However, now I also need to show the last 5 documents a user has viewed. This means I can no longer delete rows using the previous query, because it might delete information I need (say a user didn't view any documents for 5 minutes and the rows are deleted). To sum up, I need to delete all the records that don't fit either of these 2 criteria:

      SELECT ... FROM `lastviews` WHERE `document` = ? ORDER BY `time` DESC LIMIT 10

    and

      SELECT * FROM `lastviews` WHERE `user` = ? ORDER BY `time` DESC LIMIT 0, 5

    I need the logic. Thanks in advance.
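
    One possible shape for the combined delete (a sketch only; it assumes MySQL, and the temporary keepers table plus the per-user loop are illustrative, not from the question): collect every row that either criterion protects, then anti-join against that set.

      -- Rows that must survive for the document being trimmed.
      CREATE TEMPORARY TABLE keepers (`user` INT, `document` INT);

      -- the last 10 viewers of this document
      INSERT INTO keepers
      SELECT `user`, `document` FROM `lastviews`
      WHERE `document` = ? ORDER BY `time` DESC LIMIT 10;

      -- the last 5 documents of each affected user
      -- (repeated from application code for every user who viewed the document)
      INSERT INTO keepers
      SELECT `user`, `document` FROM `lastviews`
      WHERE `user` = ? ORDER BY `time` DESC LIMIT 5;

      -- multi-table DELETE syntax allows the anti-join against keepers
      DELETE lv FROM `lastviews` lv
      LEFT JOIN keepers k
             ON k.`user` = lv.`user` AND k.`document` = lv.`document`
      WHERE lv.`document` = ? AND k.`user` IS NULL;

    The two-step shape also works around MySQL's restriction on selecting from the table being deleted from.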

  • Stopping Filter Display in Dynamic Data Entity Web App

    - by bert
    I'm currently experimenting with the Dynamic Data Entity Web App project type in VS2008 SP1, and after reading many tutorials (which offer helpful advice for problems I so far have no need of a solution to) I have fallen at the first hurdle.

    In the DB I have made my entity model from, I decided to start small with a table called "Companies", just to see if I could tweak the display into a satisfactory shape for this small table. The Companies table has a column called "contactid" which leads to a record filled with various contact information in a "contacts" table. The default-created Entity Data Model has guessed that one contact record could be shared by many companies, so it tries to be helpful and adds a "Contact" filter onto the page that lets you see all the companies that share a particular set of contact info, indexed by the "Contact Name" field.

    Unfortunately the contacts table is a multi-purpose one that also stores contact info for customers, and there are about 1000 times more customers than there are companies. So the dropdown makes the page load time balloon and produces no benefit, and I'd like to just stop the filter from appearing. The only problem is I don't have a clue how to switch it off, and Google is so far proving recalcitrant on the matter, so I wondered if anyone in here knew how to get rid of a useless filter.

  • Problem with multiple :dependent => :destroy when :polymorphic => true

    - by piemesons
    I have four models: question, answer, comment and vote. Consider it the same as Stack Overflow:

      Question has_many comments
      Answer has_many comments
      Question has_many votes
      Answer has_many votes
      Comment has_many votes

    Here are the models (only the relevant parts):

      class Comment < ActiveRecord::Base
        belongs_to :commentable, :polymorphic => true
        has_many :votes, :as => :votable, :dependent => :destroy
      end

      class Question < ActiveRecord::Base
        has_many :comments, :as => :commentable, :dependent => :destroy
        has_many :answers, :dependent => :destroy
        has_many :votes, :as => :votable, :dependent => :destroy
      end

      class Vote < ActiveRecord::Base
        belongs_to :votable, :polymorphic => true
      end

      class Answer < ActiveRecord::Base
        belongs_to :question, :counter_cache => true
        has_many :comments, :as => :commentable, :dependent => :destroy
      end

    Now the problem: whenever I try to delete any question/answer/comment it gives me an error:

      NoMethodError in QuestionsController#destroy
      undefined method `each' for 0:Fixnum

    If I remove this line from any one of the models (question/answer/comment):

      has_many :votes, :as => :votable, :dependent => :destroy

    then it works perfectly. It seems there is a problem while deleting the records: Active Record is not able to work out the proper path because of the multiple joins between the tables.

  • Finding telephone numbers with and without a phone extension

    - by nWorx
    Hello there. I have a table with about 130 000 records of telephone numbers. The numbers are all formatted like +4311234567: they always include the international country code and local area code, then the phone number, and sometimes an extension. There is a web service which checks for the caller's number in the table; that service already works. But now the client wants the service to return a result even when somebody calls from a company whose number is already in the database but whose extension is not.

    Example table:

      id | telephonenumber | name
      1  | +431234567      | company A
      2  | +431234567890   | employee in company A
      3  | +4398765432     | company b

    Now if somebody from company A calls with a different extension, for example +43123456777, it should return id 1. But the problem is that I don't know how many digits the extensions have; it could be 3, 4 or more. Are there any patterns for this kind of string matching? The data is stored in a SQL Server 2005 database. Thanks
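
    One pattern that fits this (a sketch only; it assumes SQL Server 2005, and the table name is hypothetical, with columns taken from the example): flip the LIKE around so each stored number is tested as a prefix of the incoming caller id, and prefer the longest match so an exact employee entry beats the shorter company trunk number.

      DECLARE @caller varchar(32);
      SET @caller = '+43123456777';

      SELECT TOP 1 id, telephonenumber, name
      FROM dbo.PhoneBook                        -- hypothetical table name
      WHERE @caller LIKE telephonenumber + '%'  -- stored number is a prefix of the caller
      ORDER BY LEN(telephonenumber) DESC;       -- longest (most specific) match wins

    Note the wildcard sits on the column side, so this is a table scan; at ~130 000 rows that is usually still acceptable for a lookup service.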

  • Slow loading of UITableView. How to find out why?

    - by mamcx
    I have a UITableView that shows a long list of data. It uses sections and follows the suggestions in http://stackoverflow.com/questions/695814/how-solve-slow-scrolling-in-uitableview. The flow is: load a main UITableView and push a second one when a row is selected. However, with 3000 items it takes 11 seconds to show.

    I first suspected the loading of the records from SQLite (I preload the first 200), so I cut that to only 50. However, no matter whether I preload only 1 or 500, the time is the same. The view is made in Interface Builder and everything is opaque. I have run out of ideas on how to detect the problem. I ran the Instruments tool but don't know what to look for.

    Also, when the user selects a cell in the previous UITableView, no visual feedback is shown for a while (i.e. the cell doesn't turn selected), so they think they didn't select it and tap several times. This is related to the same problem. What to do?

    Notes:
    - The problem only occurs on the actual device: an iPod Touch 2nd generation
    - Using fmdb as the SQLite API
    - Doing the caching in viewDidLoad
    - Using NSDictionary for the caching
    - Using an NSAutoreleasePool for the caching part
    - Only caching the row ID and the 4 fields necessary to show the cell data
    - UIView made with Interface Builder, SDK 2.2.1
    - Instruments says I use 2.5 MB on the device

  • Handling nulls in a data warehouse

    - by rrydman
    I'd like to ask your input on the best practice for handling null or empty data values in data warehousing and SSIS/SSAS. I have several fact and dimension tables that contain null values in different rows. Specifically:

    1) What is the best way to handle null date/time values? Should I make a 'default' row in my time or date dimensions and point SSIS to the default row when a null is found?

    2) What is the best way to handle null/empty values inside dimension data? E.g. I have some rows in an 'Accounts' dimension that have empty (not NULL) values in the Account Name column. Should I convert these empty or null values to a specific default value?

    3) Similar to point 1: what should I do if I end up with a fact table row that has no matching record in one of the dimensions? Do I need a default dimension record in each dimension in case this happens?

    4) Any suggestions or tips on how to handle these operations in SQL Server Integration Services (SSIS)? The best data flow configurations or best transformation objects to use would be helpful.

    Thanks :-)
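
    For points 1-3, a common pattern (a sketch only; the table and column names here are illustrative, not from the question) is to reserve a surrogate key such as -1 for an "Unknown" member in every dimension and map NULL or empty business keys to it during the fact load, so fact rows never carry a dangling dimension key:

      -- one reserved member per dimension
      INSERT INTO DimAccount (AccountKey, AccountName) VALUES (-1, 'Unknown');

      -- fact load: failed lookups and blank names fall back to the Unknown member
      INSERT INTO FactSales (DateKey, AccountKey, Amount)
      SELECT COALESCE(d.DateKey, -1),
             COALESCE(a.AccountKey, -1),
             s.Amount
      FROM StagingSales s
      LEFT JOIN DimDate d    ON d.FullDate = s.SaleDate
      LEFT JOIN DimAccount a ON a.AccountName = NULLIF(LTRIM(RTRIM(s.AccountName)), '');

    The NULLIF/LTRIM/RTRIM treats empty and whitespace-only names the same as NULL, which also covers point 2. In SSIS the equivalent is a Lookup transformation whose unmatched rows are routed to a Derived Column that substitutes the -1 key.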

  • MySQL - Help me alter this query to apply AND logic instead of OR when searching

    - by sandeepan-nath
    First execute these table and data dumps:

      CREATE TABLE IF NOT EXISTS `Tags` (
        `id_tag` int(10) unsigned NOT NULL auto_increment,
        `tag` varchar(255) default NULL,
        PRIMARY KEY (`id_tag`),
        UNIQUE KEY `tag` (`tag`),
        KEY `id_tag` (`id_tag`),
        KEY `tag_2` (`tag`),
        KEY `tag_3` (`tag`)
      ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=18 ;

      INSERT INTO `Tags` (`id_tag`, `tag`) VALUES
      (1, 'key1'),
      (2, 'key2');

      CREATE TABLE IF NOT EXISTS `Tutors_Tag_Relations` (
        `id_tag` int(10) unsigned NOT NULL default '0',
        `id_tutor` int(10) default NULL,
        KEY `Tutors_Tag_Relations` (`id_tag`),
        KEY `id_tutor` (`id_tutor`),
        KEY `id_tag` (`id_tag`)
      ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

      INSERT INTO `Tutors_Tag_Relations` (`id_tag`, `id_tutor`) VALUES
      (1, 1),
      (2, 1);

    The following query finds all the tutors from the Tutors_Tag_Relations table which have a reference to at least one of the terms "key1" or "key2":

      SELECT td.*
      FROM Tutors_Tag_Relations AS td
      INNER JOIN Tags AS t ON t.id_tag = td.id_tag
      WHERE t.tag LIKE "%key1%" OR t.tag LIKE "%key2%"
      GROUP BY td.id_tutor
      LIMIT 10

    Please help me modify this query so that it returns all the tutors from the Tutors_Tag_Relations table which have a reference to both terms, "key1" and "key2" (AND logic instead of OR logic). Please suggest an optimized query considering a huge number of data records (the query should NOT individually fetch two sets of tutors matching each keyword and then find the intersection).
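
    One common approach (a sketch only, written against the dumps above): keep the OR in the WHERE clause so matching rows are found in a single pass, then demand in HAVING that each tutor matched as many distinct tags as there are search terms. The one caveat is that a single tag containing both substrings would only count once.

      SELECT td.id_tutor
      FROM Tutors_Tag_Relations AS td
      INNER JOIN Tags AS t ON t.id_tag = td.id_tag
      WHERE t.tag LIKE "%key1%" OR t.tag LIKE "%key2%"
      GROUP BY td.id_tutor
      HAVING COUNT(DISTINCT t.id_tag) = 2   -- must equal the number of search terms
      LIMIT 10;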

  • Populate JOIN into a list in one database query

    - by axio
    I am trying to get the records from the 'many' table of a one-to-many relationship and add them as a list to the relevant record from the 'one' table. I am also trying to do this in a single database request. Code derived from http://stackoverflow.com/questions/1580199/linq-to-sql-populate-join-result-into-a-list almost achieves the intended result, but makes one database request per entry in the 'one' table, which is unacceptable. That failing code is here:

      var res = from variable in _dc.GetTable<VARIABLE>()
                select new
                {
                    x = variable,
                    y = variable.VARIABLE_VALUEs
                };

    However, if I do a similar query but loop through all the results, then only a single database request is made. This code achieves all the goals:

      var res = from variable in _dc.GetTable<VARIABLE>()
                select variable;

      List<GDO.Variable> output = new List<GDO.Variable>();
      foreach (var v2 in res)
      {
          List<GDO.VariableValue> values = new List<GDO.VariableValue>();
          foreach (var vv in v2.VARIABLE_VALUEs)
          {
              values.Add(VariableValue.EntityToGDO(vv));
          }
          output.Add(EntityToGDO(v2));
          output[output.Count - 1].VariableValues = values;
      }

    However, the latter code is ugly as hell, and it really feels like something that should be doable in a single LINQ query. So, how can this be done in a single LINQ query that makes only a single database query?

    In both cases the table is set to preload using the following code:

      _dc = _db.CreateLinqDataContext();
      var loadOptions = new DataLoadOptions();
      loadOptions.LoadWith<VARIABLE>(v => v.VARIABLE_VALUEs);
      _dc.LoadOptions = loadOptions;

    I am using .NET 3.5, and the database back-end was generated using SqlMetal.

  • Using a CNAME with Shared Windows Azure Website

    - by user1679021
    I've been following the instructions on the Azure site to add a CNAME pointing to my Azure website, but I have had some problems getting it to work, and there seems to be some contradictory information in the posts. I have my website running in "Shared" mode, which according to the Azure instructions supports custom domains, and indeed it seems to allow me to manage domains. But some posts seem to indicate that I have to run in reserved mode. Can anyone confirm this?

    Also, some posts seem to indicate that I need to add the CNAME in the Azure management portal, but I cannot find where this is. Any help appreciated.

    I don't really understand A records and CNAMEs that well. My DNS provider allows me to add both. Do I need to change both? Currently my A record points the "root" to the IP address that Azure gives me, and the CNAME points www.mydomain to the Azure website host mysite.azurewebsites.net. I have left them for a while to propagate and nothing seems to happen.

  • Transaction within a Transaction in C#

    - by Rosco
    I'm importing a flat file of invoices into a database using C#, and I'm using TransactionScope to roll back the entire operation if a problem is encountered. It is a tricky input file, in that one row does not necessarily equal one record, and it also includes linked records: an invoice has a header line, line items, and then a total line. Some of the invoices will need to be skipped, but I may not know an invoice needs to be skipped until I reach its total line.

    One strategy is to store the header, line items, and total line in memory, and save everything once the total line is reached; I'm pursuing that now. However, I was wondering if it could be done a different way: creating a "nested" transaction around the invoice, inserting the header row and line items, then updating the invoice when the total line is reached. This "nested" transaction would roll back if it is determined the invoice needs to be skipped, but the overall transaction would continue. Is this possible and practical, and how would you set it up?
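
    At the SQL level, the usual tool for "roll back part of an open transaction" is a savepoint; ADO.NET exposes it as SqlTransaction.Save(), though TransactionScope itself does not surface it. A minimal T-SQL sketch of the idea (the table and column names are hypothetical):

      BEGIN TRANSACTION;                        -- the outer import transaction

      DECLARE @skipInvoice bit;
      SET @skipInvoice = 0;                     -- decided when the total line is read

      SAVE TRANSACTION InvoiceStart;            -- mark the start of one invoice

      INSERT INTO InvoiceHeader (InvoiceNo, CustomerId) VALUES ('INV-001', 42);
      INSERT INTO InvoiceLine (InvoiceNo, LineNo, Amount) VALUES ('INV-001', 1, 19.99);

      IF @skipInvoice = 1
          ROLLBACK TRANSACTION InvoiceStart;    -- undoes this invoice only

      COMMIT TRANSACTION;                       -- everything kept commits together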

  • jQuery and MySQL

    - by Wayne
    I have taken a jQuery script which removes divs on a click, but I want to make it delete records from a MySQL database. In delete.php:

      <?php
      $photo_id = $_POST['id'];
      $sql = "DELETE FROM photos WHERE id = '" . $photo_id . "'";
      $result = mysql_query($sql) or die(mysql_error());
      ?>

    The jQuery script:

      $(document).ready(function() {
          $('#load').hide();
      });

      $(function() {
          $(".delete").click(function() {
              $('#load').fadeIn();
              var commentContainer = $(this).parent();
              var id = $(this).attr("id");
              var string = 'id=' + id;
              $.ajax({
                  type: "POST",
                  url: "delete.php",
                  data: string,
                  cache: false,
                  success: function() {
                      commentContainer.slideUp('slow', function() { $("#photo-" + id).remove(); });
                      $('#load').fadeOut();
                  }
              });
              return false;
          });
      });

    The div goes away when I click on it, but after I refresh the page it appears again... How do I get it to delete the record from the database? Thanks :)

    EDIT: Whoops... forgot to add the db.php include to it, so it works now.

  • Stable/repeatable random sort (MySQL, Rails)

    - by Matt Rogish
    I'd like to paginate through a randomly sorted list of ActiveRecord models (rows from a MySQL database). However, this randomization needs to persist on a per-session basis, so that other people who visit the website also receive a random, paginate-able list of records.

    Let's say there are enough entities (tens of thousands) that storing the randomly sorted ID values in either the session or a cookie is too large, so I must temporarily persist them some other way (MySQL, file, etc.). Initially I thought I could create a function based on the session ID and the page ID (returning the object IDs for that page), but since the object ID values in MySQL are not sequential (there are gaps), that seemed to fall apart as I was poking at it. The nice thing is that it would require no/minimal storage, but the downsides are that it is likely pretty complex to implement and probably CPU intensive.

    My feeling is I should create an intersection table, something like:

      random_sorts( sort_id, created_at, user_id NULL if guest )
      random_sort_items( sort_id, item_id, position )

    and then simply store the sort_id in the session. Then I can paginate the random_sorts WHERE sort_id = n ORDER BY position LIMIT ... as usual. Of course, I'd have to put some sort of a reaper in there to remove them after some period of inactivity (based on random_sorts.created_at). Unfortunately, I'd have to invalidate the sort as new objects were created (and/or old objects were removed, although deletion is very rare), and as load increases, the size/performance of this table (even properly indexed) drops. It seems like this ought to be a solved problem, but I can't find any Rails plugins that do this... Any ideas? Thanks!!
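
    One low-storage alternative worth knowing about (a sketch only; the table name is illustrative, and the stability of the ordering relies on the underlying row set not changing between requests): MySQL's RAND() accepts a seed, and with a constant seed the shuffle is repeatable, so a per-session integer seed can stand in for a persisted sort.

      SELECT id, name
      FROM items
      ORDER BY RAND(12345)   -- seed derived from the session id
      LIMIT 30, 10;          -- page 4 at 10 rows per page

    The trade-off is a full scan plus filesort on every request, and inserts or deletes shift positions, which is exactly why the random_sorts/random_sort_items intersection table may still win at this scale.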

  • Should core application configuration be stored in the database, and if so, what should be done to secure it?

    - by Rl
    I'm writing an application around a lot of hierarchical data. Currently the hierarchy is fixed, but it's likely that new items will be added to it in the future (please let them be leaves). My current application and database design is fairly generic, and nothing dealing with specific nodes in the hierarchy is hardcoded, with the exception of validation and lookup functions written to retrieve external data from each node's particular database.

    This pleases me from a design point of view, but I'm nervous at the realization that the entire application rests on a handful of records in the database. I'm also frustrated that I have to enforce certain aspects of data integrity with database triggers rather than with foreign key constraints (an example is where several different nodes in the hierarchy have their own proprietary IDs and I store them in a single column which, when coupled with the node ID, can be used to locate the foreign data). I'm starting to wonder whether it would have been more appropriate to simply hardcode these known nodes into the system so that it would be more "type safe" and less generic.

    How does one know when something should be hardcoded and when it should be a configuration item? Is it just a cost-benefit analysis of clarity/safety now vs. less work later, or am I missing some metric I should be using to determine whether or not this is appropriate? The steps I'm taking to protect these valuable configurations are to add triggers that prevent updates/deletes, and the database user that this application uses will only have the ability to manipulate data through stored procedures. What else can I do?

  • PHP Array to CSV

    - by JohnnyFaldo
    I'm trying to convert an array of products into a CSV file, but it doesn't seem to be going to plan: the CSV file ends up as one long line. Here is my code:

      for ($i = 0; $i < count($prods); $i++) {
          $sql = "SELECT * FROM products WHERE id = '" . $prods[$i] . "'";
          $result = $mysqli->query($sql);
          $info = $result->fetch_array();
      }

      $header = '';
      for ($i = 0; $i < count($info); $i++) {
          $row = $info[$i];
          $line = '';
          for ($b = 0; $b < count($row); $b++) {
              $value = $row[$b];
              if ((!isset($value)) || ($value == "")) {
                  $value = "\t";
              } else {
                  $value = str_replace('"', '""', $value);
                  $value = '"' . $value . '"' . "\t";
              }
              $line .= $value;
          }
          $data .= trim($line) . "\n";
      }
      $data = str_replace("\r", "", $data);

      if ($data == "") {
          $data = "\n(0) Records Found!\n";
      }

      header("Content-type: application/octet-stream");
      header("Content-Disposition: attachment; filename=your_desired_name.xls");
      header("Pragma: no-cache");
      header("Expires: 0");
      print "$data";

    Also, the header doesn't force a download; I've been copying and pasting the output and saving it as .csv.

  • Create a 2nd table and add data

    - by Tyler Matema
    I have this task from school, and I am confused and lost about how to do it. Basically I have to create 2 tables in the database, but they have to be created from PHP. I have created the first table, but not the second one for some reason. Then I have to populate the first and second tables with 10 and 20 sample records respectively. Does "populate" mean adding more fake users? If so, is it like the code shown below? I also get an error in the part that populates the second table. Thank you so much for the help.

      <?php
      $host = "host";
      $user = "me";
      $pswd = "password";
      $dbnm = "db";

      $conn = @mysqli_connect($host, $user, $pswd, $dbnm);
      if (!$conn) die("<p>Couldn't connect to the server!<p>");

      $selectData = @mysqli_select_db($conn, $dbnm);
      if (!$selectData) {
          die("<p>Database Not Selected</p>");
      }

      //1st table
      $sql = "CREATE TABLE IF NOT EXISTS `friends` (
        `friend_id` INT NOT NULL auto_increment,
        `friend_email` VARCHAR(20) NOT NULL,
        `password` VARCHAR(20) NOT NULL,
        `profile_name` VARCHAR(30) NOT NULL,
        `date_started` DATE NOT NULL,
        `num_of_friends` INT unsigned,
        PRIMARY KEY (`friend_id`)
      )";

      //2nd table
      $sqlMyfriends = "CREATE TABLE `myfriends` (
        `friend_id1` INT NOT NULL,
        `friend_id2` INT NOT NULL,
      )";

      $query_result1 = @mysqli_query($conn, $sql);
      $query_result2 = @mysqli_query($conn, $sqlMyfriends);

      //populating 1st table
      $sqlSt3 = "INSERT INTO friends (friend_id, friend_email, password, profile_name, date_started, num_of_friends)
                 VALUES('NULL','[email protected]','123','abc','2012-10-25', 5)";
      $queryResult3 = @mysqli_query($dbConnect,$sqlSt3)

      //populating 2nd table
      $sqlSt13 = "INSERT INTO myfriends VALUES(1,2)";
      $queryResult13 = @mysqli_query($dbConnect,$sqlSt13);

      mysqli_close($conn);
      ?>
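
    For what it's worth, MySQL rejects the second CREATE TABLE because of the trailing comma after the last column definition, and the two INSERT calls pass $dbConnect where the connection variable is actually named $conn (the @ prefixes hide both errors). A comma-free version of the second table's SQL, as a sketch:

      CREATE TABLE `myfriends` (
        `friend_id1` INT NOT NULL,
        `friend_id2` INT NOT NULL
      );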

  • Grouping by date, with 0 when count() yields no lines

    - by SCO
    I'm using PostgreSQL 9 and I'm fighting with counting and grouping when no lines are counted. Let's assume the following schema:

      create table views (
        date_event timestamp with time zone,
        event_id integer
      );

    Let's imagine the following content:

      2012-01-01 00:00:05   2
      2012-01-01 01:00:05   5
      2012-01-01 03:00:05   8
      2012-01-01 03:00:15  20

    I want to group by hour and count the number of lines. I wish I could retrieve the following:

      2012-01-01 00:00:00  1
      2012-01-01 01:00:00  1
      2012-01-01 02:00:00  0
      2012-01-01 03:00:00  2
      2012-01-01 04:00:00  0
      2012-01-01 05:00:00  0
      .
      .
      2012-01-07 23:00:00  0

    I mean that for each time-range slot, I count the number of lines in my table whose date corresponds; otherwise, I return a line with a count of zero. The following will definitely not work (it yields only the slots where lines were counted):

      SELECT extract(hour from date_event), count(*)
      FROM views
      WHERE date_event > '2012-01-01' AND date_event < '2012-01-07'
      GROUP BY extract(hour from date_event);

    Please note I might also need to group by minute, hour, day, month, or year (multiple queries are fine, of course). I can only use plain old SQL, and since my views table can be very big (100M records), I try to keep performance in mind. How can this be achieved? Thank you!
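
    The standard PostgreSQL tool for this is generate_series(): build every slot in the window, then LEFT JOIN the events onto their truncated timestamp so empty slots count as zero. A sketch against the schema above (the date bounds come from the example; swap 'hour' for 'minute', 'day', 'month' or 'year' to change the granularity):

      SELECT s.slot, count(v.event_id) AS events
      FROM generate_series(timestamptz '2012-01-01',
                           timestamptz '2012-01-07 23:00',
                           interval '1 hour') AS s(slot)
      LEFT JOIN views v
             ON date_trunc('hour', v.date_event) = s.slot
      GROUP BY s.slot
      ORDER BY s.slot;

    count(v.event_id) rather than count(*) is what makes unmatched slots return 0, since count() skips NULLs. On a 100M-row table you would also want a range condition on date_event in the join so only the week's rows are touched.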

  • Performance when querying a View

    - by Nate Bross
    I'm wondering if this is a bad practice or if in general this is the correct approach. Let's say that I've created a view that combines a few attributes from a few tables. My question: what do I need to do so I can query against this view as if it were a table, without worrying about performance?

    All attributes in the original tables are indexed. My concern is that the resulting view will have hundreds of thousands of records (possibly millions), which I will want to narrow down quite a bit based on user input. What I'd like to avoid is having multiple versions of the code that generates this view floating around with a few extra WHERE conditions to facilitate the user input filtering.

    For example, assume my view has the header VIEW(Name, Type, DateEntered). I'd like to be able to create this view in SQL Server and then, in my application, write queries like this:

      SELECT Name, Type, DateEntered
      FROM MyView
      WHERE DateEntered BETWEEN @date1 AND @date2;

    Basically, I am denormalizing my data for a series of reports that need to be run, and I'd like to centralize where I pull the data from. Maybe I'm not looking at this problem from the right angle, though, so I'm open to alternative ways to attack this.
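
    For what it's worth, a plain (non-indexed) view is expanded inline by the optimizer, so a filter on the view is pushed down to the base tables and can use their indexes; the view itself adds no cost. A sketch under assumed names (the base tables and the join are illustrative, not from the question):

      CREATE VIEW dbo.MyView
      AS
      SELECT p.Name, t.TypeName AS Type, p.DateEntered
      FROM dbo.Products p
      JOIN dbo.ProductTypes t ON t.TypeId = p.TypeId;
      GO

      -- this index is what actually serves the BETWEEN filter
      CREATE INDEX IX_Products_DateEntered ON dbo.Products (DateEntered);
      GO

      DECLARE @date1 datetime, @date2 datetime;
      SET @date1 = '2010-01-01';
      SET @date2 = '2010-12-31';

      SELECT Name, Type, DateEntered
      FROM dbo.MyView
      WHERE DateEntered BETWEEN @date1 AND @date2;

    If the view's joins themselves become the bottleneck, an indexed (materialized) view created WITH SCHEMABINDING is the next step up.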

  • I can't seem to get my grand total to calculate correctly

    - by Kenny
    When I run the query below, I get the correct results:

      SELECT clientid,
             CASE WHEN D.ccode = '-1' THEN 'Did Not Show' ELSE D.ccode END ccode,
             ca, ot, bw, cshT, dc, dte, approv
      FROM dbo.emC D
      WHERE year(dte) = year(getdate())

    The result is correct because ccode shows 'Did Not Show' when the value in the db is '-1'. However, when I add a UNION ALL so I can get a total for each column, I get results, but 'Did Not Show' is no longer visible when the value of ccode is '-1'. There are over 1000 records with a value of '-1'. Can someone please help? Here is the entire query with the UNION:

      SELECT clientid,
             CASE WHEN D.ccode = '-1' THEN 'Did Not Show' ELSE D.ccode END ccode,
             ca, ot, bw, cshT, dc, dte, approv
      FROM dbo.emC D
      WHERE year(dte) = year(getdate())
      UNION ALL
      SELECT 'Total', '', SUM(D.ca), SUM(D.ot), SUM(D.bw), SUM(D.cshT), '', '', ''
      FROM emC D
      WHERE YEAR(dte) = '2011'

    I also tried using ROLLUP, but the real issue here is that I can't get the 'Did Not Show' text to display when the ccode value is -1.

      ClientID  CCODE         ot    ca    bw    cshT
      019692    CF001         0.00  0.00  1.00  0.00  0.00
      019692    CH503         0.00  0.00  1.00  0.00  0.00
      010487    AC407         0.00  0.00  1.00  0.00  0.00
      028108    CH540         0.00  0.00  1.00  0.00  0.00
      028108    GS925         0.00  0.00  1.00  0.00  0.00
      001038    AC428         0.00  0.00  3.00  0.00  0.00
      028561    Did Not Show  0.00  0.00  0.00  0.00  0.00
      016884    Did Not Show  0.00  0.00  0.00  0.00  0.00
      05184     CF001         0.00  0.00  4.50  0.00  0.00
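
    One hedged way to restructure this (a sketch only; it keeps the question's columns but drops dc/dte/approv from the union so the total row doesn't have to fake a datetime with ''): compute the CASE once in a derived table feeding the detail rows, and give the total branch the same year filter as the detail branch. Note that in the original, the detail half filters on year(getdate()) while the total half filters on '2011', so the two halves don't even cover the same rows.

      SELECT clientid, ccode, ca, ot, bw, cshT
      FROM (SELECT clientid,
                   CASE WHEN ccode = '-1' THEN 'Did Not Show' ELSE ccode END AS ccode,
                   ca, ot, bw, cshT
            FROM dbo.emC
            WHERE YEAR(dte) = YEAR(GETDATE())) AS detail
      UNION ALL
      SELECT 'Total', '', SUM(ca), SUM(ot), SUM(bw), SUM(cshT)
      FROM dbo.emC
      WHERE YEAR(dte) = YEAR(GETDATE());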

  • Accelerometer stops delivering samples when the screen is off on Droid/Nexus One, even with a WakeLock

    - by William
    I have some code that extends a Service and records onSensorChanged(SensorEvent event) accelerometer sensor readings on Android. I would like to be able to record these sensor readings even when the device is off (I'm careful with battery life, and it's made obvious when it's running). While the screen is on, the logging works fine on a 2.0.1 Motorola Droid and a 2.1 Nexus One. However, when the phone goes to sleep (by pushing the power button) the screen turns off and the onSensorChanged events stop being delivered (verified by using a Log.e message every N times onSensorChanged gets called).

    The service acquires a WakeLock to ensure that it keeps running in the background, but it doesn't seem to have any effect. I've tried all the various PowerManager wake locks but none of them seem to matter:

      _WakeLock = _PowerManager.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "My Tag");
      _WakeLock.acquire();

    There have been conflicting reports about whether or not you can actually get data from the sensors while the screen is off... does anyone have any experience with this on a more modern version of Android (Eclair) and hardware? This seems to indicate that it was working in Cupcake: http://groups.google.com/group/android-developers/msg/a616773b12c2d9e5

    Thanks!

    PS: The exact same code works as intended in 1.5 on a G1. The logging continues when the screen turns off, when the application is in the background, etc.

  • Handling large datasets with PHP/Drupal

    - by jo
    Hi all, I have a report page that deals with ~700k records from a database table. I can display this on a webpage using paging to break up the results. However, my export-to-PDF/CSV functions rely on processing the entire data set at once, and I'm hitting my 256MB memory limit at around 250k rows. I don't feel comfortable increasing the memory limit, and I haven't got the ability to use MySQL's SELECT ... INTO OUTFILE to just serve a pre-generated CSV. However, I can't really see a way of serving up large data sets with Drupal using something like:

      $form = array();
      $table_headers = array();
      $table_rows = array();
      $data = db_query("a query to get the whole dataset");
      while ($row = db_fetch_object($data)) {
          $table_rows[] = $row->some_attribute;
      }
      $form['report'] = array('#value' => theme('table', $table_headers, $table_rows));
      return $form;

    Is there a way of getting around what is essentially appending to a giant array of arrays? At the moment I don't see how I can offer any meaningful report pages with Drupal because of this. Thanks

  • MySQL - wondering about scaling a Twitter-like application

    - by user246114
    Hi, I'm developing an app that is vaguely similar to Twitter, in that it allows users to follow one another. I wanted to do this using Google App Engine for its scalability promises, but it's proving kind of difficult to get running for a few different reasons.

    I'd basically like to have a _users table and a _followers table. Users go into the users table; follower relationships go into _followers. The problem is that each row in the users table will probably have something like 100 corresponding records in the _followers table as users start following one another, so the number of rows is going to explode quickly. Using App Engine, the volume [shouldn't] be a problem. If I go with MySQL, and I do actually start to get some traction, how do I scale this up? Am I going to just end up moving to a distributed database in the end anyway? Should I fight it out with Google App Engine? I read that Twitter was using MySQL, ran into this problem, and is now switching to Cassandra. Thanks
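
    For the MySQL side, the usual starting shape (a sketch only; names and types are illustrative) is a follow table indexed both ways, so "who follows X" and "who does X follow" are each a single index range scan. Sharding only enters the picture once one box can't hold the write load:

      CREATE TABLE users (
          user_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
          name    VARCHAR(64) NOT NULL,
          PRIMARY KEY (user_id)
      ) ENGINE=InnoDB;

      CREATE TABLE followers (
          user_id     BIGINT UNSIGNED NOT NULL,    -- the account being followed
          follower_id BIGINT UNSIGNED NOT NULL,    -- the account doing the following
          PRIMARY KEY (user_id, follower_id),      -- serves "who follows X"
          KEY idx_following (follower_id, user_id) -- serves "who does X follow"
      ) ENGINE=InnoDB;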

  • Question about how AppFabric's cache feature can be used.

    - by Kevin Buchan
    I apologize for asking a question that I should be able to answer from the documentation, but I have read and read and searched and cannot answer this question, which leads me to believe I have a fundamentally flawed understanding of what AppFabric's caching capabilities are intended for.

    I work for a geographically dispersed company. We have a particular application that was originally written as a client/server application. It's so massive and business-critical that we want to baby-step converting it to a better-architected solution. One of the ideas we had was to convert the app to read its data using WCF calls to a co-located web server that would cache communication with the database in the United States. The nature of the application is such that everyone will tend to be viewing the same 2000 records or so, with only occasional updates, and those updates will be made by a limited set of users.

    I was hoping that AppFabric's cache mechanism would allow me to set up one global cache: when a user in Asia, for example, requested data that was not in the cache or was stale, the web server would read from the database in the USA, provide the data to the user, then update the cache, which would propagate that data to the other web servers so that they would know not to go back to the database themselves. Can AppFabric work this way, or should I just have the servers retrieve their own data from the database?

  • Grails - ElasticSearch - QueryParsingException[[index] No query registered for [query]]; with elasticSearchHelper; JSON via curl works fine though

    - by v1p
    I have been working on a Grails project, clubbed with ElasticSearch (v 20.6), with a custom build of the elasticsearch-grails-plugin (to support geo_point indexing: v.20.6). I have been trying to do a filtered search while using script_fields (to calculate distance). Following are the closure and the JSON generated from the GXContentBuilder.

    The closure:

      records = Domain.search(searchType: 'dfs_query_and_fetch') {
          query {
              filtered = {
                  query = {
                      if (queryTxt) {
                          query_string(query: queryTxt)
                      } else {
                          match_all {}
                      }
                  }
                  filter = {
                      geo_distance = {
                          distance = "${userDistance}km"
                          "location" {
                              lat = latlon[0] ?: 0.00
                              lon = latlon[1] ?: 0.00
                          }
                      }
                  }
              }
          }
          script_fields = {
              distance = {
                  script = "doc['location'].arcDistanceInKm($latlon)"
              }
          }
          fields = ["_source"]
      }

    The GXContentBuilder-generated query JSON:

      {
        "query": {
          "filtered": {
            "query": { "match_all": {} },
            "filter": {
              "geo_distance": {
                "distance": "5km",
                "location": { "lat": "37.752258", "lon": "-121.949886" }
              }
            }
          }
        },
        "script_fields": {
          "distance": { "script": "doc['location'].arcDistanceInKm(37.752258, -121.949886)" }
        },
        "fields": ["_source"]
      }

    The JSON query works perfectly fine the curl way. But when I try to execute it from Groovy code, I mean with this:

      elasticSearchHelper.withElasticSearch { Client client ->
          def response = client.search(request).actionGet()
      }

    it throws the following error:

      Failed to execute phase [dfs], total failure; shardFailures {[1][index][3]: SearchParseException[[index][3]: from[0],size[60]: Parse Failure [Failed to parse source [{"from":0,"size":60,"query_binary":"eyJxdWVyeSI6eyJmaWx0ZXJlZCI6eyJxdWVyeSI6eyJtYXRjaF9hbGwiOnt9fSwiZmlsdGVyIjp7Imdlb19kaXN0YW5jZSI6eyJkaXN0YW5jZSI6IjVrbSIsImNvbXBhbnkuYWRkcmVzcy5sb2NhdGlvbiI6eyJsYXQiOiIzNy43NTIyNTgiLCJsb24iOiItMTIxLjk0OTg4NiJ9fX19fSwic2NyaXB0X2ZpZWxkcyI6eyJkaXN0YW5jZSI6eyJzY3JpcHQiOiJkb2NbJ2NvbXBhbnkuYWRkcmVzcy5sb2NhdGlvbiddLmFyY0Rpc3RhbmNlSW5LbSgzNy43NTIyNTgsIC0xMjEuOTQ5ODg2KSJ9fSwiZmllbGRzIjpbIl9zb3VyY2UiXX0=","explain":true}]]]; nested: QueryParsingException[[index] No query registered for [query]]; }

    The above closure works if I only use

      filtered = { ... }
      script_fields = { ... }

    but then it doesn't return the calculated distance. Has anyone had a similar problem? Thanks in advance :)
