Search Results

Search found 7116 results on 285 pages for 'nested queries'.


  • Logic in the db for maintaining a points system relationship?

    - by MarcusBooster
    I'm making a little web-based game and need to determine where to put logic that checks the integrity of some underlying data in the SQL database. Each user keeps track of points assigned to him, and points are awarded by various tasks. I keep a record of each task transaction to make sure they're not repeated, and to keep track of the value of the task at the time of completion, since an individual award level may fluctuate over time. My schema looks like this so far:

        create table player (
            player_ID serial primary key,
            player_Points int not null default 0
        );

        create table task (
            task_ID serial primary key,
            task_PointsAwarded int not null
        );

        create table task_list (
            player_ID int references player(player_ID),
            task_ID int references task(task_ID),
            when_completed timestamp default current_timestamp,
            point_value int not null, -- not an fk because the task value may change later
            constraint pk_player_task_id primary key (player_ID, task_ID)
        );

    So player.player_Points should be the total of all his cumulative task points in task_list. Now, where do I put the logic to enforce this? Should I do away with player.player_Points altogether and query for the total every time I want to know the score? That seems wasteful, since I'll be running that query a lot over the course of a game. Or should I put a trigger on task_list that automatically updates player.player_Points? Is that too much logic to have in the database? Should I just maintain this relationship in the application instead? Thanks.
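    For what it's worth, a minimal sketch of what the trigger option could look like, assuming PostgreSQL (the schema's use of serial suggests it); the function and trigger names are made up:

        -- Hypothetical sketch: keep player.player_Points in sync on insert into task_list.
        CREATE OR REPLACE FUNCTION add_task_points() RETURNS trigger AS $$
        BEGIN
            UPDATE player
            SET player_Points = player_Points + NEW.point_value
            WHERE player_ID = NEW.player_ID;
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER trg_task_list_points
        AFTER INSERT ON task_list
        FOR EACH ROW EXECUTE PROCEDURE add_task_points();

    A corresponding AFTER DELETE (and UPDATE) trigger would be needed if task_list rows can ever be removed or corrected.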


  • Speed up csv export when using php from mysql database query

    - by John
    OK, so I've got a web system (built on CodeIgniter and running on MySQL) that allows people to query a database of postal address data by making selections in a series of forms until they arrive at the selection they want, pretty standard stuff. They can then buy that information and download it via the system. The queries run very fast, but when it comes to applying a query to the database and exporting the result to CSV, once the datasets get to around the 30,000-record mark (each row has around 40 columns, of which about 20 are populated with on average 20 chars of data per cell) it can take 5 or so minutes to export.

    So, my question is: what is the main cause of the slowness? Is the result set from the query so large that it is running into memory issues, and should I therefore allow much more memory to the process? Is there a much more efficient way of exporting to CSV from a MySQL query that I'm not using? Should I save the contents of the query to a temp table and simply export the temp table to CSV? Or am I going about this all wrong? Also, is the fact that I'm using CodeIgniter's Active Record for this prohibitive, due to the way that it stores the result set? Any advice is welcome! Thank you for reading!
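    As a hedged illustration of the "let the database do it" route: MySQL can write the CSV itself with SELECT ... INTO OUTFILE, which avoids buffering the whole result set in PHP. The table, columns, and file path below are placeholders, and the MySQL server needs write access to that path:

        -- Sketch only: server-side CSV export, so PHP never holds the full result set.
        SELECT address_line1, address_line2, city, postcode
        INTO OUTFILE '/tmp/address_export.csv'
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
        FROM postal_addresses
        WHERE region_id = 42;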


  • Grouping by date, with 0 when count() yields no lines

    - by SCO
    I'm using PostgreSQL 9 and I'm fighting with counting and grouping when no rows are counted. Let's assume the following schema:

        create table views (
            date_event timestamp with time zone,
            event_id integer
        );

    Let's imagine the following content:

        2012-01-01 00:00:05     2
        2012-01-01 01:00:05     5
        2012-01-01 03:00:05     8
        2012-01-01 03:00:15    20

    I want to group by hour and count the number of rows. I wish I could retrieve the following:

        2012-01-01 00:00:00    1
        2012-01-01 01:00:00    1
        2012-01-01 02:00:00    0
        2012-01-01 03:00:00    2
        2012-01-01 04:00:00    0
        2012-01-01 05:00:00    0
        ...
        2012-01-07 23:00:00    0

    That is, for each time slot I count the rows in my table whose date falls in that slot, and otherwise I return a row with a count of zero. The following will definitely not work, since it only yields the hours that actually have rows:

        SELECT extract ( hour from date_event ), count(*)
        FROM views
        WHERE date_event > '2012-01-01' and date_event < '2012-01-07'
        GROUP BY extract ( hour from date_event );

    Please note I might also need to group by minute, by hour, by day, by month, or by year (multiple queries are fine, of course). I can only use plain old SQL, and since my views table can be very big (100M records), I try to keep performance in mind. How can this be achieved? Thank you!
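    A common shape for this in PostgreSQL, offered only as a sketch against the schema above: generate_series() builds the complete list of slots, and a LEFT JOIN onto the data lets empty slots survive with a zero count.

        -- Sketch: one row per hour slot between the two bounds, 0 when nothing matches.
        SELECT s.slot,
               count(v.event_id) AS events
        FROM generate_series(timestamptz '2012-01-01',
                             timestamptz '2012-01-07',
                             interval '1 hour') AS s(slot)
        LEFT JOIN views v
               ON v.date_event >= s.slot
              AND v.date_event <  s.slot + interval '1 hour'
        GROUP BY s.slot
        ORDER BY s.slot;

    Swapping the interval (and the series bounds) gives the per-minute, per-day, per-month, or per-year variants.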


  • Union and order by

    - by David Lively
    Consider a table like tbl_ranks:

        --------------------------------
        family_id | item_id | view_count
        --------------------------------
            1     |   10    |    101
            1     |   11    |    112
            1     |   13    |    109
            2     |   21    |    101
            2     |   22    |    112
            2     |   23    |    109
            3     |   30    |    101
            3     |   31    |    112
            3     |   33    |    109
            4     |   40    |    101
            4     |   51    |    112
            4     |   63    |    109
            5     |   80    |    101
            5     |   81    |    112
            5     |   88    |    109

    I need to generate a result set with the top two (2) rows for a subset of family ids (say, 1, 2, 3 and 4) ordered by view count. I'd like to do something like:

        select top 2 * from tbl_ranks where family_id = 1 order by view_count
        union all
        select top 2 * from tbl_ranks where family_id = 2 order by view_count
        union all
        select top 2 * from tbl_ranks where family_id = 3 order by view_count
        union all
        select top 2 * from tbl_ranks where family_id = 4 order by view_count

    but, of course, order by isn't valid in a union all context in this manner. Any suggestions? I know I could run a set of 4 queries, store the results into a temp table and select the contents of that temp table as the final result, but I'd rather avoid using a temp table if possible. Note: in the real app, the number of records per family id is indeterminate, and the view_counts are also not fixed as they appear in the above example.
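    One hedged alternative, assuming this is SQL Server 2005 or later (the TOP syntax suggests it): a ranking function sidesteps both the UNION and the temp table.

        -- Sketch: top two rows per family_id by view_count, no temp table needed.
        SELECT family_id, item_id, view_count
        FROM (
            SELECT family_id, item_id, view_count,
                   ROW_NUMBER() OVER (PARTITION BY family_id
                                      ORDER BY view_count DESC) AS rn
            FROM tbl_ranks
            WHERE family_id IN (1, 2, 3, 4)
        ) ranked
        WHERE rn <= 2
        ORDER BY family_id, view_count DESC;

    Whether "top" means the highest or the lowest view_count decides the direction of the inner ORDER BY.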


  • Xpath query to select node when attribute does not exist? [closed]

    - by Antoine
    I want to select nodes for which a specific attribute does not exist. I've tried the not() function, but it doesn't work. Is there a way to do this? Example: the following XPath query:

        group/msg[not(@owner)]

    should retrieve the first node but not the second one. However, both SketchPath (a tool for testing XPath queries) and my C# code treat both nodes as matches.

        <group>
          <msg id="EVENTDATA_CCFLOADED_XMLCONTEXT" numericId="14026" translate="False"
               topicId="302" status="translated" >
            <text>Context</text>
            <comment></comment>
          </msg>
          <msg id="EVENTDATA_CCFLOADED_XMLCONTEXT_HELP" numericId="14027" translate="False"
               topicId="302" status="translated" owner="EVENTDATA_CCFLOADED_XMLCONTEXT" >
            <text>Provides the new data displayed in the Object.</text>
            <comment></comment>
          </msg>
        </group>

    Update: in fact the not() function works correctly; it's just that I had other conditions and the parentheses weren't set correctly. Errare humanum est.


  • LinqtoSql Pre-compile Query problem with Count() on a group by

    - by Joe Pitz
    I have a LINQ to SQL query that I now want to precompile:

        var unorderedc = from insp in sq.Inspections
                         where insp.TestTimeStamp > dStartTime &&
                               insp.TestTimeStamp < dEndTime &&
                               insp.Model == "EP" && insp.TestResults != "P"
                         group insp by new { insp.TestResults, insp.FailStep } into grp
                         select new
                         {
                             FailedCount = (grp.Key.TestResults == "F" ? grp.Count() : 0),
                             CancelCount = (grp.Key.TestResults == "C" ? grp.Count() : 0),
                             grp.Key.TestResults,
                             grp.Key.FailStep,
                             PercentFailed = Convert.ToDecimal(1.0 * grp.Count() / tcount * 100)
                         };

    I have created this delegate:

        public static readonly Funct<SQLDataDataContext, int, string, string, DateTime, DateTime,
            IQueryable<CalcFailedTestResult>> GetInspData =
            CompiledQuery.Compile((SQLDataDataContext sq, int tcount, string strModel, string strTest,
                                   DateTime dStartTime, DateTime dEndTime,
                                   IQueryable<CalcFailedTestResult> CalcFailed) =>
                from insp in sq.Inspections
                where insp.TestTimeStamp > dStartTime &&
                      insp.TestTimeStamp < dEndTime &&
                      insp.Model == strModel && insp.TestResults != strTest
                group insp by new { insp.TestResults, insp.FailStep } into grp
                select new
                {
                    FailedCount = (grp.Key.TestResults == "F" ? grp.Count() : 0),
                    CancelCount = (grp.Key.TestResults == "C" ? grp.Count() : 0),
                    grp.Key.TestResults,
                    grp.Key.FailStep,
                    PercentFailed = Convert.ToDecimal(1.0 * grp.Count() / tcount * 100)
                });

    The syntax error is on the CompiledQuery.Compile() statement, and it appears to be related to the use of the select new {} syntax. In other precompiled queries I have written I have had to use the select projection by itself; in this case I need to perform the grp.Count() and the immediate-if logic. I have searched SO and other references but cannot find the answer.


  • Why is it possible to enumerate a LinqToSql query after calling Dispose() on the DataContext?

    - by DanM
    I'm using the Repository Pattern with some LINQ to SQL objects. My repository objects all implement IDisposable, and the Dispose() method does only one thing: it calls Dispose() on the DataContext. Whenever I use a repository, I wrap it in a using block, like this:

        public IEnumerable<Person> SelectPersons()
        {
            using (var repository = _repositorySource.GetNew<Person>(dc => dc.Person))
            {
                return repository.GetAll();
            }
        }

    This method returns an IEnumerable<Person>, so if my understanding is correct, no querying of the database actually takes place until the IEnumerable<Person> is traversed (e.g., by converting it to a list or array or by using it in a foreach loop), as in this example:

        var persons = gateway.SelectPersons(); // Dispose() is fired here

        var personViewModels = (
            from b in persons
            select new PersonViewModel
            {
                Id = b.Id,
                Name = b.Name,
                Age = b.Age,
                OrdersCount = b.Order.Count()
            }).ToList(); // executes queries

    In this example, Dispose() gets called immediately after setting persons, which is an IEnumerable<Person>, and that's the only time it gets called. So, a couple of questions: How does this work? How can a disposed DataContext still query the database for results when I walk the IEnumerable<Person>? What does Dispose() actually do? I've heard that it is not necessary (e.g., see this question) to dispose of a DataContext, but my impression was that it's not a bad idea. Is there any reason not to dispose of it?


  • EF 4 Query - Issue with Multiple Parameters

    - by Brian
    Hello, a trick for avoiding filtering by nullable parameters in SQL was something like the following:

        select *
        from customers
        where (@CustomerName is null or CustomerName = @CustomerName)

    This worked well for me in LINQ to SQL:

        string customerName = "XYZ";
        var results = (from c in ctx.Customers
                       where (customerName == null ||
                              (customerName != null && c.CustomerName == customerName))
                       select c);

    But that same query, in ADO.NET EF, doesn't work for me; it should filter by customer name because it exists, but it doesn't. Instead, it queries all the customer records. Now, this is a simplified example, because I have many fields that I'm using this kind of logic with. But it never actually filters; it queries all the records and causes a timeout exception. The weird thing is that another query does something similar with no issues. Any ideas why? It seems like a bug to me, or is there a workaround for this? I've since switched to extension methods, which work. Thanks.


  • Create an index only on certain rows in mysql

    - by dhruvbird
    So, I have this funny requirement of creating an index on a table only for a certain set of rows. This is what my table looks like:

        USER: userid, friendid, created, blah0, blah1, ..., blahN

    Now, I'd like to create an index on (userid, friendid, created), but only on those rows where userid = friendid. The reason is that this index is only going to be used to satisfy queries where the WHERE clause contains "userid = friendid". There will be many rows where this is NOT the case, and I really don't want to waste all that extra space on the index.

    Another option would be to create a second table (a query table) which is populated on insert/update of this table by a trigger, but again I am guessing an index on that table would mean that the data would be stored twice.

    How does MySQL store primary keys? I mean, is the table ordered on the primary key, or is it ordered by insert order with the PK acting like a normal unique index? I checked up on clustered indexes (http://dev.mysql.com/doc/refman/5.0/en/innodb-index-types.html), but it seems only InnoDB supports them, and I am using MyISAM (I mention this because otherwise I could have created a clustered index on these 3 fields in the query table). I am basically looking for something like this:

        ALTER TABLE USERS ADD INDEX (userid, friendid, created) WHERE userid=friendid
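    For illustration, a rough sketch of the side-table idea mentioned above (MySQL syntax; the table and trigger names are made up, and equivalent AFTER UPDATE/DELETE triggers would be needed to keep it in sync):

        -- Hypothetical: a slim table holding only the userid = friendid rows, carrying the index.
        CREATE TABLE user_self (
            userid   INT NOT NULL,
            friendid INT NOT NULL,
            created  DATETIME NOT NULL,
            INDEX idx_self (userid, friendid, created)
        ) ENGINE=MyISAM;

        DELIMITER //
        CREATE TRIGGER users_self_ai AFTER INSERT ON USERS
        FOR EACH ROW
        BEGIN
            IF NEW.userid = NEW.friendid THEN
                INSERT INTO user_self (userid, friendid, created)
                VALUES (NEW.userid, NEW.friendid, NEW.created);
            END IF;
        END//
        DELIMITER ;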


  • Alternative to Altova's MissionKit

    - by nomad311
    Does anyone know of any good alternatives (other than those listed below, which really are only good at specific XML development tasks)? The why, if you're interested: I've been doing XML development on and off for years now, but someone brought XMLSpy to my attention recently, and it is awesome; the price isn't. Lately I've been using a combination of:

        Notepad++ (modifying XML)
        EditX (validating/debugging XML)
        Eclipse (designing schemas)
        MS Visual Studio (validating schemas)

    ...based on whichever makes the task easiest. But I've just found out that we will be using XSL transformations to generate XML in the future. I've never used MissionKit before, but I'm just short of positive that XMLSpy replaces all the above-mentioned tools for XML development, and if their XSL tools are anywhere near the caliber of XMLSpy, simply put, I need it. I don't believe I can convince the budgeting types to buy licenses for MissionKit at $1000 each (that won't stop me from trying). In the meanwhile, some research on alternatives won't hurt, but a few Google queries have only revealed that not many people pay for Altova's (overpriced?) software, as there are mostly links to P2P sites for downloading a more free-like version of MissionKit.


  • Return the Column Name for row with last non-null value "Ms Access 2007"

    - by bri1969
    I have a table which contains a list of league players. Each season, we record their points per dart (PPD). Their total PPD for each season is stored in other tables and extracted through other queries, which in turn are imported into the master table "Player History" at the end of the season for use as historical data. The current query retrieves each player's PPD for each season they played, when they last played, and how many seasons they played. The expression for Last Season Played has become too long and unstable to use; it was originally split into two separate columns (LSP1 and LSP2) because a single SQL expression was too long. Those work, but as I add seasons, Access does not like the length of the code. In short, I need a simpler expression that will look at each row, find the last non-null cell in that row, and report which column that last non-null value is in. So if a player played seasons 30 and 31, did not play 32, but did play 33, the column (titled Last Season Played) should state "33" for that player, indicating that this player last played season 33. I will provide both tables and the query. Please help.
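    As a purely illustrative sketch of the "check the newest season first" idea, written in generic SQL with made-up column names (in Access 2007 the same shape would be expressed with Switch() or nested IIf() rather than CASE):

        -- Hypothetical: one season column per year; the first non-null one, newest first, wins.
        SELECT player_name,
               CASE
                   WHEN season_33 IS NOT NULL THEN 33
                   WHEN season_32 IS NOT NULL THEN 32
                   WHEN season_31 IS NOT NULL THEN 31
                   WHEN season_30 IS NOT NULL THEN 30
               END AS last_season_played
        FROM player_history;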


  • Is MySQL caching occurring, how to fix it?

    - by rlb.usa
    I think that MySQL or ASP.NET is caching my queries. I edited my MySQL sproc to remove some parameters, but it keeps saying that those parameters are missing. Here is what happens:

    1. The ASP.NET app calls a MySQL stored procedure. Everything works perfectly.
    2. I delete some parameters from the sproc and from the ASP.NET parameter list accordingly. All parameters exactly match in case and order between the new ASP.NET and MySQL sproc code.
    3. Upon execution, it fails, saying:

        System.ArgumentException: Parameter 'deleted_parameter_foo_bar' not found in the collection.
        at MySql.Data.MySqlClient.MySqlParameterCollection ...

    4. I delete the sproc from the database, restart my browser, and re-execute the ASP.NET page. It reports the same error, that the parameter is missing, but the sproc itself doesn't exist anymore. (I know 100% that I am editing/deleting from the right database.)

    How do I fix this or make it work again? I want it to use my new sproc instead of the old one. _o


  • sqlobject: No connection has been defined for this thread or process

    - by Claudiu
    I'm using sqlobject in Python. I connect to the database with:

        conn = connectionForURI(connStr)
        conn.makeConnection()

    This succeeds, and I can do queries on the connection:

        g_conn = conn.getConnection()
        cur = g_conn.cursor()
        cur.execute(query)
        res = cur.fetchall()

    This works as intended. However, I also defined some classes, e.g.:

        class User(SQLObject):
            class sqlmeta:
                table = "gui_user"
            username = StringCol(length=16, alternateID=True)
            password = StringCol(length=16)
            balance = FloatCol(default=0)

    When I try to do a query using the class:

        User.selectBy(username="foo")

    I get an exception:

        ...
        File "c:\python25\lib\site-packages\SQLObject-0.12.4-py2.5.egg\sqlobject\main.py", line 1371, in selectBy
            conn = connection or cls._connection
        File "c:\python25\lib\site-packages\SQLObject-0.12.4-py2.5.egg\sqlobject\dbconnection.py", line 837, in __get__
            return self.getConnection()
        File "c:\python25\lib\site-packages\SQLObject-0.12.4-py2.5.egg\sqlobject\dbconnection.py", line 850, in getConnection
            "No connection has been defined for this thread "
        AttributeError: No connection has been defined for this thread or process

    How do I define a connection for a thread? I just realized I can pass a connection keyword argument (giving it conn) to make it work, but how do I get it to work without doing that?


  • Get notified when UITableView has finished asking for data?

    - by kennethmac2000
    Hi everyone,

    Is there some way to find out when a UITableView has finished asking for data from its data source? None of the viewDidLoad/viewWillAppear/viewDidAppear methods of the associated view controller (UITableViewController) are of use here, as they all fire too early. None of them (entirely understandably) guarantee that queries to the data source have finished for the time being (eg, until the view is scrolled).

    One workaround I have found is to call reloadData in viewDidAppear, since, when reloadData returns, the table view is guaranteed to have finished querying the data source as much as it needs to for the time being. However, this seems rather nasty, as I assume it is causing the data source to be asked for the same information twice (once automatically, and once because of the reloadData call) when it is first loaded.

    The reason I want to do this at all is that I want to preserve the scroll position of the UITableView - but right down to the pixel level, not just to the nearest row. When restoring the scroll position (using scrollRectToVisible:animated:), I need the table view to already have sufficient data in it, or else the scrollRectToVisible:animated: method call does nothing (which is what happens if you place the call on its own in any of viewDidLoad, viewWillAppear or viewDidAppear).

    Thanks in advance for your assistance!


  • MYSQL variables - SET @var

    - by Lizard
    I am attempting to create a MySQL snippet that will analyse a table and remove duplicate entries (duplicates are based on two fields, not the entire record). I have the following code, which works when I hard-code the variables in the queries, but when I take them out and use them as variables I get MySQL errors. Below is the script:

        SET @tblname = 'mytable';
        SET @fieldname = 'myfield';
        SET @concat1 = 'checkfield1';
        SET @concat2 = 'checkfield2';

        ALTER TABLE @tblname ADD `tmpcheck` VARCHAR( 255 ) NOT NULL;

        UPDATE @tblname SET `tmpcheck` = CONCAT(@concat1,'-',@concat2);

        CREATE TEMPORARY TABLE `tmp_table` (
            `tmpfield` VARCHAR( 100 ) NOT NULL
        ) ENGINE = MYISAM ;

        INSERT INTO `tmp_table` (`tmpfield`)
        SELECT @fieldname FROM @tblname
        GROUP BY `tmpcheck` HAVING ( COUNT(`tmpcheck`) > 1 );

        DELETE FROM @tblname WHERE @fieldname IN (SELECT `tmpfield` FROM `tmp_table`);

        ALTER TABLE @tblname DROP `tmpcheck`;

    I am getting the following error:

        #1064 - You have an error in your SQL syntax; check the manual that corresponds to your
        MySQL server version for the right syntax to use near '@tblname ADD `tmpcheck`
        VARCHAR( 255 ) NOT NULL' at line 1

    Is this because I can't use a variable for a table name? What else could be wrong, or how would I get around this issue? Thanks in advance.
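    A sketch of one way around it, for illustration: MySQL won't substitute a user variable for an identifier, but a statement string built with CONCAT() can be prepared and executed, so each step of the script can be wrapped like this (reusing the question's @tblname):

        -- Hypothetical: build the ALTER TABLE text, then run it as a prepared statement.
        SET @tblname = 'mytable';
        SET @sql = CONCAT('ALTER TABLE `', @tblname, '` ADD `tmpcheck` VARCHAR(255) NOT NULL');
        PREPARE stmt FROM @sql;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;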


  • How can I integrate advanced computations into a database field?

    - by ciclistadan
    My biological research involves measuring a cellular structure as it changes length over the course of observation (capturing images every minute for several hours). As my data sets have become larger, I am trying to store them in an Access database, from which I would like to run various queries about the changes in size. I know that the SELECT statement can incorporate some mathematical operations, but I have been unable to incorporate many of my necessary calculations (probably due to my lack of knowledge). For example, one calculation involves determining the rate of change during specifically defined periods of growth. This calculation is entirely dependent on the raw data saved in the table, so I didn't think it would be appropriate to just calculate it in Excel prior to entry into the field.

    So my question is: what would be the most appropriate method of performing this calculation? Should I attempt to string together a huge SELECT calculation in my query, or is there a way to call another language (I know Perl) to populate the new query field? I'm not looking for someone to write the code, just for where it is appropriate to incorporate each step. Also, I am currently using Office Access but would be interested in any MySQL answers, as I may be moving to that platform at a later date. Thanks all!
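    Since a move to MySQL is mentioned, here is a hedged, MySQL-flavoured sketch of keeping the rate-of-change calculation in the query itself rather than in a stored field; the table and column names are invented for illustration:

        -- Hypothetical: length change per minute between an observation and the previous one.
        SELECT cur.structure_id,
               cur.measured_at,
               (cur.length_um - prev.length_um)
                   / TIMESTAMPDIFF(MINUTE, prev.measured_at, cur.measured_at) AS rate_per_minute
        FROM measurements AS cur
        JOIN measurements AS prev
          ON prev.structure_id = cur.structure_id
         AND prev.measured_at = (SELECT MAX(p.measured_at)
                                 FROM measurements AS p
                                 WHERE p.structure_id = cur.structure_id
                                   AND p.measured_at < cur.measured_at);

    The same self-join idea works in Access SQL, with DateDiff() in place of TIMESTAMPDIFF().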


  • I have data about deadlocks, but I can't understand why they occur (MS SQL/ASP.NET MVC)

    - by Alex
    I am receiving a lot of deadlocks in my big web application. In http://stackoverflow.com/questions/2941233/how-to-automatically-re-run-deadlocked-transaction-asp-net-mvc-sql-server I wanted to re-run deadlocked transactions, but I was told to get rid of the deadlocks instead; that's much better than trying to catch them. So I spent the whole day with SQL Profiler, setting up tracing, etc., and this is what I got.

    There's a Users table. I have a very heavily used page with the following query (it's not the only query, but it's the one that causes trouble):

        UPDATE Users
        SET views = views + 1
        WHERE ID IN (SELECT AuthorID FROM Articles WHERE ArticleID = @ArticleID)

    And then there's the following query on ALL pages, which is where I get the User from cookies:

        User = DB.Users.SingleOrDefault(u => u.Password == password && u.Name == username);

    Very often a deadlock occurs and this second LINQ to SQL query is chosen as the victim, so it's not run, and users of my site see an error screen. I have read a lot about deadlocks, and I don't understand why this is causing one. Obviously both of these queries run very often, at least once a second, maybe more (300-400 users online), so they can easily run at the same time, but why does that cause a deadlock? Please help. Thank you.
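    A hedged observation rather than a diagnosis: both statements touch Users through different access paths, and a common first step with this pattern on SQL Server is to give each one a narrow index so the UPDATE and the login lookup stop overlapping on the same rows and pages. The index names and exact column choices below are assumptions:

        -- Sketch: let the UPDATE find AuthorID without scanning Articles, and let the
        -- login lookup seek Users by Name/Password instead of scanning the table.
        CREATE INDEX IX_Articles_ArticleID_AuthorID ON Articles (ArticleID) INCLUDE (AuthorID);
        CREATE INDEX IX_Users_Name_Password ON Users (Name, Password);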


  • why is Active Record firing extra query when I use Includes method to fetch data

    - by riddhi_agrawal
    I have the following model structure:

        class Group < ActiveRecord::Base
          has_many :group_products, :dependent => :destroy
          has_many :products, :through => :group_products
        end

        class Product < ActiveRecord::Base
          has_many :group_products, :dependent => :destroy
          has_many :groups, :through => :group_products
        end

        class GroupProduct < ActiveRecord::Base
          belongs_to :group
          belongs_to :product
        end

    I wanted to minimize my database queries, so I decided to use includes. In the console I tried something like:

        groups = Group.includes(:products)

    My development logs show the following calls:

        Group Load (403.0ms)  SELECT `groups`.* FROM `groups`
        GroupProduct Load (60.0ms)  SELECT `group_products`.* FROM `group_products` WHERE (`group_products`.group_id IN (1,3,14,15,16,18,19,20,21,22,23,24,25,26,27,28,29,30,33,42,49,51))
        Product Load (22.0ms)  SELECT `products`.* FROM `products` WHERE (`products`.`id` IN (382,304,353,12,63,103,104,105,262,377,263,264,265,283,284,285,286,287,302,306,307,308,328,335,336,337,340,355,59,60,61,247,309,311,66,30,274,294,324,350,140,176,177,178,64,240,327,332,338,380,383,252,254,255,256,257,325,326))
        Product Load (10.0ms)  SELECT `products`.* FROM `products` WHERE (`products`.`id` = 377) LIMIT 1

    I can see that the initial three calls were necessary, but I don't get why the last database call is made:

        Product Load (10.0ms)  SELECT `products`.* FROM `products` WHERE (`products`.`id` = 377) LIMIT 1

    Any idea why this is happening? Thanks in advance. :)


  • SDL (And Others) Virtual Key Input

    - by David C
    Today I set up the input in my application for all the different keys. This works fine except for "virtual" keys, for example caret or ampersand: keys that normally need Shift to be reached. Using SDL, these virtual keys don't work, as in they do not register an event at all.

        if (event.type == SDL_KEYDOWN)
        {
            switch (event.key.keysym.sym)
            {
            case SDLK_CARET:
                Keys[KeyCodes::Caret] = KeyState::Down;
                break;
            case SDLK_UP:
                Keys[KeyCodes::Up] = KeyState::Down;
                break;
            default:
                break;
            }
        }

    I am absolutely sure my system works with physical keys like Up. The program queries a key state like so:

        if (Keys[KeyCode] == KeyState::Down)
        {
            lua_pushboolean(L, true);
        }
        else
        {
            lua_pushboolean(L, false);
        }

    KeyCode is passed in as an argument. So why are virtual keys, or keys that need Shift to get at, not working with SDL's KeyDown event type? Is more code needed to get to them? Or am I being stupid?


  • How to process a large post array in PHP where item names are all different and not known in advance

    - by Salnajjar
    I have a PHP page that queries a DB to populate a form for the user to modify the data and submit. The query returns a number of rows, each of which contains 3 items: ImageID, ImageName, ImageDescription. The PHP page titles each box in the form with a generic name and appends the ImageID to it, i.e.:

        ImageID_03
        ImageName_34
        ImageDescription_22

    As it's unknown which images are going to be retrieved from the DB, I can't know in advance what the names of the form entries will be, and the form deals with a large number of entries at the same time. My backend PHP form processor that gets the data just sees it as one big array:

        [imageid_2] => 2
        [imagename_2] => _MG_0214
        [imageid_10] => 10
        [imagename_10] => _MG_0419
        [imageid_39] => 39
        [imagename_39] => _MG_0420
        [imageid_22] => 22
        [imagename_22] => Curly Fern
        [imagedescription_2] => Wibble
        [imagedescription_10] => Wobble
        [imagedescription_39] => Fred
        [imagedescription_22] => Sally

    I've tried to do an array walk on it to split it into 3 arrays, but am stuck:

        // define empty arrays
        $imageidarray = array();
        $imagenamearray = array();
        $imagedescriptionarray = array();

        // our function to call when we walk through the posted items array
        function assignvars($entry, $key) {
            if (preg_match("/imageid/i", $key)) {
                array_push($imageidarray, $entry);
            } elseif (preg_match("/imagename/i", $key)) {
                // echo " ImageName: $entry";
            } elseif (preg_match("/imagedescription/i", $key)) {
                // echo " ImageDescription: $entry";
            }
        }

        array_walk($_POST, 'assignvars');

    This fails with the error:

        array_push(): First argument should be an array in...

    Am I approaching this wrong?


  • MYSQL how to sum rows with same key, then delete the duplicate rows

    - by Bhante-S
    What I have:

        key   data
        1       22
        1        5
        2        6
        3        1
        3       -3

    What I want:

        key   data
        1       27
        2        6
        3       -2

    I don't mind doing this with two or more queries, especially if they are simple; that makes for easier maintenance. Also, the tables are fairly small (<2,000 records). The 'key' field is indexed and allows duplicates. Muchas gracias.
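    One hedged way to do it in two steps, assuming the table is called t (MySQL syntax, with `key` backquoted since KEY is a reserved word):

        -- Sketch: build the summed rows off to the side, then swap them in.
        CREATE TEMPORARY TABLE t_summed AS
            SELECT `key`, SUM(data) AS data
            FROM t
            GROUP BY `key`;

        DELETE FROM t;

        INSERT INTO t (`key`, data)
            SELECT `key`, data FROM t_summed;

        DROP TEMPORARY TABLE t_summed;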


  • Looking for an email/report templating engine with database backend - for end-users ...

    - by RizwanK
    We have a number of customers that we have to send monthly invoices to. Right now, I'm managing a codebase that runs SQL queries against our customer database and billing database, places that data into emails, and sends them. I grow weary of maintaining this every time we want to include a new promotion or change our customer service phone numbers. So I'm looking for a replacement that moves more of this into the hands of those requesting the changes. In my ideal world, I need:

    - A WYSIWYG (man, does anyone even say that anymore?) email editor that generates templates based upon the output of a database query.
    - The ability to drag and drop various fields from the database query into the email template.
    - Display of sample email results alongside the database query.
    - A web application, preferably not requiring IIS.
    - As little code as possible for the end user, but basic functionality allowed (i.e. arrays/for loops).
    - Either its own email delivery engine, or output written in a way that I can easily write a Python script to deliver the email.
    - Support for generic database connectors (I need MSSQL and MySQL).
    - F/OSS.

    So, can anyone suggest a project like this, or some tools that'd be useful for rolling my own? (My current alternative idea is using something like ERB or Tenjin and having them write the code, but not having live preview for the editor would suck...)


  • The best way to return related data in a SQL statement

    - by Darvis Lombardo
    I have a question on the best method to get back to a piece of data that is in a related table on the other side of a many-to-many relationship table. My first method uses joins to get back to the data, but because there are multiple matching rows in the relationship table, I had to use a TOP 1 to get a single row result. My second method uses a subquery to get the data but this just doesn't feel right. So, my question is, which is the preferred method, or is there a better method? The script needed to create the test tables, insert data, and run the two queries is below. Thanks for your advice! Darvis

        ----------------------------------------------------------------------------
        -- Create Tables
        ----------------------------------------------------------------------------
        DECLARE @TableA TABLE (
            [A_ID] [int] IDENTITY(1,1) NOT NULL,
            [Description] [varchar](50) NULL)

        DECLARE @TableB TABLE (
            [B_ID] [int] IDENTITY(1,1) NOT NULL,
            [A_ID] [int] NOT NULL,
            [Description] [varchar](50) NOT NULL)

        DECLARE @TableC TABLE (
            [C_ID] [int] IDENTITY(1,1) NOT NULL,
            [Description] [varchar](50) NOT NULL)

        DECLARE @TableB_C TABLE (
            [B_ID] [int] NOT NULL,
            [C_ID] [int] NOT NULL)

        ----------------------------------------------------------------------------
        -- Insert Test Data
        ----------------------------------------------------------------------------
        INSERT INTO @TableA VALUES('A-One')
        INSERT INTO @TableA VALUES('A-Two')
        INSERT INTO @TableA VALUES('A-Three')

        INSERT INTO @TableB (A_ID, Description) VALUES(1,'B-One')
        INSERT INTO @TableB (A_ID, Description) VALUES(1,'B-Two')
        INSERT INTO @TableB (A_ID, Description) VALUES(1,'B-Three')
        INSERT INTO @TableB (A_ID, Description) VALUES(2,'B-Four')
        INSERT INTO @TableB (A_ID, Description) VALUES(2,'B-Five')
        INSERT INTO @TableB (A_ID, Description) VALUES(3,'B-Six')

        INSERT INTO @TableC VALUES('C-One')
        INSERT INTO @TableC VALUES('C-Two')
        INSERT INTO @TableC VALUES('C-Three')

        INSERT INTO @TableB_C (B_ID, C_ID) VALUES(1, 1)
        INSERT INTO @TableB_C (B_ID, C_ID) VALUES(2, 1)
        INSERT INTO @TableB_C (B_ID, C_ID) VALUES(3, 1)

        ----------------------------------------------------------------------------
        -- Get result - method 1
        ----------------------------------------------------------------------------
        SELECT TOP 1 C.*, A.Description
        FROM @TableC C
        JOIN @TableB_C BC ON BC.C_ID = C.C_ID
        JOIN @TableB B ON B.B_ID = BC.B_ID
        JOIN @TableA A ON B.A_ID = A.A_ID
        WHERE C.C_ID = 1

        ----------------------------------------------------------------------------
        -- Get result - method 2
        ----------------------------------------------------------------------------
        SELECT C.*,
               (SELECT A.Description
                FROM @TableA A
                WHERE EXISTS (SELECT *
                              FROM @TableB_C BC
                              JOIN @TableB B ON B.B_ID = BC.B_ID
                              WHERE BC.C_ID = C.C_ID
                                AND B.A_ID = A.A_ID))
        FROM @TableC C
        WHERE C.C_ID = 1
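    For comparison, a third shape that sometimes fits this situation, offered only as a sketch in the same T-SQL style as the script above: OUTER APPLY keeps method 1's single-row behaviour without pushing TOP 1 onto the whole outer query.

        ----------------------------------------------------------------------------
        -- Get result - method 3 (sketch)
        ----------------------------------------------------------------------------
        SELECT C.*, X.Description
        FROM @TableC C
        OUTER APPLY (SELECT TOP 1 A.Description
                     FROM @TableA A
                     JOIN @TableB B ON B.A_ID = A.A_ID
                     JOIN @TableB_C BC ON BC.B_ID = B.B_ID
                     WHERE BC.C_ID = C.C_ID) X
        WHERE C.C_ID = 1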


  • Why are some classes created on the fly and others aren't in CakePHP 1.2.7?

    - by JoseMarmolejos
    I have the following model classes:

        class User extends AppModel {
            var $name = 'User';
            var $belongsTo = array('SellerType' => array('className' => 'SellerType'),
                                   'State' => array('className' => 'State'),
                                   'Country' => array('className' => 'Country'),
                                   'AdvertMethod' => array('className' => 'AdvertMethod'),
                                   'UserType' => array('className' => 'UserType'));
            var $hasMany = array('UserQuery' => array('className' => 'UserQuery'));
        }

    And:

        class UserQuery extends AppModel {
            var $name = 'UserQuery';
            var $belongsTo = array('User', 'ResidenceType', 'HomeType');
        }

    Everything works fine with the User class and all its associations, but the UserQuery class is being completely ignored by the ORM (the table name is user_queries, and the generated queries do refer to it as UserQuery). Another weird thing is that if I delete the code inside the User class I get an error, but if I do the same for the UserQuery class I get no errors. So my question is: why does CakePHP generate a class on the fly for UserQuery and ignore my class, and why doesn't it generate a class on the fly for User as well?


  • Rails - Searching multiple textboxes and fields

    - by ChrisWesAllen
    I have a model of events that holds various information such as date, location, and a description of what's going on. I would like my users to be able to search through the events list via a set of different textboxes, but I'm having a hard time getting the syntax just right. In my view I have:

        <% form_tag users_path, :method => 'get' do %>
          (<%= text_field_tag :search_keyword, params[:search_keyword] %>) +
          (<%= text_field_tag :search_zip, params[:search_zip] %>)
          <%= submit_tag "Find Events!", :name => nil %>
        <% end %>

    and in the controller I'm trying to query through the results:

        if params[:search_keyword]
          @events = Event.find(:all, :conditions => [' name LIKE ? ', "%#{params[:search_keyword]}%"])
        elsif params[:search_zip]
          @events = Event.find(:all, :origin => params[:search_zip], :within => 50)
        else
          @events = Event.find(:all)
        end

    How do I code it so that it performs each search only if its textbox isn't empty? Also, if both textboxes are filled in, @events should be the product of BOTH queries. I have no idea if this would work =( ... ??? @event = @event + event.find..... ???

