Search Results

Search found 25113 results on 1005 pages for 'grouped table'.


  • SQL Scenario of allocating ids to user

    - by Enjoy coding
    Hi, I have a SQL scenario I have been trying to improve. There is a table 'Returns' which holds the IDs of goods returned to a shop for an item. Its structure is as below:

      Returns
      -------------------------
      Return ID | Shop  | Item
      -------------------------
      1         | Shop1 | Item1
      2         | Shop1 | Item1
      3         | Shop1 | Item1
      4         | Shop1 | Item1
      5         | Shop1 | Item1

    There is one more table, Supplier, with Shop, Supplier and Item as shown below:

      Supplier
      ---------------------------------
      Supplier | Shop  | Item  | Volume
      ---------------------------------
      supp1    | Shop1 | Item1 | 20%
      supp2    | Shop1 | Item1 | 80%

    As you can see, supp1 supplies 20% of the total Item1 volume and supp2 supplies 80% of Item1 to Shop1, and there were 5 returns against the same Item1 for the same Shop1. Now I need to allocate any four return IDs to supp1 and the remaining one return ID to supp2. This allocation of numbers is based on the ratio of the suppliers' volume percentages, so it varies with the ratio of supplied volume. I have tried a RANK-based approach using temp tables: temp table 1 has Shop, Return ID, Item, the total count of return IDs, and the rank of the return ID; temp table 2 has Shop, Supplier, Item, the supplier's proportion, and the rank of that proportion. I am now stuck on allocating the top-ranked return IDs to the top-ranked supplier as illustrated above. Since SQL doesn't have loops, how can I achieve this? I have been trying several ways of doing it. Please advise. My environment is Teradata (ANSI SQL is enough). Thanks in advance.
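
    One loop-free, rank-and-join sketch of the allocation rule described above (ANSI SQL with window functions, untested on Teradata; the ReturnID and numeric Volume column names are assumptions, and rounding edge cases are ignored): number the returns per Shop/Item, turn each supplier's volume into a cumulative range, and join each return to the supplier whose range covers its position, so each supplier ends up with a share of returns proportional to its volume.

      -- Sketch only: ReturnID / numeric Volume (20, 80) are assumed column shapes.
      WITH ranked_returns AS (
          SELECT r.ReturnID, r.Shop, r.Item,
                 ROW_NUMBER() OVER (PARTITION BY r.Shop, r.Item
                                    ORDER BY r.ReturnID)         AS rn,
                 COUNT(*)     OVER (PARTITION BY r.Shop, r.Item) AS total_returns
          FROM Returns r
      ),
      supplier_ranges AS (
          SELECT s.Supplier, s.Shop, s.Item,
                 SUM(s.Volume) OVER (PARTITION BY s.Shop, s.Item
                                     ORDER BY s.Volume, s.Supplier
                                     ROWS UNBOUNDED PRECEDING)            AS cum_volume,
                 SUM(s.Volume) OVER (PARTITION BY s.Shop, s.Item
                                     ORDER BY s.Volume, s.Supplier
                                     ROWS UNBOUNDED PRECEDING) - s.Volume AS prev_cum_volume
          FROM Supplier s
      )
      -- A return at position rn goes to the supplier whose cumulative
      -- volume range covers rn / total_returns.
      SELECT rr.ReturnID, sr.Supplier
      FROM ranked_returns rr
      JOIN supplier_ranges sr
        ON  rr.Shop = sr.Shop
        AND rr.Item = sr.Item
        AND rr.rn >  sr.prev_cum_volume * rr.total_returns / 100.0
        AND rr.rn <= sr.cum_volume      * rr.total_returns / 100.0;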

    Read the article

  • Creating the same event for 2 different elements in jQuery (not for each one, for both!)

    - by danfromisrael
    Hey guys, I'd like to create a toggle event for 2 different TDs in my table row. The event should show/hide the next table row.

      <table>
        <tr>
          <td>1</td>
          <td>2</td>
          <td>3</td>
          <td>4</td>
          <td>5</td>
          <td class="clickable1">6</td>
          <td class="clickable2">7</td>
        </tr>
        <tr><td>this row should be toggled between show/hide once one of the clickable TDs is clicked</td></tr>
      </table>

    Here's the code I tried to apply, but it attached the event to each of the classes separately:

      $('.clickable1,.clickable2').toggle(function() {
          $(this).parent().next('tr').show();
      }, function() {
          $(this).parent().next('tr').hide();
      });

    One more thing: I'm applying a CSS :hover pseudo-class to each TR. How can I make the two TRs highlight together (a hover effect on both of them)? Thanks in advance! :-Dan

    Read the article

  • HABTM selection seemingly ignores joinTable

    - by TheCapn
    I'm attempting to set up a HABTM relationship between a Users table and a Groups table. The problem is that when I issue this call:

      $this->User->Group->find('list');

    the query that is issued is:

      SELECT [Group].[id] AS [Group__id], [Group].[name] AS [Group__name]
      FROM [groups] AS [Group]
      WHERE 1 = 1

    I can only assume at this point that I have defined my relationship wrong, as I would expect the behavior to use the groups_users table that is defined on the database as per convention. My relationships:

      class User extends AppModel {
          var $name = 'User';
          //...snip...
          var $hasAndBelongsToMany = array(
              'Group' => array(
                  'className' => 'Group',
                  'foreignKey' => 'user_id',
                  'associationForeignKey' => 'group_id',
                  'joinTable' => 'groups_users',
                  'unique' => true,
              )
          );
          //...snip...
      }

      class Group extends AppModel {
          var $name = 'Group';
          var $hasAndBelongsToMany = array(
              'User' => array(
                  'className' => 'User',
                  'foreignKey' => 'group_id',
                  'associationForeignKey' => 'user_id',
                  'joinTable' => 'groups_users',
                  'unique' => true,
          ));
      }

    Is my understanding of HABTM wrong? How would I implement this many-to-many relationship so that I can use CakePHP to query the groups_users table and get back a list of the groups the currently authenticated user is associated with?

    Read the article

  • Is there a set-based solution for this problem?

    - by NYSystemsAnalyst
    We have a table set up as follows:

      |ID|EmployeeID|Date     |Category       |Hours|
      |1 |1         |1/1/2010 |Vacation Earned|2.0  |
      |2 |2         |2/12/2010|Vacation Earned|3.0  |
      |3 |1         |2/4/2010 |Vacation Used  |1.0  |
      |4 |2         |5/18/2010|Vacation Earned|2.0  |
      |5 |2         |7/23/2010|Vacation Used  |4.0  |

    The business rules are: the vacation balance is calculated as vacation earned minus vacation used, and vacation used is always applied against the oldest vacation earned amount first. We need to return the Vacation Earned rows that have not been offset by vacation used. If vacation used has only offset part of a Vacation Earned record, we need to return that record showing the difference. For example, using the above table, the result set would look like:

      |ID|EmployeeID|Date     |Category       |Hours|
      |1 |1         |1/1/2010 |Vacation Earned|1.0  |
      |4 |2         |5/18/2010|Vacation Earned|1.0  |

    Note that record 2 was eliminated because it was completely offset by used time, while records 1 and 4 were only partially used, so they were calculated and returned as such. The only way we have thought of to do this is to get all of the Vacation Earned records into a temporary table, then get the total vacation used and loop through the temporary table, deleting the oldest record and subtracting its value from the total vacation used until the total vacation used is zero. We could clean it up for the case where the remaining vacation used covers only part of the oldest Vacation Earned record. This would leave us with just the outstanding Vacation Earned records. It works, but it is very inefficient and performs poorly, and performance will only degrade as more records are added. Are there any suggestions for a better solution, preferably set-based? If not, we'll just have to go with this.
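
    For what it's worth, a set-based sketch of the running-total idea (it assumes a dialect with windowed SUM, such as SQL Server 2012 or later, and a hypothetical VacationLedger table shaped like the sample above): compare each employee's cumulative earned hours against the total used hours and keep only the un-offset remainder of each earned row.

      -- Sketch only: VacationLedger is a placeholder name for the table above.
      WITH earned AS (
          SELECT ID, EmployeeID, [Date], Category, Hours,
                 SUM(Hours) OVER (PARTITION BY EmployeeID
                                  ORDER BY [Date], ID
                                  ROWS UNBOUNDED PRECEDING) AS cum_earned
          FROM VacationLedger
          WHERE Category = 'Vacation Earned'
      ),
      used AS (
          SELECT EmployeeID, SUM(Hours) AS total_used
          FROM VacationLedger
          WHERE Category = 'Vacation Used'
          GROUP BY EmployeeID
      )
      SELECT e.ID, e.EmployeeID, e.[Date], e.Category,
             CASE WHEN e.cum_earned - COALESCE(u.total_used, 0) >= e.Hours
                  THEN e.Hours                                   -- fully un-offset row
                  ELSE e.cum_earned - COALESCE(u.total_used, 0)  -- partially offset row
             END AS Hours
      FROM earned e
      LEFT JOIN used u ON u.EmployeeID = e.EmployeeID
      WHERE e.cum_earned > COALESCE(u.total_used, 0);

    Against the sample rows this returns ID 1 with 1.0 hours and ID 4 with 1.0 hours, matching the expected result set.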

    Read the article

  • Remove a row on click of a specific td of that row with jQuery

    - by I Like PHP
    I have a table:

      <table class="oldService">
        <thead>
          <th>name</th>
          <th>age</th>
          <th>action</th>
        </thead>
        <tbody>
          <?php foreach ($array as $k => $v) { ?>
          <tr>
            <td><?php echo $v['name']; ?></td>
            <td><?php echo $v['age']; ?></td>
            <td id="<?php echo $v['id']; ?>" class="delme">X</td>
          </tr>
          <?php } ?>
        </tbody>
      </table>

    Now I want to delete any row by clicking the X of that row, except the first and last rows, and I also need a confirmation before deletion. I used the jQuery below:

      <script type="text/javascript">
      jQuery(document).ready(function() {
          jQuery('table.oldService>tbody tr').not(':first').not(':last').click(function() {
              if (confirm('want to delete!')) {
                  jQuery(this).addClass('del').fadeTo(400, 0, function() {
                      jQuery(this).remove();
                  });
                  jQuery.get('deleteService.php', {id: jQuery(this).attr('id')});
              } else {
                  return false;
              }
          });
      });
      </script>

    This works perfectly, but it fires on a click anywhere in the row (any td). I want the event to occur only when the user clicks the X (the third td). Please suggest how to modify this jQuery so that the event fires only on a click of the X.

    Read the article

  • Database PK-FK design for future-effective-date entries?

    - by Scott Balmos
    Ultimately I'm going to convert this into a Hibernate/JPA design, but I wanted to start purely from a database perspective. We have various tables containing data that is future-effective-dated. Take an employee table with the following pseudo-definition:

      employee
        id INT AUTO_INCREMENT
        ...data fields...
        effectiveFrom DATE
        effectiveTo DATE

      employee_reviews
        id INT AUTO_INCREMENT
        employee_id INT FK employee.id

    Very simplistic. But let's say Employee A has id = 1, effectiveFrom = 1/1/2011, effectiveTo = 1/1/2099. That employee is going to be changing jobs in the future, which would in theory create a new row, id = 2, with effectiveFrom = 7/1/2011 and effectiveTo = 1/1/2099, and id = 1's effectiveTo updated to 6/30/2011. But then my program would have to go through every table that has a FK relationship to employee each night and update those FKs to reference the newly effective employee entry. I have seen various postings in both pure SQL and Hibernate forums suggesting I should have a separate employee_versions table, which is where all the effective-dated data would be stored, resulting in the updated pseudo-definition below:

      employee
        id INT AUTO_INCREMENT

      employee_versions
        id INT AUTO_INCREMENT
        employee_id INT FK employee.id
        ...data fields...
        effectiveFrom DATE
        effectiveTo DATE

      employee_reviews
        id INT AUTO_INCREMENT
        employee_id INT FK employee.id

    Then to get any actual data, one would have to select from employee_versions with the proper employee_id and date range. It feels rather unnatural to have this secondary "versions" table for each versioned entity. Does anyone have any opinions or suggestions from your own prior work? Like I said, I'm taking this purely from a general SQL design standpoint first before layering Hibernate on top. Thanks!
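
    For concreteness, a rough MySQL-flavored DDL sketch of that employee_versions layout (names and types illustrative only), plus the kind of query needed to pull the row in effect on a given date:

      CREATE TABLE employee (
          id INT AUTO_INCREMENT PRIMARY KEY
      );

      CREATE TABLE employee_versions (
          id            INT AUTO_INCREMENT PRIMARY KEY,
          employee_id   INT NOT NULL,
          -- ...data fields...
          effectiveFrom DATE NOT NULL,
          effectiveTo   DATE NOT NULL,
          FOREIGN KEY (employee_id) REFERENCES employee (id)
      );

      CREATE TABLE employee_reviews (
          id          INT AUTO_INCREMENT PRIMARY KEY,
          employee_id INT NOT NULL,
          FOREIGN KEY (employee_id) REFERENCES employee (id)
      );

      -- The version of employee 1 in effect today:
      SELECT *
      FROM employee_versions
      WHERE employee_id = 1
        AND CURDATE() BETWEEN effectiveFrom AND effectiveTo;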

    Read the article

  • Static selection and Ruby on Rails objects

    - by Dave
    Hi all - I have a simple problem, but am having trouble wrapping my head around it. I have a video object that should have one or more genres. The list of genres should be prepopulated, and the user should just select one or more using autocomplete or some such. Here is the question: is it worth creating a table of genres for the static selection, or should it just be included in the presentation layer? If there is a static table, how do we name it correctly? I envision something like this:

      class Video < ActiveRecord::Base
        has_many :genres
        ...
      end

      class Genre < ...
        belongs_to :video
        ...
      end

    But then we get a table called genre that basically maps all the selected genres to their parent videos. There would also need to be some static table to reference the static genres. Is this the best way to do it? Sorry if this was rambling, a bit of a stream of consciousness. Thanks!
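
    One conventional relational answer to the "static table" part, sketched as plain SQL rather than migrations (the alphabetized, pluralized join-table name follows the Rails HABTM convention; all names here are illustrative, not the poster's schema):

      CREATE TABLE videos (
          id    INT PRIMARY KEY,
          title VARCHAR(255) NOT NULL
      );

      CREATE TABLE genres (
          id   INT PRIMARY KEY,
          name VARCHAR(64) NOT NULL UNIQUE   -- the prepopulated, static list
      );

      CREATE TABLE genres_videos (           -- join table: which video has which genres
          genre_id INT NOT NULL,
          video_id INT NOT NULL,
          PRIMARY KEY (genre_id, video_id),
          FOREIGN KEY (genre_id) REFERENCES genres (id),
          FOREIGN KEY (video_id) REFERENCES videos (id)
      );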

    Read the article

  • SQL Server insert performance with and without primary key

    - by Eric
    Summary: I have a table populated via the following:

      insert into the_table (...)
      select ... from some_other_table

    Running the above query with no primary key on the_table is ~15x faster than running it with a primary key, and I don't understand why. The details: I think this is best explained through code examples. I have a table:

      create table the_table
      (
          a int not null,
          b smallint not null,
          c tinyint not null
      );

    If I add a primary key, this insert query is terribly slow:

      alter table the_table
      add constraint PK_the_table primary key(a, b);

      -- Inserting ~880,000 rows
      insert into the_table (a,b,c)
      select a,b,c from some_view;

    Without the primary key, the same insert query is about 15x faster. However, after populating the_table without a primary key, I can add the primary key constraint and that only takes a few seconds. This one really makes no sense to me. More info: the estimated execution plan shows 0% of total query time spent on the clustered index insert; SQL Server 2008 R2 Developer edition, 10.50.1600. Any ideas?
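
    Not a diagnosis, but a sketch of the two workaround shapes usually tried in this situation: load the heap first and add the clustered primary key afterwards, or keep the constraint and feed the rows in key order so the clustered index is written sequentially.

      -- Option A: bulk-load without the constraint, then add it
      INSERT INTO the_table (a, b, c)
      SELECT a, b, c FROM some_view;

      ALTER TABLE the_table
      ADD CONSTRAINT PK_the_table PRIMARY KEY (a, b);

      -- Option B: keep the constraint, but insert in clustered-key order
      INSERT INTO the_table (a, b, c)
      SELECT a, b, c
      FROM some_view
      ORDER BY a, b;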

    Read the article

  • Reporting Services Sum of Inner Group in Outer Group

    - by Spoonybard
    I have a report in Reporting Services 2008 using ASP.NET 3.5 and SQL Server 2008. The report has two groupings and a detail row, in this format:

      Outer Group
        Inner Group
          Detail Row

    The detail row represents an item on a receipt, and a receipt can have multiple items. Each receipt was paid with a certain payment method. So the outer group is grouped by payment type, the inner group is grouped by the receipt's ID, and the detail row is each item for the given receipt. My raw data result set has two important columns: Amount Received and Amount Applied. Amount Received is how much money in total was collected for all the items on the receipt. Amount Applied is how much money each item got from the total Amount Received. Sample result set:

      ReceiptID | Item     | ItemID | AmountReceived | AmountApplied | Payment Method
      ------------------------------------------------------------------------------
      1         | Book     | 1      | $200.00        | $40.00        | Cash
      1         | CD       | 2      | $200.00        | $20.00        | Cash
      1         | Software | 3      | $200.00        | $100.00       | Cash
      1         | Backpack | 4      | $200.00        | $40.00        | Cash

    The inner group displays the AmountReceived correctly as $200. However, the outer group displays the AmountReceived as $800, because I believe it is summing over each detail row, which in this case is a count of 4 items. What I want is for the outer group to show an Amount Received of $200. I tried restricting the scope in my SUM function to the inner group, but I get the error "The scope parameter must be set to a string constant that is equal to either the name of a containing group, the name of a containing data region, or the name of a dataset." Does anyone have any suggestions on how to solve this issue? Thanks.
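
    One dataset-side workaround sketch (not the SSRS-expression answer): carry the per-receipt amount only once, e.g. on the first item row of each receipt, so a plain SUM is correct at both grouping levels. Column names approximate the sample result set; the source table name ReceiptItems and ROW_NUMBER support are assumptions.

      SELECT ReceiptID, Item, ItemID, AmountApplied, PaymentMethod,
             CASE WHEN ROW_NUMBER() OVER (PARTITION BY ReceiptID
                                          ORDER BY ItemID) = 1
                  THEN AmountReceived
                  ELSE 0
             END AS AmountReceivedOnce   -- sums to $200 per receipt, not $800
      FROM ReceiptItems;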

    Read the article

  • Add to existing db values, rather than overwrite - PDO

    - by sam
    I'm trying to add to an existing decimal value in a table, for which I'm using the SQL below:

      UPDATE Funds SET Funds = Funds + :funds WHERE id = :id

    I'm using a PDO class to handle my db calls, with the method below being used to update the db, but I couldn't figure out how to amend it to output the above query. Any ideas?

      public function add_to_values($table, $info, $where, $bind="") {
          $fields = $this->filter($table, $info);
          $fieldSize = sizeof($fields);
          $sql = "UPDATE " . $table . " SET ";
          for ($f = 0; $f < $fieldSize; ++$f) {
              if ($f > 0)
                  $sql .= ", ";
              $sql .= $fields[$f] . " = :update_" . $fields[$f];
          }
          $sql .= " WHERE " . $where . ";";
          $bind = $this->cleanup($bind);
          foreach ($fields as $field)
              $bind[":update_$field"] = $info[$field];
          return $this->run($sql, $bind);
      }

    Read the article

  • Need help with transferring data between MySQL db's using PHP

    - by JM4
    In one of the sites I manage, the client has decided to take on ACH/bank account administration where it was previously outsourced. As a result, the information submitted in our online form, which used to simply be stored in a single database for processing, now must sit in 'limbo' until the funds used for payment have been verified. My original plan is as follows: at the end of an enrollment, all form data is collected and stored in a single MySQL database. Our internal administrator receives an email notification reminding him that enrollments have taken place. He processes the ACH information collected and waits the 3-4 business days needed for payment to clear. Once the payment information has been returned as good (I haven't considered what I will do with the 'bad' yet), the administrator can log into a secure portal which allows him to click a button to 'process' the full information once compared and verified. The process is simplified as:

      1. Enrollment complete: data stored in DB 'A'
      2. Funds verified and link clicked: data from 'A' is copied to DB 'B' and deleted from 'A'

    I have run similar processes with CSV output before and simply used:

      //transfers old data to archive
      $transfer = mysql_query('INSERT INTO '.$archive.' SELECT * FROM '.$table) or die(mysql_error());
      //empties existing table
      $query = mysql_query('TRUNCATE TABLE '.$table) or die(mysql_error());

    but in those cases, ALL data returned was copied and deleted. I only want to copy and delete a single record. Any idea how to accomplish this?
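
    A sketch of the single-record version of that archive-and-remove step in plain SQL (pending_enrollments, processed_enrollments and the id value are placeholders; both tables need the same column layout for SELECT *, the id would be bound or escaped in PHP rather than hard-coded, and ideally both statements run inside one transaction on an engine that supports it):

      INSERT INTO processed_enrollments
      SELECT * FROM pending_enrollments WHERE id = 123;

      DELETE FROM pending_enrollments WHERE id = 123;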

    Read the article

  • Are reads and (transactional) writes faster for entities of the same group than otherwise?

    - by indiehacker
    What advantage is there to designing child-parent relationships, which allow us to do writes in transactions, when there is never a real concern about consistency, contention and those sorts of more complex issues? Does it make writes and reads faster? Consider my situation: there are many .png images that are referenced to one mosaic layer, and these .png images are written just once by a single user. The user can design many mosaic layers, and her mosaic layers and referenced image entities are never changed or updated; they are just deleted some time in the future. Other users can come to the web project site and interactively view the mosaic layer in different layouts/configurations of the images as they play (query) with different criteria, so reads should be very fast. There is no real worry about contention, or about users conflicting with one another when writing new image entities, and because of that I am assuming there is no "requirement" for the .png image entities to be grouped with their mosaic layer in a child-parent relationship. However, since the documentation says entities in the same group are stored close to one another, if the many image entities were grouped as children of a single mosaic-layer parent, would the writing (in a transaction) and reading happen much faster?

    Read the article

  • getting BR-separated text via DOM in JS/JQuery?

    - by Hellion
    Hi all, I am writing a Greasemonkey script that parses a page with the following general structure:

      <table>
        <tr><td><center>
          <b><a href="show.php?who=IDNumber">(Account Name)</a></b> (#IDNumber)
          <br>
          (Rank)
          <br>
          (Title)
          <p>
          <b>Statistics:</b>
          <br>
          <table>
            <tr><td>blah blah etc.
          </td></tr></table></center></table>

    I'm specifically trying to grab the (Title) part out of that. As you can see, however, it's set off only by a <BR> tag, has no ID of its own, and is just part of the text of a <CENTER> tag, and that tag has a whole raft of other text associated with it. Right now what I'm doing to get it is taking the innerHTML of the CENTER tag and matching it against the regex /<br>([A-Za-z ]*)<p><b>Statistics/. That works okay for me, but it feels like there's got to be a better way to pick that particular text out of there. So, is there a better way? Or should I complain to the site programmer that he needs to make that text more accessible? :-)

    Read the article

  • Stored Procedure, 'incorrect syntax error'

    - by jacksonSD
    I'm attempting to figure out stored procedures, and I'm getting this error: "Msg 156, Level 15, State 1, Line 5: Incorrect syntax near the keyword 'Procedure'." The error seems to be on the IF, but I can drop other existing tables from stored procedures in exactly the same way, so I'm not clear on why this isn't working. Can anyone shed some light?

      Begin
      Set nocount on
      Begin Try
          Create Procedure uspRecycle
          as
          if OBJECT_ID('Recycle') is not null
              Drop Table Recycle

          create table Recycle
          (RecycleID integer constraint PK_integer primary key,
           RecycleType nchar(10) not null,
           RecycleDescription nvarchar(100) null)

          insert into Recycle (RecycleID,RecycleType,RecycleDescription)
          values ('1','Compost','Product is compostable, instructions included in packaging')
          insert into Recycle (RecycleID,RecycleType,RecycleDescription)
          values ('2','Return','Product is returnable to company for 100% reuse')
          insert into Recycle (RecycleID,RecycleType,RecycleDescription)
          values ('3','Scrap','Product is returnable and will be reclaimed and reprocessed')
          insert into Recycle (RecycleID,RecycleType,RecycleDescription)
          values ('4','None','Product is not recycleable')
      End Try
      Begin Catch
          DECLARE @ErrMsg nvarchar(4000);
          SELECT @ErrMsg = ERROR_MESSAGE();
          Throw 50001, @ErrMsg, 1;
      End Catch

      -- checking to see if table exists and is loaded:
      If (Select count(*) from Recycle) > 1
      begin
          Print 'Recycle table created and loaded ';
          Print getdate()
      End
      set nocount off
      End
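
    For context: in T-SQL, CREATE PROCEDURE must be the first statement in its batch, so procedure definitions are normally isolated between GO separators. A stripped-down sketch of that shape (not the full body above):

      IF OBJECT_ID('uspRecycle', 'P') IS NOT NULL
          DROP PROCEDURE uspRecycle;
      GO
      CREATE PROCEDURE uspRecycle
      AS
      BEGIN
          SET NOCOUNT ON;
          -- ...drop/create/load the Recycle table here...
      END
      GO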

    Read the article

  • How do I get the earliest DateTime of a set, where there are a few conditions

    - by radbyx
    Create script for Product:

      SET ANSI_NULLS ON
      GO
      SET QUOTED_IDENTIFIER ON
      GO
      SET ANSI_PADDING ON
      GO
      CREATE TABLE [dbo].[Product](
          [ProductID] [int] IDENTITY(1,1) NOT NULL,
          [ProductName] [varchar](50) NOT NULL,
          CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED ([ProductID] ASC)
          WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
      ) ON [PRIMARY]
      GO
      SET ANSI_PADDING OFF
      GO

    Create script for StateLog:

      SET ANSI_NULLS ON
      GO
      SET QUOTED_IDENTIFIER ON
      GO
      CREATE TABLE [dbo].[StateLog](
          [StateLogID] [int] IDENTITY(1,1) NOT NULL,
          [ProductID] [int] NOT NULL,
          [Status] [bit] NOT NULL,
          [TimeStamp] [datetime] NOT NULL,
          CONSTRAINT [PK_Uptime] PRIMARY KEY CLUSTERED ([StateLogID] ASC)
          WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
      ) ON [PRIMARY]
      GO
      ALTER TABLE [dbo].[StateLog] WITH CHECK
          ADD CONSTRAINT [FK_Uptime_Products] FOREIGN KEY([ProductID])
          REFERENCES [dbo].[Product] ([ProductID])
      GO
      ALTER TABLE [dbo].[StateLog] CHECK CONSTRAINT [FK_Uptime_Products]
      GO

    I have this, and it's not enough:

      select top 5 [ProductName], [TimeStamp]
      from [Product]
      inner join StateLog on [Product].ProductID = [StateLog].ProductID
      where [Status] = 0
      order by TimeStamp desc;

    My query gives the 5 latest TimeStamps where Status is 0 (false). But I need one thing more: where there is a set of latest TimeStamps for a product where Status is 0, I only want the earliest of them (not the latest). Example: let's say for product X I have:

      TimeStamp1 (status = 0)
      TimeStamp2 (status = 1)
      TimeStamp3 (status = 0)
      TimeStamp4 (status = 0)
      TimeStamp5 (status = 1)
      TimeStamp6 (status = 0)
      TimeStamp7 (status = 0)
      TimeStamp8 (status = 0)

    The correct answer would then be TimeStamp6, because it's the first of the latest run of timestamps.
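
    A sketch of one way to get, per product, the earliest timestamp of the latest unbroken run of Status = 0: use the last Status = 1 timestamp as a boundary and take the MIN of the zero-status rows after it (written against the Product/StateLog tables above; the fallback date is an arbitrary floor for products that have never had Status = 1).

      SELECT p.ProductName, MIN(s.[TimeStamp]) AS RunStart
      FROM Product p
      INNER JOIN StateLog s ON s.ProductID = p.ProductID
      WHERE s.[Status] = 0
        AND s.[TimeStamp] > COALESCE(
              (SELECT MAX(s1.[TimeStamp])
               FROM StateLog s1
               WHERE s1.ProductID = p.ProductID
                 AND s1.[Status] = 1),
              '1900-01-01')
      GROUP BY p.ProductName;

    In the example above this would return TimeStamp6 for product X.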

    Read the article

  • Data mixing SQL Server

    - by Pythonizo
    I have three tables and a range of two dates:

      Services
      ServicesClients
      ServicesClientsDone
      @StartDate
      @EndDate

      Services:
      ID | Name
      1  | Supervisor
      2  | Monitor
      3  | Manufacturer

      ServicesClients:
      IDServiceClient | IDClient | IDService
      1               | 1        | 1
      2               | 1        | 2
      3               | 2        | 2
      4               | 2        | 3

      ServicesClientsDone:
      IDServiceClient | Period
      1               | 201208
      3               | 201210

    Period = YYYYMM. I need to insert into ServicesClientsDone the months in the range from @StartDate up to @EndDate. I also have a temporary table (#Periods) with the following list:

      Period
      201208
      201209
      201210

    The query I need should give me back the following list:

      IDServiceClient | Period
      1               | 201209
      1               | 201210
      2               | 201208
      2               | 201209
      2               | 201210
      3               | 201208
      3               | 201209
      4               | 201208
      4               | 201209
      4               | 201210

    That is, the client services combined with the periods in the temporary table, excluding the pairs that are already inserted. This is what I have. The #Periods table:

      DECLARE @i int
      DECLARE @mm int
      DECLARE @yyyy int
      DECLARE @StartDate datetime
      DECLARE @EndDate datetime

      SET @EndDate = (SELECT GETDATE())
      SET @StartDate = (SELECT DATEADD(MONTH, -3, GETDATE()))

      CREATE TABLE #Periods (Period int)

      SET @i = 0
      WHILE @i <= DATEDIFF(MONTH, @StartDate, @EndDate)
      BEGIN
          SET @mm = DATEPART(MONTH, DATEADD(MONTH, @i, @StartDate))
          SET @yyyy = DATEPART(YEAR, DATEADD(MONTH, @i, @StartDate))
          INSERT INTO #Periods (Period)
          VALUES (CAST(@yyyy as varchar(4)) + RIGHT('00' + CONVERT(varchar(6), @mm), 2))
          SET @i = @i + 1;
      END

    Relation between ServicesClients and Services:

      SELECT s.Name, sc.IDClient
      FROM Services s
      JOIN ServicesClients AS sc ON sc.IDService = s.ID

    Services already done, and when:

      SELECT s.Name, scd.Period
      FROM Services s
      JOIN ServicesClients AS sc ON sc.IDService = s.ID
      JOIN ServicesClientsDone AS scd ON scd.IDServiceClient = sc.IDServiceClient
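
    A sketch of the combination described above: every client service crossed with every period in #Periods, minus the pairs already present in ServicesClientsDone (table and column names exactly as defined in the post):

      SELECT sc.IDServiceClient, p.Period
      FROM ServicesClients sc
      CROSS JOIN #Periods p
      WHERE NOT EXISTS (
          SELECT 1
          FROM ServicesClientsDone scd
          WHERE scd.IDServiceClient = sc.IDServiceClient
            AND scd.Period = p.Period
      )
      ORDER BY sc.IDServiceClient, p.Period;

    With the sample data this yields exactly the ten rows listed above; wrapping it in an INSERT INTO ServicesClientsDone (IDServiceClient, Period) SELECT ... would persist them.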

    Read the article

  • FreeText COUNT query on multiple tables is super slow

    - by Eric P
    I have two tables:

      Product: ID, Name, SKU
      Brand:   ID, Name

    The Product table has about 120K records; the Brand table has 30K records. I need to find the count of all the products whose name or brand matches a specific keyword. I use the full-text CONTAINS predicate like this:

      SELECT count(*)
      FROM Product
      inner join Brand on Product.BrandID = Brand.ID
      WHERE (contains(Product.Name, 'pants') or contains(Brand.Name, 'pants'))

    This query takes about 17 seconds. I rebuilt the full-text index before running it. If I only check Product.Name, the query takes less than 1 second. Same if I only check Brand.Name. The issue occurs when I use the OR condition. If I switch the query to use LIKE:

      SELECT count(*)
      FROM Product
      inner join Brand on Product.BrandID = Brand.ID
      WHERE Product.Name LIKE '%pants%' or Brand.Name LIKE '%pants%'

    it takes 1 second. I read on MSDN (http://msdn.microsoft.com/en-us/library/ms187787.aspx) that to search on multiple tables, you should use a joined table in your FROM clause to search on a result set that is the product of two or more tables. So I added an inner-joined derived table to FROM:

      SELECT count(*)
      FROM (select Product.Name ProductName, Product.SKU ProductSKU, Brand.Name as BrandName
            FROM Product
            inner join Brand on Product.BrandID = Brand.ID) as TempTable
      WHERE contains(TempTable.ProductName, 'pants')
         or contains(TempTable.BrandName, 'pants')

    This results in the error: "Cannot use a CONTAINS or FREETEXT predicate on column 'ProductName' because it is not full-text indexed." So the question is: why could the OR condition be causing such a slow query?
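
    One rewrite that is often tried for the OR-across-tables case (sketched here, not benchmarked): run each CONTAINS against its own indexed base table and combine the matching product IDs, so neither full-text predicate has to be evaluated across the join.

      SELECT COUNT(*)
      FROM (
          SELECT p.ID
          FROM Product p
          WHERE CONTAINS(p.Name, 'pants')
          UNION                                  -- de-duplicates products matching both
          SELECT p.ID
          FROM Product p
          INNER JOIN Brand b ON p.BrandID = b.ID
          WHERE CONTAINS(b.Name, 'pants')
      ) AS matches;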

    Read the article

  • PHP & MySQL storing multiple values in a database.

    - by R.I.P.coalMINERS
    I'm basically a front-end designer and new to PHP and MySQL, but I want a user to be able to store multiple names and their meanings in a MySQL database table named names, using PHP. I will dynamically create form fields with jQuery every time a user clicks on a link, so a user can enter 1 to 1,000,000 different names and their meanings, which will be stored in the table called names. All I want to know is: how can I store multiple names and their meanings in the database and then display them back to the user for them to edit or delete, using PHP and MySQL? I created the MySQL table already:

      CREATE TABLE names (
          id INT UNSIGNED NOT NULL AUTO_INCREMENT,
          userID INT NOT NULL,
          name VARCHAR(255) NOT NULL,
          meaning VARCHAR(255) NOT NULL,
          PRIMARY KEY (id)
      );

    Here is the HTML:

      <form method="post" action="index.php">
        <ul>
          <li><label for="name">Name: </label><input type="text" name="name" id="name" /></li>
          <li><label for="meaning">Meaning: </label><input type="text" name="meaning" id="meaning" /></li>
          <li><input type="submit" name="submit" value="Save" /> <input type="hidden" name="submit" value="true" /></li>
        </ul>
      </form>
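
    On the storage side, multiple name/meaning pairs for one user are simply multiple rows in the names table above; a sketch with made-up example values (in the PHP these would come from the posted fields and be bound through prepared statements rather than inlined):

      INSERT INTO names (userID, name, meaning) VALUES
          (42, 'Avery', 'ruler of the elves'),
          (42, 'Kai',   'sea'),
          (42, 'Naomi', 'pleasantness');

      -- Display them back for editing or deletion:
      SELECT id, name, meaning FROM names WHERE userID = 42;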

    Read the article

  • ZF2 - ServiceManager injecting into 84 tables... tedious

    - by Dominic Watson
    I originally made another thread about this a couple of months ago, regarding ZF2 injecting into tables with DI during Beta 1, and figured back then that it wasn't really possible. Now that ZF2 has been released as version 2.0.0 and the ServiceManager is used by default instead of DI, I guess I have the same question as I refactor. I've got 84 tables that need the DbAdapter injected into them, and I'm sure there has to be a better way, because I'm repeating myself SO much:

      public function getServiceConfig()
      {
          return array(
              'factories' => array(
                  'accountTable' => function ($sm) {
                      $dbAdapter = $sm->get('Zend\Db\Adapter\Adapter');
                      $table = new Model\DbTable\AccountTable($dbAdapter);
                      return $table;
                  },
                  'userTable' => function ($sm) {
                      $dbAdapter = $sm->get('Zend\Db\Adapter\Adapter');
                      $table = new Model\DbTable\UserTable($dbAdapter);
                      return $table;
                  },
                  // another 82 tables of the above
              )
          );
      }

    With the EventManager and ServiceManager I have no idea where I stand in getting my application's instances/resources. Thanks, Dom

    Read the article

  • Audit many-to-many relationship in NHibernate

    - by Kendrick
    I have implemented listeners to audit changes to tables in my application using IPreUpdateEventListener and IPreInsertEventListener, and everything works except for my many-to-many relationships that don't have additional data in the joining table (i.e. I don't have a POCO for the joining table). Each auditable object implements an IAuditable interface, so the event listener checks whether a POCO is of type IAuditable, and if it is, it records any changes to the object. Lookup tables implement an IAuditableProperty interface, so if a property of the IAuditable POCO points to a lookup table, the changes are recorded in the log for the main POCO. So, the question is: how should I determine that I'm working with a many-to-many collection and record the changes in my audit table?

      //first two checks for LastUpdated and LastUpdatedBy omitted for brevity
      else if (newState[i] is IAuditable)
      {
          //Do nothing, these will record themselves separately
      }
      else if (!(newState[i] is IAuditableProperty) && (newState[i] is IList<object> || newState[i] is ISet))
      {
          //Do nothing, this is a collection and individual items will update themselves if they are auditable
          //I believe this is where my many-to-many values are being lost
      }
      else if (!isUpdateEvent || !Equals(oldState[i], newState[i])) //Record only modified fields when updating
      {
          changes.Append(preDatabaseEvent.Persister.PropertyNames[i]).Append(": ");
          if (newState[i] is IAuditableProperty)
          {
              //Record changes to values in lookup tables
              if (isUpdateEvent)
              {
                  changes.Append(((IAuditableProperty)oldState[i]).AuditPropertyValue).Append(" => ");
              }
              changes.Append(((IAuditableProperty)newState[i]).AuditPropertyValue);
          }
          else
          {
              //Record changes for primitive values
              if (isUpdateEvent)
              {
                  changes.Append(oldState[i]).Append(" => ");
              }
              changes.Append(newState[i]);
          }
          changes.AppendLine();
      }

    Read the article

  • Best Approach for Checking and Inserting Records

    - by nevets1219
    One of our existing C programs has this purpose:

      Open connection to DB
      for record in all_records:
          if record contains certain data:
              if record is NOT in table A:                      // see #1
                  insert record information into table A and B  // see #2
      Close connection to DB

      #1: select field from table where field=XXX
      #2: 2 inserts

    This is typically done every X months to sync everything up, or so I'm told. I've also been told that this process takes roughly a couple of days. There are (currently) at most 2.5 million records, though not necessarily all 2.5M will be inserted. One of the tables contains 10 fields and the other 5 fields. There isn't much to be done about iterating through the records, since that part can't be changed at the moment. What I would like to do is speed up the part where I query MySQL. I'm not sure if I have left out any important details -- please let me know! I'm also no SQL expert, so feel free to point out the obvious. I thought about:

      - Putting all the inserts into a transaction (at the moment I'm not sure how important it is for the transaction to be all-or-none, or if this affects performance)
      - Using INSERT X WHERE NOT EXISTS Y (sketched below)
      - LOAD DATA INFILE (but that would require I create a (possibly) large temp file)
      - I read that (hopefully someone can confirm) I should drop indexes so they aren't recalculated.

    mysql Ver 14.7 Distrib 4.1.22, for sun-solaris2.10 (sparc) using readline 4.3
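
    A sketch of that "insert only the missing rows" idea as a single statement (MySQL syntax; staging, tableA, the key_field column and the filter value are placeholders for the real names):

      INSERT INTO tableA (key_field, col1, col2)
      SELECT s.key_field, s.col1, s.col2
      FROM staging s
      LEFT JOIN tableA a ON a.key_field = s.key_field
      WHERE a.key_field IS NULL          -- not already in table A
        AND s.col1 = 'certain data';     -- the "contains certain data" filter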

    Read the article

  • Best way to store this data?

    - by Malfist
    I have just been assigned to renovate an old website, and I get to move it from an archaic system to Drupal. The only problem is that it's a real-estate system and a lot of data is stored. Currently all the information is stored in a single table: an id represents the house, and everything else is key/value pairs. There are a possible 243 keys per estate, and there are 23,840 estates in the system. As you can imagine, the system is slow and difficult to query. I don't think a table with 243 columns would be a very good idea either, and it would probably be worse than the current situation. I've done some investigating, and here's what I've found out:

      - Missing data does not indicate a 0 value; data is merged from two unique sources/formats, and some guessing is involved. I have no control over the source of the data.
      - There are 4 keys that are common to all estates; all of their values look like something that is commonly searched for and could be indexed.
      - There are 10 keys in the [90-100)% range. 8 of these hold information like who's selling the estate and its address; the other two seem to belong with the range below.
      - There are 80 keys in the [80-90)% range. This range mostly lists room types and how many the house has (e.g. bedrooms_possible, bathrooms, family_room_3rd, etc.), plus some minor information like school districts and one or two more pieces of address data.
      - The 179 keys in the [0-80)% range include all sorts of miscellaneous information about the estate.

    My best idea was a hybrid approach: create a table that stores the important, common information and keep a smaller key/value table (see the sketch below). How would you store this information?
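
    A rough sketch of that hybrid layout in plain SQL (every name here is illustrative; the real promoted columns would be the 4 universal keys plus whichever of the ~90%-coverage fields are worth indexing):

      CREATE TABLE estate (
          id              INT PRIMARY KEY,
          address         VARCHAR(255),       -- the handful of common,
          listing_agent   VARCHAR(128),       -- searchable keys become
          school_district VARCHAR(128),       -- real, indexable columns
          bedrooms        INT
      );

      CREATE TABLE estate_attribute (          -- everything else stays key/value
          estate_id  INT NOT NULL,
          attr_key   VARCHAR(64) NOT NULL,
          attr_value VARCHAR(255),
          PRIMARY KEY (estate_id, attr_key),
          FOREIGN KEY (estate_id) REFERENCES estate (id)
      );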

    Read the article

  • Unnecessary Redundancy with Tables.

    - by Stacey
    My items are listed below; this is just a summary, of course. I'm using the method shown for the "Details" table to represent a type of 'inheritance', so to speak, since "Item" and "Downloadable" are going to be identical except that each will have a few additional fields relevant only to itself. My question is about this design pattern. This sort of thing appears many, many times in our projects; is there a more intelligent way to handle it? I basically need to normalize the tables as much as possible. I'm extremely new to databases, so this is all very confusing to me. There are 5 items: Awards, Items, Purchases, Tokens, and Downloads. They are all very, very similar, except each has a few pieces of data relevant only to itself. I've tried to use a declaration field (like an enumerator 'Type' field) in conjunction with nullable columns, but I was told that is a bad approach. What I have done is take everything similar and place it in a single table, and then each type has its own table that references a column in the 'base' table. The problem occurs with relationships, or junctions: linking all of these back to a customer. Each type takes around 2 additional tables to properly junction all of the data together, and as such, my database is growing very, very large. Is there a smarter practice for this kind of behavior?

      Item
        ID      | GUID
        Name    | varchar(64)

      Product
        ID      | GUID
        Name    | varchar(64)
        Store   | GUID [FK]
        Details | GUID [FK]

      Downloadable
        ID      | GUID
        Name    | varchar(64)
        Url     | nvarchar(2048)
        Details | GUID [FK]

      Details
        ID          | GUID
        Price       | decimal
        Description | text

      Peripherals [JUNCTION]
        ID     | GUID
        Detail | GUID [FK]

      Store
        ID        | GUID
        Addresses | GUID

      Addresses
        ID      | GUID
        Name    | nvarchar(64)
        State   | int [FK]
        ZipCode | int
        Address | nvarchar(64)

      State
        ID   | int
        Name | varchar(32)

    Read the article

  • jQuery each function problem.

    - by Cesar Lopez
    I have the following function:

      var emptyFields = false;
      function checkNotEmpty(tblName, columns, className) {
          emptyFields = false;
          $("." + className + "").each(function() {
              if ($.trim($(this).val()) === "" && $(this).is(":visible")) {
                  emptyFields = true;
                  return false; // break out of the each-loop
              }
          });
          if (emptyFields) {
              alert("Please fill in current row before adding a new one.");
          } else {
              AddRow_OnButtonClick(tblName, columns);
          }
      }

    It checks that all elements in the table are non-empty before adding a new row, but I only need to check that the last row of the table has at least one element that is not empty, not the whole table. The class name is applied to the table. Please notice that the problem is checking only the last row, and only one element of that row has to have some text in it (e.g. each row has 5 textboxes; at least one textbox needs to have some text inside in order to be able to add another row, otherwise the alert comes up). Any help would be appreciated. Thanks.

    Read the article

  • Best way to store data in database when you don't know the type

    - by stiank81
    I have a table in my database that represents datafields in a custom form. The DataField gives some representation of what kind of control it should be rendered with and what value type it should take. Simplified, you can say that I have 2 entities in this table: a textbox taking any string and a textbox taking only numbers. Now I have the different values stored in a separate table, referencing the datafield definition. What is the best way to store the data value here, when the type differs? One possible solution is to have the FieldValue table hold one field per possible value type. Now this would certainly be redundant, but at least I would get the value stored in its correct form, simplifying queries later:

      FieldValue
      ----------
      Id
      DataFieldId
      IntValue
      DoubleValue
      BoolValue
      DataValue
      ...

    Another possibility is just storing everything as a string and casting it in the queries. I am using .NET with NHibernate, and I see that at least here there is a Projections.Cast that can be used to cast e.g. string to int in the query. Either way, in these two solutions I need to know which type to use when doing the query, but I will know that from the DataField, so that won't be a problem. Anyway, I don't think either of these solutions sounds good. Are they? Or is there a better way?

    Read the article
