Search Results

Search found 8943 results on 358 pages for 'audit tables'.


  • NSFetchedResultsController: changing predicate not working?

    - by icerelic
    Hi, I'm writing an app with two tables on one screen. The left table is a list of folders and the right table shows a list of files. When a row is tapped on the left, the right table displays the files belonging to that folder. I'm using Core Data for storage. When the folder selection changes, the fetch predicate of the right table's NSFetchedResultsController should change, perform a new fetch, and then reload the table data. I used the following code snippet:

        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"list = %@", self.list];
        [fetchedResultsController.fetchRequest setPredicate:predicate];
        NSError *error = nil;
        if (![[self fetchedResultsController] performFetch:&error]) {
            NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
            abort();
        }
        [table reloadData];

    However, the fetch results are still the same. I've NSLog'ed the predicate before and after the fetch, and it was correct, with updated information. The fetch results stay the same as the initial fetch (when the view is loaded). I'm not very familiar with the way Core Data fetches objects (is there a caching system?), but I've done similar things before (changing predicates, re-fetching data, and refreshing the table) with single table views and everything went well. If someone could give me a hint I would really appreciate it. Thanks in advance.

    Read the article

  • Movies recommendation engine conceptual database design

    - by Supyxy
    I am working on a movie recommendations engine and I'm facing a DB design issue. My current database looks like this:

        MOVIES         [ID, TITLE]
        KEYWORDS_TABLE [ID, KEY_ID]

    where ID is a foreign key to MOVIES.id and KEY_ID is a key into a text keywords table. This is not the entire DB, but I've shown what's important for my problem. I have about 50,000 movies and about 1.3 million keyword correlations. Basically, my algorithm consists of extracting all the movies that share keywords with a given movie, then ordering them by the number of keyword correlations. For example, I looked for a movie similar to 'Cast away' and it returned 'Six days and six nights' because it had the most keyword correlations (4 keywords): Island, Airplane crash, Stranded, Pilot.

    The algorithm is based on more factors, but this one is the most important and the most difficult to approach. Basically, what I do now is get all the movies that have at least one keyword in common with the given movie and then order them by other factors, which are not important for the moment. There wouldn't be any problem if there weren't so many records: a query in many cases takes up to 10-20 seconds, and some of them return over 5000 movies. Someone already helped me here (thanks Mark Byers) with optimizing the query, but that's not enough, because it still takes too long:

        SELECT DISTINCT M.title
        FROM keywords_table K1
        JOIN keywords_table K2 ON K2.key_id = K1.key_id
        JOIN movies M ON K2.id = M.id
        WHERE K1.id = 4

    So I thought it would be better to pre-build those recommendation lists for each movie, but I'm not sure how to design the tables. Is that a good idea, and how would you approach it?
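
    One way to pre-build the lists, sketched here with plain JDBC (the summary table name movie_recommendations, the connection URL and the credentials are assumptions, not from the question): a table keyed by (movie_id, recommended_movie_id) that stores the shared-keyword count, filled by a single self-join and refreshed offline.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class RecommendationBuilder {
            public static void main(String[] args) throws Exception {
                Class.forName("com.mysql.jdbc.Driver");
                // Assumed connection details -- adjust to the real database.
                Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/movies_db", "user", "password");
                try {
                    Statement st = conn.createStatement();
                    // Hypothetical summary table: one row per (movie, similar movie) pair.
                    st.executeUpdate(
                        "CREATE TABLE IF NOT EXISTS movie_recommendations ("
                      + "  movie_id INT NOT NULL,"
                      + "  recommended_movie_id INT NOT NULL,"
                      + "  shared_keywords INT NOT NULL,"
                      + "  PRIMARY KEY (movie_id, recommended_movie_id))");
                    st.executeUpdate("TRUNCATE TABLE movie_recommendations");
                    // Count shared keywords for every pair of movies in one pass.
                    st.executeUpdate(
                        "INSERT INTO movie_recommendations (movie_id, recommended_movie_id, shared_keywords) "
                      + "SELECT k1.id, k2.id, COUNT(*) "
                      + "FROM keywords_table k1 "
                      + "JOIN keywords_table k2 ON k2.key_id = k1.key_id AND k2.id <> k1.id "
                      + "GROUP BY k1.id, k2.id");
                    st.close();
                } finally {
                    conn.close();
                }
            }
        }

    At request time the list for one movie then becomes a single indexed lookup, e.g. SELECT recommended_movie_id FROM movie_recommendations WHERE movie_id = ? ORDER BY shared_keywords DESC LIMIT 10.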

    Read the article

  • Which Table Should be Master and Child in Database Design

    - by Jason
    I am quickly learning the ins and outs of database design (something that, as of a week ago, was new to me), but I am running across some questions that don't seem immediately obvious, so I was hoping to get some clarification. The question I have right now is about foreign keys.

    As part of my design, I have a Company table. Originally, I had included address information directly within the table, but, as I was hoping to achieve 3NF, I broke the address information out into its own table, Address. In order to maintain data integrity, I created a column in Company called "addressId" as an INT, and the Address table has a corresponding addressId as its primary key.

    What I'm a little bit confused about (or what I want to make sure I'm doing correctly) is determining which table should be the master (referenced) table and which should be the child (referencing) table. When I originally set this up, I made the Address table the master and the Company table the child. However, I now believe this is wrong, due to the fact that there should be only one address per Company and, if a Company row is deleted, I would want the corresponding Address to be removed as well (CASCADE deletion). I may be approaching this completely wrong, so I would appreciate any good rules of thumb on how best to think about the relationship between tables when using foreign keys. Thanks!

    Read the article

  • RESTful design, how to name pages outside CRUD et al?

    - by sscirrus
    Hi all, I'm working on a site that has quite a few pages that fall outside my limited understanding of RESTful design, which is essentially: Create, Read, Update, Delete, Show, List.

    Here's the question: what is a good system for labeling actions/routes when a page doesn't neatly fall into CRUD/show/list? Some of my pages have info about multiple tables at once. I am building a site that gives some customers a 'home base' after they log on. It does NOT give them any information about themselves, so it shouldn't be, for example, /customers/show/1. It does have information about companies, but there are other pages on the site that do that differently. What do you do in these situations? This 'home base' is shown to customers and it mainly has info about companies (but not uniquely so).

    Second case: I have a table called 'Matchings' in between customers and companies. These matchings are accessed in completely different ways on different parts of the site (different layouts, different CSS sheets, different types of users accessing them, etc.). They can't ALL be matchings/show. What's the best way to label the others? Thanks very much. =)

    Read the article

  • Match entities fulfilling filter (strict superset of search)

    - by Jon
    I have an entity (let's say Person) with a set of arbitrary attributes with a known subset of values. I need to search for all of these entities that match all my filter conditions. That is, given a set of attributes A, I need to find all people whose set of attributes is a superset of A. For example, my table structures look like this:

        Person:
        id | name
        1  | John Doe
        2  | Jane Roe
        3  | John Smith

        Attribute:
        id | attr_name
        1  | Sex
        2  | Eye Color

        ValidValue:
        id | attr_id | value_name
        1  | 1       | Male
        2  | 1       | Female
        3  | 2       | Blue
        4  | 2       | Green
        5  | 2       | Brown

        PersonAttributes:
        id | person_id | attr_id | value_id
        1  | 1         | 1       | 1
        2  | 1         | 2       | 3
        3  | 2         | 1       | 2
        4  | 2         | 2       | 4
        5  | 3         | 1       | 1
        6  | 3         | 2       | 4

    In JPA, I have entities built for all of these tables. What I'd like to do is perform a search for all entities matching a given set of attribute-value pairs. For instance, I'd like to be able to find all males (John Doe and John Smith), all people with green eyes (Jane Roe or John Smith), or all females with green eyes (Jane Roe). I see that I can already take advantage of the fact that I only really need to match on value_id, since that's already unique and tied to the attr_id. But where can I go from there? I've been trying to do something like the following, given that the ValidValue is unique in all cases:

        select distinct p from Person p join p.personAttributes a where a.value IN (:values)

    Then I've tried putting my set of required values in as "values", but that gives me errors no matter how I try to structure it.

    I also have to get a little more complicated, as follows, but at this point I'd be happy with solving the first problem cleanly. However, if it's possible: the Attribute table actually has a field for a default value:

        id | attr_name  | default_value
        1  | Sex        | 1
        2  | Eye Color  | 5

    If the value you're searching on happens to be the default value, I want the query to return any people that have no explicit value set for that attribute, because in the application logic that means they inherit the default value. Again, I'm more concerned about the primary question, but if someone who can help with that also has some idea of how to do this one, I'd be extremely grateful.
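
    For the primary question, one common shape is a group-by/having query that keeps only the people who matched every requested value. A sketch only: it reuses the relation names personAttributes and value from the JPQL above and assumes Long ids; whether your provider prefers grouping by the entity or by its id may vary.

        import java.util.List;
        import javax.persistence.EntityManager;
        import javax.persistence.Query;

        public class PersonSearch {
            // Returns the ids of all people whose attribute values are a superset of the given value ids.
            @SuppressWarnings("unchecked")
            public List<Long> findMatchingIds(EntityManager em, List<Long> valueIds) {
                Query q = em.createQuery(
                      "select p.id from Person p "
                    + "join p.personAttributes a "
                    + "where a.value.id in (:valueIds) "
                    + "group by p.id "
                    + "having count(distinct a.value.id) = :numValues");
                q.setParameter("valueIds", valueIds);
                q.setParameter("numValues", Long.valueOf(valueIds.size()));
                return q.getResultList();
            }
        }

        // Usage, e.g. "female with green eyes" (value ids 2 and 4 in the sample data):
        //   List<Long> ids = new PersonSearch().findMatchingIds(em, java.util.Arrays.asList(2L, 4L));
        //   // then load each Person with em.find(Person.class, id)

    The HAVING COUNT(DISTINCT ...) = size-of-the-filter is what turns "at least one value matches" into "all requested values match".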

    Read the article

  • HTML Agility Pack

    - by Harikrishna
    I have HTML tables in one webpage like:

        <table border=1>
          <tr><td>sno</td><td>sname</td></tr>
          <tr><td>111</td><td>abcde</td></tr>
          <tr><td>213</td><td>ejkll</td></tr>
        </table>
        <table border=1>
          <tr><td>adress</td><td>phoneno</td><td>note</td></tr>
          <tr><td>asdlkj</td><td>121510</td><td>none</td></tr>
          <tr><td>asdlkj</td><td>214545</td><td>none</td></tr>
        </table>

    Now, from this webpage, using HTML Agility Pack, I want to extract the data of the columns address and phoneno only. That means I first have to find which table has the columns address and phoneno. After finding that table, I want to extract the data of the address and phoneno columns. What should I do? I can get the table, but after that I don't understand what to do.
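
    The general approach is: read the header row of each table, note the column indexes whose header text matches, and skip tables where no header matches. A rough sketch of that idea follows; it uses jsoup in Java rather than HTML Agility Pack in C#, purely to illustrate the traversal, so the calls will not map one-to-one onto Agility Pack's API.

        import org.jsoup.Jsoup;
        import org.jsoup.nodes.Document;
        import org.jsoup.nodes.Element;
        import org.jsoup.select.Elements;

        public class TableExtractor {
            public static void main(String[] args) {
                String html = "...";                       // the page source
                Document doc = Jsoup.parse(html);
                for (Element table : doc.select("table")) {
                    Elements rows = table.select("tr");
                    if (rows.isEmpty()) continue;
                    // Find the positions of the wanted columns in the header row.
                    int addressCol = -1, phoneCol = -1;
                    Elements headers = rows.first().select("td");
                    for (int i = 0; i < headers.size(); i++) {
                        String h = headers.get(i).text().trim().toLowerCase();
                        if (h.startsWith("adress") || h.startsWith("address")) addressCol = i;
                        if (h.equals("phoneno")) phoneCol = i;
                    }
                    if (addressCol < 0 || phoneCol < 0) continue;  // not the table we want
                    // Extract those two cells from every data row.
                    for (int r = 1; r < rows.size(); r++) {
                        Elements cells = rows.get(r).select("td");
                        System.out.println(cells.get(addressCol).text() + " | "
                                         + cells.get(phoneCol).text());
                    }
                }
            }
        }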

    Read the article

  • DAL Layer : EF 4.0 or Normal Data access layer with Stored Procedure

    - by Harryboy
    Hello experts,

    Application: I am working on a mid-to-large size application which will be used as a product, and we need to decide on our DAL layer. The application UI is in Silverlight and the DAL layer is going to sit behind a service layer. We are also moving ahead with a domain model, so our DB tables and domain classes do not have the same structure, which means patterns like Data Mapper and Repository will definitely come into the picture. I need to design the DAL layer considering the factors below, in priority order:

        1) Speed of development with above-average performance
        2) Maintenance
        3) Future support and stability of the technology
        4) Performance

    Limitations:

        1) As we need to strictly stay with Microsoft, we cannot use NHibernate or any other ORM except EF 4.0.
        2) We can use any code generation tool (it should be open source or very cheap), but it should only generate code in .NET, so there would not be any licensing issue on a per-copy basis.

    Questions: I have read so many articles about EF 4.0; at the outset it looks like it is still lacking features compared to NHibernate, but it is considerably better than EF 1.0. So, do you feel that we should go ahead with EF 4.0, or should we stick to ADO.NET and use a code generation tool like CodeSmith or whatever you feel is best? I also need to answer questions like how long it would take to port the application from EF 4.0 to ADO.NET if in future we get stuck on some EF 4.0 feature or have a serious performance issue; and in the reverse case, if we choose ADO.NET, how long it would take to switch to EF 4.0. Lastly, as I was going through the articles I found that the code-only approach (with POCO classes) seems best suited to our requirement, as switching from one technology to the other is really easy. Please share your thoughts and guide me on the above questions.

    Read the article

  • What other things would be good to include in a CSS reset (along with Eric Meyer's reset) for any project?

    - by metal-gear-solid
    I know and use Eric Meyer's CSS reset, but are there any more things which would be good to add to a reset CSS that can save time and increase compatibility? This is Meyer's latest default CSS reset code:

        /* v1.0 | 20080212 */
        html, body, div, span, applet, object, iframe,
        h1, h2, h3, h4, h5, h6, p, blockquote, pre,
        a, abbr, acronym, address, big, cite, code,
        del, dfn, em, font, img, ins, kbd, q, s, samp,
        small, strike, strong, sub, sup, tt, var,
        b, u, i, center,
        dl, dt, dd, ol, ul, li,
        fieldset, form, label, legend,
        table, caption, tbody, tfoot, thead, tr, th, td {
            margin: 0;
            padding: 0;
            border: 0;
            outline: 0;
            font-size: 100%;
            vertical-align: baseline;
            background: transparent;
        }
        body {
            line-height: 1;
        }
        ol, ul {
            list-style: none;
        }
        blockquote, q {
            quotes: none;
        }
        blockquote:before, blockquote:after,
        q:before, q:after {
            content: '';
            content: none;
        }
        /* remember to define focus styles! */
        :focus {
            outline: 0;
        }
        /* remember to highlight inserts somehow! */
        ins {
            text-decoration: none;
        }
        del {
            text-decoration: line-through;
        }
        /* tables still need 'cellspacing="0"' in the markup */
        table {
            border-collapse: collapse;
            border-spacing: 0;
        }

    Read the article

  • Help needed in grokking password hashes and salts

    - by javafueled
    I've read a number of SO questions on this topic, but grokking the applied practice of storing a salted hash of a password eludes me. Let's start with some ground rules:

        - a password, "foobar12" (we are not discussing the strength of the password)
        - a language, Java 1.6, for this discussion
        - a database: PostgreSQL, MySQL, SQL Server, or Oracle

    Several options are available for storing the password, but I want to think about one (1): store the password hashed with a random salt in the DB, in one column. Found on SO and elsewhere is the automatic fail of plaintext, MD5/SHA1, and dual columns. The latter have pros and cons. MD5/SHA1 is simple: MessageDigest in Java provides MD5 and SHA1 (through SHA-512 in modern implementations, certainly 1.6). Additionally, most of the RDBMSs listed provide MD5 functions usable on inserts, updates, etc. The problems become evident once one groks "rainbow tables" and MD5 collisions (and I've grokked these concepts). Dual-column solutions rest on the idea that the salt does not need to be secret (grok it). However, a second column introduces a complexity that might not be a luxury if you have a legacy system with one (1) column for the password, where the cost of updating the table and the code could be too high.

    But it is storing the password hashed with a random salt in a single DB column that I need to understand better, with practical application. I like this solution for a couple of reasons: a salt is expected, and it respects legacy boundaries. Here's where I get lost: if the salt is random and hashed with the password, how can the system ever match the password? I have a theory on this, and as I type I might be grokking the concept: given a random salt of 128 bytes and a password of 8 bytes ('foobar12'), it could be programmatically possible to remove the part of the hash that was the salt, by hashing a random 128-byte salt and taking the substring of the original hash that is the hashed password, then re-hashing to match using the hash algorithm...??? So... any takers on helping? :) Am I close?
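
    For reference, the usual single-column scheme does not try to strip the salt back out of the hash; it stores the salt itself, in the clear, next to the hash inside the same column, then re-hashes salt + candidate password on login and compares. A minimal sketch in Java 1.6 terms (SHA-256 here stands in for whichever digest is chosen; the salt:hash hex layout is just one possible encoding):

        import java.security.MessageDigest;
        import java.security.SecureRandom;

        public final class PasswordStore {
            private static final SecureRandom RANDOM = new SecureRandom();

            // Produces "salthex:hashhex" -- both parts live in the single password column.
            public static String create(String password) throws Exception {
                byte[] salt = new byte[16];
                RANDOM.nextBytes(salt);
                return toHex(salt) + ":" + toHex(digest(salt, password));
            }

            // Re-reads the stored salt, re-hashes the candidate password with it and compares.
            public static boolean matches(String candidate, String stored) throws Exception {
                String[] parts = stored.split(":");
                byte[] salt = fromHex(parts[0]);
                return parts[1].equals(toHex(digest(salt, candidate)));
            }

            private static byte[] digest(byte[] salt, String password) throws Exception {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                md.update(salt);
                return md.digest(password.getBytes("UTF-8"));
            }

            private static String toHex(byte[] bytes) {
                StringBuilder sb = new StringBuilder();
                for (byte b : bytes) sb.append(String.format("%02x", b));
                return sb.toString();
            }

            private static byte[] fromHex(String hex) {
                byte[] out = new byte[hex.length() / 2];
                for (int i = 0; i < out.length; i++)
                    out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
                return out;
            }
        }

    Verification works because the salt is stored rather than secret; it only has to be unique per user so that precomputed rainbow tables stop paying off.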

    Read the article

  • Toggle row visibility one at a time

    - by kuswantin
    I have a couple of tables with similar structures like this:

        <table>
          <tbody>
            <tr>content</tr>
            <tr>content</tr>
            <tr>content</tr>
            <tr>content</tr>
            <tr>content</tr>
            <tr>content</tr>
            <tr>content</tr>
            <tr>content</tr>
            ...etc --- the fake button is added here
            <div class="addrow">Add another</div>
          </tbody>
        </table>

    Since this is a long list, I need to toggle the rows one at a time. I just need to show the first row; the rest should be hidden. When I click a dynamic fake button, it should show row no. 2, and clicking again should show the next row, and so on. This is what I have done so far:

        $("table#field_fruit_values tr.draggable").not(':first').hide();
        $("table#field_vegetables_values tr.draggable").not(':first').hide();
        $("body.form table.content-multiple-table tbody").append('<div class="addrow">Add</div>');
        $(".addrow").click(function() {
          var hiddenRow = $(this).prev('tr.draggable');
          $(this).prev(hiddenRow + 1).show();
          //if (hiddenRow + ':last').length) { // <= silly logic
          //  $(this).hide();
          //}
        });

    The button only works for one row. I must have done something wrong :) When the final row is reached, I also want the button to disappear. Sorry if this question sounds silly. Any help would be very much appreciated. Thanks.

    Read the article

  • JPA @ManyToMany on only one side?

    - by Ethan Leroy
    I am trying to refresh the @ManyToMany relation, but it gets cleared instead... My Project class looks like this:

        @Entity
        public class Project {
            ...
            @ManyToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
            @JoinTable(name = "PROJECT_USER",
                joinColumns = @JoinColumn(name = "PROJECT_ID", referencedColumnName = "ID"),
                inverseJoinColumns = @JoinColumn(name = "USER_ID", referencedColumnName = "ID"))
            private Collection<User> users;
            ...
        }

    But I don't have - and I don't want - the collection of Projects in the User entity. When I look at the generated database tables, they look good. They contain all columns and constraints (primary/foreign keys). But when I persist a Project that has a list of Users (and the users are still in the database), the mapping table gets updated, but when I refresh the project afterwards, the list of Users is cleared. For better understanding:

        Project project = ...; // new project with users that are available in the db
        System.out.println(project.getUsers().size()); // prints 5
        em.persist(project);
        System.out.println(project.getUsers().size()); // prints 5
        em.refresh(project);
        System.out.println(project.getUsers().size()); // prints 0

    So, how can I refresh the relation between User and Project?
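
    One thing worth ruling out (a sketch only, assuming the persist and refresh happen inside the same transaction): refresh() re-reads state from the database, so if the persist has not been flushed yet, the PROJECT_USER join rows may not exist at that moment and the reloaded collection comes back empty. Flushing before the refresh avoids that; the persistence unit name and the helper below are placeholders.

        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        public class ProjectRefreshExample {
            public static void main(String[] args) {
                // "example-unit" is a placeholder persistence unit name.
                EntityManagerFactory emf = Persistence.createEntityManagerFactory("example-unit");
                EntityManager em = emf.createEntityManager();
                em.getTransaction().begin();

                Project project = buildProjectWithExistingUsers(em); // assumed helper

                em.persist(project);
                em.flush();                 // write the PROJECT_USER join rows now
                em.refresh(project);        // reload: the join rows are already in the database
                System.out.println(project.getUsers().size());

                em.getTransaction().commit();
                em.close();
            }

            private static Project buildProjectWithExistingUsers(EntityManager em) {
                // Placeholder for however the project and its users are assembled.
                return new Project();
            }
        }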

    Read the article

  • Excel VBA Macro for Pivot Table with Dynamic Data Range

    - by John Ziebro
    CODE IS WORKING! THANKS FOR THE HELP! I am attempting to create a dynamic pivot table that will work on data that varies in the number of rows. Currently, I have 28,300 rows, but this may change daily. Example of the data format:

        Case Number | Branch | Driver
        1342        | NYC    | Bob
        4532        | PHL    | Jim
        7391        | CIN    | John
        8251        | SAN    | John
        7211        | SAN    | Mary
        9121        | CLE    | John
        7424        | CIN    | John

    Example of the finished table:

        Driver | NYC | PHL | CIN | SAN | CLE
        Bob    |  1  |  0  |  0  |  0  |  0
        Jim    |  0  |  1  |  0  |  0  |  0
        John   |  0  |  0  |  2  |  1  |  1
        Mary   |  0  |  0  |  0  |  1  |  0

    Code as follows:

        Sub CreateSummaryReportUsingPivot()
            ' Use a Pivot Table to create a static summary report
            ' with model going down the rows and regions across
            Dim WSD As Worksheet
            Dim PTCache As PivotCache
            Dim PT As PivotTable
            Dim PRange As Range
            Dim FinalRow As Long
            Dim FinalCol As Long

            Set WSD = Worksheets("PivotTable")

            ' Name active worksheet as "PivotTable"
            ActiveSheet.Name = "PivotTable"

            ' Delete any prior pivot tables
            For Each PT In WSD.PivotTables
                PT.TableRange2.Clear
            Next PT

            ' Define input area and set up a Pivot Cache
            FinalRow = WSD.Cells(Application.Rows.Count, 1).End(xlUp).Row
            FinalCol = WSD.Cells(1, Application.Columns.Count). _
                End(xlToLeft).Column
            Set PRange = WSD.Cells(1, 1).Resize(FinalRow, FinalCol)
            Set PTCache = ActiveWorkbook.PivotCaches.Add(SourceType:= _
                xlDatabase, SourceData:=PRange)

            ' Create the Pivot Table from the Pivot Cache
            Set PT = PTCache.CreatePivotTable(TableDestination:=WSD. _
                Cells(2, FinalCol + 2), TableName:="PivotTable1")

            ' Turn off updating while building the table
            PT.ManualUpdate = True

            ' Set up the row fields
            PT.AddFields RowFields:="Driver", ColumnFields:="Branch"

            ' Set up the data fields
            With PT.PivotFields("Case Number")
                .Orientation = xlDataField
                .Function = xlCount
                .Position = 1
            End With
            With PT
                .ColumnGrand = False
                .RowGrand = False
                .NullString = "0"
            End With

            ' Calc the pivot table
            PT.ManualUpdate = False
            PT.ManualUpdate = True
        End Sub

    Read the article

  • Linq-to-SQL: How to shape the data with group by?

    - by Cheeso
    I have an example database; it contains tables for Movies, People and Credits. The Movie table contains a Title and an Id. The People table contains a Name and an Id. The Credits table relates Movies to the People that worked on those Movies, in a particular role. The table looks like this:

        CREATE TABLE [dbo].[Credits] (
            [Id]       [int] IDENTITY (1, 1) NOT NULL PRIMARY KEY,
            [PersonId] [int] NOT NULL FOREIGN KEY REFERENCES People(Id),
            [MovieId]  [int] NOT NULL FOREIGN KEY REFERENCES Movies(Id),
            [Role]     [char] (1) NULL
        )

    In this simple example, the [Role] column is a single character, by my convention either 'A' to indicate the person was an actor on that particular movie, or 'D' for director. I'd like to perform a query on a particular person that returns the person's name, plus a list of all the movies the person has worked on, and the roles in those movies. If I were to serialize it to JSON, it might look like this:

        {
          "name" : "Clint Eastwood",
          "movies" : [
            { "title": "Unforgiven",        "roles": ["actor", "director"] },
            { "title": "Sands of Iwo Jima", "roles": ["director"] },
            { "title": "Dirty Harry",       "roles": ["actor"] },
            ...
          ]
        }

    How can I write a LINQ-to-SQL query that shapes the output like that? I'm having trouble doing it efficiently. If I use this query:

        int personId = 10007;
        var persons = from p in db.People
                      where p.Id == personId
                      select new {
                          name = p.Name,
                          movies = (from m in db.Movies
                                    join c in db.Credits on m.Id equals c.MovieId
                                    where (c.PersonId == personId)
                                    select new {
                                        title = m.Title,
                                        role = (c.Role == "D" ? "director" : "actor")
                                    })
                      };

    I get something like this:

        {
          "name" : "Clint Eastwood",
          "movies" : [
            { "title": "Unforgiven",        "role": "actor" },
            { "title": "Unforgiven",        "role": "director" },
            { "title": "Sands of Iwo Jima", "role": "director" },
            { "title": "Dirty Harry",       "role": "actor" },
            ...
          ]
        }

    ...but as you can see there's a duplicate of each movie for which Eastwood played multiple roles. How can I shape the output the way I want?

    Read the article

  • MYSQL fetch 10 posts, each w/ vote count, sorted by vote count, limited by where clause on posts

    - by nibblebot
    I want to fetch a set of posts with their vote count listed, sorted by vote count, e.g.:

        Post 1 - Post Body blah blah - Votes: 500
        Post 2 - Post Body blah blah - Votes: 400
        Post 3 - Post Body blah blah - Votes: 300
        Post 4 - Post Body blah blah - Votes: 200

    I have 2 tables:

        Posts - columns: id, body, is_hidden
        Votes - columns: id, post_id, vote_type_id

    Here is the query I've tried:

        SELECT p.*, v.yes_count
        FROM posts p
        LEFT JOIN (SELECT post_id, vote_type_id, COUNT(1) AS yes_count
                   FROM votes
                   WHERE (vote_type_id = 1)
                   GROUP BY post_id
                   ORDER BY yes_count DESC
                   LIMIT 0, 10) v ON v.post_id = p.id
        WHERE (p.is_hidden = 0)
        ORDER BY yes_count DESC
        LIMIT 0, 10

    Correctness: the above query almost works. The subselect includes votes for posts that have is_hidden = 1, so when I left join it to posts, if a hidden post is in the top 10 (ranked by votes), I can end up with records with NULL in the yes_count field.

    Performance: I have ~50k posts and ~500k votes. On my dev machine, the above query runs in 0.4 sec. I'd like to stay at or below this execution time.

    Indexes: I have an index on the Votes table that covers the fields vote_type_id and post_id.

    EXPLAIN:

        id | select_type | table      | type | possible_keys | key        | key_len | ref | rows   | Extra
        1  | PRIMARY     | p          | ALL  | NULL          | NULL       | NULL    |     | 45985  | Using where; Using temporary; Using filesort
        1  | PRIMARY     | <derived2> | ALL  | NULL          | NULL       | NULL    |     | 10     |
        2  | DERIVED     | votes      | ref  | VotingPost    | VotingPost | 4       |     | 319881 | Using where; Using index; Using temporary; Using filesort
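
    One way to fix the correctness issue is to keep hidden posts out of the subquery before the LIMIT is applied, by joining back to posts inside it; the outer LEFT JOIN then becomes a plain JOIN, since only the ten ranked posts are wanted. A sketch of that reworked query, shown here through JDBC (the connection details are placeholders; the SQL itself is the point):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class TopPosts {
            public static void main(String[] args) throws Exception {
                Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/forum", "user", "password"); // placeholders
                // Filter hidden posts *inside* the subquery so the LIMIT only ranks visible posts.
                String sql =
                      "SELECT p.id, p.body, v.yes_count "
                    + "FROM posts p "
                    + "JOIN (SELECT vt.post_id, COUNT(*) AS yes_count "
                    + "      FROM votes vt "
                    + "      JOIN posts p2 ON p2.id = vt.post_id AND p2.is_hidden = 0 "
                    + "      WHERE vt.vote_type_id = 1 "
                    + "      GROUP BY vt.post_id "
                    + "      ORDER BY yes_count DESC "
                    + "      LIMIT 10) v ON v.post_id = p.id "
                    + "ORDER BY v.yes_count DESC";
                PreparedStatement ps = conn.prepareStatement(sql);
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + " - " + rs.getInt("yes_count"));
                }
                rs.close();
                ps.close();
                conn.close();
            }
        }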

    Read the article

  • How to make a small engine like Wolfram|Alpha?

    - by Koning WWWWWWWWWWWWWWWWWWWWWWW
    Let's say I have three models/tables: operating_systems, words, and programming_languages:

        # operating_systems
        name:string      created_by:string    family:string
        Windows          Microsoft            MS-DOS
        Mac OS X         Apple                UNIX
        Linux            Linus Torvalds       UNIX
        UNIX             AT&T                 UNIX

        # words
        word:string      defenitions:string
        window           (serialized hash of defenitions)
        hello            (serialized hash of defenitions)
        UNIX             (serialized hash of defenitions)

        # programming_languages
        name:string      created_by:string    example_code:text
        C++              Bjarne Stroustrup    #include <iostream> etc...
        HelloWorld       Jeff Skeet           h
        AnotherOne       Jon Atwood           imports 'SORULEZ.cs' etc...

    When a user searches "hello", the system shows the definitions of 'hello'. This is relatively easy to implement. However, when a user searches "UNIX", the engine must choose: word or operating_system. Also, when a user searches "windows" (with a small letter 'w'), the engine chooses word, but should also show "Assuming 'windows' is a word. Use as an <a href="etc..">operating system</a> instead." Can anyone point me in the right direction with parsing and choosing the topic of the search query? Thanks. Note: it doesn't need to be able to perform calculations as WA can do.
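
    A very rough sketch of the disambiguation step (Java, with in-memory sets standing in for the three tables): look the term up in every model, prefer a case-sensitive hit, and when a case-insensitive hit also exists in another model, emit the "Assuming ... use as ... instead" hint. The term 'windows' is added to the word list here only so the example from the question resolves.

        import java.util.Arrays;
        import java.util.HashSet;
        import java.util.LinkedHashMap;
        import java.util.Map;
        import java.util.Set;

        public class TopicChooser {
            // Stand-ins for the three tables: model name -> known terms.
            private final Map<String, Set<String>> models = new LinkedHashMap<String, Set<String>>();

            public TopicChooser() {
                models.put("word", new HashSet<String>(Arrays.asList("window", "windows", "hello", "UNIX")));
                models.put("operating system", new HashSet<String>(Arrays.asList("Windows", "Mac OS X", "Linux", "UNIX")));
                models.put("programming language", new HashSet<String>(Arrays.asList("C++", "HelloWorld", "AnotherOne")));
            }

            public void resolve(String query) {
                String exact = null, caseInsensitive = null;
                for (Map.Entry<String, Set<String>> model : models.entrySet()) {
                    for (String term : model.getValue()) {
                        if (term.equals(query) && exact == null) {
                            exact = model.getKey();                 // case-sensitive hit wins
                        } else if (term.equalsIgnoreCase(query) && caseInsensitive == null) {
                            caseInsensitive = model.getKey();       // remember the alternative
                        }
                    }
                }
                String chosen = (exact != null) ? exact : caseInsensitive;
                System.out.println("Interpreting '" + query + "' as: " + chosen);
                if (exact != null && caseInsensitive != null && !caseInsensitive.equals(exact)) {
                    System.out.println("Assuming '" + query + "' is a " + exact
                            + ". Use as " + caseInsensitive + " instead?");
                }
            }

            public static void main(String[] args) {
                TopicChooser chooser = new TopicChooser();
                chooser.resolve("hello");    // word, no ambiguity
                chooser.resolve("windows");  // word, with a hint to use it as an operating system instead
                chooser.resolve("UNIX");     // exact hit in two models: word chosen, operating system offered
            }
        }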

    Read the article

  • django powering multiple shops from one code base on a single domain

    - by imanc
    Hey, I am new to Django and Python and am trying to figure out how to modify an existing app to run multiple shops through a single domain. Django's sites middleware seems inappropriate in this particular case because it manages different domains, not sites run through the same domain, e.g.:

        domain.com/uk
        domain.com/us
        domain.com/es
        etc.

    Each site will need translated content and minor template changes. The solution needs to be flexible enough to allow for easy modification of templates. The forms will also need to vary a bit, e.g. minor variances in fields and validation for each country-specific shop. I am thinking along the lines of the following as a solution and would love some feedback from experienced django-ers. In short: the same codebase, but separate country-specific urls files, separate templates and a separate database.

    1. Create a middleware class that does IP localisation, determines the country based on the URL and creates a database connection, e.g. /au/ will point to the au-specific database and so on.
    2. In the root urls.py have routes that point to a separate country-specific routing file, e.g. (r'^au/', include('urls_au')), (r'^es/', include('urls_es')).
    3. Use a single template directory, but in that directory have a localised directory structure, e.g. /base.html and /uk/base.html, and write a custom template loader that looks for local templates first (or have a separate directory for each shop and set the template directory path in middleware).
    4. Use the Django internationalisation framework to manage translation strings throughout.
    5. Handle slight variances in forms and models (e.g. ZA has an ID field, France has 'door code' and 'floor', etc.). I am unsure how to handle these variations, but I suspect the tables will contain all fields but allow nulls, and the model will have all fields but allow nulls. The forms will be modified slightly for each shop.

    Anyway, I am keen to get feedback on the best way to go about achieving this multi-site solution. It seems like it would work, but it feels a bit "hackish" and I wonder if there's a more elegant way of getting this solution to work. Thanks, imanc

    Read the article

  • Why put a DAO layer over a persistence layer (like JDO or Hibernate)

    - by Todd Owen
    Data Access Objects (DAOs) are a common design pattern, and recommended by Sun. But the earliest examples of Java DAOs interacted directly with relational databases -- they were, in essence, doing object-relational mapping (ORM). Nowadays, I see DAOs on top of mature ORM frameworks like JDO and Hibernate, and I wonder if that is really a good idea. I am developing a web service using JDO as the persistence layer, and am considering whether or not to introduce DAOs. I foresee a problem when dealing with a particular class which contains a map of other objects:

        public class Book {
            // Book description in various languages, indexed by ISO language codes
            private Map<String, BookDescription> descriptions;
        }

    JDO is clever enough to map this to a foreign key constraint between the "BOOKS" and "BOOKDESCRIPTIONS" tables. It transparently loads the BookDescription objects (using lazy loading, I believe), and persists them when the Book object is persisted. If I were to introduce a "data access layer" and write a class like BookDao, and encapsulate all the JDO code within it, then wouldn't JDO's transparent loading of the child objects be circumventing the data access layer? For consistency, shouldn't all the BookDescription objects be loaded and persisted via some BookDescriptionDao object (or a BookDao.loadDescription method)? Yet refactoring in that way would make manipulating the model needlessly complicated.

    So my question is: what's wrong with calling JDO (or Hibernate, or whatever ORM you fancy) directly in the business layer? Its syntax is already quite concise, and it is datastore-agnostic. What is the advantage, if any, of encapsulating it in Data Access Objects?
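
    For context, a DAO over JDO is usually kept deliberately thin: it hides where the PersistenceManager comes from and how transactions and detaching are handled, while still letting the ORM do the transparent child loading. A minimal sketch (the Book class is the one above; detaching via detachCopy is one choice among several, not a requirement of the pattern):

        import javax.jdo.PersistenceManager;
        import javax.jdo.PersistenceManagerFactory;

        public class BookDao {
            private final PersistenceManagerFactory pmf;

            public BookDao(PersistenceManagerFactory pmf) {
                this.pmf = pmf;
            }

            // The DAO hides how a Book is obtained; the descriptions map is still
            // loaded lazily by JDO (add it to the fetch plan before detaching if
            // callers need it after pm.close()).
            public Book findById(Object id) {
                PersistenceManager pm = pmf.getPersistenceManager();
                try {
                    Book book = pm.getObjectById(Book.class, id);
                    return pm.detachCopy(book);   // hand a detached copy to the business layer
                } finally {
                    pm.close();
                }
            }

            // Persisting the Book cascades to the descriptions map, so no
            // separate BookDescriptionDao call is needed.
            public void save(Book book) {
                PersistenceManager pm = pmf.getPersistenceManager();
                try {
                    pm.currentTransaction().begin();
                    pm.makePersistent(book);
                    pm.currentTransaction().commit();
                } finally {
                    if (pm.currentTransaction().isActive()) {
                        pm.currentTransaction().rollback();
                    }
                    pm.close();
                }
            }
        }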

    Read the article

  • Doctrine 1.2: How do I prevent a constraint from being assigned to both sides of a one-to-one relationship?

    - by prodigitalson
    Is there a way to prevent Doctrine from assigning a constraint on both sides of a one-to-one relationship? I've tried moving the definition from one side to the other and using the owning side, but it still places a constraint on both tables, when I only want the parent table to have a constraint - i.e. it should be possible for the parent to not have an associated child. For example, I want essentially the following SQL schema:

        CREATE TABLE `parent_table` (
            `child_id` varchar(50) NOT NULL,
            `id` integer UNSIGNED NOT NULL auto_increment,
            PRIMARY KEY (`id`)
        );

        CREATE TABLE `child_table` (
            `id` integer UNSIGNED NOT NULL auto_increment,
            `child_id` varchar(50) NOT NULL,
            PRIMARY KEY (`id`),
            UNIQUE KEY (`child_id`),
            CONSTRAINT `parent_table_child_id_FK_child_table_child_id`
                FOREIGN KEY (`child_id`) REFERENCES `parent_table` (`child_id`)
        );

    However, I'm getting something like this:

        CREATE TABLE `parent_table` (
            `child_id` varchar(50) NOT NULL,
            `id` integer UNSIGNED NOT NULL auto_increment,
            PRIMARY KEY (`id`),
            CONSTRAINT `child_table_child_id_FK_parent_table_child_id`
                FOREIGN KEY (`child_id`) REFERENCES `child_table` (`child_id`)
        );

        CREATE TABLE `child_table` (
            `id` integer UNSIGNED NOT NULL auto_increment,
            `child_id` varchar(50) NOT NULL,
            PRIMARY KEY (`id`),
            UNIQUE KEY (`child_id`),
            CONSTRAINT `parent_table_child_id_FK_child_table_child_id`
                FOREIGN KEY (`child_id`) REFERENCES `parent_table` (`child_id`)
        );

    I could just remove the constraint manually, or modify my accessors to return/set a single entity in the collection (using a one-to-many), but it seems like there should be a built-in way to handle this. Also, I'm using Symfony 1.4.4 (PEAR installation ATM), in case it's an sfDoctrinePlugin issue and not necessarily Doctrine itself.

    Read the article

  • How do I make software that preserves database integrity and correctness?

    - by user287745
    I have made an application project in Visual Studio 2008 (C#, with SQL Server from Visual Studio 2008). The database has about 20 tables and many fields in each. I have made an interface for adding, deleting, editing and retrieving data according to the predefined needs of the users. Now I have to turn the project into software which I can deliver to my professor. That is, he can just double-click the icon and the software simply starts; no Visual Studio 2008 needed to start the debugging.

    The database will be on one powerful computer (dual core, latest everything, Windows XP) and the users will access it from other computers connected over the LAN. I am able to change the connection string to the shared database using Visual Studio 2008 and the debugger whenever the server changes, but how am I supposed to do that when it's delivered as software? There will be many clients. Am I supposed to give the same software to everyone, so they all can connect to the database? How will the integrity and correctness of the database be maintained? I mean, the db.mdf file will be in a folder shared with read and write access, so it's not guaranteed that only one user will write at a time. Is there any coding needed to handle this?

    Read the article

  • Using A Local file path in a Streamwriter object ASP.Net

    - by Nick LaMarca
    I am trying to create a CSV file of some data. I have written a function that successfully does this:

        Private Sub CreateCSVFile(ByVal dt As DataTable, ByVal strFilePath As String)
            Dim sw As New StreamWriter(strFilePath, False)

            ''# First we will write the headers.
            ''# DataTable dt = m_dsProducts.Tables[0];
            Dim iColCount As Integer = dt.Columns.Count
            For i As Integer = 0 To iColCount - 1
                sw.Write(dt.Columns(i))
                If i < iColCount - 1 Then
                    sw.Write(",")
                End If
            Next
            sw.Write(sw.NewLine)

            ''# Now write all the rows.
            For Each dr As DataRow In dt.Rows
                For i As Integer = 0 To iColCount - 1
                    If Not Convert.IsDBNull(dr(i)) Then
                        sw.Write(dr(i).ToString())
                    End If
                    If i < iColCount - 1 Then
                        sw.Write(",")
                    End If
                Next
                sw.Write(sw.NewLine)
            Next
            sw.Close()
        End Sub

    The problem is that I am not using the StreamWriter object correctly for what I am trying to accomplish. Since this is an ASP.NET application, I need the user to pick a local file path to put the file in. If I pass any path to this function, it is going to try to write the file to the directory specified on the server where the code runs. I would like a prompt to pop up and let the user select a place on their local machine to put the file, something like this:

        Dim exData As Byte() = File.ReadAllBytes(Server.MapPath(eio))
        File.Delete(Server.MapPath(eio))
        Response.AddHeader("content-disposition", String.Format("attachment; filename={0}", fn))
        Response.ContentType = "application/x-msexcel"
        Response.BinaryWrite(exData)
        Response.Flush()
        Response.End()

    I am calling the first function like this:

        Dim emplTable As DataTable = SiteAccess.DownloadEmployee_H()
        CreateCSVFile(emplTable, "C:\\EmplTable.csv")

    where I don't want to have to specify the file location (because this will put the file on the server and not on a client machine), but rather let the user select the location on their client machine. Can someone help me put this together? Thanks in advance.

    Read the article

  • Adaptive user interface/environment algorithm

    - by WowtaH
    Hi all, I'm working on an information system (in C#) that (while my users use it) gathers statistical data on which pieces of information (tables & records) each user requests the most, and which parts of the interface he/she uses most. I'm using this statistical data to make the application adapt to the user's needs, both in the way the interface presents itself (e.g. tab/pane ordering) and in using the frequently viewed information to (e.g.) rank it higher in search results/suggestion lists.

    What I'm looking for is an algorithm/formula to determine the current 'hotness'/relevance of these objects for a specific user. A simple hit counter for each object won't be sufficient, because the user might view some information quite frequently for a period of time and then move on to the next, making the old information less relevant. So I think my algorithm also needs some sort of sliding/historical principle to account for the changing popularity of the objects in the application over time.

    So, the question is: does anybody have some sort of algorithm that accounts for that 'popularity over time'? Preferably with some explanation of the parameters :) Thanks!

    PS: I've looked at other posts like http://stackoverflow.com/questions/32397/popularity-algorithm but I couldn't quite port it to my specific case. Any help is appreciated.
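
    One common family of answers is an exponentially decayed counter: every hit adds 1, and the accumulated score loses half its value over a chosen half-life, so recent activity dominates older activity. A small sketch of the idea (Java rather than C#, and the one-hour half-life is only an illustrative parameter):

        public class DecayingScore {
            private final double halfLifeMillis;  // how long until an old hit counts half as much
            private double score = 0.0;
            private long lastUpdate;

            public DecayingScore(double halfLifeMillis, long now) {
                this.halfLifeMillis = halfLifeMillis;
                this.lastUpdate = now;
            }

            // Record one view/click at time 'now' (milliseconds).
            public synchronized void hit(long now) {
                decayTo(now);
                score += 1.0;
            }

            // Current relevance of the object for this user.
            public synchronized double current(long now) {
                decayTo(now);
                return score;
            }

            private void decayTo(long now) {
                double elapsed = now - lastUpdate;
                if (elapsed > 0) {
                    score *= Math.pow(0.5, elapsed / halfLifeMillis);
                    lastUpdate = now;
                }
            }

            public static void main(String[] args) {
                long now = System.currentTimeMillis();
                DecayingScore tabUsage = new DecayingScore(60.0 * 60.0 * 1000.0, now); // 1 hour half-life
                tabUsage.hit(now);
                tabUsage.hit(now + 1000);
                // A day later the old hits have decayed to almost nothing.
                System.out.println(tabUsage.current(now + 24L * 60 * 60 * 1000));
            }
        }

    The half-life is the main tuning knob: shorter makes the UI react faster to shifts in what the user looks at, longer keeps established favourites around.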

    Read the article

  • Minimum privileges to read SQL Jobs using SQL SMO

    - by Gustavo Cavalcanti
    I wrote an application that uses SQL SMO to find all SQL Servers, databases, jobs and job outcomes. The application is executed through a scheduled task using a local service account. This service account is local to the application server only and is not present on any SQL Server to be inspected. I am having problems getting information on jobs and job outcomes when connecting to the servers using a user with dbReader rights on the system tables. If we make the user sysadmin on the server it all works fine.

    My question to you is: what are the minimum privileges a local SQL Server user needs in order to connect to the server and inspect jobs/job outcomes using the SQL SMO API? I connect to each SQL Server by doing the following:

        var conn = new ServerConnection
        {
            LoginSecure = false,
            ApplicationName = "SQL Inspector",
            ServerInstance = serverInstanceName,
            ConnectAsUser = false,
            Login = user,
            Password = password
        };
        var smoServer = new Server(conn);

    I read the jobs by reading smoServer.JobServer.Jobs and the JobSteps property on each of these jobs. The variable smoServer is of type Microsoft.SqlServer.Management.Smo.Server. The user/password are those of a user found on each SQL Server to be inspected. If "user" is sysadmin on the SQL Server to be inspected all works OK, and likewise if we set ConnectAsUser to true and execute the scheduled task using my own credentials, which grant me sysadmin privileges on SQL Server per my Active Directory membership. Thanks!

    Read the article

  • problem with mysql character set & GWT

    - by Ehsan Khodarahmi
    Hi, I've a SmartGWT application which interacts with a MySQL database using RPC services. Suppose it is a simple form with a textbox and two buttons, save and load. My database & tables & all field collations are utf8_persian_ci. All Java source files & module HTML & XML files have been saved with the UTF-8 character set, & I also have a meta tag in the module HTML file which contains my form:

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    My application works correctly in Eclipse development mode & also on my local Tomcat server. Then I put it on the remote server (I compress it using jar.exe into a war file with the -cvf flag & then upload it using my server's Plesk control panel). In this mode, when I load data from a MySQL table (load a record from any table), the data loads with no problem, but when I want to save some data (in the Persian language), MySQL just writes some ? (question mark) characters in the corresponding table fields. Any idea?
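
    One thing that often differs between a local setup and a freshly deployed server is the character set negotiated on the JDBC connection itself. If the server-side RPC code opens its own JDBC connections, forcing UTF-8 there is worth trying; a sketch, where the URL, credentials and database name are placeholders:

        import java.sql.Connection;
        import java.sql.DriverManager;

        public class Db {
            public static Connection open() throws Exception {
                Class.forName("com.mysql.jdbc.Driver");
                // useUnicode / characterEncoding force the client<->server charset,
                // independent of the MySQL server's default on the remote machine.
                return DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/mydb"
                      + "?useUnicode=true&characterEncoding=UTF-8",
                        "user", "password");
            }
        }

    If a connection pool or a DataSource is configured on the server instead, the same two parameters belong in its JDBC URL.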

    Read the article

  • How to combine a Distance and Keyword SQL query?

    - by Jason
    Hi folks, I have tables in my database called "points" and "category". A user will input info into both a location input and a keyword input text box. Then I want to find points in my table where the keyword matches either the "title" field in the points table or the "category", but which are within a certain distance from the user's location. I want to order the results by distance. Here are the two queries, which both work independently:

        $mysql = "SELECT *,
                  ( 3959 * acos( cos( radians('$search_lat') ) * cos( radians( lat ) )
                    * cos( radians( longi ) - radians('$search_lng') )
                    + sin( radians('$search_lat') ) * sin( radians( lat ) ) ) ) AS distance
                  FROM points
                  HAVING distance < '$radius'";

        $mysql2 = "SELECT * FROM `points`
                   LEFT JOIN category USING ( category_id )
                   WHERE (point_title LIKE '%$esc_catsearch%' OR category.title LIKE '%$esc_catsearch%')";

    Here is what I tried:

        $sql_search = sprintf("SELECT *, point_id FROM points
                  WHERE point_title LIKE '%%%s%%'
                  UNION
                  SELECT *,
                  ( 3959 * acos( cos( radians('%s') ) * cos( radians( lat ) )
                    * cos( radians( longi ) - radians('%s') )
                    + sin( radians('%s') ) * sin( radians( lat ) ) ) ) AS distance
                  FROM points
                  HAVING distance < '%s'
                  ORDER BY distance
                  LIMIT %d , %d",
                  $esc_catsearch,
                  mysql_real_escape_string($search_lat),
                  mysql_real_escape_string($search_lng),
                  mysql_real_escape_string($search_lat),
                  mysql_real_escape_string($radius),
                  $offset, $rowsPerPage);

    But it tells me there is no known column "distance". If I remove the ORDER BY clause then it works, but I'm still not sure it's giving me the results I want. I also tried the query the other way around, with the distance search first, but that seems to ignore my keyword. Any thoughts would be much appreciated!
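
    For what it's worth, the two conditions usually end up in one query rather than a UNION: the keyword match goes in the WHERE clause, while the computed alias is filtered with HAVING (a MySQL extension, as in the first query above) and used for the ordering. A sketch of that shape, written as a parameterized JDBC query instead of PHP string interpolation; the column names follow the two queries above and the connection details are placeholders:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class NearbyKeywordSearch {
            public static void main(String[] args) throws Exception {
                Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/geo", "user", "password"); // placeholders
                String sql =
                      "SELECT points.*, category.title AS category_title, "
                    + "  (3959 * ACOS(COS(RADIANS(?)) * COS(RADIANS(lat)) "
                    + "   * COS(RADIANS(longi) - RADIANS(?)) "
                    + "   + SIN(RADIANS(?)) * SIN(RADIANS(lat)))) AS distance "
                    + "FROM points "
                    + "LEFT JOIN category USING (category_id) "
                    + "WHERE (point_title LIKE ? OR category.title LIKE ?) "
                    + "HAVING distance < ? "
                    + "ORDER BY distance "
                    + "LIMIT ?, ?";
                PreparedStatement ps = conn.prepareStatement(sql);
                double lat = 40.7, lng = -74.0, radius = 25;   // sample inputs
                String keyword = "%pizza%";
                ps.setDouble(1, lat);
                ps.setDouble(2, lng);
                ps.setDouble(3, lat);
                ps.setString(4, keyword);
                ps.setString(5, keyword);
                ps.setDouble(6, radius);
                ps.setInt(7, 0);    // offset
                ps.setInt(8, 10);   // rows per page
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    System.out.println(rs.getString("point_title") + " @ " + rs.getDouble("distance"));
                }
                rs.close();
                ps.close();
                conn.close();
            }
        }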

    Read the article

  • Model login constraints based on time

    - by DaDaDom
    Good morning, for an existing web application I need to implement "time-based login constraints". That means that for each user (later maybe for each group) I can define time slots when they are (or are not) allowed to log in to the system. As all data for the application is stored in database tables, I need to model this idea in that way. My first approach, which I will try to explain here:

    1. Create a tree of login constraints (called "timeslots") with the main "categories", like "workday", "weekend", "public holiday", etc. on the top level, which are in a "sorted" order (meaning "public holiday" has a higher priority than "weekday").
    2. For each top-level node create subnodes which have a finer timespan, like "monday", "tuesday", ...
    3. Below that, create an "hour" level: 0, 1, 2, ..., 23. No further detail is necessary.
    4. Set every member to "allowed" by default.
    5. For every member of the system create a 1:n relationship member:timeslots which defines constraints, e.g. member A may have the rules A:monday -> forbidden and A:tuesday -> forbidden.
    6. Do a depth-first search at every login and check whether the member has a constraint.

    Why a depth-first search? Well, I thought that a member may have the rules A:monday -> forbidden, A:monday-10 -> allowed, A:monday-11 -> allowed, so a login on Monday at 12:30 would fail, but one at 10:30 would succeed.

    For performance reasons I could break the relational database paradigm and set a flag on every entry in the member-to-timeslots table which is set to true if the member has information set for "finer" timeslots, but that's a second step. Is this model in principle a good idea? Are there existing models? Thanks.
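
    A sketch of the lookup side of that model (Java; the rules map keyed by paths like "workday/monday/10" is a stand-in for the member-to-timeslots rows): check the most specific level first and fall back to coarser levels, defaulting to allowed.

        import java.util.Calendar;
        import java.util.HashMap;
        import java.util.Map;

        public class LoginConstraintChecker {
            // Stand-in for one member's member-to-timeslots rows: path -> allowed/forbidden.
            private final Map<String, Boolean> rules = new HashMap<String, Boolean>();

            public void addRule(String timeslotPath, boolean allowed) {
                rules.put(timeslotPath, allowed);
            }

            // Most specific rule wins: hour level, then day level, then category level.
            public boolean isLoginAllowed(Calendar when) {
                String category = categoryOf(when);
                String day = dayOf(when);
                int hour = when.get(Calendar.HOUR_OF_DAY);
                String[] paths = {
                    category + "/" + day + "/" + hour,
                    category + "/" + day,
                    category
                };
                for (String path : paths) {
                    Boolean allowed = rules.get(path);
                    if (allowed != null) return allowed.booleanValue();
                }
                return true; // default: allowed
            }

            private String categoryOf(Calendar when) {
                // Public holidays would need their own table; weekend/workday is derivable.
                int dow = when.get(Calendar.DAY_OF_WEEK);
                return (dow == Calendar.SATURDAY || dow == Calendar.SUNDAY) ? "weekend" : "workday";
            }

            private String dayOf(Calendar when) {
                String[] names = {"sunday", "monday", "tuesday", "wednesday",
                                  "thursday", "friday", "saturday"};
                return names[when.get(Calendar.DAY_OF_WEEK) - 1];
            }

            public static void main(String[] args) {
                LoginConstraintChecker checker = new LoginConstraintChecker();
                checker.addRule("workday/monday", false);     // Mondays forbidden...
                checker.addRule("workday/monday/10", true);   // ...except 10:00-10:59
                Calendar loginTime = Calendar.getInstance();  // use the actual login time in a real check
                System.out.println(checker.isLoginAllowed(loginTime));
            }
        }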

    Read the article
