Search Results

Search found 31319 results on 1253 pages for 'source engine'.

Page 358/1253 | < Previous Page | 354 355 356 357 358 359 360 361 362 363 364 365  | Next Page >

  • how to use an mdf in App_Data with shared hosting

    - by name
    If I create a website that uses an .mdf file in App_Data with the connection string: Server=.\SQLExpress;AttachDbFilename=|DataDirectory|mydbfile.mdf;Database=dbname; Trusted_Connection=Yes; what do I need to do to run the site in a shared hosting environment? Do I need to copy the contents of my .mdf into my host's main SQL Server instance? Is there a way to use my host's non-Express SQL Server engine and still keep the .mdf in App_Data?

    Read the article

  • Sqlite / SQLAlchemy: how to enforce Foreign Keys?

    - by Nick Perkins
    The new version of SQLite can enforce foreign key constraints, but for the sake of backwards compatibility you have to turn it on for each database connection separately: sqlite> PRAGMA foreign_keys = ON; I am using SQLAlchemy -- how can I make sure this always gets turned on? What I have tried is this: engine = sqlalchemy.create_engine('sqlite:///:memory:', echo=True) engine.execute('pragma foreign_keys=on') ...but it is not working. What am I missing?
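    A likely explanation is that engine.execute() checks out one pooled connection, sets the PRAGMA on it, and returns it, while later queries may run on other connections. A minimal sketch (assuming SQLAlchemy's event API, version 0.7 or later) registers a listener that issues the PRAGMA on every new DBAPI connection:

        from sqlalchemy import create_engine, event

        engine = create_engine('sqlite:///:memory:', echo=True)

        @event.listens_for(engine, "connect")
        def _enable_sqlite_fks(dbapi_connection, connection_record):
            # runs once per new DBAPI connection handed out by the pool
            cursor = dbapi_connection.cursor()
            cursor.execute("PRAGMA foreign_keys=ON")
            cursor.close()

    With this in place, every connection the engine opens has foreign key enforcement turned on, so the setting survives pool recycling.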

    Read the article

  • Post-loading : check if an image is in the browser cache

    - by Mathieu
    Short version: Is there an equivalent of navigator.mozIsLocallyAvailable that works in all browsers, or an alternative? Long version :) Here is my situation: I want to implement an HtmlHelper extension for ASP.NET MVC that handles image post-loading easily (using jQuery). So I render the page with empty image sources and put the real source in the "alt" attribute. I insert the image sources after the "window.onload" event, and it works great. I did something like this: $(window).bind('load', function() { var plImages = $(".postLoad"); plImages.each(function() { $(this).attr("src", $(this).attr("alt")); }); }); The problem is: after the first load, the post-loaded images are cached. But if the page takes 10 seconds to load, the cached post-loaded images are still only displayed after those 10 seconds. So I want to set the image sources on the "document.ready" event if the image is cached, to display them immediately. I found the function navigator.mozIsLocallyAvailable to check whether an image is in the cache. Here is what I've done with jQuery: //specify cached image sources on dom ready $(document).ready(function() { var plImages = $(".postLoad"); plImages.each(function() { var source = $(this).attr("alt") var disponible = navigator.mozIsLocallyAvailable(source, true); if (disponible) $(this).attr("src", source); }); }); //specify uncached image sources after page loading $(window).bind('load', function() { var plImages = $(".postLoad"); plImages.each(function() { if ($(this).attr("src") == "") $(this).attr("src", $(this).attr("alt")); }); }); It works in Mozilla's DOM but it doesn't work in any other browser. I tried navigator.isLocallyAvailable: same result. Is there any alternative?

    Read the article

  • How to optimize this mysql query - explain output included

    - by Sandeepan Nath
    This is the query (basically a search query, based on tags): select SUM(DISTINCT(ttagrels.id_tag in (2105,2120,2151,2026,2046) )) as key_1_total_matches, td.*, u.* from Tutors_Tag_Relations AS ttagrels Join Tutor_Details AS td ON td.id_tutor = ttagrels.id_tutor JOIN Users as u on u.id_user = td.id_user where (ttagrels.id_tag in (2105,2120,2151,2026,2046)) group by td.id_tutor HAVING key_1_total_matches = 1 And the following is the database dump needed to execute this query: CREATE TABLE IF NOT EXISTS `Users` ( `id_user` int(10) unsigned NOT NULL auto_increment, `id_group` int(11) NOT NULL default '0', PRIMARY KEY (`id_user`), KEY `Users_FKIndex1` (`id_group`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=730 ; INSERT INTO `Users` (`id_user`, `id_group`) VALUES (303, 1); CREATE TABLE IF NOT EXISTS `Tutor_Details` ( `id_tutor` int(10) unsigned NOT NULL auto_increment, `id_user` int(10) NOT NULL default '0', PRIMARY KEY (`id_tutor`), KEY `Users_FKIndex1` (`id_user`), KEY `id_user` (`id_user`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=58 ; INSERT INTO `Tutor_Details` (`id_tutor`, `id_user`) VALUES (26, 303); CREATE TABLE IF NOT EXISTS `Tags` ( `id_tag` int(10) unsigned NOT NULL auto_increment, `tag` varchar(255) default NULL, PRIMARY KEY (`id_tag`), UNIQUE KEY `tag` (`tag`), KEY `id_tag` (`id_tag`), KEY `tag_2` (`tag`), KEY `tag_3` (`tag`), KEY `tag_4` (`tag`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2957 ; INSERT INTO `Tags` (`id_tag`, `tag`) VALUES (2026, 'Brendan.\nIn'), (2046, 'Brendan.'), (2105, 'Brendan'), (2120, 'Brendan''s'), (2151, 'Brendan)'); CREATE TABLE IF NOT EXISTS `Tutors_Tag_Relations` ( `id_tag` int(10) unsigned NOT NULL default '0', `id_tutor` int(10) unsigned default NULL, `tutor_field` varchar(255) default NULL, `cdate` timestamp NOT NULL default CURRENT_TIMESTAMP, `udate` timestamp NULL default NULL, KEY `Tutors_Tag_Relations` (`id_tag`), KEY `id_tutor` (`id_tutor`), KEY `id_tag` (`id_tag`), KEY `id_tutor_2` (`id_tutor`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; INSERT INTO `Tutors_Tag_Relations` (`id_tag`, `id_tutor`, `tutor_field`, `cdate`, `udate`) VALUES (2105, 26, 'firstname', '2010-06-17 17:08:45', NULL); ALTER TABLE `Tutors_Tag_Relations` ADD CONSTRAINT `Tutors_Tag_Relations_ibfk_2` FOREIGN KEY (`id_tutor`) REFERENCES `Tutor_Details` (`id_tutor`) ON DELETE NO ACTION ON UPDATE NO ACTION, ADD CONSTRAINT `Tutors_Tag_Relations_ibfk_1` FOREIGN KEY (`id_tag`) REFERENCES `Tags` (`id_tag`) ON DELETE NO ACTION ON UPDATE NO ACTION; What does the query do? It searches for tutors whose name, biography, or similar fields contain "Brendan". The id_tags 2105,2120,2151,2026,2046 are simply the tags which are LIKE "%Brendan%". My questions are: 1. In the EXPLAIN of this query, the ref column shows NULL for ttagrels, but there are possible keys (Tutors_Tag_Relations, id_tutor, id_tag, id_tutor_2). So why is no key being used, and how can I make the query use one? Is that possible at all? 2. The other two tables, td and u, are using references. Is any indexing needed on those? I think not. See the EXPLAIN output here: http://www.test.examvillage.com/explain.png

    Read the article

  • MySQL InnoDB insertion is very slow

    - by dharmapurikar
    We use MySQL Server 5.1.43, 64-bit edition, with InnoDB as the engine. We have an SQL script which we execute every time we build the application. On an Ubuntu machine with MySQL Server and the InnoDB engine, the script takes about 55 seconds to complete. If I run the same script on OS X, it takes close to 3 minutes! Any ideas why OS X is so slow executing this script?

    Read the article

  • wsdl2java: Java heap space

    - by PiNY
    Hi! I'm working with web services. I have a WSDL file and I transformed it into two Java files with: wsdl2java -uri nameFile.wsdl One of the generated Java files is 87 KB. When I try to open it with Eclipse I get this error: java.lang.OutOfMemoryError: Java heap space at java.util.Arrays.copyOf(Unknown Source) at java.lang.AbstractStringBuilder.expandCapacity(Unknown Source) at java.lang.AbstractStringBuilder.append(Unknown Source) at java.lang.StringBuffer.append(Unknown Source) at org.eclipse.core.internal.filebuffers.FileStoreTextFileBuffer.setDocumentContent(FileStoreTextFileBuffer.java:586) at org.eclipse.core.internal.filebuffers.FileStoreTextFileBuffer.initializeFileBufferContent(FileStoreTextFileBuffer.java:352) at org.eclipse.core.internal.filebuffers.FileStoreFileBuffer.create(FileStoreFileBuffer.java:63) at org.eclipse.core.internal.filebuffers.TextFileBufferManager.connectFileStore(TextFileBufferManager.java:150) at org.eclipse.ui.editors.text.TextFileDocumentProvider.createFileInfo(TextFileDocumentProvider.java:567) at org.eclipse.jdt.internal.ui.javaeditor.CompilationUnitDocumentProvider.createFileInfo(CompilationUnitDocumentProvider.java:969) at org.eclipse.ui.editors.text.TextFileDocumentProvider.connect(TextFileDocumentProvider.java:478) at org.eclipse.jdt.internal.ui.javaeditor.CompilationUnitDocumentProvider.connect(CompilationUnitDocumentProvider.java:1229) at org.eclipse.ui.texteditor.AbstractTextEditor.doSetInput(AbstractTextEditor.java:4056) at org.eclipse.ui.texteditor.StatusTextEditor.doSetInput(StatusTextEditor.java:217) at org.eclipse.ui.texteditor.AbstractDecoratedTextEditor.doSetInput(AbstractDecoratedTextEditor.java:1444) at org.eclipse.jdt.internal.ui.javaeditor.JavaEditor.internalDoSetInput(JavaEditor.java:2578) at org.eclipse.jdt.internal.ui.javaeditor.JavaEditor.doSetInput(JavaEditor.java:2551) at org.eclipse.jdt.internal.ui.javaeditor.CompilationUnitEditor.doSetInput(CompilationUnitEditor.java:1371) at org.eclipse.ui.texteditor.AbstractTextEditor$19.run(AbstractTextEditor.java:3043) at org.eclipse.jface.operation.ModalContext.runInCurrentThread(ModalContext.java:464) at org.eclipse.jface.operation.ModalContext.run(ModalContext.java:372) at org.eclipse.jface.window.ApplicationWindow$1.run(ApplicationWindow.java:759) at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:70) at org.eclipse.jface.window.ApplicationWindow.run(ApplicationWindow.java:756) at org.eclipse.ui.internal.WorkbenchWindow.run(WorkbenchWindow.java:2600) at org.eclipse.ui.texteditor.AbstractTextEditor.internalInit(AbstractTextEditor.java:3061) at org.eclipse.ui.texteditor.AbstractTextEditor.init(AbstractTextEditor.java:3088) at org.eclipse.ui.internal.EditorManager.createSite(EditorManager.java:798) at org.eclipse.ui.internal.EditorReference.createPartHelper(EditorReference.java:647) at org.eclipse.ui.internal.EditorReference.createPart(EditorReference.java:465) at org.eclipse.ui.internal.WorkbenchPartReference.getPart(WorkbenchPartReference.java:595) at org.eclipse.ui.internal.EditorReference.getEditor(EditorReference.java:289) I want to know: 1) Is it a problem with the wsdl2java arguments? That is, is there some way to generate several smaller Java files instead of one big one? 2) Is it an Eclipse memory problem? How can I resolve it? Thanks

    Read the article

  • How do I set YUI2 paginator to select a page other than the first page?

    - by Jeremy Weathers
    I have a YUI DataTable (YUI 2.8.0r4) with AJAX pagination. Each row in the table links to a details/editing page and I want to link from that details page back to the list page that includes the record from the details page. So I have to a) offset the AJAX data correctly and b) tell YAHOO.widget.Paginator which page to select. According to my reading of the YUI API docs, I have to pass in the initialPage configuration option. I've attempted this, but it doesn't take (the data from AJAX is correctly offset, but the paginator thinks I'm on page 1, so clicking "next" takes me from e.g. page 6 to page 2. What am I not doing (or doing wrong)? Here's my DataTable building code: (function() { var columns = [ {key: "retailer", label: "Retailer", sortable: false, width: 80}, {key: "publisher", label: "Publisher", sortable: false, width: 300}, {key: "description", label: "Description", sortable: false, width: 300} ]; var source = new YAHOO.util.DataSource("/sales_data.json?"); source.responseType = YAHOO.util.DataSource.TYPE_JSON; source.responseSchema = { resultsList: "records", fields: [ {key: "url"}, {key: "retailer"}, {key: "publisher"}, {key: "description"} ], metaFields: { totalRecords: "totalRecords" } }; var LoadingDT = function(div, cols, src, opts) { LoadingDT.superclass.constructor.call( this, div, cols, src, opts); // hide the message tbody this._elMsgTbody.style.display = "none"; }; YAHOO.extend(LoadingDT, YAHOO.widget.DataTable, { showTableMessage: function(msg) { $('sales_table_overlay').clonePosition($('sales_table').down('table')). show(); }, hideTableMessage: function() { $('sales_table_overlay').hide(); } }); var table = new LoadingDT("sales_table", columns, source, { initialRequest: "startIndex=125&results=25", dynamicData: true, paginator: new YAHOO.widget.Paginator({rowsPerPage: 25, initialPage: 6}) }); table.handleDataReturnPayload = function(oRequest, oResponse, oPayload) { oPayload.totalRecords = oResponse.meta.totalRecords; return oPayload; }; })();

    Read the article

  • Foreign key, local key?

    - by user198729
    CREATE TABLE products ( id integer unsigned auto_increment primary key ) ENGINE=INNODB; CREATE TABLE orders ( id integer PRIMARY KEY auto_increment, product_id integer unsigned, quantity integer, INDEX product_id_idx (product_id), FOREIGN KEY (product_id) REFERENCES products (id) ) ENGINE=INNODB; Here products and orders obviously have some kind of relation: a foreign key relation. But a coin has two sides, so I'm wondering: how do we say which table is on the foreign key side and which is on the local key side?

    Read the article

  • How to add objects to association in OnPreInsert, OnPreUpdate

    - by Dmitriy Nagirnyak
    Hi, I have an event listener (for Audit Logs) which needs to append audit log entries to the association of the object: public Company : IAuditable { // Other stuff removed for bravety IAuditLog IAuditable.CreateEntry() { var entry = new CompanyAudit(); this.auditLogs.Add(entry); return entry; } public virtual IEnumerable<CompanyAudit> AuditLogs { get { return this.auditLogs } } } The AuditLogs collection is mapped with cascading: public class CompanyMap : ClassMap<Company> { public CompanyMap() { // Id and others removed fro bravety HasMany(x => x.AuditLogs).AsSet() .LazyLoad() .Access.ReadOnlyPropertyThroughCamelCaseField() .Cascade.All(); } } And the listener just asks the auditable object to create log entries so it can update them: internal class AuditEventListener : IPreInsertEventListener, IPreUpdateEventListener { public bool OnPreUpdate(PreUpdateEvent ev) { var audit = ev.Entity as IAuditable; if (audit == null) return false; Log(audit); return false; } public bool OnPreInsert(PreInsertEvent ev) { var audit = ev.Entity as IAuditable; if (audit == null) return false; Log(audit); return false; } private static void LogProperty(IAuditable auditable) { var entry = auditable.CreateAuditEntry(); entry.CreatedAt = DateTime.Now; entry.Who = GetCurrentUser(); // Might potentially execute a query. // Also other information is set for entry here } } The problem with it though is that it throws TransientObjectException when commiting the transaction: NHibernate.TransientObjectException : object references an unsaved transient instance - save the transient instance before flushing. Type: PropConnect.Model.UserAuditLog, Entity: PropConnect.Model.UserAuditLog at NHibernate.Engine.ForeignKeys.GetEntityIdentifierIfNotUnsaved(String entityName, Object entity, ISessionImplementor session) at NHibernate.Type.EntityType.GetIdentifier(Object value, ISessionImplementor session) at NHibernate.Type.ManyToOneType.NullSafeSet(IDbCommand st, Object value, Int32 index, Boolean[] settable, ISessionImplementor session) at NHibernate.Persister.Collection.AbstractCollectionPersister.WriteElement(IDbCommand st, Object elt, Int32 i, ISessionImplementor session) at NHibernate.Persister.Collection.AbstractCollectionPersister.PerformInsert(Object ownerId, IPersistentCollection collection, IExpectation expectation, Object entry, Int32 index, Boolean useBatch, Boolean callable, ISessionImplementor session) at NHibernate.Persister.Collection.AbstractCollectionPersister.Recreate(IPersistentCollection collection, Object id, ISessionImplementor session) at NHibernate.Action.CollectionRecreateAction.Execute() at NHibernate.Engine.ActionQueue.Execute(IExecutable executable) at NHibernate.Engine.ActionQueue.ExecuteActions(IList list) at NHibernate.Engine.ActionQueue.ExecuteActions() at NHibernate.Event.Default.AbstractFlushingEventListener.PerformExecutions(IEventSource session) at NHibernate.Event.Default.DefaultFlushEventListener.OnFlush(FlushEvent event) at NHibernate.Impl.SessionImpl.Flush() at NHibernate.Transaction.AdoTransaction.Commit() As the cascading is set to All I expected NH to handle this. I also tried to modify the collection using state but pretty much the same happens. So the question is what is the last chance to modify object's associations before it gets saved? Thanks, Dmitriy.

    Read the article

  • Get Attribute value in ViewEngine ASP.NET MVC 3

    - by Kushan Fernando
    I'm writing my own view engine. public class MyViewEngine : RazorViewEngine { public override ViewEngineResult FindView(ControllerContext controllerContext, string viewName, string masterName, bool useCache) { // Here, how do I get attributes defined on top of the Action? } } The SO question "ASP.NET MVC Custom Attributes within Custom View Engine" above shows how to get attributes defined on top of the Controller, but I need to get the attributes defined on the Action.

    Read the article

  • How to minimize total cost of shortest path tree

    - by Michael
    I have a directed acyclic graph with positive edge-weights. It has a single source and a set of targets (vertices furthest from the source). I find the shortest paths from the source to each target. Some of these paths overlap. What I want is a shortest path tree which minimizes the total sum of weights over all edges. For example, consider two of the targets. Given all edge weights equal, if they share a single shortest path for most of their length, then that is preferable to two mostly non-overlapping shortest paths (fewer edges in the tree equals lower overall cost). Another example: two paths are non-overlapping for a small part of their length, with high cost for the non-overlapping paths, but low cost for the long shared path (low combined cost). On the other hand, two paths are non-overlapping for most of their length, with low costs for the non-overlapping paths, but high cost for the short shared path (also, low combined cost). There are many combinations. I want to find solutions with the lowest overall cost, given all the shortest paths from source to target. Does this ring any bells with anyone? Can anyone point me to relevant algorithms or analogous applications? Cheers!
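    To make the objective concrete, here is a minimal Python sketch (using networkx, with a small hypothetical DAG) that takes one particular choice of shortest paths and scores it by counting each edge in the union only once; the problem described above is then to choose, among all valid shortest paths per target, the combination that minimizes this score:

        import networkx as nx

        # hypothetical weighted DAG: one source "s", targets "t1" and "t2"
        G = nx.DiGraph()
        G.add_weighted_edges_from([
            ("s", "a", 1), ("s", "b", 1),
            ("a", "t1", 1), ("a", "t2", 1),
            ("b", "t2", 1),
        ])
        source, targets = "s", ["t1", "t2"]

        # one shortest path per target (Dijkstra handles the positive weights)
        paths = nx.single_source_dijkstra_path(G, source, weight="weight")

        # union of the chosen paths: shared edges are counted only once
        tree_edges = set()
        for t in targets:
            p = paths[t]
            tree_edges.update(zip(p, p[1:]))

        total_cost = sum(G[u][v]["weight"] for u, v in tree_edges)
        print(sorted(tree_edges), total_cost)

    In this toy graph, routing t2 through "a" (sharing the s-a edge with t1) gives a cheaper union than routing it through "b", even though both are shortest paths for t2; choosing between such equally short paths is exactly the decision the question is about.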

    Read the article

  • Problem with relative path to image in XAML?

    - by Giri
    I am trying to reference a PNG file in my application's working directory through XAML with the following: <Image Name="contactImage"> <Image.Source> <BitmapImage UriSource="/Images/contact.png" /> </Image.Source> </Image> Now in my code-behind I try to get the height of the image with contactImage.Source.Height This fails with System.IOException - cannot locate resource 'images/contact.png'. If I use something like PngBitmapDecoder p = new PngBitmapDecoder(new Uri("./Images/contact.png", UriKind.Relative), BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.Default); everything is happy. How can I reference an image in XAML with a path relative to the working directory of the app? BTW, this is being run on a remote machine (if that makes a difference). I have tried "./Images/contact.png" and ".\Images\contact.png" and several other combinations of back/forward slashes and dots. Here is the primary difference: any time the file is referenced in XAML, it shows up as pack://application:,,, blah blah blah; when I use the PngBitmapDecoder, it shows up correctly as "./Images/contact.png". How do I reference the image file in XAML and get it to show a source of "./Images/contact.png" instead of pack://application:,,, blah blah blah?

    Read the article

  • Django sphinx works only after app restart.

    - by Lhiash
    Hi, I've set up django-sphinx in my project, which works perfectly only for some time. Later it always returns empty result set. Surprisingly restarting django app fixes it. And search works again but again only for short time (or very limiter number of queries). Heres my sphinx.conf: source src_questions { # data source type = mysql sql_host = xxxxxx sql_user = xxxxxx #replace with your db username sql_pass = xxxxxx #replace with your db password sql_db = xxxxxx #replace with your db name # these two are optional sql_port = xxxxxx #sql_sock = /var/lib/mysql/mysql.sock # pre-query, executed before the main fetch query sql_query_pre = SET NAMES utf8 # main document fetch query sql_query = SELECT q.id AS id, q.title AS title, q.tagnames AS tags, q.html AS text, q.level AS level \ FROM question AS q \ WHERE q.deleted=0 \ # optional - used by command-line search utility to display document information sql_query_info = SELECT title, id, level FROM question WHERE id=$id sql_attr_uint = level } index questions { # which document source to index source = src_questions # this is path and index file name without extension # you may need to change this path or create this folder path = /home/rafal/core_index/index_questions # docinfo (ie. per-document attribute values) storage strategy docinfo = extern # morphology morphology = stem_en # stopwords file #stopwords = /var/data/sphinx/stopwords.txt # minimum word length min_word_len = 3 # uncomment next 2 lines to allow wildcard (*) searches min_infix_len = 1 enable_star = 1 # charset encoding type charset_type = utf-8 } # indexer settings indexer { # memory limit (default is 32M) mem_limit = 64M } # searchd settings searchd { # IP address on which search daemon will bind and accept # optional, default is to listen on all addresses, # ie. address = 0.0.0.0 address = 127.0.0.1 # port on which search daemon will listen port = 3312 # searchd run info is logged here - create or change the folder log = ../log/sphinx.log # all the search queries are logged here query_log = ../log/query.log # client read timeout, seconds read_timeout = 5 # maximum amount of children to fork max_children = 30 # a file which will contain searchd process ID pid_file = searchd.pid # maximum amount of matches this daemon would ever retrieve # from each index and serve to client max_matches = 1000 } and heres my search part from views.py: content = Question.search.query(keywords) if level: content = content.filter(level=level)#level is array of integers There are no errors in any logs, it just isnt returning any results. All help would be most appreciated.

    Read the article

  • Microsoft TTS (Text to Speech) Dat File Locations

    - by neddy
    Ok, so I've downloaded some TTS engines to replace the default Microsoft TTS engine and make my program sound a little more 'human'. Basically, I am wondering whereabouts the TTS engine files are stored on the local PC (Windows 7). The files I have are in .Dat format; does anyone have any idea where they should go to be registered as a voice for Text-to-Speech? Cheers.

    Read the article

  • searching for a programming platform with hot code swap

    - by Andreas
    I'm currently brainstorming the idea of how to upgrade a program while it is running. (Not while debugging; a "production" system.) One thing that is required for it is to actually submit the changed source code or compiled byte code into the running process. Pseudo code: var method = typeof(MyClass).GetMethod("Method1"); var content = // get it from a database (bytecode or source code): SELECT content FROM methods WHERE id=? AND version=? method.SetContent(content); At first, I want to get the system working without the complexity of object orientation. That leads to the following requirements: change the source code or byte code of a function, drop functions, add new functions, change the signature of a function. With .NET (and others) I could inject a class via an IoC container and could thus change the source code, but the loading would be cumbersome, because everything has to be in an Assembly or created via Emit. Maybe with Java this would be easier? The whole ClassLoader is replaceable, I think. With JavaScript I could achieve many of the goals: simply eval a new function (MyMethod_V25) and assign it to MyClass.prototype.MyMethod. I think one can also drop functions somehow with "del". Which general-purpose platform can handle such things?
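    For comparison, other dynamic languages handle this directly as well. Here is a minimal Python sketch of the pseudo code above: the replacement source is a plain string (standing in for the database row), compiled with exec, and rebound on the class, so existing instances immediately pick up the new behaviour:

        class MyClass:
            def method1(self):
                return "old behaviour"

        obj = MyClass()
        print(obj.method1())   # -> old behaviour

        # replacement source, e.g. fetched as a string from a database
        new_source = (
            "def method1(self):\n"
            "    return 'new behaviour'\n"
        )

        namespace = {}
        exec(new_source, namespace)             # compile the new function
        MyClass.method1 = namespace["method1"]  # hot-swap it on the class

        print(obj.method1())   # -> new behaviour, even for the existing instance

    Dropping a function is just del MyClass.method1, and adding a function or changing a signature works the same way as the swap above; the hard parts the question hints at (versioning, callers holding direct references, threads mid-call) are not addressed by this sketch.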

    Read the article

  • New NSData with range of old NSData maintaining bytes.

    - by umop
    I have a fairly large NSData (or NSMutableData if necessary) object from which I want to take a small chunk and leave the rest. Since I'm working with large amounts of NSData bytes, I don't want to make a big copy, but instead just truncate the existing bytes. Basically: NSData *source: <a few bytes I want to discard> + <big chunk of bytes I want to keep>; NSData *destination: <big chunk of bytes I want to keep>. There are truncation methods in NSMutableData, but they only truncate the end, whereas I want to truncate the beginning. My thought is to do this with the methods -getBytes:range: and -initWithBytesNoCopy:length:freeWhenDone:. However, I'm trying to figure out how to manage memory with these. I'm guessing the process will be like this (I've placed ????s where I don't know what to do): void *buffer; // Get range of bytes [source getBytes:buffer range:NSMakeRange(myStart, myLength)]; // Somehow (m)alloc the memory which will be freed up in the following step ????? // Release the source, now that I've allocated the bytes [source release]; // Create a new data object, recycling the bytes so they don't have to be copied NSData *destination = [[NSData alloc] initWithBytesNoCopy:buffer length:myLength freeWhenDone:YES]; Thanks for the help!

    Read the article

  • Can this Query be corrected or different table structure needed? (database dumps provided)

    - by sandeepan
    This is a bit lengthy but I have provided sufficient details and kept things very clear. Please see if you can help. (I will surely accept answer if it solves my problem) I am sure a person experienced with this can surely help or suggest me to decide the tables structure. About the system:- There are tutors who create classes A tags based search approach is being followed Tag relations are created/edited when new tutors registers/edits profile data and when tutors create classes (this makes tutors and classes searcheable).For simplicity, let us consider only tutor name and class name are the fields which are matched against search keywords. In this example, I am considering - tutor "Sandeepan Nath" has created a class called "first class" tutor "Bob Cratchit" has created a class called "new class" Desired search results- AND logic to be appied on the search keywords and match against class and tutor data(class name + tutor name), in other words, All those classes be shown such that all the search terms are present in the class name or its tutor name. Example to be clear - Searching "first class" returns class with id_wc = 1. Working Searching "Sandeepan class" should also return class with id_wc = 1. Not working in System 2. Problem with profile editing and searching To tell in one sentence, I am facing a conflict between the ease of profile edition (edition of tag relations when tutor profiles are edited) and the ease of search logic. In the beginning, we had one table structure and search was easy but tag edition logic was very clumsy and unmaintainable(Check System 1 in the section below) . So we created separate tag relations tables to make profile edition simpler but search has become difficult. Please dump the tables so that you can run the search query I have given below and see the results. System 1 (previous system - search easy - profile edition difficult):- Only one table called All_Tag_Relations table had the all the tag relations. The tags table below is common to both systems 1 and 2. CREATE TABLE IF NOT EXISTS `all_tag_relations` ( `id_tag_rel` int(10) NOT NULL AUTO_INCREMENT, `id_tag` int(10) unsigned NOT NULL DEFAULT '0', `id_tutor` int(10) DEFAULT NULL, `id_wc` int(10) unsigned DEFAULT NULL, PRIMARY KEY (`id_tag_rel`), KEY `All_Tag_Relations_FKIndex1` (`id_tag`), KEY `id_wc` (`id_wc`), KEY `id_tag` (`id_tag`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; INSERT INTO `all_tag_relations` (`id_tag_rel`, `id_tag`, `id_tutor`, `id_wc`) VALUES (1, 1, 1, NULL), (2, 2, 1, NULL), (3, 1, 1, 1), (4, 2, 1, 1), (5, 3, 1, 1), (6, 4, 1, 1), (7, 6, 2, NULL), (8, 7, 2, NULL), (9, 6, 2, 2), (10, 7, 2, 2), (11, 5, 2, 2), (12, 4, 2, 2); CREATE TABLE IF NOT EXISTS `tags` ( `id_tag` int(10) unsigned NOT NULL AUTO_INCREMENT, `tag` varchar(255) DEFAULT NULL, PRIMARY KEY (`id_tag`), UNIQUE KEY `tag` (`tag`), KEY `id_tag` (`id_tag`), KEY `tag_2` (`tag`), KEY `tag_3` (`tag`), KEY `tag_4` (`tag`), FULLTEXT KEY `tag_5` (`tag`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=8 ; INSERT INTO `tags` (`id_tag`, `tag`) VALUES (1, 'Sandeepan'), (2, 'Nath'), (3, 'first'), (4, 'class'), (5, 'new'), (6, 'Bob'), (7, 'Cratchit'); Please note that for every class, the tag rels of its tutor have to be duplicated. Example, for class with id_wc=1, the tag rel records with id_tag_rel = 3 and 4 are actually extras if you compare with the tag rel records with id_tag_rel = 1 and 2. 
System 2 (present system - profile edition easy, search difficult) Two separate tables Tutors_Tag_Relations and Webclasses_Tag_Relations have the corresponding tag relations data (Please dump into a separate database)- CREATE TABLE IF NOT EXISTS `tutors_tag_relations` ( `id_tag_rel` int(10) NOT NULL AUTO_INCREMENT, `id_tag` int(10) unsigned NOT NULL DEFAULT '0', `id_tutor` int(10) DEFAULT NULL, PRIMARY KEY (`id_tag_rel`), KEY `All_Tag_Relations_FKIndex1` (`id_tag`), KEY `id_tag` (`id_tag`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; INSERT INTO `tutors_tag_relations` (`id_tag_rel`, `id_tag`, `id_tutor`) VALUES (1, 1, 1), (2, 2, 1), (3, 6, 2), (4, 7, 2); CREATE TABLE IF NOT EXISTS `webclasses_tag_relations` ( `id_tag_rel` int(10) NOT NULL AUTO_INCREMENT, `id_tag` int(10) unsigned NOT NULL DEFAULT '0', `id_tutor` int(10) DEFAULT NULL, `id_wc` int(10) DEFAULT NULL, PRIMARY KEY (`id_tag_rel`), KEY `webclasses_Tag_Relations_FKIndex1` (`id_tag`), KEY `id_wc` (`id_wc`), KEY `id_tag` (`id_tag`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; INSERT INTO `webclasses_tag_relations` (`id_tag_rel`, `id_tag`, `id_tutor`, `id_wc`) VALUES (1, 3, 1, 1), (2, 4, 1, 1), (3, 5, 2, 2), (4, 4, 2, 2); CREATE TABLE IF NOT EXISTS `tags` ( `id_tag` int(10) unsigned NOT NULL AUTO_INCREMENT, `tag` varchar(255) DEFAULT NULL, PRIMARY KEY (`id_tag`), UNIQUE KEY `tag` (`tag`), KEY `id_tag` (`id_tag`), KEY `tag_2` (`tag`), KEY `tag_3` (`tag`), KEY `tag_4` (`tag`), FULLTEXT KEY `tag_5` (`tag`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=8 ; INSERT INTO `tags` (`id_tag`, `tag`) VALUES (1, 'Sandeepan'), (2, 'Nath'), (3, 'first'), (4, 'class'), (5, 'new'), (6, 'Bob'), (7, 'Cratchit'); CREATE TABLE IF NOT EXISTS `all_tag_relations` ( `id_tag_rel` int(10) NOT NULL AUTO_INCREMENT, `id_tag` int(10) unsigned NOT NULL DEFAULT '0', `id_tutor` int(10) DEFAULT NULL, `id_wc` int(10) unsigned DEFAULT NULL, PRIMARY KEY (`id_tag_rel`), KEY `All_Tag_Relations_FKIndex1` (`id_tag`), KEY `id_wc` (`id_wc`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; insert into All_Tag_Relations select NULL,id_tag,id_tutor,NULL from Tutors_Tag_Relations; insert into All_Tag_Relations select NULL,id_tag,id_tutor,id_wc from Webclasses_Tag_Relations; Here you can see how easily tutor first name can be edited only in one place. But search has become really difficult, so on being advised to use a Temporary table, I am creating one at every search request, then dumping all the necessary data and then searching from it, I am creating this All_Tag_Relations table at search run time. Here I am just dumping all the data from the two tables Tutors_Tag_Relations and Webclasses_Tag_Relations. But, I am still not able to get classes if I search with tutor name This is the query which searches "first class". Running them on both the systems shows correct results (returns the class with id_wc = 1). 
SELECT wtagrels.id_wc,SUM(DISTINCT( wtagrels.id_tag =3)) AS key_1_total_matches, SUM(DISTINCT( wtagrels.id_tag =4)) AS key_2_total_matches FROM all_tag_relations AS wtagrels WHERE ( wtagrels.id_tag =3 OR wtagrels.id_tag =4 ) GROUP BY wtagrels.id_wc HAVING key_1_total_matches = 1 AND key_2_total_matches = 1 LIMIT 0, 20 But, searching for "Sandeepan class" works only with the 1st system Here is the query which searches "Sandeepan class" SELECT wtagrels.id_wc,SUM(DISTINCT( wtagrels.id_tag =1)) AS key_1_total_matches, SUM(DISTINCT( wtagrels.id_tag =4)) AS key_2_total_matches FROM all_tag_relations AS wtagrels WHERE ( wtagrels.id_tag =1 OR wtagrels.id_tag =4 ) GROUP BY wtagrels.id_wc HAVING key_1_total_matches = 1 AND key_2_total_matches = 1 LIMIT 0, 20 Can anybody alter this query and somehow do a proper join or something to get correct results. That solves my problem in a nice way. As you can figure out, the reason why it does not work in system 2 is that in system 1, for every class, one additional tag relation linking class and tutor name is present. e.g. for class first class, (records with id_tag_rel 3 and 4) which returns the class on searching with tutor name. So, you see the trade-off between the search and profile edition difficulty with the two systems. How do I overcome both. I have to reach a conclusion soon. So far my reasoning is it is definitely not good from a code maintainability point of view to follow the single tag rel table structure of system one, because in a real system while editing a field like "tutor qualifications", there can be as many records in tag rels table as there are words in qualification of a tutor (one word in a field = one tag relation). Now suppose a tutor has 100 classes. When he edits his qualification, all the tag rel rows corresponding to him are deleted and then as many copies are to be created (as per the new qualification data) as there are classes. This becomes particularly difficult if later more searcheable fields are added. The code cannot be robust. Is the best solution to follow system 2 (edition has to be in one table - no extra work for each and every class) and somehow re-create the all_tag_relations table like system 1 (from the tables tutor_tag_relations and webclasses_tag_relations), creating the extra tutor tag rels for each and every class by a tutor (which is currently missing in system 2's temporary all_tag_relations table). That would be a time consuming logic script. I doubt that table can be recreated without resorting to PHP sript (mysql alone cannot do that). But the problem is that running all this at search time will make search definitely slow. So, how do such systems work? How are such situations handled? I thought about we can run a cron which initiates that PHP script, say every 1 minute and replaces the existing all_tag_relations table as per new tag rels from tutor_tag_relations and webclasses_tag_relations (replaces means creates a new table, deletes the original and renames the new one as all_tag_relations, otherwise search won't work during that period- or is there any better way to that?). Anyway, the result would be that any changes by tutors will reflect in search in the next 1 minute and not immediately. An alternateve would be to initate that PHP script every time a tutor edits his profile. But here again, since many users may edit their profiles concurrently, will the creation of so many tables be a burden and can mysql make the server slow? 
    Any help would be appreciated, and a working solution will be accepted as the answer. Thanks, Sandeepan

    Read the article

  • Webstart omits cookie, resulting in EOFException in ObjectInputStream when accessing Servlets?!

    - by Houtman
    Hi, my app is started both from the command line and by using a JNLP file. I'm running Java version 1.6.0_14. First I had the problem that I created the buffered input and output streams in the incorrect order. Found the solution here at StackOverflow. So starting from the command line works fine now. But when starting the app using Webstart, it ends here: java.io.EOFException at java.io.ObjectInputStream$PeekInputStream.readFully(Unknown Source) at java.io.ObjectInputStream$BlockDataInputStream.readShort(Unknown Source) at java.io.ObjectInputStream.readStreamHeader(Unknown Source) at java.io.ObjectInputStream.<init>(Unknown Source) at <..>remoting.thinclient.RemoteSocketChannel.<init>(RemoteSocketChannel.java:76) I found some posts regarding similar problems: one at ibm.com identifies a cookies problem, and one at bugs.sun.com identifies the problem as solved in 6u10(b12)? The first suggests that there is a problem in Webstart with cookies, though it doesn't seem to be acknowledged as a proper Java bug. Still, I am a bit lost in the solution provided regarding the cookies (the ibm.com link). Can anyone expand on the cookie solution? I can't find information on how the cookie is generated in the first place. Many thanks.

    Read the article

  • NSPredicate and arrays

    - by bend0r
    Hello, I've got a short question. I have an NSArray filled with Cars (Car inherits from NSObject). Car has the @property NSString *engine (and the matching @synthesize). Now I want to filter the array using NSPredicate: predicate = [NSPredicate predicateWithFormat:[NSString stringWithFormat:@"(engine like %@)", searchText]]; newArray = [ArrayWithCars filteredArrayUsingPredicate:predicate]; This throws a valueForUndefinedKey error. Is the predicateWithFormat correct? Thanks for your responses.

    Read the article

  • SQL2k8 T-SQL: Output into XML file

    - by Nai
    I have two tables.

    Table Name: Graph
    UID1  UID2
    -----------
    12    23
    12    32
    41    51
    32    41

    Table Name: Profiles
    NodeID  UID  Name
    -----------------
    1       12   Robs
    2       23   Jones
    3       32   Lim
    4       41   Teo
    5       51   Zacks

    I want to get an XML file like this:

    <graph directed="0">
      <node id="1">
        <att name="UID" value="12"/>
        <att name="Name" value="Robs"/>
      </node>
      <node id="2">
        <att name="UID" value="23"/>
        <att name="Name" value="Jones"/>
      </node>
      <node id="3">
        <att name="UID" value="32"/>
        <att name="Name" value="Lim"/>
      </node>
      <node id="4">
        <att name="UID" value="41"/>
        <att name="Name" value="Teo"/>
      </node>
      <node id="5">
        <att name="UID" value="51"/>
        <att name="Name" value="Zacks"/>
      </node>
      <edge source="12" target="23" />
      <edge source="12" target="32" />
      <edge source="41" target="51" />
      <edge source="32" target="41" />
    </graph>

    Thanks very much!
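    The question asks for a T-SQL solution (FOR XML in SQL Server 2008 is the usual route), but purely as an illustration of how the two tables map onto the desired structure, here is a minimal Python sketch that builds the same document with xml.etree.ElementTree from hard-coded rows standing in for the query results:

        import xml.etree.ElementTree as ET

        graph_rows = [(12, 23), (12, 32), (41, 51), (32, 41)]           # Graph: (UID1, UID2)
        profile_rows = [(1, 12, "Robs"), (2, 23, "Jones"), (3, 32, "Lim"),
                        (4, 41, "Teo"), (5, 51, "Zacks")]                # Profiles: (NodeID, UID, Name)

        graph = ET.Element("graph", directed="0")
        for node_id, uid, name in profile_rows:
            node = ET.SubElement(graph, "node", id=str(node_id))
            ET.SubElement(node, "att", name="UID", value=str(uid))
            ET.SubElement(node, "att", name="Name", value=name)
        for uid1, uid2 in graph_rows:
            ET.SubElement(graph, "edge", source=str(uid1), target=str(uid2))

        ET.ElementTree(graph).write("graph.xml")

    In practice the two row lists would come from querying the Graph and Profiles tables; the element-per-row mapping is the same whether it is assembled here or inside a FOR XML query.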

    Read the article

  • vs10 not deploying all required files - then not over-writing updated files

    - by justSteve
    I'm in the habit of deploying to alternating folders (/inetpub/wwwroot/mySite and /inetpub/wwwroot/mySite2) so that if something unexpected happens with the deploy I can quickly swap back to a previous version just by changing the path in IIS. So I was deploying an MVC2 web app to an empty folder, figuring that VS would send up all the files it needs. Not even close. Initially, it didn't even upload a couple of required NHibernate DLLs. Later, after manually copying the files referenced in the thrown exceptions, I just copied all the files from the previous compile and then re-published over the top, expecting VS to overwrite the changed files. That failed too. No errors were reported by VS; it just failed to overwrite a number of pre-existing (but changed/updated) files. Hard to believe these kinds of errors (and the lack of feedback that errors were encountered) in a state-of-the-art tool like VS. Clearly, I'm doing something wrong. I'm using VisualSVN for source control and connect to my colocated server via a VPN-based mapped network drive (so I can use file system publishing); both of these can complicate file properties. VS08 had more choices for which files it would send up - I found I needed to use 'All files in source' on an initial deployment, then 'Replace Matching'. If I chose 'delete all existing...' I'd be back to square 1 and have to deploy with 'All files in source project folder'. But VS10 doesn't have 'All files in source project folder'. I ended up manually copying the files, which seems not right in the extreme. Are these known issues others have to deal with? What's the best practice for deploying a web app? Thanks

    Read the article

  • Automated Oracle Schema Migration Tool

    - by Dave Jarvis
    What are some tools (commercial or OSS) that provide a GUI-based mechanism for creating schema upgrade scripts? To be clear, here are the tool responsibilities: Obtain connection to recent schema version (called "source"). Obtain connection to previous schema version (called "target"). Compare all schema objects between source and target. Create a script to make the target schema equivalent to the source schema ("upgrade script"). Create a rollback script to revert the source schema, used if the upgrade script fails (at any point). Create individual files for schema objects. The software must: Use ALTER TABLE instead of DROP and CREATE for renamed columns. Work with Oracle 10g or greater. Create scripts that can be batch executed (via command-line). Trivial installation process. (Bonus) Create scripts that can be executed with SQL*Plus. Here are some examples (from StackOverflow, ServerFault, and Google searches): Change Manager Oracle SQL Developer Software that does not meet the criteria, or cannot be evaluated, includes: TOAD PL/SQL Developer - Invalid SQL*Plus statements. Does not produce ALTER statements. SQL Fairy - No installer. Complex installation process. Poorly documented. DBDiff - Crippled data set evaluation, poor customer support. OrbitDB - Crippled data set evaluation. SchemaCrawler - No easily identifiable download version for Oracle databases. SQL Compare - SQL Server, not Oracle. LiquiBase - Requires changing the development process. No installer. Manually edit config files. Does not recognize its own baseUrl parameter. The only acceptable crippling of the evaluation version is by time. Crippling by restricting the number of tables and views hides possible bugs that are only visible in the software during the attempt to migrate hundreds of tables and views.

    Read the article

  • asp.net membership create user error

    - by nitroxn
    Hi, I encountered an issue while creating users using the asp.net user membership. The membership provider configuration is as follows- The error generated by ASP.net configuration application is as follows- An error was encountered. Please return to the previous page and try again. The following message may help in diagnosing the problem: Exception has been thrown by the target of an invocation. at System.RuntimeMethodHandle._InvokeMethodFast(IRuntimeMethodInfo method, Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeType typeOwner) at System.RuntimeMethodHandle.InvokeMethodFast(IRuntimeMethodInfo method, Object target, Object[] arguments, Signature sig, MethodAttributes methodAttributes, RuntimeType typeOwner) at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks) at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture) at System.Web.Administration.WebAdminMembershipProvider.CallWebAdminMembershipProviderHelperMethodOutParams(String methodName, Object[] parameters, Type[] paramTypes) at System.Web.Administration.WebAdminMembershipProvider.CreateUser(String username, String password, String email, String passwordQuestion, String passwordAnswer, Boolean isApproved, Object providerUserKey, MembershipCreateStatus& status) at System.Web.UI.WebControls.CreateUserWizard.AttemptCreateUser() at System.Web.UI.WebControls.CreateUserWizard.OnNextButtonClick(WizardNavigationEventArgs e) at System.Web.UI.WebControls.Wizard.OnBubbleEvent(Object source, EventArgs e) at System.Web.UI.WebControls.CreateUserWizard.OnBubbleEvent(Object source, EventArgs e) at System.Web.UI.WebControls.Wizard.WizardChildTable.OnBubbleEvent(Object source, EventArgs args) at System.Web.UI.Control.RaiseBubbleEvent(Object source, EventArgs args) at System.Web.UI.WebControls.Button.OnCommand(CommandEventArgs e) at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) at System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) at System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) at System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)

    Read the article
