Search Results

Search found 4640 results on 186 pages for 'unique'.


  • How to find maximal frequent itemsets from a large transactional data file

    - by ANIL MANE
    Hi, I have an input file containing a large number of transactions, like:

        Transaction ID    Items
        T1                Bread, Milk, Coffee, Juice
        T2                Juice, Milk, Coffee
        T3                Bread, Juice
        T4                Coffee, Milk
        T5                Bread, Milk
        T6                Coffee, Bread
        T7                Coffee, Bread, Juice
        T8                Bread, Milk, Juice
        T9                Milk, Bread, Coffee
        T10               Bread
        T11               Milk
        T12               Milk, Coffee, Bread, Juice

    I want the occurrence count of every unique item, like:

        Item Name    Count
        Bread        9
        Milk         8
        Coffee       7
        Juice        6

    From that I want to build an FP-tree, and by traversing this tree I want to extract the maximal frequent itemsets, as follows. The basic idea of the method is to dispose of nodes in each "layer" from the bottom up. The concept of a "layer" here is different from the common concept of a layer in a tree: the nodes in a "layer" are the nodes that correspond to the same item and sit on the same linked list from the head table. For the nodes in a "layer", the NBN method is used to dispose of the nodes from left to right along the linked list. To use the NBN method, two extra fields are added to each node in the ordered FP-tree: the field tag of node N stores whether N is a maximal frequent itemset, and the field count' stores the support count information of the nodes to its left.

    In the figure, the first node to be disposed of is "juice:2". If min_sup is less than or equal to 2, then "bread, milk, coffee, juice" is a maximal frequent itemset: first output juice:2 and set the tag field of "coffee:3" to false (the tag field of each node is true initially). Next, check whether each of the four "juice:1" itemsets to the right is a subset of juice:2; if the itemset that a "juice:1" node corresponds to is a subset of juice:2, set the tag field of that node to false. In the subsequent processing, when the tag field of the node being disposed of is false, we can skip the node after the same tagging. If min_sup is greater than 2, then check whether each of the four "juice:1" nodes to the right is a subset of juice:2; if so, set the count' field of the node to the sum of the former count' and 2. After all the "juice" nodes are disposed of, begin to dispose of the node "coffee:3".

    Any suggestions or available source code welcome. Thanks in advance.
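    A minimal sketch in Python of the first two steps - counting item occurrences, then finding the maximal frequent itemsets by brute force, which is fine at this toy scale (the FP-tree only pays off on large files). The file format and the min_sup value are assumptions:

        from collections import Counter
        from itertools import combinations

        # Parse transactions: one per line, e.g. "T1 Bread, milk, coffee, juice" (assumed format)
        transactions = []
        with open("transactions.txt") as f:
            for line in f:
                _tid, _, rest = line.partition(" ")
                transactions.append({i.strip().capitalize() for i in rest.split(",") if i.strip()})

        # Step 1: occurrence count of every unique item
        counts = Counter(item for t in transactions for item in t)
        for item, n in counts.most_common():
            print(item, n)

        # Step 2: all frequent itemsets by brute force, keeping only the maximal ones
        min_sup = 2  # assumption
        items = sorted(counts)
        frequent = []
        for r in range(1, len(items) + 1):
            for combo in combinations(items, r):
                support = sum(1 for t in transactions if set(combo) <= t)
                if support >= min_sup:
                    frequent.append((frozenset(combo), support))

        # An itemset is maximal if it has no frequent proper superset
        maximal = [(s, sup) for s, sup in frequent
                   if not any(s < other for other, _ in frequent)]
        print(maximal)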

  • boost multi_index_container and erase performance

    - by rjoshi
    I have a boost multi_index_container declared as below, indexed by a hashed_unique id (unsigned long) and a hashed_non_unique transaction id (long). Insertion and retrieval of elements is fast, but when I delete elements it is much slower. I was expecting erase to be constant time, since the key is hashed. For example, erasing all elements from the container takes:

        10,000 elements: around 2.53927016 seconds
        15,000 elements: around 7.137100068 seconds
        20,000 elements: around 21.391720757 seconds

    Is it something I am missing, or is this expected behavior?

        class Session {
        public:
            Session() {
                // increment unique id
                static unsigned long counter = 0;
                boost::mutex::scoped_lock guard(mx);
                counter++;
                m_nId = counter;
            }
            unsigned long GetId() { return m_nId; }
            long GetTransactionHandle() { return m_nTransactionHandle; }
            ....
        private:
            unsigned long m_nId;
            long m_nTransactionHandle;
            boost::mutex mx;
            ....
        };

        typedef multi_index_container<
            Session*,
            indexed_by<
                hashed_unique< mem_fun<Session, unsigned long, &Session::GetId> >,
                hashed_non_unique< mem_fun<Session, unsigned long, &Session::GetTransactionHandle> >
            > // end indexed_by
        > SessionContainer;

        typedef SessionContainer::nth_index<0>::type SessionById;

        int main() {
            ...
            SessionContainer container;
            SessionById *pSessionIdView = &get<0>(container);
            unsigned counter = atoi(argv[1]);

            vector<Session*> vSes;
            vSes.reserve(counter);

            // insert
            for (unsigned i = 0; i < counter; i++) {
                Session *pSes = new Session();
                container.insert(pSes);
                vSes.push_back(pSes);
            }

            timespec ts;
            clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts);

            // erase
            for (unsigned i = 0; i < counter; i++) {
                pSessionIdView->erase(vSes[i]->GetId());
                delete vSes[i];
            }

            clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts);
            std::cout << "Total time taken for erase: " << ts.tv_sec << "." << ts.tv_nsec << "\n";

            return EXIT_SUCCESS;
        }
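    One thing worth checking, though it is only a guess from the code shown: the constructor never initializes m_nTransactionHandle, and if every Session carries the same transaction handle, they all land in a single group of the hashed_non_unique index, so each erase degenerates into a scan of that group and the total cost becomes quadratic. A toy model of that pathology in Python:

        import time
        from collections import defaultdict

        class NonUniqueHashIndex:
            """Toy hash index: non-unique key -> list of values (one bucket per key)."""
            def __init__(self):
                self.buckets = defaultdict(list)

            def insert(self, key, value):
                self.buckets[key].append(value)

            def erase(self, key, value):
                # Must walk/shift all entries sharing this key, like one huge bucket
                self.buckets[key].remove(value)

        for n in (10_000, 20_000):
            idx = NonUniqueHashIndex()
            for i in range(n):
                idx.insert(0, i)  # every element has the same (uninitialized) key
            start = time.perf_counter()
            for i in range(n):
                idx.erase(0, i)
            print(n, round(time.perf_counter() - start, 3), "seconds")  # superlinear growth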

  • Best Practices / Patterns for Enterprise Protection/Remediation of SSNs (Social Security Numbers)

    - by Erik Neu
    I am interested in hearing about enterprise solutions for SSN handling. (I looked pretty hard for any pre-existing post on SO, including reviewing the terrific SO automated "Related Questions" list, and did not find anything, so hopefully this is not a repeat.)

    First, I think it is important to enumerate the reasons systems/databases use SSNs (note: these are reasons for the de facto current state; I understand that many of them are not good reasons):

    1. Required for interaction with external entities. This is the most valid case, where external entities your system interfaces with require an SSN. This would typically be government, tax and financial.
    2. SSN is used to ensure system-wide uniqueness.
    3. SSN has become the default foreign key used internally within the enterprise, to perform cross-system joins.
    4. SSN is used for user authentication (e.g., log-on).

    The enterprise solution that seems optimal to me is to create a single SSN repository that is accessed by all applications needing to look up SSN info. This repository substitutes a globally unique, random nine-digit number (ASN) for the true SSN. I see many benefits to this approach. First of all, it is obviously highly backwards-compatible: all your systems "just" have to go through a major, synchronized, one-time data-cleansing exercise where they replace the real SSN with the alternate ASN. Also, it is centralized, so it minimizes the scope for inspection and compliance. (Obviously, as a negative, it also creates a single point of failure.)

    This approach would solve issues 2 and 3 without ever requiring lookups to get the real SSN. For issue 1, authorized systems could provide an ASN and be returned the real SSN. This would of course be done over secure connections, and the requesting systems would never persist the full SSN. Also, if the requesting system only needs the last four digits of the SSN, then that is all that would ever be passed. Issue 4 could be handled the same way as issue 1, though obviously the best thing would be to move away from having users supply an SSN for log-on.

    There are a couple of papers on this:

    UC Berkeley: http://bit.ly/bdZPjQ
    Oracle Vault: http://bit.ly/cikbi1
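    A minimal sketch of the substitution idea: a vault table mapping each real SSN to a random nine-digit ASN, plus a lookup restricted to authorized callers. Table, file and function names here are illustrative assumptions:

        import secrets
        import sqlite3

        conn = sqlite3.connect("ssn_vault.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS vault (
                            asn TEXT PRIMARY KEY,       -- substitute identifier
                            ssn TEXT UNIQUE NOT NULL)""")

        def tokenize(ssn):
            """Return the existing ASN for this SSN, or mint a new random one."""
            row = conn.execute("SELECT asn FROM vault WHERE ssn = ?", (ssn,)).fetchone()
            if row:
                return row[0]
            while True:
                asn = "%09d" % secrets.randbelow(10**9)  # random 9-digit number
                try:
                    conn.execute("INSERT INTO vault (asn, ssn) VALUES (?, ?)", (asn, ssn))
                    conn.commit()
                    return asn
                except sqlite3.IntegrityError:
                    continue  # ASN collision: try another

        def detokenize(asn, caller_is_authorized):
            """Issue 1: only authorized systems may recover the real SSN."""
            if not caller_is_authorized:
                raise PermissionError("caller may not read real SSNs")
            row = conn.execute("SELECT ssn FROM vault WHERE asn = ?", (asn,)).fetchone()
            return row[0] if row else None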

  • How to properly test Hibernate length restriction?

    - by Cesar
    I have a POJO mapped with Hibernate for persistence. In my mapping I specify the following:

        <class name="ExpertiseArea">
            <id name="id" type="string">
                <generator class="assigned" />
            </id>
            <version name="version" column="VERSION" unsaved-value="null" />
            <property name="name" type="string" unique="true" not-null="true" length="100" />
            ...
        </class>

    And I want to test that if I set a name longer than 100 characters, the change won't be persisted. I have a DAO where I save the entity with the following code:

        public T makePersistent(T entity) {
            transaction = getSession().beginTransaction();
            transaction.begin();
            try {
                getSession().saveOrUpdate(entity);
                transaction.commit();
            } catch (HibernateException e) {
                logger.debug(e.getMessage());
                transaction.rollback();
            }
            return entity;
        }

    Actually the code above is from a GenericDAO which all my DAOs inherit from. Then I created the following test:

        public void testNameLengthMustBe100orLess() {
            ExpertiseArea ea = new ExpertiseArea(
                    "1234567890" + "1234567890" + "1234567890" + "1234567890" +
                    "1234567890" + "1234567890" + "1234567890" + "1234567890" +
                    "1234567890" + "1234567890");
            assertTrue("Name should be 100 characters long", ea.getName().length() == 100);
            ead.makePersistent(ea);
            List<ExpertiseArea> result = ead.findAll();
            assertEquals("Size must be 1", result.size(), 1);
            ea.setName(ea.getName() + "1234567890");
            ead.makePersistent(ea);
            ExpertiseArea retrieved = ead.findById(ea.getId(), false);
            assertTrue("Both objects should be equal", retrieved.equals(ea));
            assertTrue("Name should be 100 characters long", (retrieved.getName().length() == 100));
        }

    The object is persisted OK. Then I set a name longer than 100 characters and try to save the changes, which fails:

        14:12:14,608 INFO  StringType:162 - could not bind value '12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890' to parameter: 2; data exception: string data, right truncation
        14:12:14,611 WARN  JDBCExceptionReporter:100 - SQL Error: -3401, SQLState: 22001
        14:12:14,611 ERROR JDBCExceptionReporter:101 - data exception: string data, right truncation
        14:12:14,614 ERROR AbstractFlushingEventListener:324 - Could not synchronize database state with session
        org.hibernate.exception.DataException: could not update: [com.exp.model.ExpertiseArea#33BA7E09-3A79-4C9D-888B-4263314076AF]
        // stack trace
        14:12:14,615 DEBUG GenericDAO:87 - could not update: [com.exp.model.ExpertiseArea#33BA7E09-3A79-4C9D-888B-4263314076AF]
        14:12:14,616 DEBUG JDBCTransaction:186 - rollback
        14:12:14,616 DEBUG JDBCTransaction:197 - rolled back JDBC Connection

    That's expected behavior. However, when I retrieve the persisted object to check that its name is still 100 characters long, the test fails. The way I see it, the retrieved object should have a name that is 100 characters long, given that the attempted update failed. The last assertion fails because the name is 110 characters long now, as if the ea instance had indeed been updated. What am I doing wrong here?
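    A plain-Python analogue of what is probably going on (the diagnosis is an assumption: a database rollback undoes the SQL, but it does not mutate the in-memory object, and a lookup through the same session will typically hand back that same cached instance rather than re-read the row):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE expertise (id INTEGER PRIMARY KEY, "
                     "name TEXT CHECK (length(name) <= 100))")

        entity = {"id": 1, "name": "x" * 100}
        conn.execute("INSERT INTO expertise VALUES (:id, :name)", entity)
        conn.commit()

        entity["name"] += "1234567890"  # mutate the in-memory object: now 110 chars
        try:
            conn.execute("UPDATE expertise SET name = :name WHERE id = :id", entity)
            conn.commit()
        except sqlite3.IntegrityError:
            conn.rollback()  # the row is untouched...

        # ...but the in-memory object still holds the rejected value
        print(len(entity["name"]))  # 110
        row = conn.execute("SELECT length(name) FROM expertise WHERE id = 1").fetchone()
        print(row[0])               # 100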

  • zend_navigation and modules

    - by Grant Collins
    Hi, I am developing an application at the moment with Zend and I have separated the app into modules. The default module is the main site, where non-logged-in users have free rein to look around. When you log in, depending on the user type you go to either module A or module B, which is controlled by simple ACLs. If you have access to module A you cannot access module B, and vice versa. Both user types can see the default module.

    Now I want to use Zend_Navigation to manage the entire application's navigation in all modules. I am not sure how to go about this, as all the examples that I have seen work within a single module or a very simple application. I've tried to make my navigation.xml file look like this:

        <configdata>
            <navigation>
                <label>Home</label>
                <controller>index</controller>
                <action>index</action>
                <module>default</module>
                <pages>
                    <tour>
                        <label>tour</label>
                        <controller>tour</controller>
                        <action>index</action>
                        <module>default</module>
                    </tour>
                    <blog>
                        <label>blog</label>
                        <url>http://blog.mysite.com</url>
                    </blog>
                    <support>
                        <label>Support</label>
                        <controller>support</controller>
                        <action>index</action>
                        <module>default</module>
                    </support>
                </pages>
            </navigation>
        </configdata>

    This is fine for the default module, but how would I add the other modules to this navigation structure? Each module has its own home page, among other pages. Would I be better off adding a unique navigation.xml file for each module, loaded in the preDispatch plugin that I have written to handle my ACLs? Or should I keep them in one massive navigation file? Any tips would be fantastic. Thanks, Grant

  • What is causing this SQL 2005 Primary Key Deadlock between two real-time bulk upserts?

    - by skimania
    Here's the scenario: I've got a table called MarketDataCurrent (MDC) that has live-updating stock prices.

    I've got one process called 'LiveFeed' which reads prices streaming from the wire, queues up inserts, and uses a 'bulk upload to temp table, then insert/update to MDC table' pattern (BulkUpsert). I've got another process which then reads this data, computes other data, and saves the results back into the same table, using a similar BulkUpsert stored proc. Thirdly, there is a multitude of users running a C# GUI polling the MDC table and reading updates from it.

    Now, during the day when the data is changing rapidly, things run pretty smoothly. But after market hours, we've recently started seeing an increasing number of deadlock exceptions coming out of the database; nowadays we see 10-20 a day. The important thing to note here is that these happen when the values are NOT changing.

    Here's all the relevant info. Table def:

        CREATE TABLE [dbo].[MarketDataCurrent](
            [MDID] [int] NOT NULL,
            [LastUpdate] [datetime] NOT NULL,
            [Value] [float] NOT NULL,
            [Source] [varchar](20) NULL,
            CONSTRAINT [PK_MarketDataCurrent] PRIMARY KEY CLUSTERED
            (
                [MDID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    (Stack Overflow won't let me post images until my reputation goes up to 10, so I'll add them as soon as you bump me up, hopefully as a result of this question: http://farm5.static.flickr.com/4049/4690759452_6b94ff7b34.jpg)

    I've got a SQL Profiler trace running, catching the deadlocks, and here's what all the graphs look like: http://farm5.static.flickr.com/4035/4690125231_78d84c9e15_b.jpg

    Process 258 is calling the following 'BulkUpsert' stored proc repeatedly, while 73 is calling the next one:

        ALTER proc [dbo].[MarketDataCurrent_BulkUpload]
            @updateTime datetime,
            @source varchar(10)
        as
        begin transaction

        update c with (rowlock)
        set LastUpdate = getdate(), Value = t.Value, Source = @source
        from MarketDataCurrent c
        INNER JOIN #MDTUP t ON c.MDID = t.mdid
        where c.lastUpdate < @updateTime
          and c.mdid not in (select mdid from MarketData
                             where LiveFeedTicker is not null
                               and PriceSource like 'LiveFeed.%')
          and c.value <> t.value

        insert into MarketDataCurrent with (rowlock)
        select MDID, getdate(), Value, @source
        from #MDTUP
        where mdid not in (select mdid from MarketDataCurrent with (nolock))
          and mdid not in (select mdid from MarketData
                           where LiveFeedTicker is not null
                             and PriceSource like 'LiveFeed.%')

        commit

    And the other one:

        ALTER PROCEDURE [dbo].[MarketDataCurrent_LiveFeedUpload]
        AS
        begin transaction

        -- Update existing mdid
        UPDATE c WITH (ROWLOCK)
        SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
        FROM MarketDataCurrent c
        INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid;

        -- Insert new MDID
        INSERT INTO MarketDataCurrent with (ROWLOCK)
        SELECT * FROM #TEMPTABLE2
        WHERE MDID NOT IN (SELECT MDID FROM MarketDataCurrent with (NOLOCK))

        -- Clean up the temp table
        DELETE #TEMPTABLE2

        commit

    To clarify, those temp tables are being created by the C# code on the same connection and are populated using the C# SqlBulkCopy class. To me it looks like it's deadlocking on the PK of the table, so I tried removing that PK and switching to a unique constraint instead, but that increased the number of deadlocks tenfold.
    I'm totally lost as to what to do about this situation and am open to just about any suggestion. HELP!!
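    Not a fix for the root cause, but a common mitigation while the locking order is being diagnosed is to have callers retry when chosen as the deadlock victim (SQL Server error 1205), since the server rolls the victim back cleanly. A sketch of such a wrapper, written in Python with pyodbc purely for illustration (the real callers here are C#):

        import time
        import pyodbc

        def run_with_deadlock_retry(conn, sql, params=(), attempts=5):
            """Retry a statement when chosen as a deadlock victim (error 1205)."""
            for attempt in range(attempts):
                try:
                    cur = conn.cursor()
                    cur.execute(sql, params)
                    conn.commit()
                    return
                except pyodbc.Error as e:
                    conn.rollback()
                    if "1205" in str(e) and attempt < attempts - 1:
                        time.sleep(0.05 * (2 ** attempt))  # exponential backoff
                        continue
                    raise

        # usage (proc name from the question; arguments are placeholders)
        # run_with_deadlock_retry(conn, "exec dbo.MarketDataCurrent_BulkUpload ?, ?",
        #                         (update_time, "calc"))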

  • How to design a generic database whose layout may change over time?

    - by mawg
    Here's a tricky one: how do I programmatically create and interrogate a database whose contents I can't really foresee?

    I am implementing a generic input form system. The user can create PHP forms with a WYSIWYG layout and use them for any purpose he wishes. He can also query the input. So, we have three stages:

    1. A form is designed and generated. This is a one-off procedure, although the form can be edited later. This designs the database.
    2. Someone, or several people, make use of the form - say for daily sales reports, stock keeping, payroll, etc. Their input to the forms is written to the database.
    3. Others, maybe management, can query the database and generate reports.

    Since these forms are generic, I can't predict the database structure - other than to say that it will reflect HTML form fields and consist of data input from a collection of edit boxes, memos, radio buttons and the like. Questions and remarks:

    A) How can I best structure the database, in terms of tables and columns? What about primary keys? My first thought was to use the control name to identify each column; then I realized that the user can edit the form and rename, so that maybe "name" becomes "employee" or "wages" becomes "salary". I am leaning towards a unique number for each.

    B) How best to key the rows? I was thinking of a timestamp to allow me to query, and a column for the row id from A).

    C) I have to handle column rename/insert/delete. For deletion, I am unsure whether to delete the data from the database. Even if the user is not inputting it from the form any more, he may wish to query what was previously entered. Or there may be some legal requirements to retain the data. Any gotchas in column rename/insert/delete?

    D) For the querying, I can have my PHP interrogate the database to get column names and generate a form with a list where each entry has a database column name, a checkbox to say if it should be used in the query and, based on column type, some selection criteria. That ought to be enough to build searches like "position = 'senior salesman' and salary > 50k".

    E) I probably have to generate some fancy charts (graphs, histograms, pie charts, etc.) for query results of numerical data over time. I need to find some good FOSS PHP for this.

    F) What else have I forgotten?

    This all seems very tricky to me, but I am a database n00b - maybe it is simple to you gurus?
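    One standard pattern for this kind of unforeseeable schema is entity-attribute-value (EAV): a fixed set of tables holding form definitions, fields keyed by a unique number (so renames are harmless, per A), and submitted values. A minimal sketch, with illustrative table names:

        import sqlite3

        conn = sqlite3.connect("forms.db")
        conn.executescript("""
            CREATE TABLE form  (form_id  INTEGER PRIMARY KEY, title TEXT);
            -- A) fields keyed by a unique number, so renaming the label is harmless
            CREATE TABLE field (field_id INTEGER PRIMARY KEY,
                                form_id  INTEGER REFERENCES form,
                                label    TEXT,
                                type     TEXT,                -- 'text', 'memo', 'radio', ...
                                deleted  INTEGER DEFAULT 0);  -- C) soft-delete, keep old data
            -- B) one row per submission, timestamped
            CREATE TABLE entry (entry_id INTEGER PRIMARY KEY,
                                form_id  INTEGER REFERENCES form,
                                created  TEXT DEFAULT CURRENT_TIMESTAMP);
            -- one value per (submission, field)
            CREATE TABLE entry_value (entry_id INTEGER REFERENCES entry,
                                      field_id INTEGER REFERENCES field,
                                      value    TEXT,
                                      PRIMARY KEY (entry_id, field_id));
        """)

        # D) a criterion like "position = 'senior salesman'" becomes a join on entry_value
        rows = conn.execute("""
            SELECT e.entry_id FROM entry e
            JOIN entry_value v ON v.entry_id = e.entry_id
            WHERE v.field_id = ? AND v.value = ?""", (1, "senior salesman")).fetchall()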

  • Hibernate mapping to object that already exists

    - by teehoo
    I have two classes, ServiceType and ServiceRequest. Every ServiceRequest must specify what kind of ServiceType it is. All ServiceTypes are predefined in the database, and a ServiceRequest is created at runtime by the client. Here are my .hbm files:

        <hibernate-mapping>
            <class dynamic-insert="false" dynamic-update="false" mutable="true"
                   name="xxx.model.entity.ServiceRequest" optimistic-lock="version"
                   polymorphism="implicit" select-before-update="false">
                <id column="USER_ID" name="id">
                    <generator class="native"/>
                </id>
                <property name="quantity">
                    <column name="quantity" not-null="true"/>
                </property>
                <many-to-one cascade="all" class="xxx.model.entity.ServiceType"
                             column="service_type" name="serviceType"
                             not-null="false" unique="false"/>
            </class>
        </hibernate-mapping>

    and

        <hibernate-mapping>
            <class dynamic-insert="false" dynamic-update="false" mutable="true"
                   name="xxx.model.entity.ServiceType" optimistic-lock="version"
                   polymorphism="implicit" select-before-update="false">
                <id column="USER_ID" name="id">
                    <generator class="native"/>
                </id>
                <property name="description">
                    <column name="description" not-null="false"/>
                </property>
                <property name="cost">
                    <column name="cost" not-null="true"/>
                </property>
                <property name="enabled">
                    <column name="enabled" not-null="true"/>
                </property>
            </class>
        </hibernate-mapping>

    When I run this, I get:

        com.mysql.jdbc.exceptions.MySQLIntegrityConstraintViolationException:
        Cannot add or update a child row: a foreign key constraint fails

    I think my problem is that when I create a new ServiceRequest object, ServiceType is one of its properties, and therefore when I save the ServiceRequest to the database, Hibernate attempts to insert the ServiceType object once again and finds that it already exists. If this is the case, how do I make Hibernate point to the existing ServiceType instead of trying to insert it again?

  • Segmenting a double array of labels

    - by Ami
    The Problem: I have a large 2D array populated with various labels. Each element (cell) in the array contains a set of labels, and some elements may be empty. I need an algorithm to cluster elements of the array into discrete segments. A segment is defined as a set of cells that are adjacent within the array and that share one label in common. (Diagonal adjacency doesn't count, and I'm not clustering empty cells.)

        |-------|-------|-------|
        | Jane  | Joe   |       |
        | Jack  | Jane  |       |
        |-------|-------|-------|
        | Jane  | Jane  |       |
        |       | Joe   |       |
        |-------|-------|-------|
        |       | Jack  | Jane  |
        |       | Joe   |       |
        |-------|-------|-------|

    In the above arrangement of labels distributed over nine elements, the largest cluster is the "Jane" cluster occupying the four upper-left cells.

    What I've Considered: I've considered iterating through every label of every cell in the array and testing whether the cell-label combination under inspection can be associated with a preexisting segment. If the element under inspection cannot be associated with a preexisting segment, it becomes the first member of a new segment. If the label/cell combination can be associated with a preexisting segment, it associates. Of course, to make this method reasonable I'd have to implement an elaborate hashing system: I'd have to keep track of all the cell-label combinations that stand adjacent to preexisting segments and are in the path of the incrementing indices that are iterating through the array. This hashing would avoid having to iterate through every cell of every preexisting segment to find an adjacency.

    Why I Don't Like It: As is, the above algorithm doesn't take into consideration the case where an element can be associated with two distinct segments, one in the horizontal direction and one in the vertical direction. To handle these cases properly, I would need to implement a test for this specific case and then a method that both associates the element under inspection with a segment and concatenates the two adjacent identical segments. On the whole, this method and the intricate hashing system it would require feel very inelegant. Additionally, I really only care about finding the large segments in the array, and I'm much more concerned with the speed of this algorithm than with the accuracy of the segmentation, so I'm looking for a better way. I assume there is some stochastic method for doing this that I haven't thought of. Any suggestions?
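    For what it's worth, this is the classic connected-components problem, and a per-(cell, label) flood fill sidesteps the segment-merging bookkeeping entirely (union-find is the other standard route). A minimal sketch, assuming the grid is a list of lists of label sets:

        from collections import deque

        def segments(grid):
            """Yield (label, set_of_cells) for each 4-connected segment."""
            rows, cols = len(grid), len(grid[0])
            seen = set()  # (row, col, label) triples already assigned to a segment
            for r in range(rows):
                for c in range(cols):
                    for label in grid[r][c]:
                        if (r, c, label) in seen:
                            continue
                        # BFS flood fill over adjacent cells sharing this label
                        cells, queue = set(), deque([(r, c)])
                        seen.add((r, c, label))
                        while queue:
                            y, x = queue.popleft()
                            cells.add((y, x))
                            for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                                if (0 <= ny < rows and 0 <= nx < cols
                                        and label in grid[ny][nx]
                                        and (ny, nx, label) not in seen):
                                    seen.add((ny, nx, label))
                                    queue.append((ny, nx))
                        yield label, cells

        grid = [[{"Jane", "Jack"}, {"Joe", "Jane"},  set()],
                [{"Jane"},         {"Jane", "Joe"},  set()],
                [set(),            {"Jack", "Joe"},  {"Jane"}]]
        print(max(segments(grid), key=lambda s: len(s[1])))  # the 4-cell "Jane" segment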

  • How do I create a list or set object in a class in Python?

    - by Az
    For my project, the role of the Lecturer (defined as a class) is to offer projects to students. Project itself is also a class. I have some global dictionaries, keyed by the unique numeric ids for lecturers and projects, that map to objects. Thus for the "lecturers" dictionary (currently):

        lecturers[id] = Lecturer(lec_name, lec_id, max_students)

    I'm currently reading in a whitespace-delimited text file that has been generated from a database. I have no direct access to the database, so I haven't much say in how the file is formatted. Here's a fictionalised snippet that shows how the text file is structured. Please pardon the cheesiness.

        0001 001 "Miyamoto, S." "Even Newer Super Mario Bros"
        0002 001 "Miyamoto, S." "Legend of Zelda: Skies of Hyrule"
        0003 002 "Molyneux, P." "Project Milo"
        0004 002 "Molyneux, P." "Fable III"
        0005 003 "Blow, J." "Ponytail"

    The structure of each line is basically proj_id, lec_id, lec_name, proj_name.

    Now, I'm currently reading the relevant data into the relevant objects. Thus, proj_id is stored in class Project whereas lec_name is in the Lecturer class, and so on. The Lecturer and Project classes are not currently related. However, as I read each line from the text file, I wish to read the project offered by that line's lecturer into the Lecturer class; I'm already reading the proj_id into the Project class. I'd like to create an attribute in Lecturer called offered_proj, which should be a set or list of the projects offered by that lecturer. Thus whenever, for a line, I read in a new project under the same lec_id, offered_proj will be updated with that project. If I wanted to display a list of projects offered by a lecturer, I'd ideally just want to use print lecturers[lec_id].offered_proj. My Python isn't great and I'd appreciate it if someone could show me a way to do this. I'm not sure whether a set or a list is better, either.

    Update: After the advice from Alex Martelli and Oddthinking I went back and made some changes and tried to print the results. Here's the code snippet:

        for line in csv_file:
            proj_id = int(line[0])
            lec_id = int(line[1])
            lec_name = line[2]
            proj_name = line[3]
            projects[proj_id] = Project(proj_id, proj_name)
            lecturers[lec_id] = Lecturer(lec_id, lec_name)
            if lec_id in lecturers.keys():
                lecturers[lec_id].offered_proj.add(proj_id)
            print lec_id, lecturers[lec_id].offered_proj

    The print lecturers[lec_id].offered_proj line prints the following output:

        001 set([0001])
        001 set([0002])
        002 set([0003])
        002 set([0004])
        003 set([0005])

    It basically feels like the set is being overwritten or some such. So if I try to print for a specific lecturer, print lec_id, lecturers[001].offered_proj, all I get is the last proj_id that was read in.
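    The symptom in the update follows from lecturers[lec_id] = Lecturer(lec_id, lec_name) running on every line: each row replaces the existing Lecturer, and its offered_proj set, with a fresh one before the add. A sketch that creates each object only once; the class bodies and stand-in data are guesses at the surrounding code:

        # stand-in for the parsed file from the question
        csv_file = [
            ["0001", "001", "Miyamoto, S.", "Even Newer Super Mario Bros"],
            ["0002", "001", "Miyamoto, S.", "Legend of Zelda: Skies of Hyrule"],
            ["0003", "002", "Molyneux, P.", "Project Milo"],
        ]

        class Lecturer(object):
            def __init__(self, lec_id, lec_name):
                self.lec_id, self.lec_name = lec_id, lec_name
                self.offered_proj = set()   # a set suits unordered, duplicate-free ids

        class Project(object):
            def __init__(self, proj_id, proj_name):
                self.proj_id, self.proj_name = proj_id, proj_name

        projects, lecturers = {}, {}
        for line in csv_file:
            proj_id, lec_id = int(line[0]), int(line[1])
            projects[proj_id] = Project(proj_id, line[3])
            if lec_id not in lecturers:          # create each Lecturer only once
                lecturers[lec_id] = Lecturer(lec_id, line[2])
            lecturers[lec_id].offered_proj.add(proj_id)

        print(lecturers[1].offered_proj)   # set([1, 2]): every project lecturer 1 offers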

  • Problem with final branch in a parallel activity

    - by Dan Revell
    This might seem like a silly thing to say, "the final branch in a parallel activity", so I'll clarify. It's a parallel activity with three branches, each containing a simple create task, an onTaskChanged and a complete task. The branch containing the task that is last to complete seems to break.

    So every task works in its own right, but the last one encounters a problem. Say the user clicks the final task's link to open the attached InfoPath form and submits that. Execution gets to the event handler for that onTaskChanged, where a taskCompleted variable gets set to true, which will exit the while loop. I've successfully hit a breakpoint on this line, so I know that happens. However, the final activity in that branch, the completeTask, doesn't get hit. When submit is clicked in the final form, the "operation in progress" screen stays up for quite a while before returning to the workflow status page. The task that was opened and submitted says "Not Started". I can disable any of the branches to leave only two, but the same problem happens with the last one to be completed.

    Earlier on in the workflow I do essentially the same thing. I have another three-branch parallel activity, with each branch containing a task. This one works correctly, which leads me to believe that it might be a problem with having two parallel activities in the same sequential workflow.

    I've considered the possibility that it might be a correlation token problem. The token that every task branch uses is unique to that branch, and its owner activity name is set to that of the branch. It stands to reason that if the taskCompleted variable is indeed getting set to true but the while loop isn't being exited, then there's a wire crossing with the variable somewhere. However, I'd still have thought that the task status back on the workflow status page would at least say that the task is in progress.

    This is a frustrating showstopper of a bug for me. Any thoughts or suggestions would be much appreciated so I can investigate them.

  • Do Websites need Local Databases Anymore?

    - by viatropos
    If there's a better place to ask this, please let me know. Every time I build a new website/blog/shopping-cart/etc., I keep trying to do the following:

    1. Extract common functionality into reusable code (Ruby gems and jQuery plugins, mostly).
    2. If possible, convert that gem into a small service so I never have to deal with a database for the objects involved (by service, I mean something lean and mean, usually built with the Sinatra web framework, with a few core models).

    My assumption is that if I can remove dependencies on local databases, that will make things easier and more scalable in the long run (scalable in terms of reusability and manageability, not necessarily database performance). I'm not sure if that's a good or bad assumption yet. What do you think?

    I've made this assumption for the following reason: most serious database/model functionality has already been built somewhere on the internet. Just to name a few:

        Social network API: Facebook
        Messaging API: Twitter
        Mailing API: Google
        Event API: Eventbrite
        Shopping API: Shopify
        Comment API: Disqus
        Form API: Wufoo
        Image API: Picasa
        Video API: YouTube
        ...

    Each of those things is fairly complicated to build from scratch, let alone to make as optimized, simple and easy to use as those companies have made them. So if I build an app that shows pictures (Picasa) on an event page (Eventbrite), where you can see who joined the event (Facebook events), send them emails (Google Apps API), have them fill out monthly surveys (Wufoo) and watch a video when they're done (YouTube), all integrated into a custom, easy-to-use website, and I can do that without ever creating a local database, is that a good thing?

    I ask because there are two things missing from the puzzle that keep forcing me to create that local database:

    1. A post API
    2. A RESTful/pretty-URL API

    While there are plenty of blogging systems and APIs for them, there is no one place where you can just write content and have it be part of some massive thing. For every app, I have to write code for creating pretty/RESTful URLs and for saving posts. But it seems like that should be a service! The question is: is that what the website is - that place to integrate the world's services for my specific cause... and, sigh, to store posts that only my site has access to?

    Will everyone always need "their own blog"? Why not just have a profile and write lots of content on an established platform like Stack Overflow or Facebook? That way I could write apps entirely without a database and know that I'm doing it right.

    Note: of course at some point you'd need a database, if you were doing something unique or new. But for the case where you're just rewiring information or creating things like videos, events and products, is it really necessary anymore?

  • How do I de-duplicate a list of nodes in XSLT - and return the last node encountered?

    - by Broam
    I've seen lots of "de-duplicate this XML" questions, but everyone wants the first node, or the nodes are identical. I have a bit of a bigger puzzle.

    I have a list of articles in XML; a relevant snippet is shown:

        <item><key>Article1</key><stamp>100</stamp></item>
        <item><key>Article1</key><stamp>130</stamp></item>
        <item><key>Article2</key><stamp>800</stamp></item>
        <item><key>Article1</key><stamp>180</stamp></item>
        <item><key>Article3</key><stamp>900</stamp></item>
        <item><key>Article3</key><stamp>950</stamp></item>
        <item><key>Article4</key><stamp>990</stamp></item>
        <item><key>Article5</key><stamp>999</stamp></item>

    I'd like a list of nodes where the keys are unique and where the last instance is returned, not the first. The stamp (an integer) is always increasing for elements of a particular key. Ideally I'd like "largest stamp", but since they're always in order, "last" is an acceptable shortcut.

    Desired result (order doesn't really matter):

        <item><key>Article2</key><stamp>800</stamp></item>
        <item><key>Article1</key><stamp>180</stamp></item>
        <item><key>Article3</key><stamp>950</stamp></item>
        <item><key>Article4</key><stamp>990</stamp></item>
        <item><key>Article5</key><stamp>999</stamp></item>

    I'm somewhat confused about how to get this list. Any ideas? I'm using the Saxon processor, if it matters.
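    The selection logic itself is just "group by key, keep the largest stamp". Here it is in Python for illustration; in XSLT the usual route would be grouping, e.g. xsl:for-each-group in XSLT 2.0, which Saxon supports:

        import xml.etree.ElementTree as ET

        xml = """<items>
            <item><key>Article1</key><stamp>100</stamp></item>
            <item><key>Article1</key><stamp>130</stamp></item>
            <item><key>Article2</key><stamp>800</stamp></item>
            <item><key>Article1</key><stamp>180</stamp></item>
            <item><key>Article3</key><stamp>900</stamp></item>
            <item><key>Article3</key><stamp>950</stamp></item>
            <item><key>Article4</key><stamp>990</stamp></item>
            <item><key>Article5</key><stamp>999</stamp></item>
        </items>"""

        latest = {}
        for item in ET.fromstring(xml).iter("item"):
            key = item.findtext("key")
            stamp = int(item.findtext("stamp"))
            # keep the item with the largest stamp per key (last wins on ties)
            if key not in latest or stamp >= latest[key][0]:
                latest[key] = (stamp, item)

        for key, (stamp, _item) in latest.items():
            print(key, stamp)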

  • Multiple Cookie Generation Issue

    - by Shannon
    Hi all, jQuery newbie here. I need to be able to set multiple cookies within the code without having to change out this variable each and every time. Is there any way to make this code generate unique cookies for different pages? As it is now, I'm having to rename that variable (sbbcookiename) for each page that the jQuery animations exist on.

    Background on the issue: we are having problems with the sliders not autoplaying once one has already been triggered, because the cookie has been cached.

    Thanks for your help.

        (function(){
            jQuery.noConflict();
            var _TIMEOUT = 1000,
                initTimer = 0,
                sbLoaded = false,
                _re = null;

            initTimer = setTimeout(initSlider, _TIMEOUT);
            jQuery(document).ready(initSlider);

            function initSlider() {
                if (sbLoaded) return;

                if (jQuery('#campaign_name').length > 0) {
                    var sbbcookiename = jQuery('#campaign_name').attr('class');
                } else {
                    var sbbcookiename = "slider728x90";
                }

                var slideTimeout                                   // timer
                   ,sbTrigger = jQuery('#slidebartrigger')         // convenience
                   ,sbFirstSlide = (document.cookie.indexOf(sbbcookiename) == -1); // check cookie for 'already seen today'

                clearTimeout(initTimer);
                sbLoaded = true;

                function toggleSlideboxes() {
                    if (slideTimeout) clearTimeout(slideTimeout);
                    var isDown = sbTrigger.is('.closeSlide');
                    jQuery('#slidebar')['slide' + (isDown ? 'Up' : 'Down')]((isDown ? 1000 : 1000), function(){
                        if (sbFirstSlide) { // if 'first time today' then clear for click-to-replay
                            sbTrigger.removeClass('firstSlide');
                            sbFirstSlide = false;
                        }
                        sbTrigger[(isDown ? 'remove' : 'add') + 'Class']('closeSlide').one('click', toggleSlideboxes);
                        if (!isDown) slideTimeout = setTimeout(toggleSlideboxes, 4000);
                    });
                }

                if (sbFirstSlide) {
                    // not seen yet today, so set a cookie to expire tomorrow, then toggle the slide boxes...
                    var oneDay = new Date();
                    oneDay.setUTCDate(oneDay.getUTCDate() + 1);
                    oneDay.setUTCHours(0, 0, 0, 0); // set to literally day-by-day, rather than 24 hours
                    document.cookie = sbbcookiename + "=true;path=/;expires=" + oneDay.toUTCString();
                    toggleSlideboxes();
                } else {
                    // already seen today, so show the trigger and set a click event on it...
                    sbTrigger.removeClass('firstSlide').one('click', toggleSlideboxes);
                }
            }
        })();

  • Optimizing tasks to reduce CPU in a trading application

    - by Joel
    Hello, I have designed a trading application that handles customers' stock investment portfolios. I am using two datastore kinds:

    Stocks - contains a unique stock name and its daily percent change.
    UserTransactions - contains information regarding a specific purchase of a stock made by a user: the value of the purchase, along with a reference to the Stock purchased.

    The db.Model Python modules:

        class Stocks(db.Model):
            stockname = db.StringProperty(multiline=True)
            dailyPercentChange = db.FloatProperty(default=1.0)

        class UserTransactions(db.Model):
            buyer = db.UserProperty()
            value = db.FloatProperty()
            stockref = db.ReferenceProperty(Stocks)

    Once an hour I need to update the database: update the daily percent change in Stocks, and then update the value of all entities in UserTransactions that refer to that stock. The following Python module iterates over all the stocks, updates the dailyPercentChange property, and invokes a task to go over all UserTransactions entities which refer to the stock and update their value.

    Stocks.py:

        # Iterate over all stocks in the datastore
        for stock in Stocks.all():
            # update daily percent change in the datastore
            db.run_in_transaction(updateStockTxn, stock.key())
            # create a task to update all user transaction entities referring to this stock
            taskqueue.add(url='/task', params={'stock_key': str(stock.key()),
                                               'value': self.request.get('some_val_for_stock')})

        def updateStockTxn(stock_key):
            # fetch the stock again - necessary to avoid concurrent updates
            stock = db.get(stock_key)
            stock.dailyPercentChange = data.get('some_val_for_stock')  # I get this value from outside
            # ... some more calculations here ...
            stock.put()

    Task.py (/task):

        # Number of transactions per task
        amountPerCall = 10

        stock = db.get(self.request.get("stock_key"))
        # Get all user transactions which point to the current stock
        user_transaction_query = stock.usertransactions_set
        cursor = self.request.get("cursor")
        if cursor:
            user_transaction_query.with_cursor(cursor)

        # Spawn another task if more than 10 transactions are in the datastore
        transactions = user_transaction_query.fetch(amountPerCall)
        if len(transactions) == amountPerCall:
            taskqueue.add(url='/task', params={'stock_key': str(stock.key()),
                                               'value': self.request.get('some_val_for_stock'),
                                               'cursor': user_transaction_query.cursor()})

        # Iterate over all transactions pointing to the stock and update their value
        for transaction in transactions:
            db.run_in_transaction(updateUserTransactionTxn, transaction.key())

        def updateUserTransactionTxn(transaction_key):
            # fetch the transaction again - necessary to avoid concurrent updates
            transaction = db.get(transaction_key)
            transaction.value = transaction.value * self.request.get('some_val_for_stock')
            db.put(transaction)

    The problem: currently the system works great, but it is not scaling well. I have around 100 stocks with 300 user transactions, and I run the update every hour. In the dashboard, I see that Task.py takes around 65% of the CPU (Stocks.py takes around 20-30%), and I am using almost all of the 6.5 free CPU hours given to me by App Engine. I have no problem enabling billing and paying for additional CPU, but the problem is the scaling of the system: using 6.5 CPU hours for 100 stocks is very poor.

    I was wondering, given the requirements of the system as mentioned above, whether there is a better and more efficient implementation (or just a small change that could help the current implementation) than the one presented here. Thanks!! Joel
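    For what it's worth, the dominant cost here is likely one datastore transaction, with its own get and put, per UserTransactions entity. Since the update is a simple multiply, one possible restructuring is to fetch a batch, update in memory, and write it back with a single batch put. A sketch against the same google.appengine.ext.db API; note it trades away the per-entity transactional safety, which may or may not be acceptable:

        def update_transactions_batch(stock_key, factor, batch_size=100):
            """Apply one stock's price factor to all referring transactions, in batches."""
            query = UserTransactions.all().filter('stockref =', db.Key(stock_key))
            cursor = None
            while True:
                if cursor:
                    query.with_cursor(cursor)
                batch = query.fetch(batch_size)
                if not batch:
                    break
                for txn in batch:
                    txn.value *= factor    # update in memory
                db.put(batch)              # one RPC writes the whole batch
                cursor = query.cursor()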

  • Constructor versus setter injection

    - by Chris
    Hi, I'm currently designing an API where I wish to allow configuration via a variety of methods. One method is via an XML configuration schema, and another is through an API that I wish to play nicely with Spring. My XML schema parsing code was previously hidden, and therefore the only concern was for it to work; but now I wish to build a public API, and I'm quite concerned about best practice.

    It seems that many favor JavaBean-type POJOs with default zero-parameter constructors and setter injection. The problem I am trying to tackle is that some setter implementations depend on other setters having been called before them. I could write defensive setters that tolerate being called in many orders, but that will not solve the problem of a user forgetting to call the appropriate setter, leaving the bean in an incomplete state.

    The only solution I can think of is to forget about the objects being 'beans' and enforce the required parameters via constructor injection. An example of this is in the default setting of the id of a component based on the id of the parent component.

    My interface:

        public interface IMyIdentityInterface {
            public String getId();

            /* A null value should create a unique meaningful default */
            public void setId(String id);

            public IMyIdentityInterface getParent();

            public void setParent(IMyIdentityInterface parent);
        }

    Base implementation of the interface:

        public abstract class MyIdentityBaseClass implements IMyIdentityInterface {
            private String _id;
            private IMyIdentityInterface _parent;

            public MyIdentityBaseClass() {}

            @Override
            public String getId() {
                return _id;
            }

            /**
             * If the id is null, then use the id of the parent component
             * appended with a lower-cased simple name of the current impl
             * class along with a counter suffix to enforce uniqueness
             */
            @Override
            public void setId(String id) {
                if (id == null) {
                    IMyIdentityInterface parent = getParent();
                    if (parent == null) {
                        // this may be the top level component, or it may be that
                        // the user called setId() before setParent(..)
                    } else {
                        _id = Helpers.makeIdFromParent(parent, getClass());
                    }
                } else {
                    _id = id;
                }
            }

            @Override
            public IMyIdentityInterface getParent() {
                return _parent;
            }

            @Override
            public void setParent(IMyIdentityInterface parent) {
                _parent = parent;
            }
        }

    Every component in the framework will have a parent, except for the top-level component. Using setter injection, the setters will behave differently based on the order in which they are called. In this case, would you agree that a constructor taking a reference to the parent is better, and that the parent setter method should be dropped from the interface entirely? Is it considered bad practice if I wish to be able to configure these components using an IoC container?

    Chris
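    A sketch of the constructor-injection alternative in Python, just to make the trade-off concrete: the parent is supplied at construction time, so an id default can always be derived and no call-order contract exists. Names are illustrative:

        import itertools

        class Component:
            _counter = itertools.count(1)

            def __init__(self, parent=None, component_id=None):
                self.parent = parent  # fixed at construction: no ordering contract
                if component_id is None:
                    prefix = parent.component_id + "." if parent else ""
                    # derive a unique, meaningful default from the parent's id
                    component_id = "%s%s-%d" % (prefix, type(self).__name__.lower(),
                                                next(self._counter))
                self.component_id = component_id

        root = Component(component_id="app")
        child = Component(parent=root)
        print(child.component_id)  # "app.component-1"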

  • Rails. Putting update logic in your migrations

    - by Daniel Abrahamsson
    A couple of times I've been in the situation where I've wanted to refactor the design of some model and have ended up putting update logic in migrations. However, as far as I've understood, this is not good practice (especially since you are encouraged to use your schema file for deployment, and not your migrations). How do you deal with these kinds of problems?

    To clarify what I mean, say I have a User model. Since I thought there would only be two kinds of users, namely a "normal" user and an administrator, I chose to use a simple boolean field telling whether the user was an administrator or not. However, after a while I figured I needed some third kind of user, perhaps a moderator or something similar. In this case I add a UserType model (and the corresponding migration), and a second migration for removing the "admin" flag from the user table. And here comes the problem: in the "add_user_type_to_users" migration I have to map the admin flag value to a user type. Additionally, in order to do this, the user types have to exist, meaning I cannot use the seeds file, but rather have to create the user types in the migration (also considered bad practice).

    Here comes some fictional code representing the situation:

        class CreateUserTypes < ActiveRecord::Migration
          def self.up
            create_table :user_types do |t|
              t.string :name, :nil => false, :unique => true
            end

            # Create basic types (cannot put them in the seed file, because of
            # the future migration dependency)
            UserType.create!(:name => "BASIC")
            UserType.create!(:name => "MODERATOR")
            UserType.create!(:name => "ADMINISTRATOR")
          end

          def self.down
            drop_table :user_types
          end
        end

        class AddTypeIdToUsers < ActiveRecord::Migration
          def self.up
            add_column :users, :type_id, :integer

            # Determine type via the admin flag
            basic = UserType.find_by_name("BASIC")
            admin = UserType.find_by_name("ADMINISTRATOR")
            User.all.each { |u| u.update_attribute(:type_id, (u.admin?) ? admin.id : basic.id) }

            # Remove the admin flag
            remove_column :users, :admin

            # Add foreign key
            execute "alter table users add constraint fk_user_type_id foreign key (type_id) references user_types (id)"
          end

          def self.down
            # Re-add the admin flag
            add_column :users, :admin, :boolean, :default => false

            # Reset the admin flag (this is the problematic update code)
            admin = UserType.find_by_name("ADMINISTRATOR")
            execute "update users set admin=true where type_id=#{admin.id}"

            # Remove foreign key constraint
            execute "alter table users drop foreign key fk_user_type_id"

            # Drop the type_id column
            remove_column :users, :type_id
          end
        end

    As you can see, there are two problematic parts: first the row-creation part in the first migration, which is necessary if I want to be able to run all migrations in a row, and then the "update" part in the second migration that maps the "admin" column to the "type_id" column. Any advice?

  • Load one page inside another onClick

    - by Robin I Knight
    Hello, I need to include the content, scripts, forms and dynamic abilities of one page in another, onClick. Take a look at http://www.divethegap.com/scuba-diving-programmes-dive-the-gap/dahab-master-scuba-diver.html and then follow one of the links that says 'Beginner', 'Open Water Diver', etc. You will find a PHP page with a series of options. It is an adaptation of the WordPress blog system, made to produce only specific options for specific programmes by treating each type of diving programme as a category and then displaying only results from that category. You will see that each option is also a collapsible panel, and there are also several JavaScripts that calculate durations, quantities and prices. There is also a validating web form at the end.

    Now go back to the first page. What I would like to do is include all the content from the second page, after the main header, inside tabbed panels on the first page, so that customers can immediately see everything that is included. Essentially the options on the first page would become a series of tabs.

    The only way I can see to do this is with an iframe, as each option would need a unique URL ending (that is, .php?cat=26 or .php?cat=27). The problem is that the collapsible panels will not work within an iframe, as the iframe will not resize when the panels open. There were also some calculation problems, but I think that was more down to me staring at the screen for the last three hours and not remembering to include everything.

    I have tried it with resizing-iframe SSI scripts and got nowhere. I tried actually embedding it in the page with an Ajax script, but that left behind all the scripts that make it work. I checked with full URLs on everything, and it would not work with any scripts. I know that I could just make the whole page reload, but then the user would be at the top of the page again, and even if another script was applied to slowly bring them down again, it would not be anywhere near as easy to use as tabbed panels.

    Any ideas? Kind regards,

  • StumbleUpon-type query...

    - by Chris Denman
    Wow, makes your head spin! I am about to start a project, and although my MySQL is OK, I can't get my head around what's required for this.

    I have a table of web addresses:

        id, url
        1, http://www.url1.com
        2, http://www.url2.com
        3, http://www.url3.com
        4, http://www.url4.com

    I have a table of users:

        id, name
        1, fred bloggs
        2, john bloggs
        3, amy bloggs

    I have a table of categories:

        id, name
        1, science
        2, tech
        3, adult
        4, stackoverflow

    I have a table of categories the user likes, as numerical refs relating to the category's unique ref. For example:

        user, category
        1, 4
        1, 6
        1, 7
        1, 10
        2, 3
        2, 4
        3, 5
        ...

    I have a table of scores relating to each website address. When a user visits one of these sites and says they like it, it's stored like so:

        url_ref, category
        4, 2
        4, 3
        4, 6
        4, 2
        4, 3
        5, 2
        5, 3
        ...

    So based on the above data, URL 4 would score (in its own right):

        2 = 2
        3 = 2
        6 = 1

    What I was hoping to do was pick out a random URL from over 2,000,000 records based on the current user's interests. So if the logged-in user likes categories 1, 2, 3, then I would like to ORDER BY a score generated from their interests. If the logged-in user likes categories 2, 3 and 6, the total score for URL 4 would be 5. However, if the current logged-in user only likes categories 2 and 6, the URL score would be 3. So the ORDER BY would be in the context of the logged-in user's interests. Think of StumbleUpon.

    I was thinking of using a set of VIEWs to help with subqueries. I'm guessing that all 2,000,000 records will need to be looked at, and based on the id of each URL it will look to see what scores it has based on each selected category of the current user. So we need to know the user id, and this gets passed into the query as a constant from the start.

    Ain't got a clue! Chris
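    The scoring itself can be a single aggregate join: count the likes whose category matches one of the current user's interest categories, grouped by URL. A sketch using sqlite3 so it runs standalone; table and column names follow the question, and the MySQL version would be essentially the same SQL:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE user_likes (user INTEGER, category INTEGER);
            CREATE TABLE url_scores (url_ref INTEGER, category INTEGER);
            INSERT INTO user_likes VALUES (1, 2), (1, 3), (1, 6);
            INSERT INTO url_scores VALUES (4, 2), (4, 3), (4, 6), (4, 2), (4, 3),
                                          (5, 2), (5, 3);
        """)

        current_user = 1
        # Score every URL against this user's interests; highest first.
        # (In MySQL, ORDER BY score DESC, RAND() could add the random element.)
        rows = conn.execute("""
            SELECT s.url_ref, COUNT(*) AS score
            FROM url_scores s
            JOIN user_likes u ON u.category = s.category AND u.user = ?
            GROUP BY s.url_ref
            ORDER BY score DESC
        """, (current_user,)).fetchall()

        print(rows)  # [(4, 5), (5, 2)]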

  • Symfony : ajax call cause server to queue next queries

    - by Remiz
    Hello, I have a problem with my application: an Ajax call that takes too long on the server queues all other requests from that user until it's done server-side (I realized that cancelling the call client-side has no effect, and the user still has to wait). Here is my test case:

        <script type="text/javascript" src="jquery-1.4.1.min.js"></script>
        <a href="another-page.php">Go to another page on the same server</a>
        <script type="text/javascript">
            url = 'http://localserver/some-very-long-complex-query';
            $.get(url);
        </script>

    So when the GET is fired and I then click the link, the server finishes serving the first call before bringing me to the other page. My problem is that I want to avoid this behavior. I'm on a LAMP server and I'm looking for a way to inform the server that the user aborted the query, with a function like connection_aborted(). Do you think that's the way to go?

    Also, I know that the longest part of this PHP script is a MySQL query, so even if connection_aborted() can detect that the user cancelled the call, I would still need to check this during the MySQL query. I'm not really sure that PHP can handle this kind of "event". So if you have a better idea, I can't wait to hear it. Thank you.

    Update: After further investigation, I found that the problem happens only with the Symfony framework (which I omitted to mention, my bad). It seems that an Ajax call locks out any other future call. It may be related to the controller or the routing system; I'm looking into it.

    Also, for those interested in the problem, here is my new test case: a new project with Symfony 1.4.3, default configuration (I just created an app and a default module), and jQuery 1.4 for the Ajax query.

    Here is my actions.class.php (in my unique module):

        class defaultActions extends sfActions
        {
            public function executeIndex(sfWebRequest $request)
            {
                // Do nothing
            }

            public function executeNewpage()
            {
                // Do also nothing
            }

            public function executeWaitingaction()
            {
                // Wait
                sleep(30);
                return false;
            }
        }

    Here is my indexSuccess.php template file:

        <script type="text/javascript" src="jquery-1.4.1.min.js"></script>
        <a href="<?php echo url_for('default/newpage');?>">Go to another symfony action</a>
        <script type="text/javascript">
            url = '<?php echo url_for('default/waitingaction');?>';
            $.get(url);
        </script>

    For the new page template, it's not very relevant. But with this, I'm able to reproduce the locking problem I have in my real application. Is anybody else having the same issue? Thanks.

  • Multiple WCF windows services on the same box - endpoint configuration

    - by David Belanger
    Hi, I have two Windows services installed on a machine with different service names; they install and start fine. What's happening is that they're both listening on the same endpoints and thus competing for messages. I've tried to change the baseAddress to be different for both services, without success. Here's my service host config:

        <configuration>
          <appSettings>
            <add key="ServiceName" value="Service - Service Host 1"/>
          </appSettings>
          <system.serviceModel>
            <bindings>
              <wsHttpBinding>
                <binding name="NoSecurityBinding">
                  <security mode="None">
                    <message establishSecurityContext="false"/>
                    <transport clientCredentialType="None"/>
                  </security>
                </binding>
              </wsHttpBinding>
              <basicHttpBinding>
                <binding name="NoSecurityBinding">
                  <security mode="None">
                    <transport clientCredentialType="None"/>
                  </security>
                </binding>
              </basicHttpBinding>
            </bindings>
            <services>
              <service name="Lib.Interface.Service" behaviorConfiguration="Lib.Interface.ServiceBehavior">
                <host>
                  <baseAddresses>
                    <add baseAddress="http://localhost:8000/Service"/>
                  </baseAddresses>
                </host>
                <endpoint address="" binding="basicHttpBinding" bindingConfiguration="NoSecurityBinding" contract="Lib.Interface.IService"/>
                <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
              </service>
            </services>
            <behaviors>
              <serviceBehaviors>
                <behavior name="Lib.Interface.ServiceBehavior">
                  <serviceMetadata httpGetEnabled="True" policyVersion="Policy12"/>
                  <serviceDebug includeExceptionDetailInFaults="False"/>
                </behavior>
              </serviceBehaviors>
            </behaviors>
          </system.serviceModel>
        </configuration>

    Any idea how I could set up the services (other than giving them unique service names) so they don't conflict with one another? Thanks.

  • Multiple infowindows - tearing my hair out

    - by thewinchester
    Ok, I'll admit I'm nowhere near the best programmer on the planet, and I'm used to the answer staring me right in the face without my making sense of it.

    Problem: I need to display multiple markers on a map, each with its own infowindow. I have created the individual markers without a problem, but don't know how to create the infowindows for each.

    Steps so far: I am generating a map using the V3 API within an ASP-based website, with markers being created from a set of DB records. The markers are created by looping through a recordset and defining a marker() with the relevant variables:

        var myLatlng = new google.maps.LatLng(lat, long);
        var marker = new google.maps.Marker({
            map: map,
            position: myLatlng,
            title: 'locationname',
            icon: 'http://google-maps-icons.googlecode.com/files/park.png'
        });

    This is creating all the relevant markers in their correct locations. What I need to do now, and am not sure how to achieve, is give each of them their own unique infowindow, which I can use to display information and links relevant to that marker.

    Source:

        <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=false"></script>
        <script language="javascript">
        $(document).ready(function() {
            // Google Maps
            var myOptions = {
                zoom: 5,
                center: new google.maps.LatLng(-26.66, 122.25),
                mapTypeControl: false,
                mapTypeId: google.maps.MapTypeId.ROADMAP,
                navigationControl: true,
                navigationControlOptions: {
                    style: google.maps.NavigationControlStyle.SMALL
                }
            }
            var map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);

            <!-- While locations_haslatlong not BOF.EOF -->
            <% While ((Repeat1__numRows <> 0) AND (NOT locations_haslatlong.EOF)) %>
            var myLatlng = new google.maps.LatLng(<%=(locations_haslatlong.Fields.Item("llat").Value)%>, <%=(locations_haslatlong.Fields.Item("llong").Value)%>);
            var marker = new google.maps.Marker({
                map: map,
                position: myLatlng,
                title: '<%=(locations_haslatlong.Fields.Item("ldescription").Value)%>',
                icon: 'http://google-maps-icons.googlecode.com/files/park.png',
                clickable: true
            });
            <%
            Repeat1__index = Repeat1__index + 1
            Repeat1__numRows = Repeat1__numRows - 1
            locations_haslatlong.MoveNext()
            Wend
            %>
            <!-- End While locations_haslatlong not BOF.EOF -->

            google.maps.event.addListener(marker, 'click', function() {
                infowindow.open(map, marker);
            });

            google.maps.event.addListener(marker, 'dblclick', function() {
                map.setZoom(14);
            });
        });

  • Multiprogramming in Django, writing to the Database

    - by Marcus Whybrow
    Introduction: I have the following code, which checks to see whether a similar model exists in the database, and creates the new model only if it does not:

        class BookProfile():
            # ...
            def save(self, *args, **kwargs):
                uniqueConstraint = {'book_instance': self.book_instance, 'collection': self.collection}
                # Test for other objects with identical values
                profiles = BookProfile.objects.filter(Q(**uniqueConstraint) & ~Q(pk=self.pk))
                # If none are found create the object, else fail.
                if len(profiles) == 0:
                    super(BookProfile, self).save(*args, **kwargs)
                else:
                    raise ValidationError('A Book Profile for that book instance in that collection already exists')

    I first build my constraints, then search for a model with the values I am enforcing as unique: Q(**uniqueConstraint). In addition, I ensure that if the save method is updating rather than inserting, we do not find the object itself when looking for similar objects: ~Q(pk=self.pk). I should mention that I am implementing soft delete (with a modified objects manager which only shows non-deleted objects), which is why I must check for myself rather than relying on unique_together errors.

    Problem: Right, that's the introduction out of the way. My problem is that when multiple identical objects are saved in quick (or near-simultaneous) succession, sometimes both get added, even though the first one being added should prevent the second. I have tested the code in the shell and it succeeds every time I run it. Thus my assumption is that if, say, we have two objects being added, Object A and Object B: Object A runs its check upon save() being called. Then the process saving Object B gets some time on the processor. Object B runs that same test, but Object A has not yet been added, so Object B is added to the database. Then Object A regains control of the processor, and has already run its test; even though identical Object B is now in the database, Object A is added regardless.

    My Thoughts: The reason I suspect multiprogramming is involved is that each of Object A and Object B is being added through an API save view, so a request to the view is made for each save; it is not a single request with multiple sequential saves. It might be the case that Apache is creating a process for each request, and thus causing the problems I think I am seeing. As you would expect, the problem only occurs sometimes, which is characteristic of multiprogramming or multiprocessing errors.

    If this is the case, is there a way to make the test-and-set parts of the save() method a critical section, so that a process switch cannot happen between the test and the set?
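    For illustration, the usual way to make a check-then-insert race safe is to let the database arbitrate instead of the application: add a real unique constraint and catch the integrity error. With soft delete, that has to be a conditional (partial) constraint covering only live rows, which recent Django versions and backends such as PostgreSQL support; the model shape below is an assumption:

        from django.core.exceptions import ValidationError
        from django.db import IntegrityError, models
        from django.db.models import Q

        class BookProfile(models.Model):
            book_instance = models.ForeignKey('BookInstance', on_delete=models.CASCADE)
            collection = models.ForeignKey('Collection', on_delete=models.CASCADE)
            deleted = models.BooleanField(default=False)  # soft-delete flag

            class Meta:
                constraints = [
                    # Enforced by the database itself, so two concurrent saves
                    # cannot both pass; the condition limits uniqueness to
                    # non-deleted rows (requires partial-index support).
                    models.UniqueConstraint(
                        fields=['book_instance', 'collection'],
                        condition=Q(deleted=False),
                        name='unique_live_book_profile',
                    )
                ]

        # caller side: the race now surfaces as a clean database error
        def save_profile(profile):
            try:
                profile.save()
            except IntegrityError:
                raise ValidationError(
                    'A Book Profile for that book instance in that collection already exists')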

  • How can I structure and recode messy categorical data in R?

    - by briandk
    I'm struggling with how best to structure categorical data that's messy and comes from a dataset I'll need to clean.

    The Coding Scheme: I'm analyzing data from a university science course exam. We're looking at patterns in student responses, and we developed a coding scheme to represent the kinds of things students are doing in their answers. A subset of the coding scheme is shown below. Note that within each major code (1, 2, 3) are nested non-unique sub-codes (a, b, ...).

    What the Raw Data Looks Like: I've created an anonymized raw subset of my actual data, which you can view here. Part of my problem is that those who coded the data noticed that some students displayed multiple patterns. The coders' solution was to create enough columns (reason1, reason2, ...) to hold students with multiple patterns. That becomes important because the order (reason1, reason2) is arbitrary: two students (like student 41 and student 42 in my dataset) who correctly applied "dependency" should both register in an analysis, regardless of whether 3a appears in the reason column or the reason2 column.

    How Can I Best Structure the Student Data? Part of my problem is that in the raw data, not all students display the same patterns, or the same number of them, in the same order. Some students may do just one thing, others may do several. So an abstracted representation of example students might look like this: note that student002 and student003 are both coded as "1b", although I've deliberately shown the order as different to reflect the reality of my data.

    My (Practical) Questions:

    1. Should I concatenate reason1, reason2, ... into one column?
    2. How can I (re)code the reasons in R to reflect the multiplicity for some students?

    Thanks. I realize this question is as much about good data conceptualization as it is about specific features of R, but I thought it would be appropriate to ask it here. If you feel it's inappropriate for me to ask the question, please let me know in the comments, and stackoverflow will automatically flood my inbox with sadface emoticons. If I haven't been specific enough, please let me know and I'll do my best to be clearer.
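    For what it's worth, the usual fix for order-is-arbitrary repeated columns is to melt them into long format, one (student, reason) pair per row, after which reason1 versus reason2 no longer matters. The idea sketched in Python/pandas with fabricated stand-in data (in R, reshape2::melt does the same job):

        import pandas as pd

        # A fabricated stand-in for the real dataset (structure assumed from the question)
        wide = pd.DataFrame({
            "student": ["student041", "student042", "student002"],
            "reason1": ["3a", "1b", "1b"],
            "reason2": ["1b", "3a", None],
        })

        # Melt reason1..reasonN into one row per (student, reason)
        long = wide.melt(id_vars="student", value_name="reason").dropna(subset=["reason"])
        long = long.drop(columns="variable")  # the source column name no longer matters

        # Now both students who applied "dependency" (3a) register identically
        print(long[long["reason"] == "3a"]["student"].tolist())  # ['student041', 'student042']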

  • Design-time error - multiple controls with the same ID

    - by ilivewithian
    I'm using VS 2008. I have a very simple page that has a bunch of uniquely named controls. When I try to view it in design mode I get the following error:

        Error Rendering Control - Label12
        An unhandled exception has occurred.
        Multiple controls with the same ID 'Label1' were found.
        FindControl requires that controls have unique IDs.

    I've checked the HTML and the designer file and I can only see one control called Label1. What might be causing this? Here is the aspx markup I'm having trouble with:

        <%@ Page Language="vb" AutoEventWireup="false" CodeBehind="CoachingAppearanceReport.aspx.vb"
            Inherits="AcademyPro.CoachingAppearanceReport" %>

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form id="form1" runat="server">
                <asp:UpdatePanel ID="UpdatePanel1" runat="server">
                    <ContentTemplate>
                        <div id="appearanceDetail" class="Left CriteriaContainer">
                            <asp:Label ID="Label1" runat="server" Text="Appearance Type" AssociatedControlID="ddlAppearanceType" />
                            <asp:DropDownList ID="ddlAppearanceType" runat="server" CssClass="AppType"
                                OnDataBound="ddlAppearanceType_DataBound" DataSourceID="odsAppearanceType"
                                DataTextField="AppearanceType" DataValueField="AppearanceTypeCode">
                            </asp:DropDownList>
                            <asp:RequiredFieldValidator ID="rfvAppearanceType" runat="server" ControlToValidate="ddlAppearanceType"
                                InitialValue="" Text="*" ErrorMessage="The appearance type must be selected" />
                            <asp:Label ID="lblAppearanceType" runat="server" />
                            <br />
                            <div class="SubSettings">
                                <asp:Label ID="Label12" runat="server" Text="Subbed for" AssociatedControlID="ddlSubbedFor" />
                                <asp:DropDownList ID="ddlSubbedFor" runat="server" OnDataBound="ddlSubbedFor_DataBound"
                                    DataSourceID="odsPlayersInAgeGroup" DataTextField="PlayerName" DataValueField="PlayerID">
                                </asp:DropDownList>
                                <asp:Label ID="lblSubbedFor" runat="server" />
                                <br />
                                <asp:Label ID="Label13" runat="server" Text="Mins" AssociatedControlID="txtSubMins" />
                                <asp:TextBox ID="txtSubMins" runat="server" MaxLength="3" CssClass="TinyWidth" />
                                <asp:Label ID="lblSubMins" runat="server" />
                            </div>
                        </div>
                    </ContentTemplate>
                </asp:UpdatePanel>
            </form>
        </body>
        </html>
