Search Results

Search found 19382 results on 776 pages for 'multiple'.


  • TFS Solution build cascading to several other builds even when common components were not modified

    - by Bob Palmer
    Hey all, here is the issue I am currently trying to work through. We are using Team Foundation Server 2008 and its out-of-the-box automated build support. We have one very large project that encompasses a number of interrelated components and web sites, each of which is set up as a Visual Studio solution file. Many of these solutions are highly interrelated since they may contain applications, common libraries, or shared components. We have roughly 20 applications, three large web sites, and about 20 components.

    Each solution may include projects from other solutions. For example, a solution for a console app would also include the project files for all of the components it utilizes, since we need to ensure that when someone changes a component and rebuilds it, the change is reflected in all of the projects that consume that component, and we can make sure nothing was broken.

    We have build projects for each solution, whether that's an application, component, or web site. For this example, call them solutions 01, 02, and 03. These reference multiple projects (both their own core and test projects, plus the projects relating to various components):

        Solution 01 has projects A, B, and C.
        Solution 02 has projects C, D, and E.
        Solution 03 has projects E, F, and G.

    Now, for the problem. If I modify project A, the system ends up rebuilding all three solutions. Worse, all thirty solutions reference a common project used for data access (let's call it project H). Because they all share one project in common, if I modify any solution in my stack, even if it does not touch project H, I still end up kicking off every single build script.

    Any thoughts on how to address this? Ideally I would only want to kick off builds whose constituent projects were directly modified - i.e. in the example above, if I modified project C, I would only rebuild solutions 01 and 02. Thanks!

    Read the article

  • Error handling in PHP

    - by Industrial
    Hi guys, we're building a PHP app based on good old MVC (the CodeIgniter framework) and have run into trouble with a massive chained action that consists of multiple model calls, which together form part of a big transaction against the database. We want to be able to run a list of actions and get a status report for each one back from the function, whatever the outcome. Our first idea was to utilize PHP5 exceptions, but since we also need status messages that don't break the execution of the script, this is the solution we came up with. It goes a little something like this:

        $sku = $this->addSku($name);
        if ($sku === false) {
            $status[] = 'Something gone terribly wrong';
            $this->db->trans_rollback();
            return $status;
        }

        $image = $this->addImage($filename);
        if ($image === false) {
            $status[] = 'Image could not be uploaded, check filesize';
            $this->db->trans_rollback();
            return $status;
        }

    Our controller looks like this:

        $var = $this->products->addProductGroup($array);
        if (is_array($var)) {
            foreach ($var as $error) {
                echo $error . '<br />';
            }
        }

    It appears to be a very fragile solution for what we need: it's neither scalable nor effective compared to pure PHP exceptions, for instance. Is this really the way this kind of stuff is generally handled in MVC-based apps? Thanks!
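
    For comparison, a minimal sketch of the exception-based shape this question is weighing, assuming CodeIgniter's $this->db transaction methods; the model method and field names are illustrative, not the poster's actual code:

        // In a CodeIgniter model: each step throws on failure, the caller
        // catches, records a message, rolls back once, and keeps executing.
        public function addProductGroup($data)
        {
            $status = array();
            $this->db->trans_begin();
            try {
                $this->addSku($data['name']);       // throws RuntimeException on failure
                $this->addImage($data['filename']); // throws RuntimeException on failure
                $this->db->trans_commit();
                $status[] = 'Product group created';
            } catch (RuntimeException $e) {
                $this->db->trans_rollback();
                $status[] = $e->getMessage();       // script execution continues
            }
            return $status;
        }

    The exceptions stay inside the model, so the controller still just receives an array of status strings; nothing propagates far enough to halt the script.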

    Read the article

  • Business entity: private instance vs. single instance

    - by taoufik
    Suppose my WinForms application has a business entity Order, and the entity is used in multiple views, each view handling a different domain or use case in the application - for example, one managing orders and another digging into one order and displaying additional data.

    If I used NHibernate (or any other ORM) with one session/dataContext per view (or per db action), I'd end up with two different instances for the same Order (say, orderId = 1). Although functionally the same entity, they are technically two different instances. Yes, I could implement Equals/GetHashCode to make them "seem" the same.

    Why would you go for a single instance per entity versus private instances per view or per use case? Having single instances has the advantage of sharing INotifyPropertyChanged events and sharing additional (non-persistent) data. Having a private instance in each view would give you the flexibility of undo functionality at the view level. In the example above, I'd allow the user to change order details and give them the flexibility to not save the change. Here, synchronisation between the views/use cases happens at the data-persistence level.

    What would your argument be?

    Read the article

  • Generate an ID via COM interop

    - by Erik van Brakel
    At the moment, we've got an unmaintainable ball of code which offers an interface to a third-party application. The third-party application has a COM assembly which MUST be used to create new entries. This process involves two steps: generate a new object (basically an ID), and update that object with new field values. Because COM interop is so slow, we only use it to generate the ID (and related objects) in the database. The actual update is done using a regular SQL query.

    What I am trying to figure out is whether it's possible to use NHibernate to do some of the heavy lifting for us, without bypassing the COM assembly. Here's the code for saving something to the database as I envision it:

        using (var s = sessionFactory.OpenSession())
        using (var t = s.BeginTransaction())
        {
            MyEntity entity = new MyEntity();
            s.Save(entity);
            t.Commit();
        }

    Regular NH code, I'd say. Now, this is where it gets tricky. I think I have to supply my own implementation of NHibernate.Id.IIdentifierGenerator which calls the COM assembly in the Generate method. That's not a problem. What IS a problem is that the COM assembly requires initialisation, which does take a bit of time. It also doesn't like multiple instances in the same process, for some reason.

    What I would like to know is if there's a way to properly access an external service in the generator code. I'm free to use any technique I want, so if it involves something like an IoC container, that's no problem. The thing I am looking for is where exactly to hook up my code so I can access the things I need in my generator, without having to resort to using singletons or other nasty stuff.
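
    For reference, a minimal sketch of what such a generator can look like, assuming NHibernate's NHibernate.Id.IIdentifierGenerator interface; the IIdGateway wrapper and ServiceRegistry are hypothetical stand-ins for whatever holds the one initialized COM instance. Because NHibernate instantiates generators itself via reflection, constructor injection isn't available, which is why the dependency is resolved inside Generate:

        using NHibernate.Engine;
        using NHibernate.Id;

        public class ComBackedIdGenerator : IIdentifierGenerator
        {
            public object Generate(ISessionImplementor session, object obj)
            {
                // IIdGateway (hypothetical) wraps the single, already-initialized
                // COM instance for the whole process.
                IIdGateway gateway = ServiceRegistry.Resolve<IIdGateway>();
                return gateway.CreateNewId(obj.GetType());
            }
        }

    The registry call is exactly the service-locator compromise the question is trying to avoid, but since NHibernate owns the generator's lifetime, resolving at call time is the usual hook.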

    Read the article

  • Downloading jQuery UI: Ok, so what part of this mess do I copy to the server?

    - by Martha
    From the "should be simple, but..." files: Trying to get started with jQuery UI. Went to the site, used their custom builder thingy to assemble the parts I need, made myself a custom theme using the Theme Roller, downloaded the zip file thus produced, unzipped it on my local drive. Ok, so I have 37 folders, 311 files, and a total of 2.4 MB. Ain't no way in hell all this is going on the server. What parts do I need to put there? 'css' 'custom-theme': jquery-ui-1.8.custom.css, 'images' subfolder with 12 .png images 'development-bundle' 'demos': demos.css, index.html, plus 18 subfolders, but I'm guessing "not needed" 'docs': 17 .html files, but again, I'm guessing "not needed" 'external': 4 .js files, one .css 'themes': 'base' and 'custom-theme' subfolders, each with 8 or 9 .css files and an 'images' subfolder with about a dozen images 'ui': 25 .js files, an 'i18n' subfolder with 53 .js files, and a 'minified' subfolder with 24 .js files 'js': jquery-1.4.2.min.js and jquery-ui-1.8.custom.min.js Also, the file structure. Our server is set up something like this: root admin (administrative tools) css forms (the gist of the site lives here) images include (asp code snippets that are used by multiple pages) js (just a few things right now, like an ancient wheezing spelling checker) As far as I can tell, the jQuery css files assume that (1) each theme is in its own folder, and (2) each folder has its own images subfolder. How can I convince it otherwise? i.e. put the necessary .js files in the 'js' folder, the .css files in the 'css' folder, and the images in the 'images' folder?

    Read the article

  • Validate XML with XSD

    - by munish
    I want to know how to validate XML against an XSD when the XML corresponds not to a top-level element but to a complex type, since the Validator class's validate method only matches top-level element declarations. So basically I want to validate an XML fragment against one of the XSD's complex types. The basic XSD:

        <xs:element name="Customer">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="Dob" type="xs:date" />
              <xs:element name="Address">
                <xs:complexType>
                  <xs:sequence>
                    <xs:element name="Line1" type="xs:string" />
                    <xs:element name="Line2" type="xs:string" />
                  </xs:sequence>
                </xs:complexType>
              </xs:element>
            </xs:sequence>
          </xs:complexType>
        </xs:element>

    We can assume we have multiple complex types in the Customer element. My XML is:

        <Address>
          <Line1>34 thingy street, someplace</Line1>
          <Line2>sometown, w1w8uu</Line2>
        </Address>

    How do I validate my XML against the XSD? Kindly post your suggestions in Java.
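
    A sketch of the standard JAXP validation call (javax.xml.validation), with the caveat that the validator matches the fragment's root against a global element declaration - so this only works once Address is promoted to a top-level <xs:element> in the schema, which is an assumption on top of the XSD above:

        import java.io.File;
        import javax.xml.XMLConstants;
        import javax.xml.transform.stream.StreamSource;
        import javax.xml.validation.Schema;
        import javax.xml.validation.SchemaFactory;
        import javax.xml.validation.Validator;

        public class FragmentValidation {
            public static void main(String[] args) throws Exception {
                SchemaFactory factory =
                    SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
                Schema schema = factory.newSchema(new File("customer.xsd"));
                Validator validator = schema.newValidator();
                // address.xml contains only the <Address>...</Address> fragment;
                // validation succeeds only if Address is a global element in the XSD
                validator.validate(new StreamSource(new File("address.xml")));
                System.out.println("Fragment is valid");
            }
        }

    The usual refactoring is to pull the nested complex type out as a named global type, declare a global <xs:element name="Address" type="..."/>, and have Customer reference it.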

    Read the article

  • NHibernate Session DI from StructureMap in components

    - by Corey Coogan
    I know this is somewhat of a dead horse, but I'm not finding a satisfactory answer. First let me say, I am NOT dealing with a web app, otherwise managing the NH session would be quite simple. I have a bunch of enterprise components, and those components have their own service layer that acts on multiple repositories. For example:

        Claim Component
            Claim Processing Service
            Claim Repository
        Billing Component
            Billing Service
            Billing Repository
        Policy Component
            PolicyLockService
            Policy Repository

    Now I may have a console or Windows application that needs to coordinate an operation that involves each of the services. I want to write the services so that they are injected (DI) with their required repositories. The repositories should have an ISession, or similar, injected into them so that the whole operation can be performed under one ISession/ITransaction.

    I'm aware of the Unit of Work pattern and the many samples out there, but none of them show DI. I'm also leery of [ThreadStatic] because this stuff can also be used from WCF, and I have found enough posts describing how to do that. I've read about Business Conversations, but need something simple that each Windows/console app can easily bootstrap, since we have a lot of these apps and some pretty inexperienced developers.

    So how can I configure StructureMap to inject the same ISession into each of the dependent repositories from an application? Here's a totally contrived and totally made-up example without using SM (for clarification only - please don't spend energy criticizing):

        // ConsoleApplication Main
        using (ISession session = GetSession())
        using (ITransaction trans = session.BeginTransaction())
        {
            var policyRepo = new PolicyRepo(session);
            var policyService = new PolicyService(policyRepo);

            var billingRepo = new BillingRepo(session);
            var billingService = new BillingService(billingRepo);

            var claimRepo = new ClaimsRepo(session);
            var claimService = new ClaimService(claimRepo, policyService, billingService);

            claimService.FileClaim();
            trans.Commit();
        }
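
    A sketch of one way to wire this up, assuming StructureMap 2.x's registry API (the type names are illustrative). Scoping ISession as hybrid/thread-local means every repository resolved on a given thread shares one session; nested containers scoped per logical transaction are the usual alternative if thread affinity is a concern under WCF:

        public class PersistenceRegistry : Registry
        {
            public PersistenceRegistry()
            {
                // one session factory for the lifetime of the process
                For<ISessionFactory>().Singleton()
                    .Use(ctx => new Configuration().Configure().BuildSessionFactory());

                // one session per thread (or per HTTP request when hosted on the web)
                For<ISession>().HybridHttpOrThreadLocalScoped()
                    .Use(ctx => ctx.GetInstance<ISessionFactory>().OpenSession());
            }
        }

        // bootstrap in each console/Windows app:
        //     ObjectFactory.Initialize(x => x.AddRegistry(new PersistenceRegistry()));
        //     var claimService = ObjectFactory.GetInstance<ClaimService>();
        // Every repository that takes ISession in its constructor now receives
        // the same instance for the duration of the thread's work.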

    Read the article

  • Separate html pages for each screen in jQuery Mobile

    - by vrs
    I am a newbie to jQuery Mobile. So far, whatever examples I've found contain only one html page for the whole application, with multiple div tags where each page/screen is defined as a div with data-role="page" plus, optionally, a header and footer. Based on user actions, we hide some divs (pages) and show only the expected page. This multi-page template seems to be the standard design, as described in several blogs. Are there other ways of designing this?

    What I would like to have is multiple html pages, e.g. one for login, one for home, one for contact, etc. Otherwise it is difficult to understand/code/debug issues, especially for people from a Java background like me. So what I want is some kind of MVC design with jQuery Mobile, with each view/screen as a separate html file associated with one js file (the controller). Can we have multiple html pages in a jQuery Mobile app? If possible, how do we pass data / maintain session between them? Any samples are most welcome. Thanks in advance.

    Note: I don't want server-side includes; my app contains 10 to 15 screens, and each page will make a web-service call, fetch some data, and map it to the UI.
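
    A sketch of the multi-file approach, which jQuery Mobile does support: each html file holds one data-role="page" div, links between files load via Ajax by default, and cross-page state goes in localStorage or a shared script (file names here are made up; the delegated .on() handler assumes jQuery 1.7+):

        <!-- login.html: the first file, loaded normally -->
        <div data-role="page" id="login">
          <div data-role="content">
            <a href="home.html" data-role="button">Log in</a>
          </div>
        </div>

        // login.js, acting as this view's controller
        $(document).on('pageshow', '#login', function () {
            localStorage.setItem('userId', '42');   // survives navigation to home.html
        });

        // home.js
        $(document).on('pageshow', '#home', function () {
            var userId = localStorage.getItem('userId');
            // make this screen's web-service call and map the result to the UI
        });

    One gotcha: when jQuery Mobile fetches a linked file via Ajax it injects only the page div, so shared scripts must be referenced from the first page loaded (or from every page's body).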

    Read the article

  • Convert a nested array into a flat array with PHP

    - by Ben Fransen
    Hello all, I'm trying to create a generic database mapping class with PHP. Collecting the data through my functions is going well, but as expected I'm retrieving a nested set. A print_r of my received array looks like:

        Array
        (
            [table] => Session
            [columns] => Array
                (
                    [0] => `Session`.`ID` AS `Session_ID`
                    [1] => `Session`.`User` AS `Session_User`
                    [2] => `Session`.`SessionID` AS `Session_SessionID`
                    [3] => `Session`.`ExpiresAt` AS `Session_ExpiresAt`
                    [4] => `Session`.`CreatedAt` AS `Session_CreatedAt`
                    [5] => `Session`.`LastActivity` AS `Session_LastActivity`
                    [6] => `Session`.`ClientIP` AS `Session_ClientIP`
                )
            [0] => Array
                (
                    [table] => User
                    [columns] => Array
                        (
                            [0] => `User`.`ID` AS `User_ID`
                            [1] => `User`.`UserName` AS `User_UserName`
                            [2] => `User`.`Password` AS `User_Password`
                            [3] => `User`.`FullName` AS `User_FullName`
                            [4] => `User`.`Address` AS `User_Address`
                        )
                    [0] => Array
                        (
                            [table] => Address
                            [columns] => Array
                                (
                                    [0] => `Address`.`ID` AS `Address_ID`
                                    [1] => `Address`.`UserID` AS `Address_UserID`
                                    [2] => `Address`.`Street` AS `Address_Street`
                                    [3] => `Address`.`City` AS `Address_City`
                                )
                        )
                )
        )

    To simplify things I want to turn this nested array into a flat array, so I can easily loop through it and use the 'columns' key to create my SELECT query. I've been struggling with this for a while now and figured maybe some users at SO can help me out here. I've tried multiple things with recursion, all without luck so far... Any help is much appreciated! Thanks in advance, Ben Fransen
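
    A minimal recursive sketch of the flattening step, assuming the shape above (each level has 'table' and 'columns' keys, and joined tables sit under numeric keys):

        <?php
        // Returns a flat list of array('table' => ..., 'columns' => ...) entries.
        function flattenMapping(array $node)
        {
            $flat = array(array('table' => $node['table'], 'columns' => $node['columns']));
            foreach ($node as $key => $value) {
                if (is_int($key) && is_array($value)) {
                    $flat = array_merge($flat, flattenMapping($value)); // recurse into joins
                }
            }
            return $flat;
        }

        // Building the SELECT list:
        // $columns = array();
        // foreach (flattenMapping($mapping) as $entry) {
        //     $columns = array_merge($columns, $entry['columns']);
        // }
        // $sql = 'SELECT ' . implode(', ', $columns) . ' FROM ...';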

    Read the article

  • Index Tuning for SSIS tasks

    - by Raj More
    I am loading tables in my warehouse using SSIS. Since my SSIS is slow, it seemed like a great idea to build indexes on the tables. There are no primary keys (and therefore no foreign keys), indexes (clustered or otherwise), or constraints on this warehouse. In other words, it is 100% efficiency free.

    We are going to add indexes based on usage - by analyzing new queries and current query performance. So, instead of doing it our old-fashioned sweat-and-grunt way of actually reading the SQL statements and execution plans, I thought I'd put the shiny new Database Engine Tuning Advisor to use. I turned SQL logging off in my SSIS package, ran a "Tuning" trace, saved it to a table, and analyzed the output in the Tuning Advisor. Most of the lookups are done as:

        exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName],
            [CompanyTypeID], [HierarchyNodeID]
            FROM [dbo].[Company]
            WHERE ([CompanyID]=@P1)
              AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)', N'@P1 int', 1

    (and the same statement repeats with @P1 = 2, 3, 4, and so on). When analyzed, these statements get the reason "Event does not reference any tables". Huh? Does it not see the FROM [dbo].[Company]??!! What is going on here?

    So, I have multiple questions:

    1. How do I get it to capture the actual statement executing in my trace, not what was submitted in a batch?
    2. Are there any best practices for tuning performance related to SSIS packages running against SQL Server 2008?

    Read the article

  • Use an Array to Cycle Through a MySQL Where Statement

    - by Ryan
    I'm trying to create a loop that will output multiple WHERE clauses for a MySQL query. Ultimately, my goal is to end up with these four separate WHERE clauses:

        `fruit` = '1' AND `vegetables` = '1'
        `fruit` = '1' AND `vegetables` = '2'
        `fruit` = '2' AND `vegetables` = '1'
        `fruit` = '2' AND `vegetables` = '2'

    My theoretical code is pasted below:

        <?php
        $columnnames = array('fruit', 'vegetables');
        $column1 = array('1', '2');
        $column2 = array('1', '2');
        $where = '';
        $column1inc = 0;
        $column2inc = 0;

        while ($column1inc <= count($column1)) {
            if (!empty($where)) $where .= ' AND ';
            $where = "`" . $columnnames[0] . "` = ";
            $where .= "'" . $column1[$column1inc] . "'";

            while ($column2inc <= count($column2)) {
                if (!empty($where)) $where .= ' AND ';
                $where .= "`" . $columnnames[1] . "` = ";
                $where .= "'" . $column2[$column2inc] . "'";
                echo $where . "\n";
                $column2inc++;
            }
            $column1inc++;
        }
        ?>

    When I run this code, I get the following output:

        `fruit` = '1' AND `vegetables` = '1'
        `fruit` = '1' AND `vegetables` = '1' AND `vegetables` = '2'
        `fruit` = '1' AND `vegetables` = '1' AND `vegetables` = '2' AND `vegetables` = ''

    Does anyone see what I am doing incorrectly? Thanks.
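
    In case a sketch of the fix helps: the accumulated string and the inner counter both need resetting on every outer iteration, and the bounds should be < count(...) rather than <=. Rewritten with foreach over the same data:

        <?php
        $columnnames = array('fruit', 'vegetables');
        $column1 = array('1', '2');
        $column2 = array('1', '2');

        foreach ($column1 as $fruitValue) {
            foreach ($column2 as $vegValue) {
                // build each clause from scratch instead of appending forever
                $where = "`" . $columnnames[0] . "` = '" . $fruitValue . "'"
                       . " AND `" . $columnnames[1] . "` = '" . $vegValue . "'";
                echo $where . "\n";
            }
        }

    This prints exactly the four clauses listed at the top of the question.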

    Read the article

  • How do I make an allocation table?

    - by david
    I have built a grid of divs as a playground for some visual experiments. In order to use that grid, I need to know the x and y coordinates of each div. That's why I want to create a table with the X and Y position of each div:

        X:0 Y:0 = div:eq(0)
        X:0 Y:1 = div:eq(1)
        X:0 Y:2 = div:eq(2)
        X:0 Y:3 = div:eq(3)
        X:1 Y:0 = div:eq(4)
        etc.

    What is the best way to build a table like that? Creating an object like this:

        { 00: 0, 01: 1, 02: 2, ... }

    or is it better to create an array?

        position[0][0] = 0

    The thing is, I need to use the table in multiple ways. For example: the user clicked div number 13 - what are the coordinates of this div? Or: what is the eq of the div at x: 12, y: 5? This is how I do it right now:

        var row = 0;
        var col = 0;
        var eq = 0;
        c.find('div').each(function(i){ // c = $('div#stage')
            if (i != 0 && $(this).offset().top != $(this).prev().offset().top) {
                row++;
                col = 0;
            }
            $(this).attr({ 'row': row, 'col': col });
            col++;
        });

    I think it would be faster to build a table with the coordinates instead of adding them as attr or data to the DOM, but I can't figure out how to do this technically. How would you solve this problem with JS / jQuery?
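
    One sketch of a two-way lookup built in the same loop (plain JS objects, no DOM attributes; 'stage' is the container id used above):

        var byIndex = [];   // byIndex[eq]    -> { row: r, col: c }
        var byCoord = {};   // byCoord['r,c'] -> eq
        var row = 0, col = 0;

        $('div#stage').find('div').each(function (i) {
            if (i !== 0 && $(this).offset().top !== $(this).prev().offset().top) {
                row++;
                col = 0;
            }
            byIndex[i] = { row: row, col: col };
            byCoord[row + ',' + col] = i;
            col++;
        });

        // clicked div 13 -> coordinates:  byIndex[13]
        // y: 5, x: 12   -> eq:            byCoord['5,12']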

    Read the article

  • Passing data back from detailViewController to UITableView

    - by jsetting32
    I am currently at the point where I have a UITableViewCell that needs to be updated. When the user presses the cell - THERE'S ONLY 1!! - the user is pushed to a UITableViewController where they can select one of multiple cells, each with its own title. I need to get the selected cell's title, pass the value back to the parent view controller, and update the one cell's label to the title the user picked in the pushed UITableViewController.

    [Pictures of the parent view controller and the pushed view controller appeared here.]

    I was told earlier yesterday that delegation would be needed, but I am unsure what to do at this point :/. Here's some code I use in the parent view controller:

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath
        {
            [tableView deselectRowAtIndexPath:indexPath animated:YES];
            ProblemTypes *view = [[ProblemTypes alloc] init];
            [self.navigationController pushViewController:view animated:YES];
        }

    I am also NOT USING storyboards, just a few xibs. Also, here's the code for the pushed view controller to pop back to the parent when a cell is selected:

        #pragma mark - Table view delegate
        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath
        {
            UITableViewCell *cell = [tableView cellForRowAtIndexPath:indexPath];
            NSLog(@"Cell's text: %@", cell.textLabel.text);
            [self.navigationController popViewControllerAnimated:YES];
        }

    Thank you guys!
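
    A minimal sketch of the delegate wiring (ProblemTypes is the pushed controller from the code above; the protocol name and method are made up):

        // ProblemTypes.h
        @protocol ProblemTypesDelegate <NSObject>
        - (void)problemTypes:(id)sender didSelectProblem:(NSString *)problemName;
        @end

        @interface ProblemTypes : UITableViewController
        @property (nonatomic, weak) id<ProblemTypesDelegate> delegate;
        @end

        // ProblemTypes.m, inside didSelectRowAtIndexPath:, before popping:
        //     [self.delegate problemTypes:self didSelectProblem:cell.textLabel.text];

        // Parent controller: adopt ProblemTypesDelegate, set view.delegate = self
        // before pushing, then implement the callback:
        //     - (void)problemTypes:(id)sender didSelectProblem:(NSString *)problemName {
        //         self.selectedProblemName = problemName;  // hypothetical property
        //         [self.tableView reloadData];             // re-renders the single cell
        //     }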

    Read the article

  • KD-Trees and missing values (vector comparison)

    - by labratmatt
    I have a system that stores vectors and allows a user to find the n vectors most similar to a query vector. That is, a user submits a vector and my system spits out "here are the n most similar vectors." I generate the similar vectors using a KD-tree and everything works well, but I want to do more: I want to present the n most similar vectors even if the user doesn't submit a complete vector (i.e. a vector with missing values). If a user submits a vector with three dimensions, I still want to find the n nearest of my stored vectors (which have 11 dimensions).

    I have a couple of obvious solutions, but I'm not sure either one is very good:

    1. Create multiple KD-trees, each built using one of the most popular subsets of dimensions users search for. If a user submits a query vector with three dimensions x, y, z, I match that query against an already-built KD-tree containing only those three dimensions.
    2. Ignore KD-trees when a user submits a query vector with missing values, and compare the query vector to the stored vectors (kept in a DB table) one by one, using something like a dot product.

    This has to be a common problem. Any suggestions? Thanks for the help.
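
    If option 2 wins on simplicity, a sketch of the partial-match scoring it needs - distance computed only over the dimensions the user actually supplied (C# here purely for concreteness; nulls mark the missing dimensions):

        // query has 11 slots; null means the user left that dimension blank
        static double PartialDistance(double?[] query, double[] stored)
        {
            double sum = 0;
            for (int d = 0; d < stored.Length; d++)
            {
                if (query[d].HasValue)              // skip missing dimensions
                {
                    double diff = query[d].Value - stored[d];
                    sum += diff * diff;
                }
            }
            return Math.Sqrt(sum);
        }

        // var nearest = vectors.OrderBy(v => PartialDistance(query, v)).Take(n);

    At a few thousand 11-dimensional vectors, this brute-force scan is usually cheap enough that the per-subset trees of option 1 only pay off at much larger scale.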

    Read the article

  • Passing data structures to different threads

    - by Robb
    I have an application that will be spawning multiple threads. However, I feel there might be an issue with threads accessing data they shouldn't be. I'm relatively new to threading, so bear with me. Here is the structure of the threaded application (sorry for the crudeness):

                            MainThread
                           /          \
                    Thread A          Thread B
                    /      \          /      \
            Thread A_1  Thread A_2  Thread B_1  Thread B_2

    Under each lettered thread (of which there could be many), there will only be two threads, and they are fired off sequentially. The issue I'm having is that I'm not entirely sure how to pass a data structure into these threads. The structure is created in MainThread, will be modified in each lettered thread (Thread A, etc.) specifically for that thread, and then a member variable from that structure is sent to the lettered_numbered threads.

    Currently, the lettered thread class has a member variable, and when the class is constructed, the data structure from MainThread is passed in by reference, invoking the copy constructor so the lettered thread has its own copy to play with. The lettered_numbered thread simply takes a string variable from the structure within the lettered thread.

    My question is: is this acceptable? Is there a much better way to ensure each lettered thread gets its own data structure to play with? Sorry for the somewhat poor explanation; please leave comments and I'll try to clarify.
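
    Since the copy constructor is doing the isolation here, a small sketch of the handoff (std::thread is shown for brevity; the original code may well use another threading API):

        #include <string>
        #include <thread>

        struct WorkData {
            std::string payload;
            // the (implicit) copy constructor gives each thread its own instance
        };

        void letteredThread(WorkData data) {          // passed BY VALUE: a private copy
            data.payload += " (modified by lettered thread)";
            std::thread child(
                [](std::string s) { /* lettered_numbered work on its own string */ },
                data.payload);                        // the string is copied again
            child.join();
        }

        int main() {
            WorkData original{"created in MainThread"};
            std::thread a(letteredThread, original);  // copy for thread A
            std::thread b(letteredThread, original);  // copy for thread B
            a.join();
            b.join();
        }

    Pass-by-value (or an explicit copy at construction, as described above) is a perfectly acceptable way to guarantee isolation; the trade-off is memory and copy cost versus the locking that sharing would require.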

    Read the article

  • FTP FileWatcher

    - by Meiscooldude
    So, I am in this little predicament where I am stuck watching a few FTP folders to see if they have new files added to them. If they do, it needs to throw an event with the file name, thereby telling something else to download that file.

    This is a pretty simple object to make; I was just curious if anyone knew how expensive this operation would be. I plan on using the NLIST command because I don't need file-size information, and there will be no sub-directories in the folders. Each file in a folder will have exactly 25 characters in its name. There could be anywhere from 10 to maybe a couple thousand (max around 2000) files per folder (usually on the lower end, 100-300, but currently growing). The files are anywhere from 250 KB to a very, VERY unlikely 10 MB (usually within the 250 KB to 4 MB range). There could possibly be up to a few hundred folders (in which case I could change the watch frequency depending on the number of folders), but currently there are only a few (6-10ish). There would also be multiple logins for the FTP server, with different logins having access to different folders.

    I am not asking for an implementation, just whether anyone has some first- or second-hand knowledge about FTP and how this could affect my network. I am not opposed to adding file-retention times or changing the frequency with which I check for new files.
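
    For a rough sense of cost: with NLIST and 2000 25-character names, one poll of a folder is a single round-trip and roughly 50 KB of listing data. A sketch of that poll in C# (FtpWebRequest's ListDirectory method issues NLST; the event at the end is the hypothetical part):

        // fields: HashSet<string> knownFiles; event EventHandler<FileEventArgs> NewFileDetected;
        var request = (FtpWebRequest)WebRequest.Create("ftp://host/folder/");
        request.Method = WebRequestMethods.Ftp.ListDirectory;   // NLST
        request.Credentials = new NetworkCredential(user, pass);

        var current = new HashSet<string>();
        using (var response = (FtpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
                current.Add(line);
        }

        current.ExceptWith(knownFiles);                 // names not seen before
        foreach (var name in current)
            NewFileDetected(this, new FileEventArgs(name));
        knownFiles.UnionWith(current);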

    Read the article

  • Maintaining a pool of DAO class instances vs. using the new operator

    - by Fazal
    We have been trying to benchmark our application's performance in multiple ways for some time now. I always believed that object creation in Java using Class.newInstance() was not slow (at least after version 1.4 of Java), but we ran a test anyway: the newInstance method versus maintaining an object pool of 1000 objects. We did about 200K iterations of loading data from the DB using JDBC and populating these objects. I was amazed (even shocked) to see that the newInstance code was almost 10 times slower than the object-pool code. These objects represent tables with about 50 fields, all of type String.

    Can someone share their thoughts on this issue? Now I am more confused about whether object pooling of at least some DAO instances is a better option. The pool size, as I see it right now, should be large enough to meet the size of average requests. There is a flip side: my memory footprint will go up. But I am beginning to wonder if this kind of idea makes sense, at least for some of the DAO entities representing tables of about 50 or more columns.

    Please share your ideas, and let me know if this has been tried by someone, or whether I am missing some point here.
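
    For whatever it's worth, a minimal sketch of a bounded pool (java.util.concurrent; the generic DAO type is a stand-in). One caveat on the benchmark itself: a fairer comparison caches the Constructor once instead of calling Class.newInstance() inside the loop, since the reflective lookup, rather than the allocation, is often the expensive part:

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;
        import java.util.function.Supplier;

        public class DaoPool<T> {
            private final BlockingQueue<T> idle;

            public DaoPool(int size, Supplier<T> factory) {
                idle = new ArrayBlockingQueue<>(size);
                for (int i = 0; i < size; i++) {
                    idle.add(factory.get());        // pre-populate up front
                }
            }

            public T borrow() throws InterruptedException {
                return idle.take();                 // blocks when the pool is empty
            }

            public void giveBack(T dao) {
                idle.offer(dao);                    // caller must reset fields first
            }
        }

        // DaoPool<OrderDao> pool = new DaoPool<>(1000, OrderDao::new);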

    Read the article

  • What is the best way to deserialize generics written with a different version of a signed assembly?

    - by Rick Minerich
    In other cases it has been suggested that you simply add a SerializationBinder which removes the version from the assembly type. However, when using generic collections of a type found in a signed assembly, that type is strictly versioned based on its assembly. Here is what I've found works:

        internal class WeaklyNamedAssemblyBinder : SerializationBinder
        {
            public override Type BindToType(string assemblyName, string typeName)
            {
                // temporarily redirect assembly resolution while loading the type
                ResolveEventHandler handler = new ResolveEventHandler(CurrentDomain_AssemblyResolve);
                AppDomain.CurrentDomain.AssemblyResolve += handler;
                Type returnedType;
                try
                {
                    AssemblyName asmName = new AssemblyName(assemblyName);
                    var assembly = Assembly.Load(asmName);
                    returnedType = assembly.GetType(typeName);
                }
                catch
                {
                    returnedType = null;
                }
                finally
                {
                    AppDomain.CurrentDomain.AssemblyResolve -= handler;
                }
                return returnedType;
            }

            Assembly CurrentDomain_AssemblyResolve(object sender, ResolveEventArgs args)
            {
                // fall back to loading by simple name, ignoring version/culture/token
                string truncatedAssemblyName = args.Name.Split(',')[0];
                Assembly assembly = Assembly.Load(truncatedAssemblyName);
                return assembly;
            }
        }

    However, causing the binding process to change globally seems rather dangerous to me. Strange things could happen if serialization were happening in multiple threads. Perhaps a better solution is to do some regex manipulation of the typeName? What do you think?
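
    On the regex idea: a sketch that strips the version/culture/token triples embedded inside a generic typeName before resolving it (the pattern assumes the standard "Name, Assembly, Version=..., Culture=..., PublicKeyToken=..." form):

        using System.Text.RegularExpressions;

        static string StripVersions(string typeName)
        {
            // turns "...[[Foo.Bar, MyAsm, Version=1.0.0.0, Culture=neutral, PublicKeyToken=abcd1234]]..."
            // into  "...[[Foo.Bar, MyAsm]]..." so the currently loaded version is used
            return Regex.Replace(
                typeName,
                @", Version=[^,\]]+, Culture=[^,\]]+, PublicKeyToken=[^,\]]+",
                string.Empty);
        }

        // in BindToType:  returnedType = Type.GetType(StripVersions(typeName));

    This keeps the rewriting local to the binder instead of swapping the AppDomain's resolution behavior globally, though it still needs the same care if multiple threads deserialize with different version expectations.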

    Read the article

  • Creating an API for an ASP.NET MVC site with rate-limiting and caching

    - by Maxim Z.
    Recently, I've been very interested in APIs, specifically in how to create them. For the purpose of this question, let's say that I have created an ASP.NET MVC site that has some data on it, and I want to create an API for this site. I have multiple questions about this:

    1. What type of API should I create? I know that REST and OData APIs are very popular. What are the pros and cons of each, and how do I implement them? From what I understand so far, a REST API with ASP.NET MVC would just be actions that return JSON instead of Views, and OData APIs are documented here.

    2. How do I handle writing? Reading from both API types is quite simple. However, writing is more complex. With the REST approach, I understand that I can use HTTP POST, but how do I implement authentication? Also, with OData, how does writing work in the first place?

    3. How do I implement basic rate-limiting and caching? From my past experience with APIs, these are very important so that the API server isn't overloaded. What's the best way to set these two things up? Can I get some sample code? Any code that relates to C# and ASP.NET MVC would be appreciated.

    Thanks in advance! While this is a broad question, I think it's not too broad... :) There are some similar questions about APIs, but I haven't found any that directly address the questions I outlined here.
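
    On question 3, a sketch of a per-client rate limit as an action filter backed by the ASP.NET cache (the thresholds, the IP-based key, and the 429 response are all illustrative choices; HttpStatusCodeResult assumes MVC 3+, and the built-in [OutputCache] attribute already covers the response-caching half):

        using System;
        using System.Threading;
        using System.Web.Caching;
        using System.Web.Mvc;

        public class RateLimitAttribute : ActionFilterAttribute
        {
            public int RequestsPerMinute { get; set; }

            private class Counter { public int Value; }

            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                string key = "rate:" + filterContext.HttpContext.Request.UserHostAddress;
                var cache = filterContext.HttpContext.Cache;

                var counter = (Counter)cache[key];
                if (counter == null)
                {
                    counter = new Counter();
                    // the entry expires when the one-minute window closes
                    cache.Insert(key, counter, null, DateTime.UtcNow.AddMinutes(1),
                                 Cache.NoSlidingExpiration);
                }

                if (Interlocked.Increment(ref counter.Value) > RequestsPerMinute)
                    filterContext.Result = new HttpStatusCodeResult(429, "Rate limit exceeded");
            }
        }

        // [RateLimit(RequestsPerMinute = 60)]
        // [OutputCache(Duration = 30, VaryByParam = "*")]
        // public ActionResult Orders() { ... }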

    Read the article

  • Multi-level shop, XML or SQL: best practice?

    - by danrichardson
    Hello, I have a general "best practice" question regarding building a multi-level shop, which I hope doesn't get marked down/deleted, as I personally think it's quite a good "subjective" question.

    I am a developer in charge (for the most part) of maintaining and evolving a CMS system and the associated front-end functionality. Over the past half year I have developed a multi-level shop system, so that an infinite depth of categories may exist down to the product level, and it all works fine. However, over the last week or so I have questioned my own methods in front-end development and the best way to show the multi-level data structure.

    I currently use a SQL Server database (2000), pull out all the shop levels, and then process them into an enumerable typed list with child enumerable typed lists, so that all levels are sorted. This seems quite process-heavy in my head, but we're not talking about thousands of rows - generally only 1-500 rows, maybe.

    I have been toying recently with the idea of storing the structure in an XML document (as well as the database) and then sending last-modified headers when serving/requesting the document, which would then be processed as/when necessary with an XSL(T) document, server side. This is quite a handy, reusable method of storing the data, but does it have more overhead given that I'm opening and closing files? The XML would also require a bit of processing to pull out blocks of it, if for instance I wanted to show two levels midway through the tree for a side menu.

    I use the above method for sitemap purposes, so there is already code I have built which does what I require, but I'm unsure what the best process is. Maybe a hybrid method which pulls out the data, sorts it, and then builds an XML document/stream (XDocument/XmlDocument) for XSL processing is a good way? - This is the way I currently make the CMS work for the shop.

    So really (and thanks for sticking with me on this), I am just wondering which methods other people use or recommend as being the best/most logical way of doing things. Thanks, Dan

    Read the article

  • C# TcpClient, getting back the entire response from a telnet device

    - by Dan Bailiff
    I'm writing a configuration tool for a device that can communicate via telnet. The tool sends a command via TcpClient.GetStream().Write(...) and then checks for the device's response via TcpClient.GetStream().ReadByte(). This works fine in unit tests or when I'm stepping through code slowly. If I run the config tool so that it performs multiple commands consecutively, the behavior of the read is inconsistent - by inconsistent I mean the data is sometimes missing, incomplete, or partially garbled. So even though the device performed the operation and responded, my code to check for the response was unreliable.

    I "fixed" this problem by adding a Thread.Sleep to make sure the read method waits long enough for the device to complete its job and send back the whole response (in this case 1.5 seconds was a safe amount). This feels wrong - blocking everything for a fixed time - and I wonder if there is a better way to make the read method wait only as long as it takes to get the whole response from the device.

        private string Read()
        {
            if (!tcpClient.Connected)
                throw (new Exception("Read() failed: Telnet connection not available."));

            StringBuilder sb = new StringBuilder();
            do
            {
                ParseTelnet(ref sb);
                System.Threading.Thread.Sleep(1500);
            } while (tcpClient.Available > 0);
            return sb.ToString();
        }

        private void ParseTelnet(ref StringBuilder sb)
        {
            while (tcpClient.Available > 0)
            {
                int input = tcpClient.GetStream().ReadByte();
                switch (input)
                {
                    // parse the input
                    // ... do other things in special cases
                    default:
                        sb.Append((char)input);
                        break;
                }
            }
        }
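
    One common alternative sketch: treat the device's prompt (or line terminator) as the end-of-response marker and use the stream's ReadTimeout as the safety net, instead of pacing with a fixed sleep (the prompt string is device-specific and hypothetical here):

        private string Read(string promptMarker)
        {
            var stream = tcpClient.GetStream();
            stream.ReadTimeout = 2000;                  // safety net, not a pacing delay
            var sb = new StringBuilder();
            var buffer = new byte[1024];
            try
            {
                while (!EndsWith(sb, promptMarker))     // loop until the device says it's done
                {
                    int read = stream.Read(buffer, 0, buffer.Length); // blocks up to ReadTimeout
                    if (read == 0) break;               // remote side closed the connection
                    sb.Append(Encoding.ASCII.GetString(buffer, 0, read));
                }
            }
            catch (IOException)
            {
                // timed out: return whatever arrived
            }
            return sb.ToString();
        }

        private static bool EndsWith(StringBuilder sb, string marker)
        {
            return sb.Length >= marker.Length
                && sb.ToString(sb.Length - marker.Length, marker.Length) == marker;
        }

    The telnet option-negotiation bytes would still need filtering (as ParseTelnet above does) before the marker comparison.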

    Read the article

  • How to use a JavaScript class from within document ready

    - by Richard
    Hi, I have a countdown script, wrapped as an object, located in a separate file. When I want to set up a counter, the timeout function in the countdown class can no longer find the object that I set up within document ready. I sort of get that everything set up in a document ready handler is confined to that scope; however, it is possible to call functions within other document ready handlers. Does anyone have a solution for how I could set up multiple counters slash objects? Or do these basic JavaScript classes have to become plugins? This is some sample code showing how the class begins:

        function countdown(obj) {
            this.obj = obj;
            this.Div = "clock";
            this.BackColor = "white";
            this.ForeColor = "black";
            this.TargetDate = "12/31/2020 5:00 AM";
            this.DisplayFormat = "%%D%% Days, %%H%% Hours, %%M%% Minutes, %%S%% Seconds.";
            this.CountActive = true;
            this.DisplayStr;
            this.Calcage = cd_Calcage;
            this.CountBack = cd_CountBack;
            this.Setup = cd_Setup;
        }

    Thanks, Richard
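
    A sketch of the usual fix: keep the instances in a scope every handler (and every timeout) can reach - one shared namespace object rather than scattered globals. The cd_* names come from the class above; someObj stands for whatever the constructor needs:

        var MyApp = window.MyApp || {};    // shared namespace, visible everywhere
        MyApp.counters = {};

        $(document).ready(function () {
            MyApp.counters.launch = new countdown(someObj);   // created here...
        });

        $(document).ready(function () {
            MyApp.counters.launch.Setup();                    // ...used from another ready block
        });

        // Inside the class, have timeouts capture the instance
        // instead of naming a global in a string:
        function cd_CountBack() {
            var self = this;
            // ...update this.Div's display...
            setTimeout(function () { self.CountBack(); }, 990);
        }

    With the timeout closing over self, each counter instance keeps ticking independently, so multiple counters per page work without the class becoming a plugin.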

    Read the article

  • Quick MEF + SL4 question

    - by Tom Allen
    I'm working on an app in the Silverlight 4 RC, and I'm taking the opportunity to learn MEF for handling plugin controls. I've got it working in a pretty basic manner, but it's not exactly tidy and I know there is a better way of importing multiple xaps.

    Essentially, in the App.xaml of my host app, I've got the following telling MEF to load my xaps:

        AggregateCatalog catalog = new AggregateCatalog();
        DeploymentCatalog c1 = new DeploymentCatalog(new Uri("TestPlugInA.xap", UriKind.Relative));
        DeploymentCatalog c2 = new DeploymentCatalog(new Uri("TestPlugInB.xap", UriKind.Relative));
        catalog.Catalogs.Add(c1);
        catalog.Catalogs.Add(c2);
        CompositionHost.Initialize(catalog);
        c1.DownloadAsync();
        c2.DownloadAsync();

    I'm sure I'm not using the AggregateCatalog fully here, and I need to be able to load any xaps that might be in the directory, not just hardcoded URIs, obviously...

    Also, in the MainPage.xaml.cs of the host I have the following collection which MEF puts the plugins into:

        [ImportMany(AllowRecomposition = true)]
        public ObservableCollection<IPlugInApp> PlugIns { get; set; }

    Again, this works, but I'm pretty sure I'm using ImportMany incorrectly...

    Finally, the MainPage.xaml.cs file implements IPartImportsSatisfiedNotification, and I have the following for handling the plugins once loaded:

        public void OnImportsSatisfied()
        {
            sp.Children.Clear();
            foreach (IPlugInApp plugIn in PlugIns)
            {
                if (plugIn != null)
                    sp.Children.Add(plugIn.GetUserControl());
            }
        }

    This works, but it seems filthy that it runs n times (n being the number of xaps to load). I'm having to call sp.Children.Clear(), as if I don't, when loading the 2 plugins my stack panel is populated as: TestPlugIn A, TestPlugIn A, TestPlugIn B.

    I'm clearly missing something here. Can anyone point out what I should be doing? Thanks!
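
    A sketch of the catalog setup driven by a list instead of hard-coded fields (Silverlight cannot enumerate server directories itself, so the list of xap names would come from config, initParams, or a service call):

        var catalog = new AggregateCatalog();
        var deploymentCatalogs = new List<DeploymentCatalog>();

        // e.g. read this list from config instead of hard-coding it
        foreach (var xap in new[] { "TestPlugInA.xap", "TestPlugInB.xap" })
        {
            var dc = new DeploymentCatalog(new Uri(xap, UriKind.Relative));
            catalog.Catalogs.Add(dc);
            deploymentCatalogs.Add(dc);
        }

        CompositionHost.Initialize(catalog);

        foreach (var dc in deploymentCatalogs)
            dc.DownloadAsync();

    OnImportsSatisfied firing once per downloaded xap is expected with AllowRecomposition = true - recomposition happens as each catalog completes - so clearing and rebuilding the panel there (or handling only the newly added items) is a reasonable pattern rather than a misuse.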

    Read the article

  • jQuery pagination plugin problem

    - by John the horn
    I am using a pagination plugin called "pagination". Here is the script that initializes it:

        function pageselectCallback(page_index, jq) {
            var new_content = jQuery('#hiddenresult div.result:eq(' + page_index + ')').clone();
            $('#Searchresult').empty().append(new_content);
            return false;
        }

        /**
         * Initialisation function for pagination
         */
        function initPagination() {
            // count entries inside the hidden content
            var num_entries = jQuery('#hiddenresult div.result').length;
            // create content inside the pagination element
            $("#Pagination").pagination(num_entries, {
                callback: pageselectCallback,
                items_per_page: 1 // show only one item per page
            });
        }

        // when the document is ready, initialize pagination
        $(document).ready(function(){
            initPagination();
        });

    And I am using this script for the accordion on the page:

        $(document).ready(function(){
            $('#content').show();
            $('#content1, #content2, #content3, #content4, #content5').hide();

            $('a.bucatarii_serie').click(function(){
                $("div[id^='content']").hide();
                $('#content').toggle('slow');
            });
            $('a.livinguri_serie').click(function(){
                $("div[id^='content']").hide();
                $('#content1').toggle('slow');
            });
            $('a.dormitoare_serie').click(function(){
                $("div[id^='content']").hide();
                $('#content2').toggle('slow');
            });
            $('a.tratamente').click(function(){
                $("div[id^='content']").hide();
                $('#content3').toggle('slow');
            });
            $('a.chirurgie').click(function(){
                $("div[id^='content']").hide();
                $('#content4').toggle('slow');
            });
            $('a.frizerie').click(function(){
                $("div[id^='content']").hide();
                $('#content5').toggle('slow');
            });
        });

    My problem is that I don't know how to implement multiple paginations on the same page, in different containers but within one bigger container.
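
    A sketch of running two independent paginations (the plugin is instantiated per element, so the trick is just distinct containers, each with its own callback; the ids are made up):

        function makePager(pagerSelector, targetSelector, resultSelector) {
            var count = jQuery(resultSelector).length;
            jQuery(pagerSelector).pagination(count, {
                items_per_page: 1,
                callback: function (page_index) {
                    var item = jQuery(resultSelector + ':eq(' + page_index + ')').clone();
                    jQuery(targetSelector).empty().append(item);
                    return false;
                }
            });
        }

        $(document).ready(function () {
            makePager('#PaginationA', '#SearchresultA', '#hiddenresultA div.result');
            makePager('#PaginationB', '#SearchresultB', '#hiddenresultB div.result');
        });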

    Read the article

  • Git force complete sync to master

    - by Jesse
    My workplace uses Subversion for source control, so I have been playing around with git-svn for the advantages of having my own branches, committing as often as I want without touching the main repo, etc. Since my git-svn checkout is local, I have cloned it to a network share as well, to act as a backup. My thinking is that if my desktop takes a dump, I will at least have the repo on the network share to recover changes that I have not had a chance to dcommit yet.

    My workflow is to work from the desktop: make changes, commit, etc. At the end of the day I want to update the repo on the network share with all of my current changes. I set up the repo on the network share using git clone from the desktop repo, and I update it with git pull origin master.

    The problem I am running into is when I use git rebase to squash multiple commits before dcommitting to the main svn repository. When I do this, I get merge conflicts in the repo on the network share when I try to back up at night.

    Is there a way to simply sync entirely with the repository on my desktop without doing a fresh git clone each night?
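
    One sketch of the usual arrangement: make the network share a bare repository and push to it with force, so branches rewritten by rebase simply replace their old versions instead of being merged into a working copy (paths are made up):

        # one-time setup: a bare mirror on the share
        git clone --bare ~/work/myrepo //server/share/myrepo-backup.git
        cd ~/work/myrepo
        git remote add backup //server/share/myrepo-backup.git

        # nightly backup: rebases make this a non-fast-forward push, hence --force
        git push --force backup master
        # or, to back up every branch and tag in one shot:
        # git push --mirror backup

    The merge conflicts in the current setup come from pulling rebased history into a repo that still holds the pre-rebase commits; a bare repo that only ever receives pushes has no working tree or merge step to conflict.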

    Read the article
