Search Results

Search found 9115 results on 365 pages for 'a team lead'.

  • CQRS without using other patterns

    - by John Smith
    I would like to explain CQRS to my team of developers. I just can't figure out how to explain it in the simplest way so they can implement the pattern quickly, without any other frameworks. I've read a lot of resources, including videos and articles, but I can't find how to implement CQRS without using other patterns like a service bus, event sourcing, or domain-driven design. I know the purpose of these patterns, but for the first step I don't want them to think that CQRS and these patterns must be tied together. My first idea is to say that CQRS is about separating the read part and the write part. The read part is composed only of the UI project and a DAL project. The write part is composed of a typical multilayer architecture: UI/BLL/DAL. Then, does CQRS say we must also have two datastores? What about the notion of commands which reveal the user's intention: is that part of CQRS or of DDD? Basically, how do you implement CQRS without using other patterns? I concede it's also not that clear in my own mind, because I'm used to working with NCQRS/DDD/Event Sourcing/ServiceBus in my personal projects. Thanks
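
    To illustrate the "separation only" reading of the pattern, here is a minimal C# sketch of CQRS with nothing but two code paths over a single shared datastore. All the type names (TaskStore, RenameTaskCommand, TaskQueries) are invented for the example; no bus, event sourcing or DDD building blocks are involved:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Stand-in for the shared database: both sides read and write the same data.
    public class TaskStore
    {
        public readonly Dictionary<int, string> Titles = new Dictionary<int, string>();
    }

    // Write side: a command captures the user's intent and goes through the "BLL" path.
    public class RenameTaskCommand
    {
        public int TaskId;
        public string NewTitle;
    }

    public class RenameTaskHandler
    {
        private readonly TaskStore _store;
        public RenameTaskHandler(TaskStore store) { _store = store; }

        public void Handle(RenameTaskCommand command)
        {
            if (string.IsNullOrEmpty(command.NewTitle))
                throw new ArgumentException("Title is required");   // validation lives on the write side
            _store.Titles[command.TaskId] = command.NewTitle;
        }
    }

    // Read side: queries skip the business layer and return display-ready data.
    public class TaskQueries
    {
        private readonly TaskStore _store;
        public TaskQueries(TaskStore store) { _store = store; }

        public IList<KeyValuePair<int, string>> GetAllTasks()
        {
            return _store.Titles.OrderBy(t => t.Key).ToList();
        }
    }

    Both sides can point at the same database; separate datastores, a service bus and event sourcing are optional additions rather than part of the core pattern.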

    Read the article

  • What are SharePoint (MOSS 2007) development/deployment best practices?

    - by Satish
    We are deploying SharePoint MOSS 2007 at work, and I'm trying to come up with a SharePoint development and deployment methodology. We have Dev/QA/Prod environments, and I need a way, preferably automated, to deploy changes from Dev to QA and from there to Prod. We are creating site collections, web parts, etc. Some of it is done directly within SharePoint, some through SharePoint Designer or Visual Studio. I'm looking for a way to extract this and deploy it to the other environments. I tried stsadm backup/restore and import/export, but they all move the content along with it; I just need the structure deployed. Content deployment paths and jobs do the same thing. We use MSBuild and CruiseControl.NET for our other .NET projects to automate the build/deployment process, and I'm looking for something similar for SharePoint if possible. What are your best practices for this? Since my team is still learning, we don't have a defined process, and we are open to changing our development process if needed.
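
    One building block that often shows up in this kind of pipeline is packaging customizations as solution (.wsp) files and pushing them with stsadm from the build server, which keeps content out of the picture and moves only structure (features, web parts, site definitions) between environments. Below is a rough C# sketch of that step, written as a helper a CruiseControl.NET or MSBuild task could call; the paths and the exact stsadm switches are assumptions to verify against your own farm, not a tested recipe:

    using System;
    using System.Diagnostics;

    public static class WspDeployer
    {
        // Runs stsadm with the given arguments and throws if it reports failure.
        // The "12 hive" path below assumes a default MOSS 2007 install.
        static void RunStsadm(string arguments)
        {
            ProcessStartInfo psi = new ProcessStartInfo();
            psi.FileName = @"C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN\stsadm.exe";
            psi.Arguments = arguments;
            psi.UseShellExecute = false;
            psi.RedirectStandardOutput = true;

            using (Process p = Process.Start(psi))
            {
                string output = p.StandardOutput.ReadToEnd();
                p.WaitForExit();
                if (p.ExitCode != 0)
                    throw new Exception("stsadm failed: " + output);
            }
        }

        public static void Deploy(string wspPath, string wspName)
        {
            // Switch names are assumed from memory; check them with "stsadm -help" first.
            RunStsadm("-o addsolution -filename \"" + wspPath + "\"");
            RunStsadm("-o deploysolution -name " + wspName + " -immediate -allowgacdeployment");
            RunStsadm("-o execadmsvcjobs");   // flush the timer jobs so the deployment completes
        }
    }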

    Read the article

  • PHP Combo Box AJAX Refresh

    - by bhs
    I have a PHP page that currently has 4 years of team positions in columns on the page. The client wants to select the players in positions and have first, second and third choices. Currently the page shows 4 columns with sets of combos for each position. Each combo has a full set of players in it, and the user chooses the player he wants from the combos. On submit, the player positions are stored in the database. However, the client now wants to change the page so that when he selects a player for a year and position, the player is removed from the combos and can no longer be selected for that year. I've used a bit of AJAX but was wondering if anyone had any thoughts/suggestions. The page is currently quite slow, so they want it sped up as well. The page layout is currently like this:

    POSITION  YEAR1    YEAR2    YEAR3    YEAR4
    1         COMBOC1  COMBOC1  COMBOC1  COMBOC1
              COMBOC2  COMBOC2  COMBOC2  COMBOC2
              COMBOC3  COMBOC3  COMBOC3  COMBOC3
    2         same as above

    COMBOC1, 2 and 3 all currently have the same players - when a player is selected it needs to be removed from all the combos below it. I was thinking of starting by changing the page design and having text boxes for the players and a single player select under each year, like this:

    POSITION  YEAR1 YEAR2 YEAR3 YEAR4
    1         <PLAYER><POSITION><CHOICE> ...
              [TEXT BOX CHOICE1]
              [TEXT BOX CHOICE2]
              [TEXT BOX CHOICE3]
    2         ...

    Then I only have 1 combo box for each year to worry about - I do, however, have the same problem of refreshing the combo box and removing the player that has been selected, and I'd prefer to do it without a page submit. Sorry for the long posting - cheers

    Read the article

  • Convincing why testing is good

    - by FireAphis
    Hello, in my team of real-time embedded C/C++ developers, most people don't have any culture of testing their code beyond casual manual sanity checks. I personally strongly believe in the advantages of autonomous automatic tests, but when I try to convince people I get some recurring arguments like:
    - We will spend more time on writing the tests than on writing the code.
    - It takes a lot of effort to maintain the tests.
    - Our code is spaghetti; there is no way we can unit-test it.
    - Our requirements are not sealed; we'll have to rewrite all the tests every time the requirements change.
    Now, I'd gladly hear any convincing tips and advice, but what I am really looking for are references to research, articles, books or serious surveys that show (preferably in numbers) how testing is worth the effort. Something like "We at IBM/Microsoft/Google, surveying 3475 active projects, found that putting 50% more development time into testing decreased the time spent on fixing bugs by 75%" or "after half a year, the time needed to write code with tests was only marginally longer than what it used to take without tests". Any ideas? P.S.: I'm adding the C++ tag too in case someone has specific experience with convincing this, usually elitist, type of developer :-)

    Read the article

  • Refactoring an ASP.NET 2.0 app to be more "modern"

    - by Wayne M
    This is a hypothetical scenario. Let's say you've just been hired at a company with a small development team. The company uses an internal CRM/ERP type system written in .NET 2.0 to manage its day-to-day operations (let's simplify and say customer accounts and records). The app was written a couple of years ago when .NET 2.0 was just out, and it uses the following architectural designs:
    - Webforms
    - A data layer that is a thin wrapper around SqlCommand and calls stored procedures
    - Rudimentary DTO-style business objects that are populated via the sprocs
    - A "business logic" layer that acts as a gateway between the webform and the database (i.e. the code-behind calls that layer)
    Let's say that as more changes and requirements are added to the application, you start to feel that the old architecture is showing its age, and changes are increasingly difficult to make. How would you go about introducing refactoring steps to A) modernize the app (i.e. proper separation of concerns) and B) make sure that the app can readily adapt to change in the organization? IMO the changes would involve (see the sketch below for the M-V-P point):
    - Introduce an ORM like Linq to Sql and get rid of the sprocs for CRUD
    - Assuming that you can't just throw out Webforms, introduce the M-V-P pattern to the forms
    - Make sure the gateway classes conform to SRP and the other SOLID principles
    - Change the logic that is re-used to be web service methods instead of duplicating the code
    What are your thoughts? Again, this is a totally hypothetical scenario that many of us have faced in the past, or may end up facing.
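
    For the M-V-P step, a minimal sketch of what the pattern can look like on top of an existing Webform is shown below; ICustomerView, ICustomerRepository and CustomerPresenter are names invented for the example, not part of the scenario:

    // The view is an interface the .aspx code-behind implements; the presenter only sees this.
    public interface ICustomerView
    {
        string CustomerName { set; }
        void ShowError(string message);
    }

    // Placeholder for whatever the existing data layer exposes.
    public interface ICustomerRepository
    {
        string GetCustomerName(int id);
    }

    public class CustomerPresenter
    {
        private readonly ICustomerView _view;
        private readonly ICustomerRepository _repository;

        public CustomerPresenter(ICustomerView view, ICustomerRepository repository)
        {
            _view = view;
            _repository = repository;
        }

        // All decisions live here, so they can be unit tested without a web server.
        public void Display(int customerId)
        {
            try
            {
                _view.CustomerName = _repository.GetCustomerName(customerId);
            }
            catch (System.Exception ex)
            {
                _view.ShowError(ex.Message);
            }
        }
    }

    In the code-behind, the page implements ICustomerView, creates the presenter in Page_Load, and forwards events to it, e.g. new CustomerPresenter(this, repository).Display(customerId);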

    Read the article

  • Linq to SQL duplicating entry when referencing FK

    - by Oscar
    Hi! I am still facing some problems when using LINQ to SQL. I have been looking for answers by myself, but this problem is so awkward that I am having trouble finding the right keywords to search for. I have this code here:

    public CustomTask SaveTask(string token, CustomTask task)
    {
        TrackingDataContext dataContext = new TrackingDataContext();

        // Check the token for security
        if (SessionTokenBase.Instance.ExistsToken(Convert.ToInt32(token)) == null)
            return null;

        // Populates the Task - the "real" LINQ to SQL object
        Task t = new Task();
        t.Title = task.Title;
        t.Description = task.Description;

        // ****The next lines are the important ones****
        if (task.Severity != null)
            t.Severity = task.Severity;
        else
            t.SeverityID = task.SeverityID;

        t.StateID = task.StateID;

        if (task.TeamMember != null)
            t.TeamMember = task.TeamMember;
        else
            t.ReporterID = task.ReporterID;

        if (task.ReporterTeam != null)
            t.Team = task.ReporterTeam;
        else
            t.ReporterTeamID = task.ReporterTeamID;

        // Saves/Updates the task
        dataContext.Tasks.InsertOnSubmit(t);
        dataContext.SubmitChanges();

        task.ID = t.ID;
        return task;
    }

    The problem is that I am sending the ID of the severity, and then I get this situation. DB state before calling the method:

    ID  Name
    1   high
    2   medium
    3   low

    I call the method selecting "medium" as the severity. DB state after calling the method:

    ID  Name
    1   high
    2   medium
    3   low
    4   medium

    The point is: it identified that the ID was related to the "medium" entry (and for this reason it could populate the Name column correctly), but it duplicated the entry. The problem is: why?! Some explanation about the code: CustomTask is almost the same as Task, but I was having problems regarding serialization, as can be seen here. I don't want to send the Severity property populated because I want my message to be as small as possible. Could anyone clarify why it recognizes the entry but creates a new entry in the DB?
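
    For what it's worth, a pattern that often avoids this kind of duplicate in LINQ to SQL is to set only the foreign-key column when the related object was not loaded by the current DataContext (for example, an object deserialized from a WCF call): assigning such a detached entity to the association property typically makes the context treat it as a new row and insert it again. A sketch of the difference, reusing the question's own types:

    using (TrackingDataContext dataContext = new TrackingDataContext())
    {
        Task t = new Task();
        t.Title = "Example";

        // Setting the FK column reuses the existing Severity row:
        t.SeverityID = 2;                                // "medium"

        // Whereas assigning an entity this DataContext has never seen
        // (e.g. one that arrived over the wire) tends to insert it again:
        // t.Severity = detachedSeverityFromClient;      // -> a new "medium" row

        dataContext.Tasks.InsertOnSubmit(t);
        dataContext.SubmitChanges();
    }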

    Read the article

  • What exactly can "Full Control" with SharePoint Designer accomplish?

    - by Brian L.
    I've been brought in as an intern to develop a SharePoint site. My team won't authorize the budget for Visual Studio, and I don't have physical or remote access to the SharePoint server (running Windows SharePoint Services 3.0, a.k.a. WSS) on the back end. So what exactly can I do? I'm a pretty decent programmer when it comes to web technologies like PHP, JS and the obvious HTML and CSS. In a locked-down SharePoint environment like this, though, I'm stumped trying to figure out how much control I actually have with MS's definition of "Full Control". If I figured out a way to write some C#, I'm pretty sure I could hold my own, but as I said, no Visual Studio for me. Any good ideas for features that people will use on a site built with the limited functionality of WSS and SharePoint Designer with "Full Control"? Can I somehow manipulate the default Web Parts into something cool or useful? Are there Ajax tricks I can do to accomplish something on the back end? Thanks in advance, I'm new to StackOverflow and very anxious to get involved here!

    Read the article

  • Big-O for GPS data

    - by HH
    A non-critical GPS module uses lists because the data needs to be modifiable: new routes added, new distances calculated, continuous comparisons. Well, so I thought, but my team member wrote something that I find very hard to get into. His pseudocode:

    int k = 0;
    a[][] <- create mapModuleNearbyDotList array    // CPU O(n)
    for (j = 1 to n)                                // O(nlog(m))
        for (i = 1 to n)
            for (k = 1 to n)
                if (dot is nearby)
                    adj[i][j] = min(adj[i][j], adj[i][k] + adj[k][j]);

    His idea: transformations of lists into tables. His worst-case time complexity is O(n^3), where n is the number of elements in his so-called table. The exception to the last point is with a finite structure: O(mlog(n)), where n is the number of vertices and m is an arbitrary constant. My questions about his ideas:
    - Why waste resources transforming constantly-modified lists into a table? "It's fast" is the only point where I agree to some extent, but I cannot understand it.
    - Why the same upper limit n for each for-loop? Perhaps he assumed it is circular.
    - Why does the code take O(mlog(n)) to proceed in time as a finite structure? The term "finite" may be wrong; perhaps "explicit"?
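
    For reference, the update rule adj[i][j] = min(adj[i][j], adj[i][k] + adj[k][j]) is the relaxation step of the Floyd-Warshall all-pairs shortest-path algorithm, which is O(n^3) in the number of vertices; a standard C# sketch of it (note that the intermediate vertex k sits in the outermost loop there, unlike in the pseudocode above) might look like this:

    static class AllPairs
    {
        // Floyd-Warshall on an adjacency matrix; adj[i, j] holds the edge weight,
        // or a large value ("infinity") when there is no direct edge.
        public static void FloydWarshall(double[,] adj)
        {
            int n = adj.GetLength(0);
            for (int k = 0; k < n; k++)          // intermediate vertex in the OUTER loop
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < n; j++)
                        if (adj[i, k] + adj[k, j] < adj[i, j])
                            adj[i, j] = adj[i, k] + adj[k, j];
        }
    }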

    Read the article

  • How Best to Replace PL/SQL with C#?

    - by Mike
    Hi, I write a lot of one-off Oracle SQL queries/reports (in Toad), and sometimes they can get complex, involving lots of unions, joins and subqueries, and/or requiring dynamic SQL and/or procedural logic. PL/SQL is custom made for handling these situations, but as a language it does not compare to C#, and even if it did, its tooling does not, and even if that did, forcing yet another language on the team is something to be avoided whenever possible. Experience has shown me that using SQL for the set-based processing, coupled with C# for the procedural processing, is a powerful combination indeed, and far more readable, maintainable and enhanceable than PL/SQL. So we end up with a number of small C# programs which typically construct a SQL query string piece by piece and/or run several queries and process them as needed. This kind of code could easily be a disaster, but a good developer can make it work out quite well and end up with very readable code. So I don't think it's a bad way to code for smaller DB-focused projects. My main question is: how best to create and package all these little C# programs that run ad hoc queries and reports against the database? Right now I create little report objects in a DLL, developed and tested with NUnit, but then I continue to use NUnit as the GUI to select and run them. NUnit just happens to provide a nice GUI for this kind of thing, even after testing has been completed. I'm also interested in suggestions for reporting apps generally. I feel like there is a missing piece or product. The perfect tool would allow writing and running C# inside of Toad, or SQL inside of Visual Studio, along with simple reporting facilities. All ideas will be appreciated, but let's make the assumption that PL/SQL is not the solution.
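
    As an illustration of the "SQL for the set logic, C# for the procedural logic" split, here is a rough sketch of what one of those small report programs can look like using the (since-deprecated) System.Data.OracleClient provider; the connection string, table and column names are all invented for the example:

    using System;
    using System.Data;
    using System.Data.OracleClient;
    using System.Text;

    class RegionSalesReport
    {
        static void Main()
        {
            // Build the statement piece by piece; the set-based work stays in SQL.
            StringBuilder sql = new StringBuilder(
                "SELECT region, SUM(amount) AS total FROM sales WHERE 1 = 1");
            string region = Environment.GetEnvironmentVariable("REPORT_REGION");
            if (!string.IsNullOrEmpty(region))
                sql.Append(" AND region = :region");           // parameterized, not concatenated
            sql.Append(" GROUP BY region ORDER BY total DESC");

            using (OracleConnection conn = new OracleConnection(
                "Data Source=ORCL;User Id=report;Password=secret"))
            using (OracleCommand cmd = new OracleCommand(sql.ToString(), conn))
            {
                if (!string.IsNullOrEmpty(region))
                {
                    OracleParameter p = cmd.CreateParameter();
                    p.ParameterName = "region";
                    p.Value = region;
                    cmd.Parameters.Add(p);
                }
                conn.Open();
                using (IDataReader reader = cmd.ExecuteReader())
                {
                    // The procedural post-processing (formatting, totals, file output) lives in C#.
                    while (reader.Read())
                        Console.WriteLine("{0}: {1}", reader["region"], reader["total"]);
                }
            }
        }
    }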

    Read the article

  • How should I name my SQL query files? Should I use some methodology?

    - by Mehper C. Palavuzlar
    We have an Oracle 10g database (a huge one) in our company, and I provide employees with data upon their requests. My problem is that I save almost every SQL query I write, and by now my list has grown too much. I want to organize and rename these .sql files so that I can find the one I want easily. At the moment I'm using some folders named Sales Dept, Field Team, Planning Dept, Special, etc., and under those folders there are .sql files like Delivery_sales_1, Delivery_sales_2, ... Sent_sold_lostsales_endpoints, ... Sales_provinces_period, Returnrates_regions_bymonths, ... Jack_1, Steve_1, Steve_2, ... I try to name the files according to their content, but this makes the file names longer and does not completely meet my needs. Sometimes someone comes and demands a special report, and I name the file after that person, but this is not so good either. I know duplicates or very similar files are accumulating over time, but I don't have control over them. Can you show me the right direction to rename all these files and folders and organize my queries for easy and better control? TIA.

    Read the article

  • Application runs fine when executed directly, fails as scheduled task (security issues)

    - by Carl
    I have an application that loads some files from a network share (the input folder), extracts certain data from them and saves new files (zips them with SharpZLib) on a different network share (output folder). This application runs fine when you open it directly, but when it is set to a scheduled task, it fails in numerous places. This application is scheduled on a Win 2003 server. Let me say right off the bat, the scheduled task is set to use the same login account that I am currently logged in with, so it's not because it's using the LocalSystem account. Something else is going on here. Originally, the application was assigning a drive letter to the input folder using WNetGetConnectionA(). I don't remember why this was done, someone else on our team did that and she's gone now. I think there was some issue with using the WinZip command line with a UNC path. I switched from the WinZip command line utility to using SharpZLib because there were other issues with using the WinZip command line. Anyway, the application failed when trying to assign a drive letter with the error "connection already established." That wasn't true and even after trying WNetCancelConnection(), it still didn't work. Then I decided to just map the drive manually on the server. Then when the app calls Directory.Exists(inputFolderPath) it returns false, even though it does exist. So, for whatever reason, I cannot read this directory from within the application. I can manually navigate to this folder in Windows Explorer and open files. The app log file shows that the user executing it on the schedule is the user I expect, not LocalSystem. Any ideas?
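
    One thing that often bites scheduled tasks here: drive letters mapped in an interactive logon session are not visible to a task running in its own (non-interactive) session, even under the same account, so a check against the mapped letter can fail while the share itself is perfectly reachable. A small sketch of probing the UNC path directly, with the paths invented for the example:

    using System;
    using System.IO;

    class ShareProbe
    {
        static void Main()
        {
            string mappedPath = @"Z:\incoming";                    // visible only in the session that mapped it
            string uncPath = @"\\fileserver\exports\incoming";     // hypothetical UNC path to the same share

            Console.WriteLine("User: " + Environment.UserName);
            Console.WriteLine("Mapped drive exists: " + Directory.Exists(mappedPath));
            Console.WriteLine("UNC path exists:     " + Directory.Exists(uncPath));
        }
    }

    Logging both results from inside the scheduled task is a quick way to see whether the failure is the drive mapping or the share permissions.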

    Read the article

  • ORDER BY in a Sql Server 2008 view

    - by eidylon
    Hi all... we have a view in our database which has an ORDER BY in it. Now, I realize views generally don't order, because different people may use them for different things and want them ordered differently. This view however is used for a VERY SPECIFIC use-case which demands a certain order. (It is team standings for a soccer league.) The database is SQL Server 2008 Express, v.10.0.1763.0, on a Windows Server 2003 R2 box. The view is defined as such:

    CREATE VIEW season.CurrentStandingsOrdered
    AS
    SELECT TOP 100 PERCENT
        *, season.GetRanking(TEAMID) RANKING
    FROM season.CurrentStandings
    ORDER BY
        GENDER, TEAMYEAR, CODE, POINTS DESC, FORFEITS,
        GOALS_AGAINST, GOALS_FOR DESC, DIFFERENTIAL, RANKING

    It returns: GENDER, TEAMYEAR, CODE, TEAMID, CLUB, NAME, WINS, LOSSES, TIES, GOALS_FOR, GOALS_AGAINST, DIFFERENTIAL, POINTS, FORFEITS, RANKING. Now, when I run a SELECT against the view, it orders the results by GENDER, TEAMYEAR, CODE, TEAMID. Notice that it is ordering by TEAMID instead of POINTS as the ORDER BY clause specifies. However, if I copy the SQL statement and run it exactly as is in a new query window, it orders correctly as specified by the ORDER BY clause.
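
    Since SQL Server 2005, the optimizer is known to ignore an ORDER BY inside a TOP 100 PERCENT view, so any required order generally has to be applied by the query that reads the view. A sketch of doing that from calling code (the connection string is invented; the column list comes from the view above):

    using System;
    using System.Data;
    using System.Data.SqlClient;

    class StandingsReader
    {
        static void Main()
        {
            const string sql =
                "SELECT * FROM season.CurrentStandingsOrdered " +
                "ORDER BY GENDER, TEAMYEAR, CODE, POINTS DESC, FORFEITS, " +
                "GOALS_AGAINST, GOALS_FOR DESC, DIFFERENTIAL, RANKING";

            using (SqlConnection conn = new SqlConnection("Server=.;Database=League;Integrated Security=true"))
            using (SqlCommand cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                using (IDataReader reader = cmd.ExecuteReader())
                    while (reader.Read())
                        Console.WriteLine("{0} - {1}", reader["NAME"], reader["POINTS"]);
            }
        }
    }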

    Read the article

  • Command or tool to display list of connections to a Windows file share

    - by BizTalkMama
    Is there a Windows command or tool that can tell me what users or computers are connected to a Windows file share? Here's why I'm looking for this: I've run into issues in the past where our deployment team has deployed BizTalk applications to one of our environments using the wrong bindings, leaving us with two receive locations pointing to the same file share (i.e. both the dev and test servers point to the dev receive location URI). When this occurs, the two environments in question tend to take turns processing the files received (meaning that if I am attempting to debug something in one environment and the other environment has picked the file up, it looks as if my test file has disappeared into thin air). We have several different environments, plus individual developer machines, and I'd rather not have to check each one individually to find the culprit. I'm looking for a quick way to detect what locations are connected to the share once I notice my test files vanishing. If I can determine which connections are invalid, I can go directly to the person responsible for that environment and avoid the time it takes to randomly ask around. Or if the connections appear to be correct, I can go directly to troubleshooting where in the process the message gets lost. Any suggestions?

    Read the article

  • [C#] how to do Exception Handling & Tracing

    - by shrimpy
    Hi all, I am reading some C# books and have got an exercise I don't know how to do, or rather I'm not sure what the question means. Problem: After working for a company for some time, your skills as a knowledgeable developer are recognized, and you are given the task of "policing" the implementation of exception handling and tracing in the source code (C#) for an enterprise application that is under constant incremental development. The two goals set by the product architect are:
    1. 100% of methods in the entire application must have at least a standard exception handler, using try/catch/finally blocks; more complex methods must also have additional exception handling for specific exceptions.
    2. All control-flow code can optionally write "tracing" information to assist in debugging and instrumentation of the application at run-time in situations where traditional debuggers are not available (e.g. on staging and production servers).
    (I don't quite understand these criteria. I came from the Java world; Java has two kinds of exceptions, checked and unchecked. Developers must handle checked exceptions and log them. For unchecked exceptions we may still log, but most of the time we just throw. However, here in C#, what should I do?) Questions for the problem: List the rules you would create for the development team to follow, and the ways in which you would enforce those rules, to achieve these goals. How would you go about ensuring that all existing code complies with the rules specified by the product architect; in particular, what considerations would impact your planning for the work to ensure all existing code complies?
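
    For reference, a minimal sketch of what a "standard handler plus tracing" convention could look like in C#, using System.Diagnostics tracing; the class, method and switch names are invented for the example:

    using System;
    using System.Diagnostics;

    public class OrderProcessor
    {
        private static readonly TraceSwitch Tracing = new TraceSwitch("OrderProcessor", "Order processing trace");

        public void Process(int orderId)
        {
            Trace.WriteLineIf(Tracing.TraceVerbose, "Process start, orderId=" + orderId);
            try
            {
                // ... real work here ...
            }
            catch (ArgumentException ex)
            {
                // Specific handling for expected failures.
                Trace.WriteLineIf(Tracing.TraceWarning, "Bad argument: " + ex.Message);
                throw;
            }
            catch (Exception ex)
            {
                // The "standard" handler: record and rethrow rather than swallow.
                Trace.WriteLineIf(Tracing.TraceError, "Unexpected: " + ex);
                throw;
            }
            finally
            {
                Trace.WriteLineIf(Tracing.TraceVerbose, "Process end, orderId=" + orderId);
            }
        }
    }

    The switch level is then controlled from configuration at run-time, which is what makes this usable on staging and production servers where a debugger is not available.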

    Read the article

  • database design - empty fields

    - by imanc
    Hey, I am currently debating an issue with a guy on my dev team. He believes that empty fields are bad news. For instance, if we have a customer details table that stores data for customers from different countries, each country has a slightly different address configuration, plus 1-2 extra fields; e.g. French customer details may also store details for an entry code, floor/level, plus title fields (madame, etc.). South Africa would have a security number. And so on. Given that we're talking about minor variances, my idea is to put all of the fields into the table and use what is needed on each form. My colleague believes we should have a separate table with the extra data, e.g. customer_info_fr. But this seems to totally defeat the purpose of a combined table in the first place. His argument is that empty fields / columns are bad - but I'm struggling to find justification in terms of database design principles for or against this argument, or preferred solutions. Another option is a separate mini EAV table that stores the extra data with parent_id, key, val fields. Or to serialise the extra data into an extra_data column in the main customer_data table. I think I am confused because what I'm discussing is not covered by 3NF, which is what I would typically use as a reference for how to structure data. So my question specifically: if you have slight variances in data for each record (1-2 different fields, for instance), what is the best way to proceed?

    Read the article

  • Publish Git repository to SVN

    - by Ken Williams
    I and my small team work in Git, and the larger group uses Subversion. I'd like to schedule a cron job to publish our repositories' current HEADs every hour into a certain directory in the SVN repo. I thought I had this figured out, but the recipe I wrote down previously doesn't seem to be working now:

    git clone ssh://me@gitserver/git-repo/Projects/ProjX px2
    cd px2
    svn mkdir --parents http://me@svnserver/svn/repo/play/me/fromgit/ProjX
    git svn init -s http://me@svnserver/svn/repo/play/me/fromgit/ProjX
    git svn fetch
    git rebase trunk master
    git svn dcommit

    Here's what happens when I attempt it:

    % git clone ssh://me@gitserver/git-repo/Projects/ProjX px2
    Cloning into 'ProjX'...
    ...
    % cd px2
    % svn mkdir --parents http://me@svnserver/svn/repo/play/me/fromgit/ProjX
    Committed revision 123.
    % git svn init -s http://me@svnserver/svn/repo/play/me/fromgit/ProjX
    Using higher level of URL: http://me@svnserver/svn/repo/play/me/fromgit/ProjX => http://me@svnserver/svn/repo
    % git svn fetch
    W: Ignoring error from SVN, path probably does not exist: (160013): Filesystem has no item: File not found: revision 100, path '/play/me/fromgit/ProjX'
    W: Do not be alarmed at the above message git-svn is just searching aggressively for old history.
    This may take a while on large repositories
    % git rebase trunk master
    fatal: Needed a single revision
    invalid upstream trunk

    I could have sworn this worked previously. Does anyone have any suggestions? Thanks.

    Read the article

  • Is excessive DataTable usage bad?

    - by Justin R.
    I was recently asked to assist another team in building an ASP.NET website. They already have a significant amount of code written; I was specifically asked to build a few individual pages for the site. While exploring the code for the rest of the site, the number of DataTables being constructed jumped out at me. Being relatively new to the field, I've never worked on an application that uses a database as much as this site does, so I'm not sure how common this is. It seems that whenever data is queried from our database, the results are stored in a DataTable. This DataTable is then usually passed around by itself, or passed to a constructor. Classes that are initialized with a DataTable always assign it to a private/protected field, however only a few of these classes implement IDisposable. In fact, in the thousands of lines of code that I've browsed so far, I have yet to see the Dispose method called on a DataTable. If anything, this doesn't seem to be good OOP. Is this something I should worry about? Or am I just paying more attention to detail than I should? Assuming you're more experienced developers than I am, how would you feel or react if someone who was just assigned to help you with your site approached you about this "problem"?
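
    For comparison, one common alternative to passing raw DataTables around is to map each row to a small typed object at the data-access boundary; a sketch of the idea, with the Customer class and query invented for the example:

    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;

    public class Customer
    {
        public int Id;
        public string Name;
    }

    public static class CustomerData
    {
        public static List<Customer> LoadCustomers(string connectionString)
        {
            List<Customer> customers = new List<Customer>();
            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand cmd = new SqlCommand("SELECT Id, Name FROM Customers", conn))
            {
                conn.Open();
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Customer c = new Customer();
                        c.Id = reader.GetInt32(0);
                        c.Name = reader.GetString(1);
                        customers.Add(c);
                    }
                }
            }
            // The DataTable (and its Dispose question) never enters the picture;
            // callers receive plain objects they can bind or pass around freely.
            return customers;
        }
    }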

    Read the article

  • Automated test, build and deploy

    - by mike79
    I have Visual Studio Team Suite 2008. I was unable to meet the requirements to set up TFS, so I'm using TortoiseSVN and VisualSVN as my version control with VSTS. I need the system set up to do the following: I need to be able to create and track work items. When updates are made to the current project worked on in VSTS, the updates will be committed back to version control. Tests will be run to see that updates don't break the application. If there's a problem with the update, it will be reported back to the developer. If there's no problem with the app, which is a ClickOnce application, it will automatically be built and deployed to an FTP server. I've never worked with version control, build servers, automated testing or continuous integration, so I need to know what needs to be put in place for this type of system. I don't know which combination/stack I should be using: CC.net, TeamCity, Hudson, NAnt, NUnit, MsTest, Trac, BugTracker.net, Ndepend, VisualSvn Server, Perforce, Msdeploy, SCM. I want something that is free/open source and relatively easy to set up and use. Please suggest a setup that will fit my needs. Any help appreciated.

    Read the article

  • How do you handle the tension between refactoring and the need for merging?

    - by Xavier Nodet
    Hi, our policy when delivering a new version is to create a branch in our VCS and hand it over to our QA team. When the latter gives the green light, we tag and release our product. The branch is kept to receive (only) bug fixes so that we can create technical releases. Those bug fixes are subsequently merged onto the trunk. During this time, the trunk sees the main development work and is potentially subject to refactoring changes. The issue is that there is a tension between the need to have a stable trunk (so that merges of bug fixes succeed; they usually can't if the code has been e.g. extracted to another method or moved to another class) and the need to refactor it when introducing new features. The policy in our place is not to do any refactoring before enough time has passed and the branch is stable enough. When that is the case, one can start making refactoring changes on the trunk, and bug fixes are to be manually committed on both the trunk and the branch. But this means that developers must wait quite some time before committing any refactoring change to the trunk, because it could break the subsequent merge from the branch to the trunk. And having to manually port fixes from the branch to the trunk is painful. It seems to me that this hampers development... How do you handle this tension? Thanks.

    Read the article

  • The program fails to display `cout` when it is run

    - by Jeff - FL
    Hello, I just started a C++ course and I wrote, compiled, debugged and ran my first program:

    // This program calculates how much a little league team spent last year to purchase new baseballs.
    #include <iostream>
    using namespace std;

    int baseballs;
    int cost;
    int total;

    int main()
    {
        baseballs, cost, total;

        // Get the number of baseballs that were purchased.
        cout << "How many baseballs were purchased? ";
        cin >> baseballs;

        // Get the cost of the baseballs purchased.
        cout << "What was the cost of each baseball purchased? ";
        cin >> cost;

        // Calculate the total.
        total = baseballs * cost;

        // Display the total.
        cout << "The total amount spent $" << total << endl;

        return 0;
    }

    The only problem that I encountered was that when I ran the program it failed to display the total amount spent (the final cout). Could someone please explain why? Thanks, Jeff H - Sarasota, FL

    Read the article

  • Do vs. Run vs. Execute vs. Perform verbs

    - by coffeeaddict
    Before anyone goes nuts and red-flags this post as "subjective" (which drives me absolutely nuts, because everyone has their own reasons for posting something that others feel is subjective; subjective is subjective to each person, how about that!), let me tell you a couple of things so that this post does not get flagged by flag-happy moderators: 1) There are community guidelines on specific keywords recommended by certain organizations or people (e.g. Microsoft, Lance Hunt, etc.). 2) I want to know what others are using the most and why, and why they feel one verb reads better than the others. 3) Books even talk about this verb issue (Uncle Bob, etc.), so it's not subjective. Now to my actual question: a) What list of verbs are you using for method names? What's your personal or team standard? b) I debate whether to use Do vs. Run vs. Execute vs. Perform, and am wondering if any of these are no longer recommended, or if some are just not really used and I should scratch them. Basically, any one of those verbs means the same thing: to invoke some process (method call). This is outside of CRUD. For example: ExecutePayPalWorkflow(); could also be any one of these names instead: DoPayPalWorkflow(); RunPayPalWorkflow(); PerformPayPalWorkflow(); Or does it not really matter, because any of those verbs is pretty much understandable, and the words that follow it ("PayPalWorkflow") show your intent? This discussion applies to any language; I just put the two main tags C# and Java here, which is good enough for me to get some solid answers or experiences.

    Read the article

  • How might a C# programmer approach writing a solution in javascript?

    - by Ben McCormack
    UPDATE: Perhaps this wasn't clear from my original post, but I'm mainly interested in knowing a best practice for how to structure javascript code while building a solution, not simply learning how to use APIs (though that is certainly important). I need to add functionality to a web site and our team has decided to approach the solution using a web service that receives a call from a JSON-formatted AJAX request from within the web site. The web service has been created and works great. Now I have been tasked with writing the javascript/html side of the solution. If I were solving this problem in C#, I would create separate classes for formatting the request, handling the AJAX request/response, parsing the response, and finally inserting the response somehow into the DOM. I would build properties and methods appropriately into each class, doing my best to separate functionality and structure where appropriate. However, I have to solve this problem in javascript. Firstly, how could I approach my solution in javascript in the way I would approach it from C# as described above? Or more importantly, what's a better way to approach structuring code in javascript? Any advice or links to helpful material on the web would be greatly appreciated. NOTE: Though perhaps not immediately relevant to this question, it may be worth noting that we will be using jQuery in our solution.

    Read the article

  • ASP.NET 2 integrated sites: how to log out of the second site programmatically

    - by NBrowne
    Hi, I am working with an ASP.NET 2.0 site (call it site 1) which has an iframe in it that loads another site (site 2), also an ASP.NET site developed by our team. When you log onto site 1, site 2 is also logged in behind the scenes, so that when you click the iframe tab it displays site 2 with the user logged in (to prevent the user from having to log in twice). The problem I have is that when a user logs out of site 1, we call some cleanup methods to perform FormsAuthentication.SignOut and clean up session variables etc., but at the moment no cleanup is done for the user on site 2. So the issue is that if the user then opens up site 2 directly in a browser, site 2 opens with the user still logged in, which is undesired. Can anyone give me some guidance as to the best approach for this? One possible approach I thought of was that on click of the logout button I could make a call to a custom page on site 2 which would do the logout. Code below:

    HttpWebRequest request;
    request = ((HttpWebRequest)(WebRequest.Create("www.mywebsite.com/Site2Logout.aspx")));
    request.Method = "POST";

    HttpCookie cookie = HttpContext.Current.Request.Cookies[FormsAuthentication.FormsCookieName];
    Cookie authenticationCookie = new Cookie(
        FormsAuthentication.FormsCookieName,
        cookie.Value,
        cookie.Path,
        HttpContext.Current.Request.Url.Authority);

    request.CookieContainer = new CookieContainer();
    request.CookieContainer.Add(authenticationCookie);
    request.GetResponse();

    The problem I am having with this code is that when I run it and debug on site 2 and check whether the user is authenticated, they are not, which I don't understand, because if I open a browser and browse to site 2 I am still authenticated. Any ideas, or a different direction to take? Please let me know if you need any more info or if something I have said doesn't make sense. Thanks

    Read the article

  • Yeoman 'grunt test' fails on clean project with 'port already in use'

    - by XMLilley
    With:
    Mac OS 10.8.4
    Node 0.10.12
    npm 1.3.1
    grunt-cli 0.1.9
    yo 1.0.0-rc.1
    bower 0.9.2
    [email protected]

    I encounter the following error with a clean yo angular project, followed by grunt server and then grunt test:

    Running "connect:test" (connect) task
    Fatal error: Port 9000 is already in use by another process.

    I'm new to Yeoman and am stumped. I've deleted my original project and created a new one in a fresh folder just to make sure I wasn't overlooking any invisible configs. I restarted the machine to make sure I wasn't running any temporary server processes I had forgotten about. After all attempts, the basic server starts fine, attaches to Chrome, and the watcher updates the browser on any changes. (Notably, the server is running on 9000, which seems odd for the test runner to also be trying to use 9000.) But I get that same error on attempting to start the test runner. Is this something I can fix, or an issue I should report to the Yeoman team? Thanks.

    Read the article

  • WCF data services - Limiting related objects returned based on critera

    - by Mike Morley
    I have an object graph consisting of a base employee object and a set of related message objects. I am able to return the employee objects based on search criteria on the employee properties (e.g. team) etc. However, if I expand on the messages, I get the full collection of messages back. I would like to be able to either take the top n messages (i.e. restrict to the 10 most recent) or, ideally, use a date range on the message objects to limit how many are brought back. So far I have not been able to figure out a way of doing this: I get an error if I attempt to filter on properties on the message (&$filter=employee/message/StartDate gives the error "No property 'StartDate' exists in type 'System.Data.Objects.DataClasses.EntityCollection`1'"). Attempting to use Top on the message related object doesn't work either. I have also tried using a WebGet extension that takes a string list of employee IDs. That works until the list gets too long, and then it fails because the URL gets too long (it might be possible to set up a paging mechanism with this approach)... Unfortunately the UI control I am using requires the data to be in a fairly specific hierarchical shape, so I can't easily come at this from starting on the message side and working backwards. Outside of making multiple calls, does anyone know of a method to accomplish this with WCF Data Services? Thanks! M.

    Read the article
