Search Results

Search found 7318 results on 293 pages for 'team pannous'.

  • How should I name my SQL query files? Should I use some methodology?

    - by Mehper C. Palavuzlar
    We have a huge Oracle 10g database in our company, and I provide employees with data upon request. My problem is that I save almost every SQL query I write, and now my list has grown too large. I want to organize and rename these .sql files so that I can easily find the one I want. At the moment I'm using folders named Sales Dept, Field Team, Planning Dept, Special, etc., and under those folders there are .sql files like Delivery_sales_1, Delivery_sales_2, ... Sent_sold_lostsales_endpoints, ... Sales_provinces_period, Returnrates_regions_bymonths, ... Jack_1, Steve_1, Steve_2, ... I try to name the files according to their content, but this makes the file names long and still does not completely meet my needs. Sometimes someone asks for a special report and I name the file after that person, but this is not so good either. I know duplicates and very similar files are accumulating over time, but I don't have control over them. Can you point me in the right direction for renaming all these files and folders and organizing my queries for easy and better control? TIA.

  • Big-O for GPS data

    - by HH
    A non-critical GPS module uses lists because it needs to be modifiable: new routes added, new distances calculated, continuous comparisons. At least that is what I thought, but my team member wrote something I find very hard to follow. His pseudocode:

        int k = 0;
        a[][] <- create mapModuleNearbyDotList array   // CPU O(n)
        for (j = 1 to n)                               // O(n log(m))
            for (i = 1 to n)
                for (k = 1 to n)
                    if (dot is nearby)
                        adj[i][j] = min(adj[i][j], adj[i][k] + adj[k][j]);

    His ideas:
    - transformation of the lists into tables
    - his worst-case time complexity is O(n^3), where n is the number of elements in his so-called table
    - exception to the last point with a finite structure: O(m log(n)), where n is the number of vertices and m is an arbitrary constant

    My questions about his ideas:
    - Why waste resources transforming constantly-modified lists into a table?
    - Is it faster? This is the only point where I agree to some extent, but I cannot understand the identical upper limit n for each for-loop -- perhaps he assumed it is circular.
    - Why does the code take O(m log(n)) as a finite structure? The term "finite" may be wrong; perhaps "explicit"?

  • ORDER BY in a SQL Server 2008 view

    - by eidylon
    Hi all... we have a view in our database which has an ORDER BY in it. Now, I realize views generally aren't ordered, because different people may use them for different things and want them ordered differently. This view, however, is used for a VERY SPECIFIC use case which demands a certain order. (It is team standings for a soccer league.) The database is SQL Server 2008 Express, v.10.0.1763.0, on a Windows Server 2003 R2 box. The view is defined as such:

        CREATE VIEW season.CurrentStandingsOrdered AS
        SELECT TOP 100 PERCENT *, season.GetRanking(TEAMID) RANKING
        FROM season.CurrentStandings
        ORDER BY GENDER, TEAMYEAR, CODE, POINTS DESC, FORFEITS,
                 GOALS_AGAINST, GOALS_FOR DESC, DIFFERENTIAL, RANKING

    It returns: GENDER, TEAMYEAR, CODE, TEAMID, CLUB, NAME, WINS, LOSSES, TIES, GOALS_FOR, GOALS_AGAINST, DIFFERENTIAL, POINTS, FORFEITS, RANKING.

    Now, when I run a SELECT against the view, it orders the results by GENDER, TEAMYEAR, CODE, TEAMID. Notice that it is ordering by TEAMID instead of POINTS as the ORDER BY clause specifies. However, if I copy the SQL statement and run it exactly as is in a new query window, it orders correctly as specified by the ORDER BY clause.

  • Application runs fine when executed directly, fails as scheduled task (security issues)

    - by Carl
    I have an application that loads some files from a network share (the input folder), extracts certain data from them and saves new files (zips them with SharpZLib) on a different network share (output folder). This application runs fine when you open it directly, but when it is set to a scheduled task, it fails in numerous places. This application is scheduled on a Win 2003 server.

    Let me say right off the bat, the scheduled task is set to use the same login account that I am currently logged in with, so it's not because it's using the LocalSystem account. Something else is going on here.

    Originally, the application was assigning a drive letter to the input folder using WNetGetConnectionA(). I don't remember why this was done; someone else on our team did that and she's gone now. I think there was some issue with using the WinZip command line with a UNC path. I switched from the WinZip command line utility to using SharpZLib because there were other issues with using the WinZip command line. Anyway, the application failed when trying to assign a drive letter with the error "connection already established." That wasn't true, and even after trying WNetCancelConnection(), it still didn't work.

    Then I decided to just map the drive manually on the server. Then when the app calls Directory.Exists(inputFolderPath) it returns false, even though it does exist. So, for whatever reason, I cannot read this directory from within the application. I can manually navigate to this folder in Windows Explorer and open files. The app log file shows that the user executing it on the schedule is the user I expect, not LocalSystem. Any ideas?
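
    A minimal diagnostic sketch of the kind of check described above; the paths and share names are hypothetical and stand in for the real mapped letter and UNC share. The only point it illustrates is that a drive letter mapped in an interactive logon session may not exist in the session a scheduled task runs under, while a UNC path does not depend on any mapping:

        using System;
        using System.IO;

        class ShareDiagnostics
        {
            static void Main()
            {
                // Hypothetical paths -- substitute the real mapped letter and UNC share.
                string mappedPath = @"I:\input";
                string uncPath = @"\\fileserver\input";

                // Log which identity the scheduled task actually runs under.
                Console.WriteLine("Running as: " + Environment.UserName);

                // Drive mappings are per logon session; the UNC path bypasses them.
                Console.WriteLine("Mapped drive visible: " + Directory.Exists(mappedPath));
                Console.WriteLine("UNC path visible:     " + Directory.Exists(uncPath));
            }
        }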

  • How Best to Replace PL/SQL with C#?

    - by Mike
    Hi, I write a lot of one-off Oracle SQL queries/reports (in Toad), and sometimes they can get complex, involving lots of unions, joins, and subqueries, and/or requiring dynamic SQL, and/or procedural logic. PL/SQL is custom-made for handling these situations, but as a language it does not compare to C#, and even if it did, its tooling does not, and even if that did, forcing yet another language on the team is something to be avoided whenever possible.

    Experience has shown me that using SQL for the set-based processing, coupled with C# for the procedural processing, is a powerful combination indeed, and far more readable, maintainable and enhanceable than PL/SQL. So we end up with a number of small C# programs which typically construct a SQL query string piece by piece and/or run several queries and process them as needed. This kind of code could easily be a disaster, but a good developer can make it work out quite well and end up with very readable code. So I don't think it's a bad way to code for smaller DB-focused projects.

    My main question is: how best to create and package all these little C# programs that run ad hoc queries and reports against the database? Right now I create little report objects in a DLL, developed and tested with NUnit, but then I continue to use NUnit as the GUI to select and run them. NUnit just happens to provide a nice GUI for this kind of thing, even after testing has been completed.

    I'm also interested in suggestions for reporting apps generally. I feel like there is a missing piece or product. The perfect tool would allow writing and running C# inside of Toad, or SQL inside of Visual Studio, along with simple reporting facilities. All ideas will be appreciated, but let's make the assumption that PL/SQL is not the solution.
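
    To make the "small report object" idea above concrete, here is a minimal sketch of one such class. It assumes the (deprecated) System.Data.OracleClient provider, and every table, column and parameter name in it is invented for illustration; it only shows the build-the-query-piece-by-piece style the question describes:

        using System.Data;
        using System.Data.OracleClient;   // deprecated, but ships with the .NET Framework

        public class LostSalesReport
        {
            private readonly string _connectionString;

            public LostSalesReport(string connectionString)
            {
                _connectionString = connectionString;
            }

            public DataTable Run(string region, int year)
            {
                // Build the query piece by piece, exactly the style described above.
                string sql =
                    "SELECT province, SUM(lost_qty) AS lost_qty " +
                    "FROM sales.lost_sales " +
                    "WHERE sales_year = :p_year ";
                if (!string.IsNullOrEmpty(region))
                    sql += "AND region = :p_region ";
                sql += "GROUP BY province ORDER BY 2 DESC";

                using (OracleConnection conn = new OracleConnection(_connectionString))
                using (OracleCommand cmd = new OracleCommand(sql, conn))
                {
                    cmd.Parameters.Add(new OracleParameter("p_year", year));
                    if (!string.IsNullOrEmpty(region))
                        cmd.Parameters.Add(new OracleParameter("p_region", region));

                    DataTable result = new DataTable("LostSales");
                    using (OracleDataAdapter adapter = new OracleDataAdapter(cmd))
                        adapter.Fill(result);   // the adapter opens and closes the connection
                    return result;
                }
            }
        }

    Returning a DataTable keeps the report object easy to display or dump from whatever small host (NUnit runner or otherwise) selects and runs it.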

  • Command or tool to display list of connections to a Windows file share

    - by BizTalkMama
    Is there a Windows command or tool that can tell me what users or computers are connected to a Windows file share? Here's why I'm looking for this:

    I've run into issues in the past where our deployment team has deployed BizTalk applications to one of our environments using the wrong bindings, leaving us with two receive locations pointing to the same file share (i.e. both dev and test servers point to the dev receive location URI). When this occurs, the two environments in question tend to take turns processing the files received (meaning if I am attempting to debug something in one environment and the other environment has picked the file up, it looks as if my test file has disappeared into thin air).

    We have several different environments, plus individual developer machines, and I'd rather not have to check each individually to find the culprit. I'm looking for a quick way to detect what locations are connected to the share once I notice my test files vanishing. If I can determine the connections that are invalid, I can go directly to the person responsible for that environment and avoid the time it takes to randomly ask around. Or if the connections appear to be correct, I can go directly to troubleshooting where in the process the message gets lost. Any suggestions?

  • [C#] How to do exception handling and tracing

    - by shrimpy
    Hi all, I am reading some C# books and got an exercise I don't know how to do, or am not sure what the question means.

    Problem: After working for a company for some time, your skills as a knowledgeable developer are recognized, and you are given the task of "policing" the implementation of exception handling and tracing in the source code (C#) for an enterprise application that is under constant incremental development. The two goals set by the product architect are:

    1. 100% of methods in the entire application must have at least a standard exception handler, using try/catch/finally blocks; more complex methods must also have additional exception handling for specific exceptions.
    2. All control-flow code can optionally write "tracing" information to assist in debugging and instrumentation of the application at run-time in situations where traditional debuggers are not available (e.g. on staging and production servers).

    (I don't quite understand these criteria. I come from the Java world; Java has two kinds of exceptions, checked and unchecked. Developers must handle checked exceptions and do logging. For unchecked exceptions we may still log, but most of the time we just throw them. Now that I'm in C#, what should I do?)

    Question for the problem: List the rules you would create for the development team to follow, and the ways in which you would enforce the rules, to achieve these goals. How would you go about ensuring that all existing code complies with the rules specified by the product architect; in particular, what considerations would impact your planning for the work to ensure all existing code complies?
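
    For reference, a minimal sketch of what a "standard exception handler" plus optional tracing might look like, assuming System.Diagnostics tracing; the class, method and exception choices are invented for illustration and are not part of the exercise text:

        using System;
        using System.Diagnostics;

        public class OrderProcessor
        {
            public void Process(int orderId)
            {
                Trace.WriteLine("Process(" + orderId + ") entered");   // optional control-flow tracing
                try
                {
                    // ... real work goes here ...
                }
                catch (InvalidOperationException ex)
                {
                    // More specific handling for exceptions the method knows how to deal with.
                    Trace.TraceWarning("Order {0} was in an invalid state: {1}", orderId, ex.Message);
                    throw;
                }
                catch (Exception ex)
                {
                    // The catch-all "standard" handler: log, then rethrow without losing the stack trace.
                    Trace.TraceError("Unexpected error processing order {0}: {1}", orderId, ex);
                    throw;
                }
                finally
                {
                    Trace.WriteLine("Process(" + orderId + ") exited");
                }
            }
        }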

  • database design - empty fields

    - by imanc
    Hey, I am currently debating an issue with a guy on my dev team. He believes that empty fields are bad news. For instance, say we have a customer details table that stores data for customers from different countries, and each country has a slightly different address configuration, plus one or two extra fields. E.g. French customer details may also store an entry code, floor/level, and title fields (madame, etc.); South Africa would have a security number; and so on.

    Given that we're talking about minor variances, my idea is to put all of the fields into the table and use what is needed on each form. My colleague believes we should have a separate table with the extra data, e.g. customer_info_fr. But this seems to totally defeat the purpose of a combined table in the first place. His argument is that empty fields/columns are bad, but I'm struggling to find justification in terms of database design principles for or against this argument and the preferred solutions.

    Another option is a separate mini EAV table that stores the extra data with parent_id, key, and val fields. Or we could serialise the extra data into an extra_data column in the main customer_data table.

    I think I am confused because what I'm discussing is not covered by 3NF, which is what I would typically use as a reference for how to structure data. So my question specifically: if you have slight variances in data for each record (one or two different fields, for instance), what is the best way to proceed?

  • Publish Git repository to SVN

    - by Ken Williams
    My small team and I work in Git, and the larger group uses Subversion. I'd like to schedule a cron job to publish our repositories' current HEADs every hour into a certain directory in the SVN repo. I thought I had this figured out, but the recipe I wrote down previously doesn't seem to be working now:

        git clone ssh://me@gitserver/git-repo/Projects/ProjX px2
        cd px2
        svn mkdir --parents http://me@svnserver/svn/repo/play/me/fromgit/ProjX
        git svn init -s http://me@svnserver/svn/repo/play/me/fromgit/ProjX
        git svn fetch
        git rebase trunk master
        git svn dcommit

    Here's what happens when I attempt it:

        % git clone ssh://me@gitserver/git-repo/Projects/ProjX px2
        Cloning into 'ProjX'...
        ...
        % cd px2
        % svn mkdir --parents http://me@svnserver/svn/repo/play/me/fromgit/ProjX
        Committed revision 123.
        % git svn init -s http://me@svnserver/svn/repo/play/me/fromgit/ProjX
        Using higher level of URL: http://me@svnserver/svn/repo/play/me/fromgit/ProjX => http://me@svnserver/svn/repo
        % git svn fetch
        W: Ignoring error from SVN, path probably does not exist: (160013): Filesystem has no item: File not found: revision 100, path '/play/me/fromgit/ProjX'
        W: Do not be alarmed at the above message git-svn is just searching aggressively for old history.
        This may take a while on large repositories
        % git rebase trunk master
        fatal: Needed a single revision
        invalid upstream trunk

    I could have sworn this worked previously. Does anyone have any suggestions? Thanks.

  • Is excessive DataTable usage bad?

    - by Justin R.
    I was recently asked to assist another team in building an ASP.NET website. They already have a significant amount of code written; I was specifically asked to build a few individual pages for the site. While exploring the code for the rest of the site, the number of DataTables being constructed jumped out at me. Being relatively new to the field, I've never worked on an application that uses a database as heavily as this site does, so I'm not sure how common this is.

    It seems that whenever data is queried from our database, the results are stored in a DataTable. This DataTable is then usually passed around by itself, or it's passed to a constructor. Classes that are initialized with a DataTable always assign the DataTable to a private/protected field, yet only a few of these classes implement IDisposable. In fact, in the thousands of lines of code that I've browsed so far, I have yet to see the Dispose method called on a DataTable. If anything, this doesn't seem to be good OOP.

    Is this something that I should worry about? Or am I just paying more attention to detail than I should? Assuming you're more experienced developers than I am, how would you feel or react if someone who was just assigned to help you with your site approached you about this "problem"?
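
    A minimal sketch, with invented names, of the ownership pattern being questioned: the DataTable is filled from a query, handed to a wrapper class through its constructor, and whichever object ends up owning the table is the one that implements IDisposable and disposes it:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        public sealed class CustomerReport : IDisposable
        {
            private readonly DataTable _data;

            public CustomerReport(DataTable data)
            {
                _data = data;   // the wrapper takes ownership of the table...
            }

            public int RowCount
            {
                get { return _data.Rows.Count; }
            }

            public void Dispose()
            {
                _data.Dispose();   // ...so it is the one that disposes it
            }

            public static CustomerReport Load(string connectionString)
            {
                DataTable table = new DataTable("Customers");
                using (SqlConnection conn = new SqlConnection(connectionString))
                using (SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM Customers", conn))
                {
                    adapter.Fill(table);
                }
                return new CustomerReport(table);
            }
        }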

  • Automated test, build and deploy

    - by mike79
    I have Visual Studio Team Suite 2008. I was unable to meet the requirements to set up TFS, so I'm using TortoiseSVN and VisualSVN as my version control in VSTS. I need the system set up to do the following:

    - I need to be able to create and track work items.
    - When updates are made to the current project worked on in VSTS, the updates will be committed back to version control.
    - Tests will be run to see that updates don't break the application. If there's a problem with the update, it will be reported back to the developer.
    - If there's no problem with the app, which is a ClickOnce application, it will automatically be built and deployed to an FTP server.

    I've never worked with version control, build servers, automated testing or continuous integration, so I need to know what needs to be put in place for this type of system. I don't know which combination/stack I should be using: CC.net, TeamCity, Hudson, NAnt, NUnit, MsTest, Trac, BugTracker.net, NDepend, VisualSvn Server, Perforce, Msdeploy, SCM. I want something that is free/open source and relatively easy to set up and use. Please suggest a setup that will fit my needs. Any help appreciated.

  • How do you handle the tension between refactoring and the need for merging?

    - by Xavier Nodet
    Hi, our policy when delivering a new version is to create a branch in our VCS and hand it to our QA team. When the latter gives the green light, we tag and release our product. The branch is kept to receive (only) bug fixes so that we can create technical releases. Those bug fixes are subsequently merged onto the trunk. During this time, the trunk sees the main development work and is potentially subject to refactoring changes.

    The issue is that there is a tension between the need to have a stable trunk (so that the merge of bug fixes succeeds -- it usually can't if the code has been, e.g., extracted to another method or moved to another class) and the need to refactor it when introducing new features. The policy in our place is to not do any refactoring before enough time has passed and the branch is stable enough. When this is the case, one can start doing refactoring changes on the trunk, and bug fixes have to be manually committed on both the trunk and the branch.

    But this means that developers must wait quite some time before committing any refactoring change on the trunk, because it could break the subsequent merge from the branch to the trunk. And having to manually port bug fixes from the branch to the trunk is painful. It seems to me that this hampers development... How do you handle this tension? Thanks.

  • The program fails to display `cout` when it is run

    - by Jeff - FL
    Hello, I just started a C++ course and I wrote, compiled, debugged and ran my first program:

        // This program calculates how much a little league team spent last year
        // to purchase new baseballs.
        #include <iostream>
        using namespace std;

        int baseballs;
        int cost;
        int total;

        int main()
        {
            baseballs, cost, total;

            // Get the number of baseballs that were purchased.
            cout << "How many baseballs were purchased? ";
            cin >> baseballs;

            // Get the cost of the baseballs purchased.
            cout << "What was the cost of each baseball purchased? ";
            cin >> cost;

            // Calculate the total.
            total = baseballs * cost;

            // Display the total.
            cout << "The total amount spent $" << total << endl;

            return 0;
        }

    The only problem I encountered was that when I ran the program, it failed to display the total amount spent (the final cout). Could someone please explain why? Thanks, Jeff H - Sarasota, FL

  • Do vs. Run vs. Execute vs. Perform verbs

    - by coffeeaddict
    Before anyone goes nuts and red-flags this post as "subjective" (which drives me absolutely nuts, because everyone has their own intent for why they post something that others feel is subjective -- subjective is subjective to each person, how about that!), let me tell you a couple of things so that this post does not get flagged by flag-happy moderators:

    1) There are community guidelines on specific keywords recommended by certain organizations or people (e.g. Microsoft, Lance Hunt, etc.).
    2) I want to know what others are using the most and why -- why they feel one verb reads better than another.
    3) Books even talk about this verb issue (Uncle Bob, etc.), so it's not subjective.

    Now to my actual question:

    a) What list of verbs are you using for method names? What's your personal or team standard?
    b) I debate whether to use Do vs. Run vs. Execute vs. Perform, and am wondering if any of these are no longer recommended, or if some of them just aren't really used and I should scratch them.

    Basically, any one of those verbs means the same thing: to invoke some process (a method call). This is outside of CRUD operations. For example:

        ExecutePayPalWorkflow();

    could also be any one of these names instead:

        DoPayPalWorkflow();
        RunPayPalWorkflow();
        PerformPayPalWorkflow();

    Or does it not really matter, because any of those verbs is pretty much understandable, and the words that follow it ("PayPalWorkflow") show your intent? This discussion can go for any language. I just put the two main tags, C# and Java, here, which is good enough for me to get some solid answers or experiences.

  • How might a C# programmer approach writing a solution in JavaScript?

    - by Ben McCormack
    UPDATE: Perhaps this wasn't clear from my original post, but I'm mainly interested in knowing a best practice for how to structure JavaScript code while building a solution, not simply learning how to use APIs (though that is certainly important).

    I need to add functionality to a web site, and our team has decided to approach the solution using a web service that receives a call from a JSON-formatted AJAX request from within the web site. The web service has been created and works great. Now I have been tasked with writing the JavaScript/HTML side of the solution.

    If I were solving this problem in C#, I would create separate classes for formatting the request, handling the AJAX request/response, parsing the response, and finally inserting the response somehow into the DOM. I would build properties and methods appropriately into each class, doing my best to separate functionality and structure where appropriate. However, I have to solve this problem in JavaScript.

    Firstly, how could I approach my solution in JavaScript in the way I would approach it from C# as described above? Or more importantly, what's a better way to approach structuring code in JavaScript? Any advice or links to helpful material on the web would be greatly appreciated.

    NOTE: Though perhaps not immediately relevant to this question, it may be worth noting that we will be using jQuery in our solution.

  • ASP.NET 2.0 integrated sites: how to log out the second site programmatically

    - by NBrowne
    Hi, I am working with an ASP.NET 2.0 site (call it site 1) which has an iframe in it which loads up another site (site 2), also an ASP.NET site developed by our team. When you log onto site 1, site 2 is also logged in behind the scenes, so that when you click the iframe tab, it displays site 2 with the user logged in (to prevent the user from having to log in twice).

    The problem I have is that when a user logs out of site 1, we call some cleanup methods to perform FormsAuthentication.SignOut and clean session variables etc., but at the moment no cleanup is done for the user on site 2. So the issue is that if the user then opens up site 2 directly in a browser, it opens with the user still logged in, which is undesired. Can anyone give me some guidance as to the best approach for this?

    One possible approach I thought of was that on click of the logout button I could make a call to a custom page on site 2 which would do the logout. Code below:

        HttpWebRequest request;
        request = ((HttpWebRequest)(WebRequest.Create("http://www.mywebsite.com/Site2Logout.aspx")));
        request.Method = "POST";

        HttpCookie cookie = HttpContext.Current.Request.Cookies[FormsAuthentication.FormsCookieName];
        Cookie authenticationCookie = new Cookie(
            FormsAuthentication.FormsCookieName,
            cookie.Value,
            cookie.Path,
            HttpContext.Current.Request.Url.Authority);

        request.CookieContainer = new CookieContainer();
        request.CookieContainer.Add(authenticationCookie);
        request.GetResponse();

    The problem I am having with this code is that when I run it, debug on site 2 and check whether the user is authenticated, they are not, which I don't understand, because if I open a browser and browse to site 2 I am still authenticated. Any ideas, different directions to take, etc.? Please let me know if you need any more info or if something I have said doesn't make sense. Thanks

  • Yeoman 'grunt test' fails on clean project with 'port already in use'

    - by XMLilley
    With:

        Mac OS 10.8.4
        Node 0.10.12
        npm 1.3.1
        grunt-cli 0.1.9
        yo 1.0.0-rc.1
        bower 0.9.2
        [email protected]

    I encounter the following error with a clean yo angular project, followed by grunt server and then grunt test:

        Running "connect:test" (connect) task
        Fatal error: Port 9000 is already in use by another process.

    I'm new to Yeoman and am stumped. I've deleted my original project and created a new one in a fresh folder just to make sure I wasn't overlooking any invisible configs. I restarted the machine to make sure I wasn't running any temporary server processes I had forgotten about. After all attempts, the basic server starts fine, attaches to Chrome, and the watcher updates the browser on any changes. (Notably, the server is running on 9000, which seems odd for the test runner to also be trying to use 9000.) But I get that same error on attempting to start the test runner. Is this something I can fix, or an issue I should report to the Yeoman team? Thanks.

  • WCF Data Services - limiting related objects returned based on criteria

    - by Mike Morley
    I have an object graph consisting of a base employee object and a set of related message objects. I am able to return the employee objects based on search criteria on the employee properties (e.g. team). However, if I expand on the messages, I get the full collection of messages back. I would like to be able either to take the top n messages (i.e. restrict to the 10 most recent) or, ideally, to use a date range on the message objects to limit how many are brought back.

    So far I have not been able to figure out a way of doing this: I get an error if I attempt to filter on properties of the message (&$filter=employee/message/StartDate gives the error "No property 'StartDate' exists in type 'System.Data.Objects.DataClasses.EntityCollection`1'"). Attempting to use Top on the related message objects doesn't work either.

    I have also tried using a WebGet extension that takes a string list of employee IDs. That works until the list gets too long, and then it fails because the URL gets too long (it might be possible to set up a paging mechanism on this approach)...

    Unfortunately the UI control I am using requires the data to be in a fairly specific hierarchical shape, so I can't easily come at this from the message side and work backwards. Outside of making multiple calls, does anyone know of a method to accomplish this with WCF Data Services? Thanks! M.
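
    To make the WebGet idea above concrete, a rough sketch of a service operation that accepts a date range rather than a long ID list, so the filtering happens on the server. It assumes .NET 4 WCF Data Services, and EmployeeEntities, Message, EmployeeId and StartDate are stand-ins for the real model:

        using System;
        using System.Data.Services;
        using System.Linq;
        using System.ServiceModel.Web;

        public class EmployeeDataService : DataService<EmployeeEntities>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
                config.SetServiceOperationAccessRule("MessagesInRange", ServiceOperationRights.AllRead);
            }

            // Filters messages by date range on the server, so the client no longer
            // needs to expand the full Messages collection for each employee.
            [WebGet]
            public IQueryable<Message> MessagesInRange(int employeeId, DateTime from, DateTime to)
            {
                return CurrentDataSource.Messages
                    .Where(m => m.EmployeeId == employeeId
                             && m.StartDate >= from
                             && m.StartDate <= to);
            }
        }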

  • Need help/suggestions for creating fantasy sports scoring databases and queries

    - by MGumbel
    I'm trying to create a website for my friends and me to keep track of fantasy sports scoring. So far I've been doing the calculations and storage in Excel, which is very tedious. I'm trying to make it more simplified and automated through a SQL database that I can then wrap a web app around to enter daily stat updates.

    It's premised on our participation in another commercial site where we trade virtual shares of athletes and thus acquire an "ownership percentage" in each athlete. For instance, if there are 100 shares of AROD and I own 10 shares, then I own 10%. This then gets applied to traditional baseball rotisserie scoring. So, for instance, if AROD has 1 HR today, then his adjusted HR stat would be 1.10. If he also has 2 RBIs, then his adjusted RBI stat today would be 2.20, based on (2 x 1.10) (1 to normalize the stat, and the .10 to represent the ownership percentage). All the stats for my team would then be summed each day and added to my stat history to come to an aggregated total.

    After that, points are allocated based on the ranking of each participant in each category at the end of the day. E.g. if there are 10 participants and I have the highest total aggregate number of adjusted HRs, then I get 10 points. The points are then summed across the different stat categories to come up with a total point ranking for that day. An added difficulty is that ownership percentages can change on a daily basis.

    So far, in playing around with different schemas, I don't know that having a separate table for each athlete's stats and each player's ownership percentages is the wisest choice. It seems to me that simply having two tables -- one that contains the daily stat information for each athlete, and another that shows the ownership percentage of each player -- would be simpler. My friend suggested using a start and end date for each ownership percentage to represent the potential daily changes in this category. I'm admittedly new to database development, so any suggestions on query code would be appreciated.
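
    Restating the adjustment arithmetic described above as a one-liner, purely to pin down the rule (a sketch, not part of any schema): each raw stat is scaled by (1 + ownership), so 10% ownership turns 1 HR into 1.10 and 2 RBIs into 2.20.

        public static class ScoringRules
        {
            // adjusted = raw * (1 + ownership fraction)
            public static decimal Adjust(int rawStat, decimal ownershipFraction)
            {
                return rawStat * (1m + ownershipFraction);
            }
        }

        // Example: ScoringRules.Adjust(2, 0.10m) == 2.20m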

  • File size monitoring in C#

    - by manemawanna
    Hello, I work in the systems & admin team and have been given the task of creating a quota management application, to try and encourage users to better manage their resources, as we currently have issues with disk space and don't enforce hard quotas. At the moment I'm using the code below to go through all the files in a user's home space to retrieve the overall amount of space they are using. From what I've seen elsewhere there's no other way to do this in C#; the issue with it is that there's quite a high overhead while it retrieves the size of each file and then builds a total.

        try
        {
            long dirSize = 0;
            FileInfo[] FI = new DirectoryInfo("I:\\").GetFiles("*.*", SearchOption.AllDirectories);
            foreach (FileInfo F1 in FI)
            {
                dirSize += F1.Length;
            }
            return dirSize;
        }

    So I'm looking for a quicker way to do this, or a quick way to monitor changes in the size of files using the options available through FileSystemWatcher. At the moment the only thing I can think of is creating a hashtable containing the location and size of each file, so that when a size-changed event occurs I can compare the old size against the new one and update the total. Any suggestions would be greatly appreciated.
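
    A rough sketch of the idea just described: seed the total once with the expensive full scan, then keep it current from FileSystemWatcher events. It assumes one watcher per home directory, uses a Dictionary rather than a Hashtable for type safety, and leaves out locking, which a real version would need because the events arrive on background threads:

        using System;
        using System.Collections.Generic;
        using System.IO;

        public class QuotaWatcher
        {
            private readonly Dictionary<string, long> _sizes =
                new Dictionary<string, long>(StringComparer.OrdinalIgnoreCase);
            private FileSystemWatcher _watcher;
            private long _total;

            public long Total { get { return _total; } }

            public void Start(string root)
            {
                // Seed the running total once with the expensive full scan.
                foreach (FileInfo fi in new DirectoryInfo(root).GetFiles("*.*", SearchOption.AllDirectories))
                {
                    _sizes[fi.FullName] = fi.Length;
                    _total += fi.Length;
                }

                _watcher = new FileSystemWatcher(root);
                _watcher.IncludeSubdirectories = true;
                _watcher.NotifyFilter = NotifyFilters.FileName | NotifyFilters.Size;
                _watcher.Created += (s, e) => Update(e.FullPath);
                _watcher.Changed += (s, e) => Update(e.FullPath);
                _watcher.Deleted += (s, e) => Remove(e.FullPath);
                _watcher.Renamed += (s, e) => { Remove(e.OldFullPath); Update(e.FullPath); };
                _watcher.EnableRaisingEvents = true;
            }

            private void Update(string path)
            {
                FileInfo fi = new FileInfo(path);
                if (!fi.Exists) return;
                Remove(path);                 // subtract the old size, if we knew it
                _sizes[path] = fi.Length;
                _total += fi.Length;
            }

            private void Remove(string path)
            {
                long old;
                if (_sizes.TryGetValue(path, out old))
                {
                    _sizes.Remove(path);
                    _total -= old;
                }
            }
        }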

  • Repository layout and sparse checkouts

    - by chuanose
    My team is considering moving from ClearCase to Subversion, and we are thinking of organising the repository like this:

        \trunk\project1
        \trunk\project2
        \trunk\project3
        \trunk\staticlib1
        \trunk\staticlib2
        \trunk\staticlib3
        \branches\..
        \tags\..

    The issue here is that we have lots of projects (1000+) and each project is a DLL that links in several common static libraries. Therefore checking out everything in trunk is a non-starter, as it will take far too long (~2 GB) and is unwieldy for branching. Using svn:externals to pull out relevant folders for each project doesn't seem ideal because it results in several working copies for each static library folder. We also cannot do an atomic commit if the changes span the project and some static libraries.

    Sparse checkouts sound very suitable for this, as we can write a script to pull out only the required directories. However, when we want to merge changes from a branch back to the trunk, we will need to first check out a full trunk. I wonder if there is some advice on 1) a better repository organization, or 2) a way to merge branch changes into a trunk working copy that is sparse?

  • Book/topic recommendations for a programmer returning to programming.

    - by Jason Tan
    I used to be a developer in Java, PHP, Perl and C/C++ (the C++ bit badly; the others not too badly, I hope). This was back in the Java 1.3/1.4 days. We used raw JDBC, Swing, servlets, JSP and Ant (sometimes even make). Eclipse was new. Then I joined a deployment team and became a deployment engineer, and after the deployment engineer work became a full-time sysadmin. You get the idea: my experience is a generation or two old in programming terms, maybe older.

    I'm interested in getting back into Java and perhaps Ruby development, but feel I will be waaaaay behind the technological 8-ball. Can you folks suggest some books (or sites) that would be worth reading to catch up with the last 5-10 years of the development world? I.e. what should I read to try and catch up with where development is now? I see lots of stuff on the web, but what are people in the fabled "real world" using? (Are lots of people building SOA-based apps? Are they using the XP methodology?)

    The sorts of things I'm interested in finding out about/catching up on are:

    - Methodologies
    - Design patterns
    - APIs/Frameworks/Technologies
    - Other stuff you deem current/interesting/relevant.

    So, if you have any thoughts or can recommend any books (especially new classics -- you know, the 's equivalent to K&R C or "The Mythical Man-Month"). Thanks for any thoughts you might share.

  • Is it getting to be time for C# to support compile-time macros?

    - by Robert Rossney
    Thus far, Microsoft's C# team has resisted adding formal compile-time macro capabilities to the language. There are aspects of programming with WPF that seem (to me, at least) to be creating some compelling use cases for macros. Dependency properties, for instance. It would be so nice to just be able to do something like this:

        [DependencyProperty]
        public string Foo { get; set; }

    and have the body of the Foo property and the static FooProperty property be generated automatically at compile time. Or, for another example, an attribute like this:

        [NotifyPropertyChanged]
        public string Foo { get; set; }

    that would make the currently-nonexistent preprocessor produce this:

        private string _Foo;
        public string Foo
        {
            get { return _Foo; }
            set
            {
                _Foo = value;
                OnPropertyChanged("Foo");
            }
        }

    You can implement change notification with PostSharp, and really, maybe PostSharp is a better answer to the question. I really don't know. Assuming that you've thought about this more than I have, which if you've thought about it at all you probably have, what do you think? (This is clearly a community wiki question and I've marked it accordingly.)
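
    For comparison, this is roughly the boilerplate such a [DependencyProperty] attribute would have to generate today, written out by hand against the standard WPF API; FooControl is an invented owner type used only for the sketch:

        using System.Windows;

        public class FooControl : DependencyObject
        {
            // The registration that backs the CLR wrapper below.
            public static readonly DependencyProperty FooProperty =
                DependencyProperty.Register("Foo", typeof(string), typeof(FooControl));

            // The CLR wrapper the attribute would generate at compile time.
            public string Foo
            {
                get { return (string)GetValue(FooProperty); }
                set { SetValue(FooProperty, value); }
            }
        }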

  • PHP / Zend custom date input formats

    - by mld
    Hi, I'm creating a library component for other developers on my team using PHP and Zend. This component needs to be able to take as input a date (string) and another string telling it the format of that date. Looking through the Zend documentation and examples, I thought I had found the solution:

        $dateObject = Zend_Date('13.04.2006', array('date_format' => 'dd.MM.yyyy'));

    This line, however, throws an error: call to undefined function. So instead I tried this:

        $dt = Zend_Locale_Format::getDate('13.04.2006', array('date_format' => 'dd.MM.yyyy'));

    This gets the job done, but throws an error if a date that is entered isn't valid. The docs make it look like you can use the isDate() function to check validity:

        Zend_Date::isDate('13.04.2006', array('date_format' => 'dd.MM.yyyy'))

    but this line always returns false. So my questions: Am I going about this the right way? If not, is there a better way to handle this via Zend or straight PHP? If I do use Zend_Locale_Format::getDate(), do I need to worry about a locale being set elsewhere changing the results of the call? I'm locked into PHP 5.2.6 on Windows, btw... so 5.3+ functions and strptime() are out.

  • Git repositories on shared hosting with ssh access - multiple users / one ssh account

    - by acp
    I'm part of a small team trying to start coding on a project. I've decided it's time to give Git a chance (no more SVN) and was trying to see if we could use our shared web hosting to deploy a "public" repository there, so that we can easily push/pull to/from it and keep up to date with each other's changes. The problem I'm having now is that we only have a single SSH account for that hosting.

    Having used SVN in the past, I could enforce an SVN username on a given pair of SSH keys; however, I don't seem to be able to do something similar with Git (in other words, tie the SSH key pair to a specific dev). I don't mind everybody having read/write permissions everywhere, since anything that is private should stay on each developer's own machine. Finally, solutions such as gitosis cannot be used.

    I guess my question to you is: how is accountability for git pushes established? Is it tied to the SSH account being used, or to the email address given in git config? Can I create different SSH keys for every developer (for the same SSH account, though) and just send them to the devs?
