Search Results

Search found 13692 results on 548 pages for 'bad practices'.

Page 502 of 548

  • ASP.Net Web Farm Monitoring

    - by cisellis
    I am looking for suggestions on doing some simple monitoring of an ASP.Net web farm as close to real-time as possible. The objectives of this question are to:
    - Identify the best way to monitor several Windows Server production boxes during short (minutes-long) periods of ridiculous load.
    - Receive near-real-time feedback on a few key metrics about each box. These are simple metrics available via WMI such as CPU, memory and disk paging. I am defining my time constraint as "as soon as possible", with a 120-second delay being the absolute upper limit.
    - Monitor whether any given box is up (with "up" being defined as responding to web requests in a reasonable amount of time).
    Here are more details and things I've tried. I am not interested in logging; we have logging solutions in place. I have looked at solutions such as ELMAH, which don't provide much in the way of hardware monitoring and are not visible across an entire web farm. ASP.Net Health Monitoring is too broad, focuses too much on logging and is not acceptable for deep analysis. We are on Amazon Web Services and we have looked into CloudWatch. It looks great, but messages in the forum indicate that the metrics are often a few minutes behind, with one thread citing 2 minutes as the absolute soonest you could expect to receive the feedback. This would be good to have for later analysis but does not help us in real time. Tools like the JetBrains profiler are good for testing but, again, not helpful during real-time monitoring.
    The closest out-of-box solution I've seen is Nagios, which is free and appears to measure key indicators on any kind of box, including Windows. However, it appears to require a Linux box to run on and a good deal of manual configuration. I'd prefer not to spend my time mining config files and then be up a creek when it fails in production, since Linux is not my main (or even secondary) environment.
    Are there any out-of-box solutions that I am missing? Obviously a Windows-based solution that is easy to set up is ideal, and I don't require many bells and whistles. In the absence of an out-of-box solution, it seems easy for me to write something simple to handle what I need: I've been thinking of a simple client-server setup where the server requests a few WMI metrics from each client over HTTP and sticks them in a database. We could then monitor the metrics via a query or a dashboard or something. If a client doesn't respond, it's effectively down. Any problems with this, best practices, or other ideas? Thanks for any help/feedback. (A rough sketch of this client-server idea follows below.)
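
    A minimal sketch of that client-server idea, in Python for illustration only (the psutil package stands in for the WMI calls; host names, port and polling interval are assumptions, not part of the question):

        # agent.py - runs on each web server and exposes a few metrics over HTTP
        import json
        import psutil
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class MetricsHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                payload = json.dumps({
                    "cpu_percent": psutil.cpu_percent(interval=None),
                    "memory_percent": psutil.virtual_memory().percent,
                    "swap_percent": psutil.swap_memory().percent,
                }).encode("utf-8")
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(payload)))
                self.end_headers()
                self.wfile.write(payload)

        if __name__ == "__main__":
            HTTPServer(("0.0.0.0", 8085), MetricsHandler).serve_forever()

        # poller.py - runs on the monitoring box; a box that times out is treated as down
        import json, time, urllib.request

        HOSTS = ["web01", "web02"]   # hypothetical host names

        while True:
            for host in HOSTS:
                try:
                    with urllib.request.urlopen(f"http://{host}:8085/", timeout=5) as resp:
                        print(host, json.loads(resp.read()))
                except OSError:
                    print(host, "DOWN")
            time.sleep(15)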

  • Web Safe Area (optimal resolution) for web app design

    - by M.A.X
    I'm in the process of designing a new web app and I'm wondering what 'web safe area' I should optimize the app layout and design for. I did some investigation and thinking on my own but wanted to share this to see what the general opinion is. Here is what I found.
    Optimal Display Resolution:
    - w3schools web stats seems to be the most referenced source (however they state that these are results from their site and are biased towards tech-savvy users).
    - http://www.w3counter.com/globalstats.php (aggregate data from something like 15,000 different sites that use their tracking services).
    - StatCounter Global Stats Display Resolution (stats are based on aggregate data collected by StatCounter on a sample exceeding 15 billion pageviews per month, collected from across the StatCounter network of more than 3 million websites).
    - NetMarketShare Screen Resolutions (marketshare.hitslink.com) (a web analytics consulting firm; they get data from browsers of site visitors to their on-demand network of live stats customers. The data is compiled from approximately 160 million visitors per month).
    Display Resolution Summary: There is a bit of variation between the above sources, but in general, as of Jan 2011, it looks like 1024x768 is about 20%, while ~85% have a higher resolution of at least 1280x768 (1280x800 is the most common of these with 15-20% of the total web, depending on the source; 1280x1024 and 1366x768 follow behind with 9-14% of the share). My guess would be that the higher resolution values will be even more common if we filter on North America, and even higher if we filter on N. American corporate users (unfortunately I couldn't find any free geographically filtered statistics). Another point to note is that the 1024x768 desktop user population is likely lower than the aforementioned 20%, seeing as the iPad (1024x768 native display) is likely propping up those numbers. My recommendation would be to optimize around the 1280x768 constraint (note: 1280x768 is actually a relatively rare resolution, but I think it's a valid constraint range considering that 1366x768 is relatively common and 1280 is the most common horizontal resolution).
    Browser + OS Constraints: To further add to the constraints, we have to subtract the space taken up by the browser (assuming IE, which is the most space consuming) and the OS (assuming WinXP-Win7):
    - Win7 has the biggest taskbar footprint at a height of 40px (XP's and Vista's is 30px).
    - The default IE8 view uses up 25px at the bottom of the screen with the status bar, and a further 120px at the top of the screen with the Windows title bar and the browser UI (assuming the default 'favorites' toolbar is present; it would instead be 91px without the favorites toolbar).
    - Assuming no scrollbar, we also lose a total of 4px horizontally for the window outline.
    This means that we are left with 583px of vertical space and 1276px of horizontal. In other words, a Web Safe Area of 1276 x 583. Is this a correct line of thinking? I tried to Google some design best practices but most still talk about designing around 1024x768, which seems to be quickly disappearing. Any help on this would be greatly appreciated! Thanks.
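
    The arithmetic behind the 1276 x 583 figure, worked explicitly as a quick check (Python, using only the pixel values quoted above):

        # Space left on a 1280x768 display with the Win7 + IE8 defaults quoted above
        screen_w, screen_h = 1280, 768

        taskbar_h     = 40    # Windows 7 taskbar
        ie_top_h      = 120   # title bar + browser UI with favorites toolbar
        ie_status_h   = 25    # status bar at the bottom
        window_edge_w = 4     # left + right window outline, no scrollbar

        safe_h = screen_h - taskbar_h - ie_top_h - ie_status_h   # 583
        safe_w = screen_w - window_edge_w                         # 1276
        print(safe_w, "x", safe_h)                                # 1276 x 583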

  • Windows Service HTTPListener Memory Issue

    - by crawshaws
    Hi all, I'm a complete novice to the "best practices" etc. of writing any code. I tend to just write it and, if it works, why fix it. Well, this way of working is landing me in some hot water. I am writing a simple Windows service to serve a single web page. (This service will be incorporated into another project which monitors the services and some folders on a group of servers.) My problem is that whenever a request is received, the memory usage jumps up by a few K per request and keeps going up on every request. Now I've found that by putting GC.Collect in the mix it stops at a certain number, but I'm sure it's not meant to be used this way. I was wondering if I am missing something or not doing something I should to free up memory. Here is the code:

        Public Class SimpleWebService : Inherits ServiceBase
            'Set the values for the different event log types.
            Public Const EVENT_ERROR As Integer = 1
            Public Const EVENT_WARNING As Integer = 2
            Public Const EVENT_INFORMATION As Integer = 4

            Public listenerThread As Thread
            Dim HTTPListner As HttpListener
            Dim blnKeepAlive As Boolean = True

            Shared Sub Main()
                Dim ServicesToRun As ServiceBase()
                ServicesToRun = New ServiceBase() {New SimpleWebService()}
                ServiceBase.Run(ServicesToRun)
            End Sub

            Protected Overrides Sub OnStart(ByVal args As String())
                If Not HttpListener.IsSupported Then
                    CreateEventLogEntry("Windows XP SP2, Server 2003, or higher is required to " & "use the HttpListener class.")
                    Me.Stop()
                End If
                Try
                    listenerThread = New Thread(AddressOf ListenForConnections)
                    listenerThread.Start()
                Catch ex As Exception
                    CreateEventLogEntry(ex.Message)
                End Try
            End Sub

            Protected Overrides Sub OnStop()
                blnKeepAlive = False
            End Sub

            Private Sub CreateEventLogEntry(ByRef strEventContent As String)
                Dim sSource As String
                Dim sLog As String
                sSource = "Service1"
                sLog = "Application"
                If Not EventLog.SourceExists(sSource) Then
                    EventLog.CreateEventSource(sSource, sLog)
                End If
                Dim ELog As New EventLog(sLog, ".", sSource)
                ELog.WriteEntry(strEventContent)
            End Sub

            Public Sub ListenForConnections()
                HTTPListner = New HttpListener
                HTTPListner.Prefixes.Add("http://*:1986/")
                HTTPListner.Start()
                Do While blnKeepAlive
                    Dim ctx As HttpListenerContext = HTTPListner.GetContext()
                    'A brand-new thread is created for every incoming request.
                    Dim HandlerThread As Thread = New Thread(AddressOf ProcessRequest)
                    HandlerThread.Start(ctx)
                    HandlerThread = Nothing
                Loop
                HTTPListner.Stop()
            End Sub

            Private Sub ProcessRequest(ByVal ctx As HttpListenerContext)
                Dim sb As StringBuilder = New StringBuilder
                sb.Append("<html><body><h1>Test My Service</h1>")
                sb.Append("</body></html>")
                Dim buffer() As Byte = Encoding.UTF8.GetBytes(sb.ToString)
                ctx.Response.ContentLength64 = buffer.Length
                ctx.Response.OutputStream.Write(buffer, 0, buffer.Length)
                ctx.Response.OutputStream.Close()
                ctx.Response.Close()
                sb = Nothing
                buffer = Nothing
                ctx = Nothing
                'This line seems to keep the mem leak down
                'System.GC.Collect()
            End Sub
        End Class

    Please feel free to criticise and tear the code apart, but please BE KIND. I have admitted I don't tend to follow the best practices when it comes to coding.

  • Maven best practice for generating multiple jars with different/filtered classes ?

    - by jaguard
    I developed a Java utility library (similar to Apache Commons) that I use in various projects. In addition to fat clients, I also use it for mobile clients (PDA with the J9 Foundation profile). Over time the library, which started as a single project, spread over multiple packages. As a result I end up with a lot of functionality that is not really needed in all the projects. Since this library is also used inside some mobile/PDA projects, I need a way to collect just the used classes and generate the actual specialized jars. Currently, in the projects that are using this library, I have Ant jar tasks that generate (from the utility project) the specialized jar files (ex: my-util-1.0-pda.jar, my-util-1.0-rcp.jar) using the include/exclude jar task features. This is mostly needed due to the generated jar size constraints for the mobile projects. Migrating now to Maven, I just wonder if there are any best practices to arrive at something similar, so I am considering the following scenarios:
    [1] - Additionally to the main jar artifact (my-lib-1.0.jar), also generate inside the my-lib project the separate/specialized artifacts using classifiers (ex: my-lib-1.0-pda.jar), using the Maven Jar Plugin or Maven Assembly Plugin filtering/includes. I'm not very comfortable with this approach since it pollutes the library with library consumers' demands (filters).
    [2] - Create additional Maven projects for all the specialized clients/projects that will "wrap" "my-lib" and generate the filtered jar artifacts (ex: my-lib-wrapper-pda-1.0 ...etc). As a result, these wrapper projects will include the filtering (to generate the filtered artifact) and will depend just on the "my-lib" project, and the client projects will depend on my-lib-wrapper-xxx-1.0 instead of my-lib-1.0. This approach may look problematic since, even though it will leave the "my-lib" project intact (with no additional classifiers & artifacts), it will basically double the number of projects, since for every client project I'll have one just to collect the needed classes from the "my-util" library (a "my-pda-app" project will need a "my-lib-wrapper-for-my-pda-app" project/dependency).
    [3] - In every client project that uses the library (ex: my-pda-app), add some specialized Maven plugins to trim out (when generating the final artifact/package) the unneeded classes (ex: maven-assembly-plugin, maven-jar-plugin, proguard-maven-plugin).
    What is the best practice for solving this kind of problem in the "Maven way"?

  • Minimalistic tools for developer documentation

    - by Pekka
    I am currently working on a large PHP CMS / Framework and documenting it extensively as I go along. In addition to phpdoc-style inline comments, I need to document XML structures, details on concepts and practices, write HOWTOs and so on. At the moment, I am using simple OpenOffice documents for that, but I'm unhappy with it and looking for a "real" documentation system. So, I am looking for recommendations for robust, minimalistic, easy-to-use documentation software. I have tried a number of Wikis, most prominently Dokuwiki. I like the open-minded approach, the freedom in editing, and the simplicity, but they provide little support in structuring a multi-chapter documentation, and make basic reorganisation tasks very difficult (e.g. moving pages to a different namespace). Working with the plugins is Cumbersome, and they are not really easy to use. Open Source would be a plus but is not a requirement. Thanks for all the suggestions. I have not had time to look into each one in detail. I will be trying Sphinx, especially because it provides so much support for a good structure. I may update this post later when I'm done and report how it worked out. The suggestions Trac's built-in wiki which is great but for my taste provides too little support for keeping a structure - it's perfect though for "normal", smaller size project documentation Markdown my current favourite because of its minimalism, however not sure yet whether maintaining a structure will be easy enough. A Markdown-Based system would of course be very easy to extend, e.g. to look up cross references from the project's code base. Of course it would be great to find something that already has that out of the box. The DocBook format and to edit, the commercial Oxygen XML Editor - a great standard for building documentation, no doubt. Maybe too "technical" for my purposes as I need something to open quickly, write into and go on coding. Still always worth a mention. Sphinx an Open Source, Python based documentation generator, promising structured documentation and extensive cross-referencing. Interesting and will take a look. Confluence a commercial but very affordable Wiki. XWiki, an Open Source playing in Confluence's league with numerous extensions and connectors to Eclipse and Microsoft Office. TiddlyWiki an open-source Wiki.

  • What are five things you hate about your favorite language?

    - by brian d foy
    There's been a cluster of Perl-hate on Stackoverflow lately, so I thought I'd bring my "Five things you hate about your favorite language" question to StackOverflow. Take your favorite language and tell me five things you hate about it. Those might be things that just annoy you, admitted design flaws, recognized performance problems, or any other category. You just have to hate it, and it has to be your favorite language. Don't compare it to another language, and don't talk about languages that you already hate. Don't talk about the things you like in your favorite language. I just want to hear the things that you hate but tolerate so you can use all of the other stuff, and I want to hear it about the language you wished other people would use. I ask this whenever someone tries to push their favorite language on me, and sometimes as an interview question. If someone can't find five things to hate about his favorite tool, he don't know it well enough to either advocate it or pull in the big dollars using it. He hasn't used it in enough different situations to fully explore it. He's advocating it as a culture or religion, which means that if I don't choose his favorite technology, I'm wrong. I don't care that much which language you use. Don't want to use a particular language? Then don't. You go through due diligence to make an informed choice and still don't use it? Fine. Sometimes the right answer is "You have a strong programming team with good practices and a lot of experience in Bar. Changing to Foo would be stupid." This is a good question for code reviews too. People who really know a codebase will have all sorts of suggestions for it, and those who don't know it so well have non-specific complaints. I ask things like "If you could start over on this project, what would you do differently?" In this fantasy land, users and programmers get to complain about anything and everything they don't like. "I want a better interface", "I want to separate the model from the view", "I'd use this module instead of this other one", "I'd rename this set of methods", or whatever they really don't like about the current situation. That's how I get a handle on how much a particular developer knows about the codebase. It's also a clue about how much of the programmer's ego is tied up in what he's telling me. Hate isn't the only dimension of figuring out how much people know, but I've found it to be a pretty good one. The things that they hate also give me a clue how well they are thinking about the subject.

  • What is the best python module skeleton code?

    - by user213060
    == Subjective Question Warning == Looking for well-supported opinions or supporting evidence. Let us assume that skeleton code can be good. If you disagree with the very concept of module skeleton code then fine, but please refrain from repeating that opinion here. Many Python IDEs will start you with a template like:

        print 'hello world'

    That's not enough... So here's my skeleton code to get this question started.
    My Module Skeleton, Short Version:

        #!/usr/bin/env python
        """
        Module Docstring
        """

        #
        ## Code goes here.
        #

        def test():
            """Testing Docstring"""
            pass

        if __name__=='__main__':
            test()

    and, My Module Skeleton, Long Version:

        #!/usr/bin/env python
        # -*- coding: ascii -*-
        """
        Module Docstring
        Docstrings: http://www.python.org/dev/peps/pep-0257/
        """

        __author__ = 'Joe Author ([email protected])'
        __copyright__ = 'Copyright (c) 2009-2010 Joe Author'
        __license__ = 'New-style BSD'
        __vcs_id__ = '$Id$'
        __version__ = '1.2.3' #Versioning: http://www.python.org/dev/peps/pep-0386/

        #
        ## Code goes here.
        #

        def test():
            """ Testing Docstring"""
            pass

        if __name__=='__main__':
            test()

    Notes:

        """
        ===MODULE TYPE===
        Since the vast majority of my modules are "library" types, I have constructed
        this example skeleton as such. For modules that act as the main entry for
        running the full application, you would make changes such as running a main()
        function instead of the test() function in __main__.

        ===VERSIONING===
        The following practice, specified in PEP8, no longer makes sense:
            __version__ = '$Revision: 1.2.3 $'
        for two reasons: (1) Distributed version control systems make it necessary to
        include more than just a revision number, e.g. author name and revision number.
        (2) It's a revision number, not a version number. Instead, the __vcs_id__
        variable is being adopted. This expands to, for example:
            __vcs_id__ = '$Id: example.py,v 1.1.1.1 2001/07/21 22:14:04 goodger Exp $'

        ===VCS DATE===
        Likewise, the date variable has been removed:
            __date__ = '$Date: 2009/01/02 20:19:18 $'

        ===CHARACTER ENCODING===
        If the coding is explicitly specified, then it should be set to the default
        setting of ascii. This can be modified if necessary (rarely in practice).
        Defaulting to utf-8 can cause anomalies with editors that have poor unicode
        support.
        """

    There are a lot of PEPs that put forward coding style recommendations. Am I missing any important best practices? What is the best Python module skeleton code?
    Update: Show me any kind of "best" that you prefer. Tell us what metrics you used to qualify "best".

  • Verify Authenticode signature as being from our company for automatic updater

    - by James Johnston
    I am implementing an automatic update feature and need some advice on how to do this securely using best practices. I would like to use the downloaded file's Authenticode signature to verify that it is safe to run (i.e. originates from our company and hasn't been tampered with). My question is very similar to question #2008519. The bottom-line question: what's the best, most secure way to check Authenticode signatures for an automatic update feature? What fields in the certificate should be checked? Requirements being: (1) check signature is valid, (2) check it's my signature, (3) old clients can still update when my certificate expires and I get a new one. Here's some background information / ideas from my research: I believe this could be broken into two steps: Verify that the signature is valid. I believe this should be easy using WinVerifyTrust as outlined in http://msdn.microsoft.com/en-us/library/aa382384(VS.85).aspx - I don't expect problems here. Verify that the signature corresponds to our company, and not another company. This seems to be a more difficult question to answer: One possibility is to check some of the strings in the signature. Could be obtained via code at MS KB article #323809, but this article doesn't make recommendations on what fields should be checked for this type of application (or any other, for that matter). Question #1072540 also illustrates how to get some certificate info, but again doesn't recommend what fields to actually check. My concern is that the strings might not be the best check: what if another person is able to obtain a certificate with the same name, for example? Or if there's a valid reason for us to change the strings in the future? The person at question #2008519 has a very similar requirement. His need for a "TrustedByUs" function is identical to mine. However, he goes about doing the check by comparing public keys. While this would work in the short-term, it seems like it won't work for an automatic update feature. This is because code signing certificates are only valid for 2 - 3 years max. Therefore, in the future, when we buy a new certificate in 2 years, the old clients wouldn't be able to update any more due to the change in public key.

  • Set maven to use archiva repositories WITHOUT using activeByDefault?

    - by Sam Levin
    I am very close to finally having a working setup with archiva and maven. The last thing that's really boggling me, is how to set up my internal and snapshot repositories - without using a profile which contains activeByDefault set to true. I am using a SUPER super pom - a company-wide pom which contains distributionManagement information for releases. I was thinking that I could specify the repositories in this pom, and configure the authentication settings in settings.xml? Can I use repositories tag without a profile? There should be no "profile" for my internal and snapshot repositories, as they will never change... What I'm trying to steer clear from, is using a "default" profile, which is active all the time. I hear activeByDefault is NOT a best practice and I don't intend to use it. With that said, how should I go about doing this? My internal repo is a mirror of the maven central repo, so I would like to lock down my developers to ONLY use our internal artifact server. Remember - I do NOT want a profile with activeByDefault set to true. I cannot stress this enough! Should I use Maven mirrors? Should I "add" additional repositories? If I take the repositories tag instead of the mirrors tag, will maven force builds to use ONLY my archiva settings, instead of the default maven central? Or is what I seek to accomplish able to be done using only the mirrors tag in maven? I know how to configure repo credentials when using repositories tag, but not with mirrors. How is this done? Is providing credentials for anything in mirrors tags the same as for anything in repositories tags? Am I missing something obvious? I've had it up to here with getting things up and running using maven. I know it will be worthwhile in the end, but it is surely causing me a ton of aggravation and resources seem to be sparse. Either that, or people are content using it however they please without regard to best-practices. Thank you

  • A couple questions using fwrite/fread with data structures

    - by Nazgulled
    Hi, I'm using fwrite() and fread() for the first time to write some data structures to disk and I have a couple of questions about best practices and proper ways of doing things. What I'm writing to disk (so I can later read it back) is all user profiles inserted in a Graph structure. Each graph vertex is of the following type:

        typedef struct sUserProfile {
            char       name[NAME_SZ];
            char       address[ADDRESS_SZ];
            int        socialNumber;
            char       password[PASSWORD_SZ];
            HashTable *mailbox;
            short      msgCount;
        } UserProfile;

    And this is how I'm currently writing all the profiles to disk:

        void ioWriteNetworkState(SocialNetwork *social) {
            Vertex *currPtr = social->usersNetwork->vertices;
            UserProfile *user;
            FILE *fp = fopen("save/profiles.dat", "w");

            if(!fp) {
                perror("fopen");
                exit(EXIT_FAILURE);
            }

            fwrite(&(social->usersCount), sizeof(int), 1, fp);

            while(currPtr) {
                user = (UserProfile*)currPtr->value;

                fwrite(&(user->socialNumber), sizeof(int), 1, fp);
                fwrite(user->name, sizeof(char)*strlen(user->name), 1, fp);
                fwrite(user->address, sizeof(char)*strlen(user->address), 1, fp);
                fwrite(user->password, sizeof(char)*strlen(user->password), 1, fp);
                fwrite(&(user->msgCount), sizeof(short), 1, fp);

                break;

                currPtr = currPtr->next;
            }

            fclose(fp);
        }

    Notes: The first fwrite() you see will write the total user count in the graph so I know how much data I need to read back. The break is there for testing purposes; there are thousands of users and I'm still experimenting with the code.
    My questions:
    - After reading this I decided to use fwrite() on each element instead of writing the whole structure. I also avoid writing the pointer to the mailbox as I don't need to save that pointer. So, is this the way to go? Multiple fwrite()'s instead of a global one for the whole structure? Isn't that slower?
    - How do I read back this content? I know I have to use fread() but I don't know the size of the strings, because I used strlen() to write them. I could write the output of strlen() before writing the string, but is there any better way without extra writes?
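
    The "write the output of strlen() before writing the string" idea raised in the last question is the usual length-prefixed record layout: write the length first, then the bytes, so the reader knows exactly how much to read back. A minimal sketch of the idea in Python (illustrative only; in C the same layout is one extra fwrite of the length before each string):

        # Length-prefixed strings: the struct module is used here only to show
        # the record layout (4-byte little-endian length, then the raw bytes).
        import struct

        def write_string(fp, s):
            data = s.encode("utf-8")
            fp.write(struct.pack("<I", len(data)))   # length prefix
            fp.write(data)

        def read_string(fp):
            (length,) = struct.unpack("<I", fp.read(4))
            return fp.read(length).decode("utf-8")

        with open("profiles.dat", "wb") as fp:
            write_string(fp, "Joe User")
            write_string(fp, "12 Example Street")

        with open("profiles.dat", "rb") as fp:
            print(read_string(fp))   # Joe User
            print(read_string(fp))   # 12 Example Street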

  • Using JUnit as an acceptance test framework

    - by Chris Knight
    OK, so I work for a company who has openly adopted agile practices for development in recent years. Our unit tests and code quality are improving. One area we still are working on is to find what works best for us in the automated acceptance test arena. We want to take our well formed user stories and use these to drive the code in a test driven manner. This will also give us acceptance level tests for each user story which we can then automate. To date, we've tried Fit, Fitnesse and Selenium. Each have their advantages, but we've also had real issues with them as well. With Fit and Fitnesse, we can't help but feel they overcomplicate things and we've had many technical issues using them. The business haven't fully bought in these tools and aren't particularly keen on maintaining the scripts all the time (and aren't big fans of the table style). Selenium is really good, but slow and relies on real time data and resources. One approach we are now considering is the use of the JUnit framework to provide similiar functionality. Rather than testing just a small unit of work using JUnit, why not use it to write a test (using the JUnit framework) to cover an acceptance level swath of the application? I.e. take a new story ("As a user I would like to see basic details of my policy...") and write a test in JUnit which starts executing application code at the point of entry for the policy details link but covers all code and logic down to the stubbed data access layer and back to the point of forwarding to the next page in the application, asserting on what data the user should see on that page. This seems to me to have the following advantages: Simplicity (no additional frameworks required) Zero effort to integrate with our Continuous Integration build server (since it already handles our JUnit tests) Full skillset already present in the team (its just a JUnit test after all) And the downsides being: Less customer involvement (though they are heavily involved in writing the user stories in the first place from which the acceptance tests will be written) Perhaps more difficult to understand (or make understood) the user story and acceptance criteria in a JUnit class verses a freetext specification ala Fit or Fitnesse So, my question is really, have you ever tried this method? Ever considered it? What are your thoughts? What do you like and dislike about this approach? Finally, please only mention alternative frameworks if you can say why you like or dislike them more than this approach.

  • Override variables while testing a standalone Perl script

    - by BrianH
    There is a Perl script in our environment that I now need to maintain. It is full of bad practices, including using (and re-using) global variables throughout the script. Before I start making changes to the script, I was going to try to write some test scripts so I can have a good regression base. To do this, I was going to use a method described on this page. I was starting by writing tests for a single subroutine. I put this line somewhat near the top of the script I am testing: return 1 if ( caller() ); That way, in my test script, I can require 'script_to_test.pl'; and it won't execute the whole script. The first subroutine I was going to test makes a lot of use of global variables that are set throughout the script. My thought was to try to override these variables in my test script, something like this: require_ok('script_to_test.pl'); $var_from_other_script = 'Override Value'; ok( sub_from_other_script() ); Unfortunately (for me), the script I am testing has a massive "my" block at the top, where it declares all variables used in the script. This prevents my test script from seeing/changing the variables in the script I'm running tests against. I've played with Exporter, Test::Mock..., and some other modules, but it looks like if I want to be able to change any variables I am going to have to modify the other script in some fashion. My goal is to not change the other script, but to get some good tests running so when I do start changing the other script, I can make sure I didn't break anything. The script is about 10,000 lines (3,000 of them in the main block), so I'm afraid that if I start changing things, I will affect other parts of the code, so having a good test suite would help. Is this possible? Can a calling script modify variables in another script declared with "my"? And please don't jump in with answers like, "Just re-write the script from scratch", etc. That may be the best solution, but it doesn't answer my question, and we don't have the time/resources for a re-write.

  • What are the benefits of using ORM over XML Serialization/Deserialization?

    - by Tequila Jinx
    I've been reading about NHibernate and Microsoft's Entity Framework to perform Object Relational Mapping against my data access layer. I'm interested in the benefits of having an established framework to perform ORM, but I'm curious as to the performance costs of using it against standard XML Serialization and Deserialization. Right now, I develop stored procedures in Oracle and SQL Server that use XML Types for either input or output parameters and return or shred XML depending on need. I use a custom database command object that uses generics to deserialize the XML results into a specified serializable class. By using a combination of generics, XML (de)serialization and Microsoft's DAAB, I've got a process that's fairly simple to develop against regardless of the data source. Moreover, since I exclusively use stored procedures to perform database operations, I'm mostly protected from changes in the data structure. Here's an over-simplified example of what I've been doing.

        static void main() {
            testXmlClass test = new test(1);
            test.Name = "Foo";
            test.Save();
        }

        // Example Serializable Class ------------------------------------------------
        [XmlRootAttribute("test")]
        class testXmlClass() {
            [XmlElement(Name="id")]
            public int ID {set; get;}
            [XmlElement(Name="name")]
            public string Name {set; get;}

            //create an instance of the class loaded with data.
            public testXmlClass(int id) {
                GenericDBProvider db = new GenericDBProvider();
                this = db.ExecuteSerializable("myGetByIDProcedure");
            }

            //save the class to the database...
            public Save() {
                GenericDBProvider db = new GenericDBProvider();
                db.AddInParameter("myInputParameter", DbType.XML, this);
                db.ExecuteSerializableNonQuery("mySaveProcedure");
            }
        }

        // Database Handler ----------------------------------------------------------
        class GenericDBProvider {
            public T ExecuteSerializable<T>(string commandText) where T : class {
                XmlSerializer xml = new XmlSerializer(typeof(T));
                // connection and command code is assumed for the purposes of this example.
                // the final results basically just come down to...
                return xml.Deserialize(commandResults) as T;
            }

            public void ExecuteSerializableNonQuery(string commandText) {
                // once again, connection and command code is assumed...
                // basically, just execute the command along with the specified
                // parameters which have been serialized.
            }

            public void AddInParameter(string name, DbType type, object value) {
                StringWriter w = new StringWriter();
                XmlSerializer x = new XmlSerializer(value.GetType());
                //handle serialization for serializable classes.
                if (type == DbType.Xml && (value.GetType() != typeof(System.String))) {
                    x.Serialize(w, value);
                    w.Close();
                    // store serialized object in a DbParameterCollection accessible
                    // to my other methods.
                } else {
                    //handle all other parameter types
                }
            }
        }

    I'm starting a new project which will rely heavily on database operations. I'm very curious to know whether my current practices will be sustainable in a high-traffic situation, and whether or not I should consider switching to NHibernate or Microsoft's Entity Framework to perform what essentially seems to boil down to the same thing I'm currently doing. I appreciate any advice you may have.
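
    For contrast with the hand-rolled XML layer above, a declarative ORM keeps the object-to-table mapping in metadata and generates the SQL for you. A minimal sketch with SQLAlchemy in Python (purely illustrative, assuming SQLAlchemy 1.4+; NHibernate and Entity Framework express the same idea for .NET):

        # Declarative mapping: the ORM generates SQL and materializes objects,
        # replacing the hand-written (de)serialization and command plumbing.
        from sqlalchemy import Column, Integer, String, create_engine
        from sqlalchemy.orm import declarative_base, Session

        Base = declarative_base()

        class Test(Base):
            __tablename__ = "test"
            id = Column(Integer, primary_key=True)
            name = Column(String(50))

        engine = create_engine("sqlite:///:memory:")
        Base.metadata.create_all(engine)

        with Session(engine) as session:
            session.add(Test(name="Foo"))
            session.commit()
            print(session.get(Test, 1).name)   # Foo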

  • How can I model the data in a multi-language data editor in WPF with MVVM?

    - by Patrick Szalapski
    Are there any good practices to follow when designing a model/ViewModel to represent data in an app that will view/edit that data in multiple languages? Our top-level class--let's call it Course--contains several collection properties, say Books and TopicsCovered, which each might have a collection property among its data. For example, the data needs to represent course1.Books.First().Title in different languages, and course1.TopicsCovered.First().Name in different languages. We want a app that can edit any of the data for one given course in any of the available languages--as well as edit non-language-specific data, perhaps the Author of a Book (i.e. course1.Books.First().Author). We are having trouble figuring out how best to set up the model to enable binding in the XAML view. For example, do we replace (in the single-language model) each String with a collection of LanguageSpecificString instances? So to get the author in the current language: course1.Books.First().Author.Where(Function(a) a.Language = CurrentLanguage).SingleOrDefault If we do that, we cannot easily bind to any value in one given language, only to the collection of language values such as in an ItemsControl. <TextBox Text={Binding Author.???} /> <!-- no way to bind to the current language author --> Do we replace the top-level Course class with a collection of language-specific Courses? So to get the author in the current language: course1.GetLanguage(CurrentLanguage).Books.First.Author If we do that, we can only easily work with one language at a time; we might want a view to show one language and let the user edit the other. <TextBox Text={Binding Author} /> <!-- good --> <TextBlock Text={Binding ??? } /> <!-- no way to bind to the other language author --> Also, that has the disadvantage of not representing language-neutral data as such; every property (such as Author) would seem to be in multiple languages. Even non-string properties would be in multiple languages. Is there an option in between those two? Is there another way that we aren't thinking of? I realize this is somewhat vague, but it would seem to be a somewhat common problem to design for. Note: This is not a question about providing a multilingual UI, but rather about actually editing multi-language data in a flexible way.
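
    A sketch of the first option mentioned above (each translatable string becomes a small value object holding one value per language, while language-neutral fields such as Author stay plain strings), in Python purely to make the data shape concrete; the names are hypothetical, and in the WPF model the lookup would be exposed as a property or indexer the view can bind to:

        # One value per language for translatable fields; language-neutral fields stay plain.
        class LocalizedString:
            def __init__(self, **values):            # e.g. en="...", fr="..."
                self.values = values

            def get(self, language):
                return self.values.get(language, "")

        class Book:
            def __init__(self, title, author):
                self.title = title                    # LocalizedString
                self.author = author                  # language-neutral

        book = Book(LocalizedString(en="The Title", fr="Le Titre"), "Jane Doe")
        current_language = "fr"
        print(book.title.get(current_language))      # Le Titre
        print(book.author)                            # Jane Doe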

  • Observer pattern used with decorator pattern

    - by icelated
    I want to make a program that does an order entry system for beverages (I will probably do description and cost). I want to use the Decorator pattern and the Observer pattern. I made a UML drawing and saved it as a pic for easy viewing. This site won't let me upload it as a Word doc, so I have to upload a pic; I hope it's easily viewable. I need to know if I am doing the UML / design patterns correctly before moving on to the coding part.
    Beverage is my abstract component class. Espresso, houseblend and darkroast are my concrete subject classes. I also have a condiment decorator class; would milk, mocha, soy and whip be my observers, because they would be interested in data changes to cost? Now, would the espresso, houseblend, etc. be my SUBJECT and the condiments be my observers? My theory is that cost is what changes and that the condiments need to know about the changes. So, subject = espresso, houseblend, darkroast, etc. (they hold cost()) and observer = milk, mocha, soy, whip (they hold cost())? The espresso, houseblend, etc. would be the concrete components and the milk, mocha, soy, whip would be the decorators! So, following good software engineering practices ("design to an interface, not an implementation" or "identify the things that change from those that don't"), would I need a costbehavior interface?
    If you look at the UML you will see where I am going with this and can see whether I am implementing the Observer + Decorator patterns correctly; I think the decorator is correct. Since the pic is not very viewable, I will detail the classes here:
    - Beverage class (register observer, remove observer, notify observer, description)
    - Concrete beverage classes: espresso, houseblend, darkroast, decaf (cost, getdescription, setcost, costchanged)
    - Interface observer class (update) // cost?
    - Interface costbehavior class (cost) // since this changes?
    - Condiment decorator class (getdescription)
    - Concrete classes linked to the 2 interfaces and the decorator: milk, mocha, soy, whip (cost, getdescription, update); these are my decorator/wrapper classes.
    Thank you. Is there a way to make this picture bigger?
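
    For reference, the decorator half of this design is usually shaped like the sketch below (a minimal Python illustration; the class names mirror the question, and whether an observer is also needed for cost changes is left as the open question above):

        # Decorator side only: condiments wrap a beverage and add to its cost.
        # Prices are in cents to keep the arithmetic exact.
        class Beverage:
            def cost(self): raise NotImplementedError
            def description(self): raise NotImplementedError

        class Espresso(Beverage):
            def cost(self): return 199
            def description(self): return "Espresso"

        class CondimentDecorator(Beverage):
            def __init__(self, beverage):
                self.beverage = beverage

        class Mocha(CondimentDecorator):
            def cost(self): return self.beverage.cost() + 20
            def description(self): return self.beverage.description() + ", Mocha"

        class Whip(CondimentDecorator):
            def cost(self): return self.beverage.cost() + 10
            def description(self): return self.beverage.description() + ", Whip"

        order = Whip(Mocha(Espresso()))
        print(order.description(), order.cost())   # Espresso, Mocha, Whip 229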

  • How do I correct "Commit Failed. File xxx is out of date. xxx path not found."

    - by Ryan Taylor
    I have recently run into a particularly sticky issue regarding committing the result of a merge in subversion. Our Subversion server is @ 1.5.0 and my TortoiseSVN client is now @ 1.6.1. I am trying to merge a feature branch back into my trunk. The merge appears to work okay; however, the commit fails with the following error message. Commit failed (details follow): File 'flex/src/com/penbay/invision/portal/services/http/soap/ReportServices/GetAllBldgsParamsByRegionBySiteResultEvent.as' is out of date '/svn/ibis/!svn/wrk/531d459d-80fa-ea46-bfb4-940d79ee6d2e/visualization/trunk/source/flex/src/com/penbay/invision/portal/services/http/soap/ReportServices/GetAllBldgsParamsByRegionBySiteResultEvent.as' path not found You have to update your working copy first. My working trunk is up to date. I have even checked out a new one into a different folder to make sure there wasn't any local cruft messing with the merge. I have done some more research into this and I think part of the problem is user error. I think our problems are: We had some developers committing work with a subversion client before 1.5 and some after. I believe this has the potential to corrupt the merge info. In other branches we have performed partial merges. That is, we did not always perform merges at the root of the branch. This was to facilitate updating Flex and .NET efforts within the same branch. We performed cyclic (reflexive) merges on our branch. This was done because we had multiple parallel branches and we wanted to periodically update our branch with the latest code in trunk. All of these things are explicitly not recommended by the Subversion book/team. We have learned our lesson and now know the best practices. However, we first need to merge and commit our latest branch. What it the best way to correct the problems we are encountering? Would deleting all the merge info in the trunk and branch be a viable solution? No. I have done this but it does not resolve the error that I am getting above.

  • help merging perl code routines together for file processing

    - by jdamae
    I need some Perl help in putting these (2) processes/pieces of code to work together. I was able to get them working individually to test, but I need help bringing them together, especially with using the loop constructs. I'm not sure if I should go with foreach... anyway, the code is below. Also, any best practices would be great too, as I'm learning this language. Thanks for your help.
    Here's the process flow I am looking for:
    - read a directory
    - look for a particular file
    - use the file name to strip out some key information to create a newly processed file
    - process the input file
    - create the newly processed file for each input file read (if I read in 10, I create 10 new files)
    Sample Recs:

        col1,col2,col3,col4,col5
        [email protected],[email protected],8,2009-09-24 21:00:46,1
        [email protected],[email protected],16,2007-08-18 22:53:12,33
        [email protected],[email protected],16,2007-08-18 23:41:23,33

    Here's my test code.
    Target Filetype: `/backups/test/foo101.name.aue-foo_p002.20110124.csv`
    Part 1:

        my $target_dir = "/backups/test/";
        opendir my $dh, $target_dir or die "can't opendir $target_dir: $!";
        while (defined(my $file = readdir($dh))) {
            next if ($file =~ /^\.+$/);
            #Get filename attributes
            if ($file =~ /^foo(\d{3})\.name\.(\w{3})-foo_p(\d{1,4})\.\d+.csv$/) {
                print "$1\n";
                print "$2\n";
                print "$3\n";
            }
            print "$file\n";
        }

    Part 2:

        use strict;
        use Digest::MD5 qw(md5_hex);

        #Create new file
        open (NEWFILE, ">/backups/processed/foo$1.name.$2-foo_p$3.out") || die "cannot create file";

        my $data = '';
        my $line1 = <>;
        chomp $line1;

        my @heading = split /,/, $line1;
        my ($sep1, $sep2, $eorec) = ( "^A", "^E", "^D");

        while (<>) {
            my $digest = md5_hex($data);
            chomp;
            my (@values) = split /,/;
            my $extra = "__mykey__$sep1$digest$sep2" ;
            $extra .= "$heading[$_]$sep1$values[$_]$sep2" for (0..scalar(@values));
            $data .= "$extra$eorec";
            print NEWFILE "$data";
        }
        #print $data;
        close (NEWFILE);
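
    The overall control flow being asked for (scan a directory, pull the key fields out of each matching file name, and write one processed output file per input) in a short Python sketch, purely to illustrate the shape of the loop; the real solution would stay in Perl, and the paths and regex simply mirror the question:

        # One loop: for each matching input file, derive an output path from the
        # file name and write a processed copy of its records.
        import os
        import re

        target_dir = "/backups/test"
        pattern = re.compile(r"^foo(\d{3})\.name\.(\w{3})-foo_p(\d{1,4})\.\d+\.csv$")

        for name in os.listdir(target_dir):
            m = pattern.match(name)
            if not m:
                continue
            num, code, part = m.groups()
            out_path = f"/backups/processed/foo{num}.name.{code}-foo_p{part}.out"
            with open(os.path.join(target_dir, name)) as src, open(out_path, "w") as dst:
                for line in src:
                    dst.write(line)   # per-record processing (the md5/key logic) goes here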

  • bundler/capistrano is not installing gems with correct ruby version

    - by Douglas
    I'm trying to deploy my first app on a server with Capistrano, and I'm a bit lost with managing gemsets and the Ruby version. These are my (server and workstation) versions:
    - Rails 3.2.8
    - RVM 1.16.17
    - Gem 1.8.24
    - Bundler 1.2.1
    - pg gem 0.14.1
    My gemsets are:

        Gemsets for ruby-1.9.3-p194 (found in /usr/local/rvm/gems/ruby-1.9.3-p194)
        (default)
        global
        = rail3dev20120606

    I set the default gemset with:

        rvm use 1.9.3-p194@rail3dev20120606 --default --passenger

    When I run a:

        cap bundle:install

    the task ends with success, but when I do a:

        gem list

    there are many missing gems, even though they are present in my Gemfile. When I go to check my gems in /var/www/opf/shared/bundle/ruby/ I find a folder called 1.9.1, and in /var/www/opf/shared/bundle/ruby/1.9.1/gems/ I can find all of my needed gems (as specified in the Gemfile). I'm sure there is a problem with the Ruby version, but how do I solve this? At the moment, if I run any rake command, I get a Ruby crash ([Bug] Segmentation fault) as it tries to access the db using the postgresql_adapter. I think, since many gems are missing, there must be some gem dependencies that are not satisfied, and maybe a gem is using an incompatible Ruby version 1.9.1 though it expects 1.9.3. I think the issue is around managing Ruby versions and gems; I'm certainly mixing up gemsets and my Capistrano deployment, and I'm missing experience and info. Could anybody advise me how to handle this on the server? What are the best practices? How am I supposed to update my Ruby version? With Capistrano's deploy.rb? Manually? With or without RVM? I saw that a new version of Ruby, 1.9.3-p327, has just been released. Should I use a gemset or not? What about the :rvm_ruby_string in my deploy.rb: is it correctly spelled, or should I remove the p194 part? Should I remove the :rvm_ruby_string? Keep it? Use a .rvmrc file? I'm really lost and any kind help would be welcome. This is my config/deploy.rb in any case:

        require 'bundler/capistrano'
        require File.join(File.dirname(__FILE__), 'deploy') + '/capistrano_database'

        set :rvm_type, :system
        set :rvm_ruby_string, 'ruby-1.9.3-p194@rail3dev20120606'
        require 'rvm/capistrano'

        set :application, 'opf'
        set :deploy_to, '/var/www/opf'
        set :rails_env, 'production'
        set :user, 'the_user'
        set :use_sudo, false
        set :group_writable, false

        set :scm, :git
        set :repository, '[email protected]:user/opf.git'
        set :branch, 'master'

        default_run_options[:pty] = true
        set :deploy_via, :remote_cache

        server '192.168.5.200', :web, :app, :db, :primary => true

        # If you are using Passenger mod_rails uncomment this:
        namespace :deploy do
          task :start do ; end
          task :stop do ; end
          task :restart, :roles => :app, :except => { :no_release => true } do
            run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}"
          end
        end

    Thanks for any help

  • Browser displays page without styles for a short moment (visual glitch)

    - by Pierre
    I have observed that, very infrequently, Internet Explorer (7 or 8, it does not matter) displays our web pages (www.epsitec.ch) for a short time without applying the CSS. The layout appears completely broken, with everything displayed sequentially from top to bottom. When the page has finished loading, everything finally gets displayed properly. Our web pages do not use any fancy scripting, just two JavaScript inclusions for QuantCast and Google Analytics, done at the end of the page. By the way, we already had the issue before adding the QuantCast script. The CSS gets linked in the <head> section:

        <head>
          <title>Crésus Comptabilité</title>
          <link rel="icon" href="/favicon.ico" type="image/x-icon" />
          <link rel="shortcut icon" href="http://www.epsitec.ch/favicon.ico" />
          <link href="../../style.css" rel="stylesheet" type="text/css" />
          ...
        </head>

    and then follows static HTML up to the final chunk which includes the JavaScript:

        ...
        <div id="account">
          <a class="deselect" href="/account/login">Identifiez-vous</a>
          <script type="text/javascript">
            _qoptions={qacct:"..."};
          </script>
          <script type="text/javascript" src="http://edge.quantserve.com/quant.js">
          </script>
          <noscript>
            <img src="..." style="display: none;" border="0" height="1" width="1"/>
          </noscript>
        </div>
        <div id="contact">
          <a href="/support/contact">Contactez-nous</a>
        </div>
        <div id="ending"><!-- --></div>
        </div>
        <script type="text/javascript">
          ...
        </script>
        <script type="text/javascript">
          var pageTracker = _gat._getTracker("...");
          pageTracker._initData();
          pageTracker._trackPageview();
        </script>
        </body>

    As this is a very short visual glitch, I have no idea what provokes it. Worse, I cannot reproduce it and it appears only on rare occasions. How can I further investigate the cause of the glitch? Are there any best practices I should be aware of?

  • Consolidating coding styles: Funcs, private method, single method classes

    - by jdoig
    Hi all, We currently have 3 devs with, some, conflicting styles and I'm looking for a way to bring peace to the kingdom... The Coders: Foo 1: Likes to use Func's & Action's inside public methods. He uses actions to alias off lengthy method calls and Func's to perform simple tasks that can be expressed in 1 or 2 lines and will be used frequently through out the code Pros: The main body of his code is succinct and very readable, often with only one or 2 public methods per class and rarely any private methods. Cons: The start of methods contain blocks of lambda rich code that other developers don't enjoy reading; and, on occasion, can contain higher order functions that other dev's REALLY don't like reading. Foo 2: Likes to create a private method for (almost) everything the public method will have to do . Pros: Public methods remain small and readable (to all developers). Cons: Private methods are numerous. With private methods that call into other private methods, that call into... etc, etc. Making code hard to navigate. Foo 3: Likes to create a public class with a, single, public method for every, non-trivial, task that needs performing, then dependency inject them into other objects. Pros: Easily testable, easy to understand (one object, one responsibility). Cons: project gets littered by classes, opening multiple class files to understand what code does makes navigation awkward. It would be great to take the best of all these techniques... Foo-1 Has really nice, readable (almost dsl-like) code... for the most part, except for all the Action and Func lambda shenanigans bulked together at the start of a method. Foo-3 Has highly testable and extensible code that just feels a bit "belt-&-braces" for some solutions and has some code-navigation niggles (constantly hitting F12 in VS and opening 5 other .cs files to find out what a single method does). And Foo-2... Well I'm not sure I like anything about the one-huge .cs file with 2 public methods and 12 private ones, except for the fact it's easier for juniors to dig into. I admit I grossly over-simplified the explanations of those coding styles; but if any one knows of any patterns, practices or diplomatic-manoeuvres that can help unite our three developers (without just telling any of them to just "stop it!") that would be great. From a feasibility standpoint : Foo-1's style meets with the most resistance due to some developers finding lambda and/or Func's hard to read. Foo-2's style meets with a less resistance as it's just so easy to fall into. Foo-3's style requires the most forward thinking and is difficult to enforce when time is short. Any ideas on some coding styles or conventions that can make this work?

  • Design Technique: How to design a complex system for processing orders, products and units.

    - by Shyam
    Hi, Programming is fun: I learned that by trying out simple challenges, reading up on some books and following some tutorials. I am able to grasp the concepts of writing with OO (I do so in Ruby), and write a bit of code myself. What bugs me, though, is that I feel I'm re-inventing the wheel: I haven't followed an education or found a book (a free one, that is) that explains the why's instead of the how's, and I've learned from the A-Team that it is the plan that makes it come together. So, armed with my nuby Ruby skills, I decided I wanted to program a virtual store. I figured out the following. My virtual store will have:
    - Products and Services
    - Inventories
    - Orders and Shipping
    - Customers
    Now this isn't complex at all. With the help of some cool tools (CMapTools), I drew out some concepts, but quickly enough (thanks to my inferior experience in designing), my design started to bite me. My very first product line was virtual "laptops". So, I created a class (Ruby):

        class Product
          attr_accessor :name, :price

          def initialize(name, price)
            @name = name
            @price = price
          end
        end

    which can be instantiated by doing (IRb):

        x = Product.new("Banana Pro", 250)

    Since I want my virtual customers to be able to purchase more than one product, or various types, I figured out I needed some kind of "Order" mechanism:

        class Order
          def initialize(order_no)
            @order_no = order_no
            @line_items = []
          end

          def add_product(myproduct)
            @line_items << myproduct
          end

          def show_order()
            puts @order_no
            @line_items.each do |x|
              puts x.name.to_s + "\t" + x.price.to_s
            end
          end
        end

    which can be instantiated by doing (IRb):

        z = Order.new(1234)
        z.add_product(x)
        z.show_order

    Splendid, I now have a very simple ordering system that allows me to add products to an order. But here comes my real question. What if I have three models of my product (economy, business, showoff)? Or what if my products are composed out of separate units (bigger screen, nicer keyboard, different OS)? Surely I could make them three separate products, or add complexity to my product class, but what I am looking for are best practices to design a flexible product object that can be used in the real world, to facilitate a complex system. My apologies if my grammar and my spelling contain errors, as English is not my first language and I took the time to check as far as I could understand and translate properly! Thank you for your answers, comments and feedback!
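
    One common way to handle the "three models / optional units" question without multiplying Product subclasses is composition: a product holds a collection of option objects and derives its price from them. A minimal sketch (in Python purely for illustration; the same shape translates directly to Ruby, and the names and prices are made up):

        # Composition instead of one class per model/option combination.
        class Option:
            def __init__(self, name, price):
                self.name = name
                self.price = price

        class Product:
            def __init__(self, name, base_price, options=None):
                self.name = name
                self.base_price = base_price
                self.options = list(options or [])

            def price(self):
                return self.base_price + sum(o.price for o in self.options)

        laptop = Product("Banana Pro", 250, [Option("Bigger screen", 40),
                                             Option("Nicer keyboard", 15)])
        print(laptop.name, laptop.price())   # Banana Pro 305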

  • Objective-c design advice for use of different data sources, swapping between test and live

    - by user200341
    I'm in the process of designing an application that is part of a larger piece of work, and it depends on other people building an API that the app can use to retrieve data. While I was thinking about how to set up this project and design the architecture around it, something occurred to me, and I'm sure many people have been in similar situations. Since my work depends on other people completing their tasks, and on a test server, this slows work down at my end. So the question is: what's the best practice for creating test repositories and classes, implementing them, and not having to depend on altering several places in the code to swap between the test classes and the actual repositories / proper API calls?
    Contemplate the following scenario:

        GetDataFromApiCommand *getDataCommand = [[GetDataFromApiCommand alloc]init];
        getDataCommand.delegate = self;
        [getDataCommand getData];

    Once the data is available via the API, "GetDataFromApiCommand" could use the actual API, but until then a set of mock data could be returned upon the call of [getDataCommand getData]. There might be multiple instances of this, in various places in the code, so replacing all of them wherever they are is a slow and painful process which inevitably leads to one or two being overlooked. In strongly typed languages we could use dependency injection and just alter one place. In Objective-C a factory pattern could be implemented, but is that the best route to go for this?

        GetDataFromApiCommand *getDataCommand = [GetDataFromApiCommandFactory buildGetDataFromApiCommand];
        getDataCommand.delegate = self;
        [getDataCommand getData];

    What are the best practices to achieve this result? This would be most useful, even when the actual API is available, to run tests or work off-line; the ApiCommands would not necessarily have to be replaced permanently, but there should be the option to select "Do I want to use TestApiCommand or ApiCommand". It is more interesting to have the option to switch between "all commands are test" and "all commands use the live API", rather than selecting them one by one; however, that would also be useful for testing one or two actual API commands while mixing them with test data.
    EDIT: The way I have chosen to go with this is to use the factory pattern. I set up the factory as follows:

        @implementation ApiCommandFactory

        + (ApiCommand *)newApiCommand {
            // return [[ApiCommand alloc]init];
            return [[ApiCommandMock alloc]init];
        }

        @end

    And anywhere I want to use the ApiCommand class:

        GetDataFromApiCommand *getDataCommand = [ApiCommandFactory newApiCommand];

    When the actual API call is required, the comment can be removed and the mock can be commented out. Using new in the message name implies that whoever uses the factory to get an object is responsible for releasing it (since we want to avoid autorelease on the iPhone). If additional parameters are required, the factory needs to take these into consideration, i.e.:

        [ApiCommandFactory newSecondApiCommand:@"param1"];

    This will work quite well with repositories as well.

  • How can I refactor this JavaScript code to avoid making functions in a loop?

    - by Bungle
    I wrote the following code for a project that I'm working on:

        var clicky_tracking = [
            ['related-searches', 'Related Searches'],
            ['related-stories', 'Related Stories'],
            ['more-videos', 'More Videos'],
            ['web-headlines', 'Publication']
        ];

        for (var x = 0, length_x = clicky_tracking.length; x < length_x; x++) {
            links = document.getElementById(clicky_tracking[x][0]).getElementsByTagName('a');
            for (var y = 0, length_y = links.length; y < length_y; y++) {
                links[y].onclick = (function(name, url) {
                    return function() {
                        clicky.log(url, name, 'outbound');
                    };
                }(clicky_tracking[x][1], links[y].href));
            }
        }

    What I'm trying to do is:
    - define a two-dimensional array, with each of the inner arrays containing two elements: an id attribute value (e.g., "related-searches") and a corresponding description (e.g., "Related Searches");
    - for each of the inner arrays, find the element in the document with the corresponding id attribute, and then gather a collection of all <a> elements (hyperlinks) within it;
    - loop through that collection and attach an onclick handler to each hyperlink, which should call clicky.log, passing in as parameters the description that corresponds to the id (e.g., "Related Searches" for the id "related-searches") and the value of the href attribute for the <a> element that was clicked.
    Hopefully that wasn't thoroughly confusing! The code may be more self-explanatory than that. I believe that what I've implemented here is a closure, but JSLint complains: http://img.skitch.com/20100526-k1trfr6tpj64iamm8r4jf5rbru.png
    So, my questions are:
    - How can I refactor this code to make JSLint agreeable? Or, better yet, is there a best-practices way to do this that I'm missing, regardless of what JSLint thinks?
    - Should I rely on event delegation instead? That is, attaching onclick event handlers to the document elements with the id attributes in my arrays, and then looking at event.target? I've done that once before and understand the theory, but I'm very hazy on the details, and would appreciate some guidance on what that would look like, assuming this is a viable approach.
    Thanks very much for any help!
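
    For context, JSLint's complaint is about creating a function inside a loop at all; the code above already binds name and url correctly through the immediately-invoked function. The usual remedy has the same shape in any language with closures: define the handler factory once, outside the loop, and call it from inside. A small Python illustration of the underlying late-binding issue and the factory fix (illustrative only, not the JavaScript rewrite itself):

        # Late binding: every handler sees the final value of x unless the value
        # is bound at the moment the handler is created.
        handlers = []
        for x in range(3):
            handlers.append(lambda: print(x))        # pitfall: all of these print 2

        def make_handler(value):                      # factory defined outside the loop
            return lambda: print(value)

        fixed = [make_handler(x) for x in range(3)]   # each handler binds its own value

        for h in handlers: h()   # 2 2 2
        for h in fixed: h()      # 0 1 2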

  • Parallel programming in C#

    - by Alxandr
    I'm interested in learning about parallel programming in C#.NET (not everything there is to know, but the basics and maybe some good practices), therefore I've decided to reprogram an old program of mine called ImageSyncer. ImageSyncer is a really simple program; all it does is scan through a folder and find all files ending with .jpg, then it calculates the new position of the files based on the date they were taken (parsing of xif-data, or whatever it's called). After a location has been generated, the program checks for any existing file at that location, and if one exists it looks at the last write time of both the file to copy and the file "in its way". If those are equal, the file is skipped. If not, an md5 checksum of both files is created and matched. If there is no match, the file to be copied is given a new location to be copied to (for instance, if it was to be copied to "C:\test.jpg" it's copied to "C:\test(1).jpg" instead). The result of this operation is populated into a queue of a struct type that contains two strings: the original file and the position to copy it to. Then that queue is iterated over until it is empty and the files are copied. In other words, there are 4 operations:
    1. Scan directory for jpegs
    2. Parse files for xif and generate copy-location
    3. Check for file existence and if needed generate new path
    4. Copy files
    And so I want to rewrite this program to make it parallel and able to perform several of the operations at the same time, and I was wondering what the best way to achieve that would be. I've come up with two different models, but neither one of them might be any good at all. The first one is to parallelize the 4 steps of the old program, so that when step one is to be executed it's done on several threads, and when the entirety of step 1 is finished, step 2 is begun. The other one (which I find more interesting because I have no idea how to do it) is to create a sort of worker-and-consumer model, so when a thread is finished with step 1 another one takes over and performs step 2 on that object (or something like that). But as said, I don't know if either of these is a good solution. Also, I don't know much about parallel programming at all. I know how to make a thread, and how to make it perform a function taking in an object as its only parameter, and I've also used the BackgroundWorker class on one occasion, but I'm not that familiar with any of them. Any input would be appreciated.
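
    The second model described above is the classic producer-consumer arrangement: one stage puts work items on a queue and a pool of workers takes them off independently. A minimal sketch of that shape in Python, for illustration only (in C# the same role would typically be played by a thread-safe queue such as BlockingCollection):

        # Producer-consumer: the producer stands in for the directory scan, the
        # workers stand in for the parse/check/copy steps done per file.
        import queue
        import threading

        work = queue.Queue()

        def producer():
            for path in ["a.jpg", "b.jpg", "c.jpg"]:   # would come from the folder scan
                work.put(path)
            for _ in range(2):                          # one sentinel per worker
                work.put(None)

        def worker():
            while True:
                path = work.get()
                if path is None:
                    break
                print("processing", path)               # parse EXIF, compute target, copy...

        threads = [threading.Thread(target=worker) for _ in range(2)]
        for t in threads:
            t.start()
        producer()
        for t in threads:
            t.join()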

  • Does the <script> tag position in HTML affects performance of the webpage?

    - by Rahul Joshi
    If the script tag is above or below the body in an HTML page, does it matter for the performance of a website? And what if it is used in between, like this:

        <body>
          ..blah..blah..
          <script language="JavaScript" src="JS_File_100_KiloBytes">
            function f1() {
              .. some logic reqd. for manipulating contents in a webpage
            }
          </script>
          ... some text here too ...
        </body>

    Or is this better?:

        <script language="JavaScript" src="JS_File_100_KiloBytes">
          function f1() {
            .. some logic reqd. for manipulating contents in a webpage
          }
        </script>
        <body>
          ..blah..blah..
          ..call above functions on some events like onclick, onfocus, etc..
        </body>

    Or this one?:

        <body>
          ..blah..blah..
          ..call above functions on some events like onclick, onfocus, etc..
          <script language="JavaScript" src="JS_File_100_KiloBytes">
            function f1() {
              .. some logic reqd. for manipulating contents in a webpage
            }
          </script>
        </body>

    Needless to say, everything is again inside the <html> tag! How does it affect the performance of the webpage while loading? Does it really? Which one is the best, either out of these 3 or some other which you know? And one more thing: I googled a bit on this, which led me to Best Practices for Speeding Up Your Web Site, and it suggests putting scripts at the bottom, but traditionally many people put them in the <head> tag, which is above the <body> tag. I know it's NOT a rule but many prefer it that way. If you don't believe it, just view the source of this page! And tell me what's the better style for best performance.
