Search Results

Search found 33477 results on 1340 pages for 'static vs non static'.

Page 268 of 1340

  • Draw Rectangle To All Dimensions of Image

    - by opiop65
    I have some rudimentary collision code:

        public class Collision {
            static boolean isColliding = false;
            static Rectangle player;
            static Rectangle female;

            public static void collision() {
                Rectangle player = Game.Playerbounds();
                Rectangle female = Game.Femalebounds();
                if (player.intersects(female)) {
                    isColliding = true;
                } else {
                    isColliding = false;
                }
            }
        }

    And this is the rectangle code:

        public static Rectangle Playerbounds() {
            return (new Rectangle(posX, posY, 25, 25));
        }

        public static Rectangle Femalebounds() {
            return (new Rectangle(femaleX, femaleY, 25, 25));
        }

    My InputHandling class:

        public static void movePlayer(GameContainer gc, int delta) {
            Input input = gc.getInput();
            if (input.isKeyDown(input.KEY_W)) {
                Game.posY -= walkSpeed * delta;
                walkUp = true;
                if (Collision.isColliding == true) {
                    Game.posY += walkSpeed * delta;
                }
            }
            if (input.isKeyDown(input.KEY_S)) {
                Game.posY += walkSpeed * delta;
                walkDown = true;
                if (Collision.isColliding == true) {
                    Game.posY -= walkSpeed * delta;
                }
            }
            if (input.isKeyDown(input.KEY_D)) {
                Game.posX += walkSpeed * delta;
                walkRight = true;
                if (Collision.isColliding == true) {
                    Game.posX -= walkSpeed * delta;
                }
            }
            if (input.isKeyDown(input.KEY_A)) {
                Game.posX -= walkSpeed * delta;
                walkLeft = true;
                if (Collision.isColliding == true) {
                    Game.posX += walkSpeed * delta;
                }
            }
        }

    The code works partially. Only the right and top sides of the images collide. How do I correct the rectangle so it collides on all sides? Thanks for any suggestions!
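
    A minimal sketch of one common fix, not the asker's code: instead of reacting to an isColliding flag that was set on the previous frame, test the proposed position before committing the move. The field names stand in for Game.posX/posY from the excerpt, wouldCollide is a hypothetical helper, and java.awt.Rectangle is used only for the intersection test.

        import java.awt.Rectangle;

        public class Movement {
            static float posX, posY;          // player position (stand-in for Game.posX/posY)
            static float femaleX, femaleY;    // other sprite's position
            static final int SIZE = 25;       // both sprites are 25x25 in the question

            static boolean wouldCollide(float newX, float newY) {
                Rectangle player = new Rectangle((int) newX, (int) newY, SIZE, SIZE);
                Rectangle female = new Rectangle((int) femaleX, (int) femaleY, SIZE, SIZE);
                return player.intersects(female);
            }

            // Check each axis separately so the player can still slide along the
            // blocking sprite instead of sticking to it.
            static void move(float dx, float dy) {
                if (!wouldCollide(posX + dx, posY)) posX += dx;
                if (!wouldCollide(posX, posY + dy)) posY += dy;
            }
        }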

    Read the article

  • Pre game loading time vs. in game loading time

    - by Keeper
    I'm developing a game that includes a random maze. There are some AI creatures lurking in the maze, and I want them to follow paths that match the maze's shape. There are two ways I could implement that: the first (which I used) is to calculate several patrol paths once the maze is created; the second is to calculate a path only when it is needed, i.e. when a creature starts patrolling it. My main concern is loading times. If I calculate many paths when the maze is created, the pre-game loading time is a bit long, so I thought about calculating them on demand. At the moment the game is not 'heavy', so calculating paths mid-game is not noticeable, but I'm afraid it will be once the game gets more complicated. Any suggestions, comments, or opinions would help. Edit: As of now, let p be the number of pre-calculated paths; a creature has a probability of 1/p of taking a new path (which means a path calculation) instead of an existing one. A creature does not start its patrol until the path is fully calculated, of course, so there's no need to worry about it getting killed in the process.
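
    A rough sketch of the compute-on-demand option with caching, not from the original post; PatrolPathPool and computePath are invented placeholders. A creature asks the pool for a path: with probability 1/p a fresh path is computed and cached, otherwise an existing one is reused, spreading the path-finding cost over the game instead of paying it all up front.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Random;

        class PatrolPathPool {
            private final List<List<int[]>> cache = new ArrayList<>();
            private final Random rng = new Random();

            // Called when a creature starts a patrol.
            List<int[]> acquire() {
                // Probability 1/p of computing a new path, where p = paths cached so far.
                if (cache.isEmpty() || rng.nextInt(cache.size()) == 0) {
                    List<int[]> fresh = computePath();   // the expensive maze search
                    cache.add(fresh);
                    return fresh;
                }
                return cache.get(rng.nextInt(cache.size()));
            }

            private List<int[]> computePath() {
                return new ArrayList<>();                // placeholder for A*/BFS over the maze
            }
        }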

    Read the article

  • OpenGL programming vs Blender Software, which is better for custom video creation?

    - by iammilind
    I am learning the OpenGL API bit by bit and am also developing my own C++ framework library to use it effectively. I recently came across Blender, which is used for graphics creation and itself uses OpenGL for rendering. For my part-time hobby of learning graphics, I just want to create short movie or video segments, e.g. related to construction engineering, epic stories and so on. There may be minimal to no mouse/keyboard interaction in those videos, unlike video games, which are highly interactive. I was wondering whether learning OpenGL from scratch is worth it for this, or whether I should invest my time in learning Blender instead. Quite a few good example movies created with Blender are shown on its website. Other open source, cross-platform alternatives that can serve this purpose are also welcome.

    Read the article

  • associate dhcp requests with subdomains in dnsmasq

    - by Dezra
    I have dnsmasq running as a DNS server for a number of Linux boxes with static IPs, each running several virtual hosts on subdomains. I currently have the following address lines in my dnsmasq.conf to map each box's subdomains to the box's static IP:

        address=/.devbox1.mydomain.com/192.168.1.3
        address=/.devbox2.mydomain.com/192.168.1.4

    e.g.

        site1.devbox1.mydomain.com -> maps to devbox1 static IP, site1 virtual host
        site2.devbox1.mydomain.com -> maps to devbox1 static IP, site2 virtual host
        site3.devbox2.mydomain.com -> maps to devbox2 static IP, site3 virtual host

    I was wondering if I can change the machines over to DHCP addresses (instead of static) and have dnsmasq use the DHCP IP instead of the static one. Can I modify the address line to refer to the DHCP address (obviously, I can't hardcode the address)? I know I could add MAC-address-to-IP allocation, but I want to avoid this if possible.
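
    One hedged possibility, not from the original post and untested here: dnsmasq's address=/.../ directive needs a literal IP, so a common compromise is to let the boxes use DHCP but pin each lease to a known address with a dhcp-host reservation keyed on the client hostname, which needs no MAC address. The range and lease time below are made-up examples.

        # Enable the DHCP server; the dhcp-host lines reserve fixed addresses
        # keyed on the hostname the client sends, not on its MAC.
        dhcp-range=192.168.1.50,192.168.1.150,12h
        dhcp-host=devbox1,192.168.1.3
        dhcp-host=devbox2,192.168.1.4
        # The wildcard subdomain mapping stays the same as before.
        address=/.devbox1.mydomain.com/192.168.1.3
        address=/.devbox2.mydomain.com/192.168.1.4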

    Read the article

  • .com vs .me for personal and blogging sites. Which one is good regarding SEO?

    - by Sameer Manas
    I basically have a domain under my name with a .com extension. I am planning to use it for my portfolio and also as a regular blog. Considering SEO and ranking, what is the best way to set this up?

        myname.com - Portfolio || myname.com/blog - Blog
        (or)
        myname.com - Blog || myname.me - Portfolio

    I have absolutely no idea how TLDs impact SEO and ranking, so I seek the experts' advice on this. Thanks in advance.

    Read the article

  • Simplest way to automatically alter "const" value in Java during compile time

    - by Michael Mao
    Hi all: This question relates to my uni assignment, so I am very sorry to say I cannot adopt any of the following best practices in a short time -- you know -- the assignment is due tomorrow :( (see: Best way to alter const in Java on StackOverflow). Basically the only task (I hope) left for me is performance tuning. I've got a bunch of predefined "const" values in my single-class agent source code like this:

        //static final values
        private static final long FT_THRESHOLD = 400;
        private static final long FT_THRESHOLD_MARGIN = 50;
        private static final long FT_SMOOTH_PRICE_IDICATOR = 20;
        private static final long FT_BUY_PRICE_MODIFIER = 0;
        private static final long FT_LAST_ROUNDS_STARTTIME = 90;
        private static final long FT_AMOUNT_ADJUST_MODIFIER = 5;
        private static final long FT_HISTORY_PIRCES_LENGTH = 10;
        private static final long FT_TRACK_DURATION = 5;
        private static final int MAX_BED_BID_NUM_PER_AUC = 12;

    I can certainly alter the values manually and then recompile the code to give it another go. But a thorough "statistical analysis" usually requires over 2000 executions, which typically lasts more than half an hour on my own laptop... So I hope there is a way to alter the values without digging into the source code to change the "const" values there, so that I can distribute the compiled code to other people's PCs and let them run the statistical analysis instead. Another reason for automatic value adjustment is that I can try using my own agent to defeat itself by choosing different "const" values. Although my values are derived from previous history and statistical results, they are far from optimal. I hope there is an easy way I can adopt quickly, so I can have a good sleep tonight while the computer does everything for me... :) Any hints on this sort of thing? Any suggestion is welcome and much appreciated.
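
    A minimal sketch of one quick workaround, not the assignment's actual code and with made-up property names: keep the fields static final, but initialize them from -D system properties with the current hard-coded values as defaults. A script can then re-run the same jar many times with different settings and no recompilation, e.g. java -Dft.threshold=350 -jar agent.jar.

        final class Tuning {
            // Each value can be overridden at launch time; otherwise the old
            // hard-coded constant is used as the default.
            static final long FT_THRESHOLD          = Long.getLong("ft.threshold", 400);
            static final long FT_THRESHOLD_MARGIN   = Long.getLong("ft.thresholdMargin", 50);
            static final long FT_BUY_PRICE_MODIFIER = Long.getLong("ft.buyPriceModifier", 0);
            static final int  MAX_BED_BID_NUM_PER_AUC =
                    Integer.getInteger("ft.maxBedBidNumPerAuc", 12);

            private Tuning() {}
        }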

    Read the article

  • When to use Constants vs. Config Files to maintain Configuration

    - by CoffeeAddict
    I often go back and forth on whether to put certain keys in my web.config or in a Constants.cs class or something like that. For example, if I wanted to store application-specific keys, whatever the case may be, I could store them in my web.config and grab them via custom keys, or consume them by referencing a constant in my constants class. When would you want to use constants over config keys? This question really applies to any language, I think.
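
    A rough illustration of the usual dividing line, sketched in Java with invented names: values that are true invariants of the application stay compile-time constants, while anything that can differ per environment or deployment comes from configuration with a sensible default.

        public final class Settings {
            // Never changes no matter where the app runs: a constant.
            public static final int MAX_USERNAME_LENGTH = 64;

            // Differs between dev/test/prod: read from configuration
            // (web.config app settings, environment variables, ...).
            public static String paymentApiUrl() {
                String value = System.getenv("PAYMENT_API_URL");
                return value != null ? value : "https://sandbox.example.com/pay";
            }

            private Settings() {}
        }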

    Read the article

  • Simple way to embed an MP3 audio file in a (static) HTML file, starting to play at a specific time?

    - by Marcel
    Hi all, I want to produce a simple, static HTML file that has one or more embedded MP3 files in it. The idea is to have a simple means of listening to specific parts of an MP3 file. On a single click on a visual element, the sound should start to play; however, not from the beginning of the file, but from a specified starting point (and play to the end of the file). This should work entirely locally from the client's filesystem; the HTML file and the MP3 files do not reside on a web server. So, how do I play the MP3 audio from a specific starting point? The solution I am looking for should be as simple as possible while working in most browsers, including IE, Firefox and Safari. Note: I know the <embed> tag as described here, but it does not seem to work with my target browsers. I have also read about jPlayer and other JavaScript-based players, but since I have never written any JavaScript, I would prefer an HTML-only solution, if possible.

    Read the article

  • Keep a programming language backwards compatible vs. fixing its flaws

    - by Radu Murzea
    First, some context (stuff that most of you know anyway): every popular programming language has a clear evolution, most of the time marked by its version: you have Java 5, 6, 7 etc., PHP 5.1, 5.2, 5.3 etc. Releasing a new version makes new APIs available, fixes bugs, adds new features, new frameworks etc. So all in all: it's good. But what about the language's (or platform's) problems? If and when there's something wrong in a language, developers either avoid it (if they can) or learn to live with it. Now, the developers of those languages get a lot of feedback from the programmers who use them. So it kind of makes sense that, as time (and version numbers) goes by, the problems in those languages will slowly but surely go away. Well, not really. Why? Backwards compatibility, that's why. But why is this so? Read below for a more concrete situation.

    The best way I can explain my question is to use PHP as an example. PHP is loved by thousands of people and hated by just as many thousands. All languages have flaws, but apparently PHP is special. Check out this blog post. It has a very long list of so-called flaws in PHP. Now, I'm not a PHP developer (not yet), but I read through all of it and I'm sure that a big chunk of that list describes real issues. (Not all of it, since it's potentially subjective.) Now, if I were one of the people who actively develop PHP, I would surely want to fix those problems, one by one. However, if I do that, then code that relies on a particular behaviour of the language will break when it runs on the new version. Summing it up in two words: backwards compatibility.

    What I don't understand is: why should I keep PHP backwards compatible? If I release PHP version 8 with all those problems fixed, can't I just put a big warning on it saying: "Don't run old code on this version!"? There is a thing called deprecation. We've had it for years and it works. In the context of PHP: look at how these days people actively discourage the use of the mysql_* functions (and instead recommend mysqli_* and PDO). Deprecation works. We can use it. We should use it. If it works for functions, why shouldn't it work for entire languages? Let's say I (the developer of PHP) do this:

    1. Launch a new version of PHP (let's say 8) with all of those flaws fixed.
    2. New projects will start using that version, since it's much better, clearer, more secure etc.
    3. However, in order not to abandon older versions of PHP, I keep releasing updates to them, fixing security issues, bugs etc. This makes sense for reasons I'm not listing here. It's common practice: look, for example, at how Oracle kept updating version 5.1.x of MySQL even though it mostly focused on version 5.5.x.
    4. After about 3 or 4 years, I stop updating old versions of PHP and leave them to die. This is fine, since in those 3 or 4 years most projects will have switched to PHP 8 anyway.

    My question is: do all these steps make sense? Would it be so hard to do? If it can be done, then why isn't it done? Yes, the downside is that you break backwards compatibility. But isn't that a price worth paying? As an upside, in 3 or 4 years you'll have a language that has 90% of its problems fixed... a language much more pleasant to work with. Its name will ensure its popularity.

    EDIT: OK, so I didn't express myself correctly when I said that in 3 or 4 years people would move to the hypothetical PHP 8. What I meant was: in 3 or 4 years, people will use PHP 8 if they start a new project.

    Read the article

  • Master-slave vs. peer-to-peer architecture: benefits and problems

    - by Ashok_Ora
    Almost two decades ago, I was a member of a database development team that introduced adaptive locking. Locking, the most popular concurrency control technique in database systems, is pessimistic. Locking ensures that two or more conflicting operations on the same data item don't "trample" on each other's toes, resulting in data corruption. In a nutshell, here's the issue we were trying to address. In everyday life, traffic lights serve the same purpose. They ensure that traffic flows smoothly and, when everyone follows the rules, there are no accidents at intersections.

    As I mentioned earlier, the problem with typical locking protocols is that they are pessimistic. Regardless of whether there is another conflicting operation in the system or not, you have to hold a lock! Acquiring and releasing locks can be quite expensive, depending on how many objects the transaction touches. Every transaction has to pay this penalty. To use the earlier traffic light analogy, if you have ever waited at a red light in the middle of nowhere with no one on the road, wondering why you need to wait when there's clearly no danger of a collision, you know what I mean.

    The adaptive locking scheme that we invented was able to minimize the number of locks that a transaction held by detecting whether there were one or more transactions that needed conflicting access; if not, you could get by without holding any lock at all. In many "well-behaved" workloads, there are few conflicts, so this optimization is a huge win. If, on the other hand, there are many concurrent, conflicting requests, the algorithm gracefully degrades to the "normal" behavior with minimal cost. We were able to reduce the number of lock requests per TPC-B transaction from 178 requests down to 2! Wow! This is a dramatic improvement in concurrency as well as transaction latency. The lesson from this exercise was that if you can identify the common scenario and optimize for that case, so that only the uncommon scenarios are more expensive, you can make dramatic improvements in performance without sacrificing correctness.

    So how does this relate to the architecture and design of some of the modern NoSQL systems? NoSQL systems can be broadly classified as master-slave sharded or peer-to-peer sharded systems. NoSQL systems with a peer-to-peer architecture have an interesting way of handling changes. Whenever an item is changed, the client (or an intermediary) propagates the changes synchronously or asynchronously to multiple copies (for availability) of the data. Since the change can be propagated asynchronously, during some interval in time it will be the case that some copies have received the update and others haven't. What happens if someone tries to read the item during this interval? The client in a peer-to-peer system will fetch the same item from multiple copies and compare them to each other. If they're all the same, then every copy that was queried has the same (and up-to-date) value of the data item, so all's good. If not, then the system provides a mechanism to reconcile the discrepancy and to update stale copies.

    So what's the problem with this? There are two major issues. First, IT'S HORRIBLY PESSIMISTIC because, in the common case, it is unlikely that the same data item will be updated and read from different locations at around the same time! For every read operation, you have to read from multiple copies. That's pretty expensive, especially if the data are stored in multiple geographically separate locations and network latencies are high. Second, if the copies are not all the same, the application has to reconcile the differences and propagate the correct value to the out-of-date copies. This means that the application program has to handle discrepancies in the different versions of the data item and resolve the issue (which can further add to cost and operation latency).

    Resolving discrepancies is only one part of the problem. What if the same data item was updated independently on two different nodes (copies)? In that case, due to the asynchronous nature of change propagation, you might end up with different versions of the data item in different copies. In this case, the application program also has to resolve conflicts and then propagate the correct value to the copies that are out of date or have incorrect versions. This can get really complicated. My hunch is that there are many peer-to-peer-based applications that don't handle this correctly, and worse, don't even know it. Imagine having hundreds of millions of records in your database – how can you tell whether a particular data item is incorrect or out of date? And what price are you willing to pay to ensure that the data can be trusted? Multiple network messages per read request? Discrepancy and conflict resolution logic in the application, and potentially additional messages? All this overhead, when all you were trying to do was read a data item. Wouldn't it be simpler to avoid this problem in the first place?

    Master-slave architectures like the Oracle NoSQL Database handle this very elegantly. A change to a data item is always sent to the master copy. Consequently, the master copy always has the most current and authoritative version of the data item. The master is also responsible for propagating the change to the other copies (for availability and read scalability). Client drivers are aware of master copies and replicas, and client drivers are also aware of the "currency" of a replica. In other words, each NoSQL Database client knows how stale a replica is. This vastly simplifies the job of the application developer. If the application needs the most current version of the data item, the client driver will automatically route the request to the master copy. If the application is willing to tolerate some staleness of data (e.g. a version that is no more than 1 second out of date), the client can easily determine which replica (or set of replicas) can satisfy the request, and route the request to the most efficient copy. This results in a dramatic simplification in application logic and also minimizes network requests (the driver will only send the request to exactly the right replica, not many).

    So, back to my original point. A well-designed and well-architected system minimizes or eliminates unnecessary overhead and avoids pessimistic algorithms wherever possible in order to deliver a highly efficient and high performance system. If you've ever programmed an Oracle NoSQL Database application, you'll know the difference!
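
    As a hypothetical sketch of that read-routing idea (this is not the Oracle NoSQL Database driver API, just an illustration with invented types): if the client knows each replica's staleness, it can pick the cheapest copy whose lag fits the caller's staleness budget and fall back to the master for "latest" reads.

        import java.util.List;

        // Java 16+ record describing one copy of the data.
        record Replica(String node, long lagMillis, long networkCostMillis) {}

        class ReadRouter {
            private final Replica master;          // lagMillis == 0 by definition
            private final List<Replica> replicas;

            ReadRouter(Replica master, List<Replica> replicas) {
                this.master = master;
                this.replicas = replicas;
            }

            // Returns the cheapest copy whose staleness is within the caller's budget;
            // a budget of 0 always routes to the master.
            Replica choose(long maxStalenessMillis) {
                Replica best = master;
                for (Replica r : replicas) {
                    if (r.lagMillis() <= maxStalenessMillis
                            && r.networkCostMillis() < best.networkCostMillis()) {
                        best = r;
                    }
                }
                return best;
            }
        }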

    Read the article

  • Today VS 2010 SP1 comes out, any news on the roadmap for Visual Studio 2012?

    - by Abel
    Today Visual Studio 2010 SP1 comes out as a general availability release. This made me wonder about the upcoming release of Visual Studio 2012: what are Microsoft's plans for it? I heard they'll ship a new version every two years. Are there any open fora or discussions? When will a preview be publicly available? But most importantly: what are the new highlights and improvements in .NET and C#/F#/VB (and C++ of course, a request from Stijn)?

    Read the article

  • How to decide whether to implement an operation as Entity operation vs Service operation in Domain Driven Design?

    - by Louis Rhys
    I am reading Evans's Domain-Driven Design. The book says that there are entities and there are services. If I were to implement an operation, how do I decide whether to add it as a method on an entity or put it in a service class? E.g. myEntity.DoStuff() or myService.DoStuffOn(myEntity)? Does it depend on whether other entities are involved? If it involves other entities, should it be a service operation? But entities can have associations and can traverse them from there too, right? Does it depend on whether the operation is stateless or not? But a service can also access an entity's variables, right? For example, myService.DoStuffOn can contain code like if(myEntity.IsX) doSomething();, which means it will depend on the state. Or does it depend on complexity? How do you define complex operations?
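
    A common rule of thumb, sketched here with invented names: behavior that only needs the entity's own state and invariants belongs on the entity, while behavior that coordinates several entities, or a policy that belongs to none of them, goes into a domain service.

        class Account {
            private long balanceCents;

            // Entity operation: touches only this entity's state and invariants.
            void withdraw(long amountCents) {
                if (amountCents > balanceCents) {
                    throw new IllegalStateException("insufficient funds");
                }
                balanceCents -= amountCents;
            }

            void deposit(long amountCents) {
                balanceCents += amountCents;
            }
        }

        class TransferService {
            // Service operation: spans two entities, so it sits naturally on neither.
            void transfer(Account from, Account to, long amountCents) {
                from.withdraw(amountCents);
                to.deposit(amountCents);
            }
        }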

    Read the article

  • Visual Studio 2012 Launch Winnipeg – Slides

    - by Dylan Smith
    The Winnipeg .Net User Group hosted a VS 2012 Launch Event at the Imax in Winnipeg on Thursday, Dec 6. Doing presentations on the giant Imax screen is always fun, and I did the first two sessions: End-To-End Application Lifecycle Management with TFS 2012, and Improving Developer Productivity with Visual Studio 2012. Thanks to everybody who came out, and if anybody is interested, my slide decks can be downloaded here: TFS 2012 Slides, VS 2012 Slides. Also, the virtual machine that I used for my demos can be downloaded from Brian Keller's blog here: VS 2012 ALM Virtual Machine

    Read the article

  • SQL Server Developer Tools – Codename Juneau vs. Red-Gate SQL Source Control

    - by Ajarn Mark Caldwell
    So how do the new SQL Server Developer Tools (previously code-named Juneau) stack up against SQL Source Control? Read on to find out.

    At the PASS Community Summit a couple of weeks ago, it was announced that the previously code-named Juneau software would be released under the name of SQL Server Developer Tools with the release of SQL Server 2012. This replacement for Database Projects in Visual Studio (also known in a former life as Data Dude) has some great new features. I won't attempt to describe them all here, but I will applaud Microsoft for making major improvements. One of my favorite changes is the way database elements are broken down. Previously every little thing was in its own file. For example, indexes were each in their own file. I always hated that. Now, SSDT uses a pattern similar to Red-Gate's and puts the indexes and keys into the same file as the overall table definition.

    Of course there are really cool features to keep your database model in sync with the actual source scripts, and the rename refactoring feature is now touted as being more than just a search and replace, but rather a "semantic-aware" search and replace. Funny, it reminds me of SQL Prompt's Smart Rename feature. But I'm not writing this just to criticize Microsoft and argue that they are late to the party with this feature set. Instead, I do see it as a viable alternative for folks who want all of their source code to be version controlled, but there are a couple of key trade-offs that you need to know about when you choose which tool set to use.

    First, the basics

    Both tool sets integrate with a wide variety of source control systems including the most popular: Subversion, GIT, Vault, and Team Foundation Server. Both tools have integrated functionality to produce objects to upgrade your target database when you are ready (DACPACs in SSDT, integration with SQL Compare for SQL Source Control). If you regularly live in Visual Studio or the Business Intelligence Development Studio (BIDS) then SSDT will likely be comfortable for you. Like BIDS, SSDT is a Visual Studio Project Type that comes with SQL Server, and if you don't already have Visual Studio installed, it will install the shell for you. If you already have Visual Studio 2010 installed, then it will just add this as an available project type. On the other hand, if you regularly live in SQL Server Management Studio (SSMS) then you will really enjoy the SQL Source Control integration from within SSMS. Both tool sets store their database model in script files. In SSDT, these are on your file system like other source files; in SQL Source Control, these are stored in the folder structure in your source control system, and you can always GET them to your file system if you want to browse them directly.

    For me, the key differentiating factors are 1) a single, unified check-in, and 2) migration scripts. How you value those two features will likely make your decision for you.

    Unified Check-In

    If you do a continuous-integration (CI) style of development that triggers an automated build with unit testing on every check-in of source code, and you use Visual Studio for the rest of your development, then you will want to really consider SSDT. Because it is just another project in Visual Studio, it can be added to your existing Solution, and you can then do a complete, or unified, single check-in of all changes, whether they are application or database changes. This is simply not possible with SQL Source Control because it is in a different development tool (SSMS instead of Visual Studio) and there is no way to do one unified check-in between the two. You CAN do really fast back-to-back check-ins, but there is the possibility that the automated build that is triggered from the first check-in will cause your unit tests to fail and the CI tool to report that you broke the build. Of course, the automated build that is triggered from the second check-in, which contains the "other half" of your changes, should pass, and so the amount of time that the build was broken may be very, very short; but if that is very, very important to you, then SQL Source Control just won't work; you'll have to use SSDT.

    Refactoring and Migrations

    If you work on a mature system, or on a not-so-mature but also not-so-well-designed system, where you want to refactor the database schema as you go along, but you can't have data suddenly disappearing from your target system, then you'll probably want to go with SQL Source Control. As I wrote previously, there are a number of changes which you can make to your database that the comparison tools (both from Microsoft and Red Gate) simply cannot handle without the possibility (or probability) of data loss. Currently, SSDT only offers you the ability to inject PRE and POST custom deployment scripts. There is no way to insert your own script in the middle to override the default behavior of the tool. In version 3.0 of SQL Source Control (Early Access version now available) you have the ability to create your own custom migration script to take the place of the commands that the tool would have done, and ensure the preservation of your data. Or, even if the default tool behavior would have worked but you simply know a better way, you can take control and do things your way instead of theirs.

    You Decide

    In the environment I work in, our automated builds are not triggered off of check-ins, but off of the clock (currently once per night), and so there is no point at which the automated build and unit tests will be triggered without having both sides of the development effort already checked in. Therefore having a unified check-in, while handy, is not critical for us. As for migration scripts, these are critically important to us. We do a lot of new development on systems that have already been in production for years, and it is not uncommon for us to need to do a refactoring of the database. Because of the maturity of the existing system, that often involves data migrations or other additional SQL tasks that the comparison tools just can't detect on their own. Therefore, the ability to create a custom migration script to override the tool's default behavior is very important to us. And so, you can see why we will continue to use Red Gate SQL Source Control for the foreseeable future.

    Read the article

  • How to review the current state of open source vs. closed source graphics drivers?

    - by Bucic
    How can I know whether it's worth replacing the open source drivers installed by default with proprietary ones? Are there any benchmarks? Summaries of major known issues? I don't mean 'at the time of writing this post'; I mean an up-to-date status on how the drivers compare. This page https://help.ubuntu.com/community/BinaryDriverHowto/ certainly doesn't do much on the matter, nor does it even mention Intel. EDIT: I've just learned there is no Intel proprietary driver because they made their drivers open source: http://askubuntu.com/a/17395/29347

    Read the article

  • flat files vs. RDBMS database, few read/writes, few changes

    - by Bob Lapique
    I have to handle data from long-term (years, decades) climate monitoring stations. The data flow usually starts with raw data (voltages, etc.) plus quality-check information (pressure, temperature, flow rate, etc.), generally recorded at 1 Hz. Then the data are assigned a quality flag (by a human and/or a program), processed (calibration curves applied) and flagged. So we basically end up with two datasets: raw and processed data. New data are typically added once a day (~500 KB/day/instrument). Simultaneous queries are not likely to ever happen. I wanted to go for an RDBMS (we have a MySQL server) and have some experience in database design, but the IT guy keeps telling me that flat files will do the job just as well. I suspect he is trying to make his life easier when it comes to backing up and upgrading MySQL. There are not many links between the data and they don't change much, but the quality flags will change. An RDBMS makes it easier to compare data from different instruments on a "many days" scale, compared to daily text files. Well, what would you advise? Thanks.

    Read the article

  • Anonymous function vs. separate named function for initialization in jquery

    - by Martin N.
    We just had a controversial discussion and I would like to see your opinions on the issue. Let's say we have some code that is used to initialize things when a page is loaded, and it looks like this:

        function initStuff() { ... }
        ...
        $(document).ready(initStuff);

    The initStuff function is only called from the third line of the snippet. Never again. Now I would say: usually people put this into an anonymous callback, like this:

        $(document).ready(function() {
            // body of initStuff
        });

    because having the function in a dedicated location in the code does not really help readability; the call to ready() already makes it obvious that this code is initialization stuff. Would you agree or disagree with that decision? And why? Thank you very much for your opinion!

    Read the article

  • Scheme vs Haskell for an Introduction to Functional Programming?

    - by haziz
    I am comfortable with programming in C and C#, and will explore C++ in the future. I may be interested in exploring functional programming as a different programming paradigm. I am doing this for fun, my job does not involve computer programming, and am somewhat inspired by the use of functional programming, taught fairly early, in computer science courses in college. Lambda calculus is certainly beyond my mathematical abilities, but I think I can handle functional programming. Which of Haskell or Scheme would serve as a good intro to functional programming? I use emacs as my text editor and would like to be able to configure it more easily in the future which would entail learning Emacs Lisp. My understanding, however, is that Emacs Lisp is fairly different from Scheme and is also more procedural as opposed to functional. I would likely be using "The Little Schemer" book, which I have already bought, if I pursue Scheme (seems to me a little weird from my limited leafing through it). Or would use the "Learn You a Haskell for Great Good" if I pursue Haskell. I would also watch the Intro to Haskell videos by Dr Erik Meijer on Channel 9. Any suggestions, feedback or input appreciated. Thanks. P.S. BTW I also have access to F# since I have Visual Studio 2010 which I use for C# development, but I don't think that should be my main criteria for selecting a language.

    Read the article

  • SEO blog Indexing: wordpress.com subdomain vs a registered domain?

    - by rumspringa00
    I've used WordPress for a few of my clients' sites, mostly small businesses and eCommerce sites. I have found through Google Analytics, as well as the All in One Webmaster plugin, that when it comes to social media, using WordPress is a surefire way of getting your site indexed by Google and occasionally Bing and Yahoo. Since I am a heavy WP user, I'd like to contribute by registering a wordpress.com subdomain for my portfolio. When using a WP installation with a WP domain, e.g. myportfolio.wordpress.com, will the site be more or less likely to be indexed than with a generic myportfolio.com domain? I've seen mixed opinions: some people seem to favor a WP domain for URL output, while others say it's a moot point and that Google will not favor a WP domain over a .com domain as long as your meta tags are updated and your content is keyword-optimized. I tend to disagree, and believe a WP domain would be more likely to be indexed and would output more URLs than an individual domain like myportfolio.com. Am I wrong?

    Read the article
