Search Results

Search found 1266 results on 51 pages for 'cons'.

Page 46/51

  • .Net Architecture challenge: The Change-prone Frankenstein Model

    - by SDReyes
    Good morning SO! We've been scratching our heads over this interesting scenario at the office, and we're anxious to hear your ideas and approaches. We have a database whose schema is prone to change; let's call it Prony. (It is used to store configuration parameters for embedded devices, so when the embedded-devices guy needs a new table, property or relationship in the model, he should be able to adapt the schema easily; this happens often.) Prony needs a web interface to create/edit its data. We have another database containing data that also needs to be loaded onto the devices after some transformations; let's call this one Oddy. (Its data is generated by an already existing administrative web application.) Finally we have Tracy, a server that connects our DBs to our embedded devices. She should adapt herself automatically to our DB schema changes and serialize the data to the devices. Nice puzzle, don't you think? : ) Our current candidate, Rady, the fast one: create some views in Prony that perform the data transformation from Oddy, then use Dynamic Data (or some RAD tool) to create/update a simple web interface for Prony (so the transformed data coming from Oddy can even be consulted there : ). As for Tracy, she will need to be recompiled to pick up DB schema changes (Entity Framework should work) and will use reflection to explore the schema recursively and serialize the data. Cons: we would have to recompile both Tracy and Prony's web interface. What do you think of the candidate(s)? What would you do?
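
    A minimal sketch (mine, not the poster's) of the reflection step proposed for Tracy, assuming the entities are plain CLR classes and assuming a flat "path=value" text format is acceptable to the devices:

        using System;
        using System.Collections;
        using System.Reflection;
        using System.Text;

        // Hypothetical helper: walks any entity via reflection and flattens
        // its properties into "path=value" lines. A real implementation
        // would also need cycle detection for back-references.
        static class DeviceSerializer
        {
            public static string Serialize(object entity, string prefix)
            {
                var sb = new StringBuilder();
                foreach (PropertyInfo prop in entity.GetType().GetProperties())
                {
                    if (prop.GetIndexParameters().Length > 0) continue; // skip indexers
                    object value = prop.GetValue(entity, null);
                    if (value == null) continue;
                    string path = prefix + prop.Name;

                    if (value is string || value.GetType().IsValueType)
                    {
                        sb.AppendLine(path + "=" + value);        // scalar: emit directly
                    }
                    else if (value is IEnumerable)
                    {
                        int i = 0;                                // collection: recurse per item
                        foreach (object item in (IEnumerable)value)
                            sb.Append(Serialize(item, path + "[" + (i++) + "]."));
                    }
                    else
                    {
                        sb.Append(Serialize(value, path + "."));  // navigation property: recurse
                    }
                }
                return sb.ToString();
            }
        }

    Because this walks whatever properties exist at runtime, Tracy would still need a recompile to pick up new entity classes, but not new serialization code per table.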

    Read the article

  • Suggestions for entering mobile development -- pure iPhone SDK, Android SDK, MonoTouch or Titanium?

    - by Tom Cabanski
    I am entering mobile development. I have been working primarily in .NET since 1.0 came out in beta. Before that, I was mostly a C++ and Delphi guy, and I still dabble in C++ from time to time. I do web apps quite a bit, so I am reasonably proficient with JavaScript, jQuery and CSS. I have also done a few Java applications. I started web programming with CGI and live mostly in the ASP.NET MVC world these days. I am trying to decide which platform/OS and tool to select. I am concerned with the size of the market available for my applications as well as the marketability of the skills I will pick up. The apps I have in mind would work on both phones and pads, and some aspects of them will play better on the bigger screens of pads. Here are the options I am considering:

    Apple iPhone/iPad using the pure Apple SDK (Objective-C)
    Apple iPhone/iPad using MonoTouch (C#)
    Android using the pure Android SDK (Java)
    Multiple platforms using something like Titanium to generate native apps from web technologies (HTML, CSS and JavaScript)
    Multiple platforms using HTML5 web applications that run in the browser (HTML, CSS and JavaScript)

    Which option would you choose? Do you have a different suggestion? What are the pros and cons?

    Read the article

  • Hadoop Map/Reduce - simple use example to do the following...

    - by alexeypro
    I have a MySQL database where I store BLOBs (each containing a JSON object) together with an ID for each JSON object. A JSON object contains a lot of different information, say, "city:Los Angeles" and "state:California". There are about 500k such records for now, but they are growing, and each JSON object is quite big. My goal is to do searches (real-time) in the MySQL database. Say, I want to search for all JSON objects which have "state" set to "California" and "city" set to "San Francisco". I want to utilize Hadoop for the task. My idea is that there will be a "job" which takes chunks of, say, 100 records (rows) from MySQL, verifies them according to the given search criteria, and returns the IDs of those which qualify. Pros/cons? I understand that one might think I should just use plain SQL power for this, but the thing is that the JSON object structure is pretty "heavy": if I model it as SQL schemas, there will be at least 3-5 table joins, which (I tried, really) creates quite a headache, and building all the right indexes eats RAM faster than one can think. ;-) And even then, every SQL query has to be analyzed to make sure it uses the indexes; otherwise a full scan is literally painful. And with such a structure, the only way "up" is vertical scaling. But I am not sure that's the best option for me, as I can see how the JSON objects (the data structure) will grow, and the number of them will grow too. :-) Help? Can somebody point me to simple examples of how this can be done? Does it make sense at all? Am I missing something important? Thank you.
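
    Purely as an illustration of the shape such a job could take (not a verdict on whether a batch job can meet the real-time goal), here is a sketch of a map-only filter; it assumes the rows were first exported as id<TAB>json text lines, and the substring test stands in for a real JSON parser such as Jackson:

        import java.io.IOException;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.NullWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;

        // Map-only filter: emits the ID of every record whose JSON blob
        // matches the (hard-coded, illustrative) search criteria.
        public class JsonFilterMapper
                extends Mapper<LongWritable, Text, Text, NullWritable> {

            @Override
            protected void map(LongWritable offset, Text line, Context context)
                    throws IOException, InterruptedException {
                // Assumed input line format: <id>\t<json blob>
                String[] parts = line.toString().split("\t", 2);
                if (parts.length < 2) return;

                String json = parts[1];
                // Stand-in for real JSON parsing:
                if (json.contains("\"state\":\"California\"")
                        && json.contains("\"city\":\"San Francisco\"")) {
                    context.write(new Text(parts[0]), NullWritable.get());
                }
            }
        }

    Calling job.setNumReduceTasks(0) makes the job map-only, so the matching IDs go straight to the output files.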

    Read the article

  • Goal setting/tracking packages

    - by Avi
    I'm a developer working by myself. I'm looking for a computerized tool to manage my goals and activities. I own Microsoft Project, but I don't like it: I've started many "projects" in it but could never keep using it. It's too complex and heavyweight for me. I use MS Outlook tasks, but they are not what I need: no planning capability, and tracking is not nice. I'm using the Pomodoro technique and I like it, but I'm looking for something more comprehensive and with better computerized support; something that would allow me to define goals with dependencies and time estimates, keep daily prioritized lists, etc. So, I'm looking for a solution. One I've found is GoalPro, but I don't like the fact that I could not find a "top ten comparison". Are you using any goal-setting package such as GoalPro? Which? Does it help? Pros and cons?

    Read the article

  • Creating an API for an ASP.NET MVC site with rate-limiting and caching

    - by Maxim Z.
    Recently, I've been very interested in APIs, specifically in how to create them. For the purpose of this question, let's say that I have created an ASP.NET MVC site that has some data on it; I want to create an API for this site. I have multiple questions about this:

    What type of API should I create? I know that REST and OData APIs are very popular. What are the pros and cons of each, and how do I implement them? From what I understand so far, REST APIs with ASP.NET MVC would just be actions that return JSON instead of Views, and OData APIs are documented here.

    How do I handle writing? Reading from both API types is quite simple. However, writing is more complex. With the REST approach, I understand that I can use HTTP POST, but how do I implement authentication? Also, with OData, how does writing work in the first place?

    How do I implement basic rate-limiting and caching? From my past experience with APIs, these are very important things, so that the API server isn't overloaded. What's the best way to set these two things up? Can I get some sample code?

    Any code that relates to C# and ASP.NET MVC would be appreciated. Thanks in advance! While this is a broad question, I think it's not too broad... :) There are some similar questions to this one that are about APIs, but I haven't found any that directly address the questions I outlined here.
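
    Since the question asks for sample code, here is a minimal sketch of a throttling action filter; the per-IP key, the fixed one-minute window, and the use of HttpRuntime.Cache are illustrative choices, and the [Throttle] attribute name is made up:

        using System;
        using System.Web;
        using System.Web.Caching;
        using System.Web.Mvc;

        // Hypothetical [Throttle] attribute: allows at most Limit requests
        // per client IP per minute for the action it decorates.
        public class ThrottleAttribute : ActionFilterAttribute
        {
            public int Limit { get; set; }

            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                string key = "throttle-"
                    + filterContext.HttpContext.Request.UserHostAddress;

                var hits = HttpRuntime.Cache[key] as int?;
                if (hits == null)
                {
                    // First request in this window: start a one-minute counter.
                    HttpRuntime.Cache.Add(key, 1, null,
                        DateTime.UtcNow.AddMinutes(1), Cache.NoSlidingExpiration,
                        CacheItemPriority.Low, null);
                }
                else if (hits >= Limit)
                {
                    filterContext.Result = new HttpStatusCodeResult(429); // Too Many Requests
                }
                else
                {
                    HttpRuntime.Cache[key] = hits + 1; // not atomic; fine for a sketch
                }
            }
        }

    An action can then be decorated with [Throttle(Limit = 60)]. For the caching half, read-only GET actions can often start with the built-in [OutputCache(Duration = 60, VaryByParam = "*")] attribute.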

    Read the article

  • MySQL database design question

    - by Greelmo
    I'm trying to weigh the pros and cons of a database design and would like to get some feedback as to the best approach. Here is the situation: users of my system have only a few required items (username, password). They can then supply a lot of optional information, and this optional information continues to grow as the system grows, so I want to design things such that adding new optional information is easy. Currently, I have a separate table for each piece of information. For example, there's a table called 'names' that holds 'user_id', 'first_name', and 'last_name'. There's 'address', 'occupation', etc. You get the drift. In most cases, when I talk to my database, I'm looking only for users with one particular qualifier (name, address, etc.). However, there are instances when I want to see what information a user has set: the 'edit account' page, for example, must run one query for each piece of information it wants. Is this wasteful? Is there a way I can structure my queries or my database so that I never have to do one query per piece of information like that, without my tables getting too huge? If I want to add 'marital status', how hard will that be if I don't have a one-table-per-attribute system? Thanks in advance.
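
    One way to keep the one-table-per-attribute layout but avoid a query per attribute on the 'edit account' page is a single SELECT with LEFT JOINs; the column names below are guesses based on the description:

        -- One round trip for everything the edit page needs; LEFT JOINs
        -- keep users who haven't filled in an optional attribute.
        SELECT u.user_id,
               u.username,
               n.first_name,
               n.last_name,
               a.street,
               a.city,
               o.occupation
        FROM   users u
               LEFT JOIN names      n ON n.user_id = u.user_id
               LEFT JOIN address    a ON a.user_id = u.user_id
               LEFT JOIN occupation o ON o.user_id = u.user_id
        WHERE  u.user_id = 42;

    Adding 'marital status' under this scheme means one new table plus one new LEFT JOIN here, rather than a schema change to a wide users table.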

    Read the article

  • Using classes in PHP to store functions

    - by Artur
    Hello! I need some advice on my PHP code organisation. I need classes where I can store different functions, and I need access to those classes in different parts of my project. Creating an object of these classes each time is too tedious, so I've found two ways to solve it. The first is to use static methods, like:

        class car {
            public static $wheels_count = 4;
            public static function change_wheels_count($new_count) {
                car::$wheels_count = $new_count;
            }
        }

    The second is to use the singleton pattern:

        class Example {
            // Hold an instance of the class
            private static $instance;

            // The singleton method
            public static function singleton() {
                if (!isset(self::$instance)) {
                    $c = __CLASS__;
                    self::$instance = new $c;
                }
                return self::$instance;
            }
        }

    But the author of the article about singletons said that if I have too many singletons in my code, I should restructure it. Yet I need a lot of such classes. Can anybody explain the pros and cons of each way? Which is most commonly used? Are there more beautiful ways?

    Read the article

  • Lazy Sequences that "Look Ahead" for Project Euler Problem 14

    - by ivar
    I'm trying to solve Project Euler Problem 14 in a lazy way. Unfortunately, I may be trying to do the impossible: create a lazy sequence that is both lazy, yet also somehow 'looks ahead' for values it hasn't computed yet. The non-lazy version I wrote to test correctness was:

        (defn chain-length [num]
          (loop [len 1
                 n num]
            (cond (= n 1) len
                  (odd? n) (recur (inc len) (+ 1 (* 3 n)))
                  true (recur (inc len) (/ n 2)))))

    Which works, but is really slow. Of course I could memoize that:

        (def memoized-chain
          (memoize (fn [n]
                     (cond (= n 1) 1
                           (odd? n) (+ 1 (memoized-chain (+ 1 (* 3 n))))
                           true (+ 1 (memoized-chain (/ n 2)))))))

    However, what I really wanted to do was scratch my itch for understanding the limits of lazy sequences, and write a function like this:

        (def lazy-chain
          (letfn [(chain [n]
                    (lazy-seq
                      (cons (if (odd? n)
                              (+ 1 (nth lazy-chain (dec (+ 1 (* 3 n)))))
                              (+ 1 (nth lazy-chain (dec (/ n 2)))))
                            (chain (+ n 1)))))]
            (chain 1)))

    Pulling elements from this will cause a stack overflow for n > 2, which is understandable if you think about why it needs to look 'into the future' at n = 3 to know the value of the tenth element in the lazy list, because (+ 1 (* 3 n)) = 10. Since lazy lists have much less overhead than memoization, I would like to know if this kind of thing is possible somehow via even more delayed evaluation or queuing?

    Read the article

  • Javascript library: to obfuscate or not to obfuscate - that is the question

    - by morpheous
    I need to write a GUI-related JavaScript library. It will give my website a bit of an edge (in terms of functionality I can offer), at least until my competitors play with it long enough to figure out how to write it themselves. I can accept the fact that it will be emulated over time; that's par for the course (it's part of business). However, what I cannot bear is the idea of effectively handing all the hard work that went into the library straight to my competitors, by shipping plain JavaScript that anyone can download and read. It is an established fact that no one in the industry I am "attacking" has this functionality, so the value of such a library is undeniable and is not up for discussion (i.e. that's not what I'm asking here). What I am seeking to find out are the pros and cons of obfuscating a JavaScript library, so that I can come to a final decision. Two of my biggest concerns are debugging, and subtle errors that may be introduced by the obfuscator. I would like to know: How can I manage those risks (being able to debug faulty code, ensuring/minimizing obfuscation errors)? Are there any good-quality, industry-standard obfuscators you can recommend (preferably something you use yourself)? What are your experiences of using obfuscated code in a production environment?

    Read the article

  • how to implement a sparse_vector class

    - by Neil G
    I am implementing a templated sparse_vector class. It's like a vector, but it only stores elements that differ from their default-constructed value. So sparse_vector would store the index-value pairs for all indices whose value is not T(). I am basing my implementation on existing sparse vectors in numeric libraries, though mine will handle non-numeric types T as well. I looked at boost::numeric::ublas::coordinate_vector and eigen::SparseVector. Both store:

        size_t* indices_;  // a dynamic array
        T* values_;        // a dynamic array
        int size_;
        int capacity_;

    Why don't they simply use

        vector<pair<size_t, T>> data_;

    My main question is: what are the pros and cons of the two systems, and which is ultimately better? The vector of pairs manages size_ and capacity_ for you and simplifies the accompanying iterator classes; it also uses one memory block instead of two, so it incurs half the reallocations and might have better locality of reference. The other solution might search more quickly, since the cache lines fill up with only index data during a search. There might also be some alignment advantages if T is an 8-byte type? It seems to me that vector of pairs is the better solution, yet both containers chose the other one. Why?
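
    For concreteness, a minimal sketch of the vector-of-pairs alternative being weighed, with the usual sorted-by-index invariant and a binary search that compares indices only (a real container would also need iterators, erase, and so on):

        #include <algorithm>
        #include <cstddef>
        #include <utility>
        #include <vector>

        // Sketch of the vector-of-pairs layout: entries sorted by index,
        // and elements equal to T() are simply never stored.
        template <typename T>
        class sparse_vector {
            typedef std::pair<std::size_t, T> entry;
            std::vector<entry> data_;  // one block: size/capacity managed for us

            static bool index_less(const entry& a, const entry& b) {
                return a.first < b.first;  // binary search looks at indices only
            }

            typename std::vector<entry>::iterator find(std::size_t i) {
                return std::lower_bound(data_.begin(), data_.end(),
                                        entry(i, T()), index_less);
            }

        public:
            T get(std::size_t i) {
                typename std::vector<entry>::iterator it = find(i);
                return (it != data_.end() && it->first == i) ? it->second : T();
            }

            void set(std::size_t i, const T& v) {
                typename std::vector<entry>::iterator it = find(i);
                if (it != data_.end() && it->first == i)
                    it->second = v;                 // overwrite an existing entry
                else
                    data_.insert(it, entry(i, v));  // insert keeps pairs sorted
            }
        };

    Note that the binary search touches only the .first halves, so with the pair layout each cache line fetched during a search carries values you don't need; that is exactly the locality argument for the two-array layout.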

    Read the article

  • Many to many table design question

    - by user169867
    Originally I had 2 tables in my DB, [Property] and [Employee]. Each employee can have 1 "Home Property", so the employee table has a HomePropertyID FK field to Property. Later I needed to model the situation where, despite having only 1 "Home Property", the employee did work at or cover for multiple properties. So I created an [Employee2Property] table that has EmployeeID and PropertyID FK fields to model this many-to-many relationship. Now I find that I need to create other many-to-many relationships between employees and properties. For example, there may be multiple employees that are managers for a property, or multiple employees that perform maintenance work at a property, etc. My questions are: 1) Should I create separate many-to-many tables for each of these situations, or should I just create 1 more table like [PropertyAssociationType] that lists the types of associations an employee can have with a property, and add a FK field to [Employee2Property], such as a PropertyAssociationTypeID, that explains what the association is? I'm curious about the pros/cons, or whether there's another, better way. 2) Am I stupid and going about this all wrong? Thanks for any suggestions :)
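
    A sketch of the generalized form of option 1 (table names follow the question; column types and the sample data are assumptions):

        -- Lookup table of the ways an employee can relate to a property.
        CREATE TABLE PropertyAssociationType (
            PropertyAssociationTypeID INT PRIMARY KEY,
            Name VARCHAR(50) NOT NULL  -- e.g. 'WorksAt', 'Manages', 'Maintains'
        );

        -- One row per (employee, property, kind-of-association).
        CREATE TABLE Employee2Property (
            EmployeeID INT NOT NULL REFERENCES Employee(EmployeeID),
            PropertyID INT NOT NULL REFERENCES Property(PropertyID),
            PropertyAssociationTypeID INT NOT NULL
                REFERENCES PropertyAssociationType(PropertyAssociationTypeID),
            PRIMARY KEY (EmployeeID, PropertyID, PropertyAssociationTypeID)
        );

        -- e.g. all managers of property 7:
        SELECT e.*
        FROM   Employee e
               JOIN Employee2Property ep ON ep.EmployeeID = e.EmployeeID
               JOIN PropertyAssociationType t
                    ON t.PropertyAssociationTypeID = ep.PropertyAssociationTypeID
        WHERE  ep.PropertyID = 7 AND t.Name = 'Manages';

    The trade-off: one table and one query shape for every kind of association, at the cost of losing per-relationship constraints (you can no longer say "a property has exactly one manager" with a simple unique key).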

    Read the article

  • Semantically linking to code snippets

    - by Tim
    What's the most simple and semantic way of presenting code snippets in HTML? Possible XHTML syntax:

        <a href="code_sample.php" type="text/x-php">
          Example of widget creation
        </a>

    Example of the linked file (code_sample.php):

        // Create a new widget
        $widget = new widget();

    Pros:
    Semantically uses the title to describe the source code being referenced
    Up to the client to render the snippet
    Having very many custom server-side implementations tells me it should be standardized
    Browsers can have plug-ins for copy+paste, download, etc.
    Seems to me this is where it belongs (not in JavaScript)
    Degradation: non-compliant browsers receive a link to the associated content

    Cons:
    Not semantic enough?
    Seems wrong to replace hyperlinks with source code for presentation
    <object> might be better, but wouldn't degrade as nicely

    Background: I'm trying to create a "personal" XHTML standard for storing notes (wow, this is probably among the nerdiest things I've said). Since notes are just "scratch", it needs to be very lightweight. SO's markdown is very lightweight but not semantic enough for my needs. Plus, now I'm just curious. What's the most ideal syntax for linking to client-rendered code snippets?

    Read the article

  • Parsing data with Clojure, interval problem.

    - by Andrea Di Persio
    Hello! I'm writing a little parser in Clojure for learning purposes. Basically it's a TSV file parser whose contents need to be put in a database, but I added a complication: the same file contains multiple intervals. The file looks like this:

        ###andreadipersio 2010-03-19 16:10:00###
        USER   COMM               PID   PPID   %CPU   %MEM   TIME
        root   launchd            1     0      0.0    0.0    2:46.97
        root   DirectoryService   11    1      0.0    0.2    0:34.59
        root   notifyd            12    1      0.0    0.0    0:20.83
        root   diskarbitrationd   13    1      0.0    0.0    0:02.84
        ....
        ###andreadipersio 2010-03-19 16:20:00###
        USER   COMM               PID   PPID   %CPU   %MEM   TIME
        root   launchd            1     0      0.0    0.0    2:46.97
        root   DirectoryService   11    1      0.0    0.2    0:34.59
        root   notifyd            12    1      0.0    0.0    0:20.83
        root   diskarbitrationd   13    1      0.0    0.0    0:02.84

    I ended up with this code:

        (defn is-header?
          "Return true if a line is header"
          [line]
          (> (count (re-find #"^\#{3}" line)) 0))

        (defn extract-fields
          "Return regex matches"
          [line pattern]
          (rest (re-find pattern line)))

        (defn process-lines [lines]
          (map process-line lines))

        (defn process-line [line]
          (if (is-header? line)
            (extract-fields line header-pattern))
          (extract-fields line data-pattern))

    My idea is that in process-line the interval needs to be merged with the data, so that for every row until the next interval I get something like this:

        ('andreadipersio', '2010-03-19', '16:10:00', 'root', 'launchd', 1, 0, 0.0, 0.0, '2:46.97')

    but I can't figure out how to make this happen. I tried something like this:

        (def process-line [line]
          (if is-header? line)
          (def header-data (extract-fields line header-pattern)))
          (cons header-data (extract-fields line data-pattern)))

    But this doesn't work as expected. Any hints? Thanks!

    Read the article

  • What's a way for a client to automatically resolve the ip address of a server?

    - by zooropa
    The project I am working on is a client/server architecture. In a LAN environment, I want the clients to be able to automatically determine the server's address; I want to avoid having to manually configure each client with the IP address of the server. What is the best way to do this? Some alternatives I have thought about: (1) The server could listen for broadcast packets from the clients. The message from the client would be a request for the IP address of the server, and the server would respond with its address. (2) The machine running my project's server could also run a BIND server, and the LAN's router could be configured to use it as one of its DNS servers. I think I saw that there is a BIND library; does that mean I can build the BIND service into my server, so that BIND doesn't have to be installed separately on the server? Any other ideas? What have you done in the past? What are the pros/cons of these approaches and others that might be suggested? Thanks for your help!
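
    A minimal sketch of the first alternative in C# (the port number and message strings are made up; a real client would also set a receive timeout and retry):

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;

        class DiscoveryDemo
        {
            const int Port = 45678; // arbitrary; must match on both sides

            // Server side: answer any "WHERE-IS-SERVER?" broadcast.
            static void RunServer()
            {
                using (var udp = new UdpClient(Port))
                {
                    while (true)
                    {
                        var client = new IPEndPoint(IPAddress.Any, 0);
                        byte[] msg = udp.Receive(ref client); // 'client' now holds the sender
                        if (Encoding.ASCII.GetString(msg) == "WHERE-IS-SERVER?")
                            udp.Send(Encoding.ASCII.GetBytes("HERE"), 4, client);
                    }
                }
            }

            // Client side: broadcast the question, then read the reply.
            static IPAddress FindServer()
            {
                using (var udp = new UdpClient { EnableBroadcast = true })
                {
                    byte[] msg = Encoding.ASCII.GetBytes("WHERE-IS-SERVER?");
                    udp.Send(msg, msg.Length, new IPEndPoint(IPAddress.Broadcast, Port));

                    var server = new IPEndPoint(IPAddress.Any, 0);
                    udp.Receive(ref server); // the reply's source address IS the server
                    return server.Address;
                }
            }
        }

    Note the trick: the client doesn't even need to parse the reply body, because the source address of the response datagram is the server's address.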

    Read the article

  • Examples using Active Directory/LDAP groups for permissions \ roles in Rails App.

    - by Nick Gorbikoff
    Hello. I was wondering how other people have implemented this scenario. I have an internal Rails app (inventory management, label printing, shipping, etc.). I'm rewriting security on the system because the old way (users table, passwords, roles) got too cumbersome to maintain; it used restful_authentication and roles, implemented about 3 years ago. I have already implemented Authlogic with ruby-ldap-net to authenticate users (that was surprisingly easy, actually, compared to how I struggled with other frameworks/languages before). The next step is roles. I already have groups defined in Active Directory, so I don't want to run a separate roles system in my Rails app; I just want to reuse the Active Directory groups, since that part of the system is already maintained for other purposes (shared drives, backups, PC access, etc.). So I was wondering if others have experience implementing permissions/roles in a Rails app based on groups in Active Directory or LDAP. The role requirements are pretty complex. Here is an example: users who belong to the supervisors group in AD and to the inventory department should be able to run "advanced" tasks in inventory (adjust quantities, run reports); however, "supervisors" from other departments shouldn't be able to do this. Top management should be able to use those reports (regardless of whether they belong to inventory or not), but not middle management, unless they are in the inventory group. Admins of the system (Domain Admins) should have unrestricted access to the system, except for the HR & finances part, unless they are in HR (you don't want all system admins, except for one authorized one, to see other employees' personal info). I looked at acl9, cancan and aegis. I was wondering if there are any advantages/cons to using one versus the others for this particular kind of system access based on AD. Suggest other systems if you had good experience with them. Thank you!!!
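
    As one possible shape for this (a hedged sketch, not a recommendation of CanCan over the others): driving CanCan abilities from AD groups, assuming a user object whose ad_groups method returns the group names already fetched via LDAP at login; the group and model names are invented:

        # app/models/ability.rb -- sketch only
        class Ability
          include CanCan::Ability

          def initialize(user)
            groups = user.ad_groups  # assumed: array of AD group names

            # Supervisors get advanced inventory tasks only if they are
            # also in the inventory department group.
            if groups.include?('Supervisors') && groups.include?('Inventory')
              can :manage, InventoryItem
              can :read, Report
            end

            # Top management reads reports regardless of department.
            can :read, Report if groups.include?('TopManagement')

            # Domain admins get everything except the HR area.
            if groups.include?('Domain Admins')
              can :manage, :all
              cannot :manage, EmployeeRecord unless groups.include?('HR')
            end
          end
        end

    The appeal of this layout is that AD stays the single source of truth for membership, while the Rails side only encodes the mapping from group combinations to permissions.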

    Read the article

  • Should I return IEnumerable<T> or IQueryable<T> from my DAL?

    - by Gary '-'
    I know this could be opinion, but I'm looking for best practices. As I understand it, IQueryable implements IEnumerable, so in my DAL I currently have method signatures like the following:

        IEnumerable<Product> GetProducts();
        IEnumerable<Product> GetProductsByCategory(int categoryId);
        Product GetProduct(int productId);

    Should I be using IQueryable here? What are the pros and cons of either approach? Note that I am planning on using the Repository pattern, so I will have a class like so:

        public class ProductRepository {
            DBDataContext db = new DBDataContext(<!-- connection string -->);
            public IEnumerable<Product> GetProductsNew(int daysOld) {
                return db.GetProducts()
                         .Where(p => p.AddedDateTime > DateTime.Now.AddDays(-daysOld));
            }
        }

    Should I change my IEnumerable<T> to IQueryable<T>? What advantages/disadvantages are there to one or the other?
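
    One way to see the trade-off concretely (a sketch with a hypothetical Product type, not the asker's schema): the same operators behave very differently depending on the declared type.

        using System.Collections.Generic;
        using System.Linq;

        static class QueryableVsEnumerableDemo
        {
            // With IQueryable, operators added by the caller still compose
            // into the SQL that is finally executed: the WHERE and TOP run
            // on the database server, and only 20 rows cross the wire.
            static List<Product> Cheap(IQueryable<Product> products)
            {
                return products.Where(p => p.Price < 10).Take(20).ToList();
            }

            // With IEnumerable, the query shape was frozen inside the DAL;
            // the same operators here run as LINQ-to-Objects over rows that
            // have already been fetched into memory.
            static List<Product> CheapInMemory(IEnumerable<Product> products)
            {
                return products.Where(p => p.Price < 10).Take(20).ToList();
            }
        }

        // Hypothetical entity for the sketch.
        class Product
        {
            public string Name { get; set; }
            public decimal Price { get; set; }
        }

    That is the crux of the choice: IQueryable gives callers the power to reshape the SQL, which is also the argument against it (the repository no longer controls what hits the database).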

    Read the article

  • Doing coding in Linux through a virtual machine on Windows VS partitioning

    - by ihaveitnow
    I already have experience with setting up virtual machines, running them and other minor tasks. I'm a gamer, so I won't get rid of Windows (for now at least...), but I do want to be a great programmer and to be involved with the open-source community. I'd like to know whether it's a good idea to do my programming in Linux through a virtual machine, versus giving it a partitioned section of the HDD. I'd like to know about the performance pros and cons, and about functionality. All responses are appreciated, thanks in advance. The type of programming I intend to dive into: Android dev, web dev, desktop dev... more Android and web right now, though. So I'm looking at C#, C, C++, Java, PHP, HTML, MySQL... off the top of the dome. I do web design as well, so Dreamweaver is added as an "essential". But I'm sure I can create Dreamweaver files and upload them to the server after programming in Linux... right? Any info on IDEs in Linux for the above-mentioned languages is appreciated, but I would prefer going the coding route and understanding the essence of what's happening "under the covers". Thanks to all for reading, I appreciate it. Hope this isn't confusing :S

    Read the article

  • Why have or haven't you moved to ASP.NET MVC yet?

    - by Jason
    I find myself on the edge of trying out ASP.NET MVC, but there is still "something" holding me back. Are you still waiting to try it, and if so, why? If you finally decided to use it, what helped you get over your hesitation? I'm not worried about it from a technical point of view; I know the pros and cons of Web Forms vs MVC. My concerns are more on the practical side:

    Will Microsoft continue to support ASP.NET MVC if they don't reach some critical threshold of developers/customers using it?
    Are customers willing to try ASP.NET MVC? Have you had to convince a customer to use it? How did that go?
    Are there major sites using ASP.NET MVC (besides SO)? Could you provide links if you have them?
    Did you try ASP.NET MVC and find yourself regretting it? If so, what do you regret?
    If you have any other concerns preventing you from using ASP.NET MVC, what are they? If you had concerns but felt they were addressed and now use it, could you list them as well?

    Thanks

    Read the article

  • Which MySQL Fork/Version to Pick??

    - by Drew
    As most of you know, Sun acquired MySQL (and later Oracle acquired Sun), and during these acquisitions there was a lot of FUD in the MySQL community, which resulted in the creation of various forks. Today we have MySQL from MySQL, Percona (XtraDB) MySQL, OurDelta MySQL, MariaDB and Drizzle, to name a few. Which brings us to the source of the problem: we are in the process of upgrading our databases (hardware/software), and I would like to know which one of the forks I should go with. Each has its own set of pros/cons. We are currently using MySQL 5.0.x from MySQL/Linux on an 8-core machine. Our new hardware is a monster with 32 cores and 32GB of memory, connecting to fast NetApp storage via FC. I would like to stick with MySQL from MySQL, but I have heard horror stories about how badly MySQL 5.1 performs on many cores. I have also heard that MySQL 5.4 performs better on multi-core machines, but it's still not production-ready. In addition, I have heard a lot of good things about the Percona builds. This is what I know so far:

    MySQL 5.1 from MySQL: reliable choice, but doesn't scale well on a big machine
    Percona: scales well, good backing company; I don't have much experience with it
    MariaDB: don't know much about it, besides that it was founded by original MySQL developers (including Monty)
    OurDelta: don't know much
    Drizzle: mostly optimized for cloud computing

    I would like to know the general notion about this problem. Which build/version should I go with? How are you guys picking your builds/versions? Thanks!

    Read the article

  • Entity Framework 4 + POCO with custom classes and WCF contracts (serialization problem)

    - by eman
    Yesterday I worked on a project where I upgraded to Entity Framework 4 with the repository pattern. In one post, I read that it is necessary to turn off the custom-tool-generated classes and then write the classes (matching the entities) by hand. To do that, I used the POCO Entity Generator, then deleted the newly generated .tt file and all its subordinate .cs classes, and wrote the "entity classes" myself. I added the repository pattern and implemented it in the business layer, and then implemented a WCF layer which calls the methods from the business layer. Calling an Insert (Add) method from the presentation layer works fine. But if I call any method that should return some class, then I get an error like "the connection was interrupted by the server". I suppose there is a problem with the serialization, or am I wrong? How can this problem be solved? I'm using Visual Studio 2010, Entity Framework 4, C#. UPDATE: I have uploaded the project and hope somebody can help me! link text UPDATE 2: My questions: Why is POCO good (pros/cons)? When should POCO be used? Is POCO + the repository pattern a good choice? Should POCO classes be written by myself, or could I use auto-generated POCO classes?
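
    One common cause of exactly this symptom with EF4 POCOs is that WCF's DataContractSerializer cannot handle the lazy-loading proxy types EF generates at runtime, which shows up client-side as an aborted connection. A hedged sketch of the usual mitigation, with invented context and entity names:

        using System.Collections.Generic;
        using System.Linq;

        public class CustomerService
        {
            public IList<Customer> GetCustomers()
            {
                using (var ctx = new MyEntities()) // hypothetical ObjectContext
                {
                    // Without these, EF4 hands back dynamic proxy instances
                    // (for lazy loading/change tracking), and serializing
                    // them across WCF fails.
                    ctx.ContextOptions.ProxyCreationEnabled = false;
                    ctx.ContextOptions.LazyLoadingEnabled = false;

                    // With lazy loading off, eagerly include what the
                    // client actually needs.
                    return ctx.Customers.Include("Orders").ToList();
                }
            }
        }

    If the symptom persists, turning on WCF tracing usually reveals the underlying serialization exception that the "connection was interrupted" message hides.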

    Read the article

  • Macro and array crossing

    - by Thomas
    I am having a problem with a Lisp macro. I would like to create a macro which generates a switch-case according to an array. Here is the code to generate the switch-case:

        (defun split-elem (val)
          `(,(car val) ',(cdr val)))

        (defmacro generate-switch-case (var opts)
          `(case ,var ,(mapcar #'split-elem opts)))

    I can use it with code like this:

        (generate-switch-case onevar ((a . A) (b . B)))

    But when I try to do something like this:

        (defparameter *operators* '((+ . OPERATOR-PLUS)
                                    (- . OPERATOR-MINUS)
                                    (/ . OPERATOR-DIVIDE)
                                    (= . OPERATOR-EQUAL)
                                    (* . OPERATOR-MULT)))

        (defmacro tokenize (data ops)
          (let ((sym (string->list data)))
            (mapcan (lambda (x) (generate-switch-case x ops)) sym)))

        (tokenize data *operators*)

    I get this error:

        *** - MAPCAR: A proper list must not end with OPS.

    But I don't understand why. When I print the type of ops, I get SYMBOL; I was expecting CONS. Is that related? Also, for my tokenize function, how many times is the lambda evaluated (or the macro expanded)?

    Read the article

  • Recommendations for Continuous integration for Mercurial/Kiln + MSBuild + MSTest

    - by TDD
    We have our source code stored in Kiln/Mercurial repositories; we use MSBuild to build our product, and we have unit tests that utilize MSTest (Visual Studio unit tests). What solutions exist to implement a continuous integration machine (i.e. a build machine)? The requirements are:

    A build should be kicked off when necessary (i.e. code has changed in the repositories we care about)
    Before the actual build, the latest version of the source code must be acquired from the repository we are building from
    The build must build the entire product
    The build must build all unit tests
    The build must execute all unit tests
    A summary of success/failure must be sent out after the build has finished; this must include information about the build itself, but also about which unit tests failed and which ones succeeded
    The summary must contain which changesets were in this build that were not yet in the previous successful (!) build
    The system must be configurable so that it can build from multiple branches (/repositories)

    Ideally, this system would run on a single box (our product isn't that big) without any server components. What solutions are currently available? What are their pros/cons? From the list above, what can be done and what cannot be done? Thanks

    Read the article

  • JavaScript Module Pattern - What about using "return this"?

    - by Rob
    After doing some reading about the Module Pattern, I've seen a few ways of returning the properties which you want to be public. One of the most common ways is to declare your public properties and methods right inside of the "return" statement, apart from your private properties and methods. A similar way (the "revealing" pattern) is to simply provide references to the properties and methods which you want to be public. Lastly, a third technique I saw was to create a new object inside your module function, to which you assign your new properties before returning said object. This was an interesting idea, but requires the creation of a new object. So I was thinking, why not just use "this.propertyName" to assign your public properties and methods, and finally use "return this" at the end? This way seems much simpler to me, as you can create private properties and methods with the usual "var" or "function" syntax, or use the "this.propertyName" syntax to declare your public methods. Here's the method I'm suggesting:

        (function() {
            var privateMethod = function () {
                alert('This is a private method.');
            }

            this.publicMethod = function () {
                alert('This is a public method.');
            }

            return this;
        })();

    Are there any pros/cons to using the method above? What about the others?

    Read the article

  • Silverlight vs Flex

    - by 1kevgriff
    My company develops several types of applications. A lot of our business comes from doing multimedia-type apps, typically done in Flash. However, that side of the house is now starting to migrate towards Flex development. Most of our other development is done using .NET. I'm trying to make a push towards doing Silverlight development instead, since it would take better advantage of the .NET developers on staff. I prefer the Silverlight platform over the Flex platform for the simple fact that Silverlight is all .NET code. We have more .NET developers on staff than Flash/Flex developers, and most of our Flash/Flex developers are graphic artists (not real programmers). The only reason they push towards Flex right now is that it seems like the logical step from Flash. I've done development using both, and I honestly believe Silverlight is easier to work with. But I'm trying to convince people who are only Flash developers. So here's my question: if I'm going to go into a meeting to praise Silverlight, why would a company want to go with Silverlight instead of Flex? Other than the obvious "not everyone has Silverlight", what are the pros and cons of each?

    Read the article

  • caching previous return values of procedures in scheme

    - by Brock
    In Chapter 16 of "The Seasoned Schemer", the authors define a recursive procedure "depth", which returns 'pizza nested in n lists, e.g. (depth 3) is (((pizza))). They then improve it as "depthM", which caches its return values using set! in the lists Ns and Rs, which together form a lookup table, so you don't have to recurse all the way down if you reach a return value you've seen before. E.g. if I've already computed (depthM 8), then when I later compute (depthM 9), I just look up the return value of (depthM 8) and cons it onto null, instead of recursing all the way down to (depthM 0). But then they move the Ns and Rs inside the procedure and initialize them to null with "let". Why doesn't this completely defeat the point of caching the return values? From a bit of experimentation, it appears that the Ns and Rs are being reinitialized on every call to "depthM". Am I misunderstanding their point? I guess my question is really this: is there a way in Scheme to have lexically-scoped variables preserve their values in between calls to a procedure, like you can do in Perl 5.10 with "state" variables?
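
    On the final question: the usual Scheme idiom is to create the bindings once and close over them, i.e. define the procedure as the value produced inside the let, rather than putting the let inside the procedure. A minimal sketch (not the book's code):

        ;; The let runs once, when the define is evaluated; `calls` then
        ;; lives on inside the closure, preserved between invocations
        ;; (much like a Perl 5.10 `state` variable).
        (define counter
          (let ((calls 0))
            (lambda ()
              (set! calls (+ calls 1))
              calls)))

        (counter) ; => 1
        (counter) ; => 2

    When the let sits inside the procedure body instead, its bindings are re-created on every call, which matches the reinitialization observed in the experiment above.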

    Read the article
