Search Results

Search found 7891 results on 316 pages for 'multi layer perceptron'.

Page 274 of 316

  • What Test Environment Setup do Top Project Committers Use in the Ruby Community?

    - by viatropos
    Today I am going to get as far as I can setting up my testing environment and workflow. I'm looking for practical advice on how to set up the test environment from you guys who are very passionate about and versed in Ruby testing. By the end of the day (6am PST?) I would like to be able to: (1) type a single command to run the test suite for ANY project I find on GitHub; (2) run autotest for ANY GitHub project so I can fork and make testable contributions; (3) build gems from the ground up with autotest and Shoulda. For one reason or another, I hardly ever run tests for projects I clone from GitHub. The major reason is that unless they're using RSpec and have a Rake task to run the tests, I don't see the common pattern behind it all. I have built 3 or 4 gems writing tests with RSpec, and while I find the DSL fun, it's less than ideal because it just adds another layer/language of methods I have to learn and remember. So I'm going with Shoulda. But this isn't a question about which testing framework to choose. So the questions are: What is your (the SO reader and GitHub project committer) test environment setup using autotest, so that whenever you git clone a gem you can run the tests and autotest-develop them if desired? What are the guys who are writing the Paperclip tests and Authlogic tests doing? What is their setup? Thanks for the insight. Looking for answers that will make me a more effective tester.
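
    For reference, the single command most gems standardize on is a Rake test task; a minimal sketch of one, assuming a conventional test/ directory and *_test.rb naming (both are illustrative assumptions, not taken from any particular gem):

        # Rakefile
        require 'rake/testtask'

        # "rake test" (or plain "rake", via the default task) runs every test file under test/.
        Rake::TestTask.new(:test) do |t|
          t.libs << 'lib' << 'test'
          t.pattern = 'test/**/*_test.rb'
          t.verbose = true
        end

        task :default => :test

    With a Rakefile like this in place, rake runs the suite, and autotest keys off the same conventional test/ layout to decide what to watch.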

    Read the article

  • Ajax and JSF 1.1 using hidden iframe with "proxy forms", what do you think about this development strategy?

    - by Steel Plume
    Hi, I am currently on the 1.1 spec, so I am trying to simplify what is too complex for me :p, managing backing beans with conflicting navigation rules, external params breaking rules and so on. For example, when I need a backing bean used by other "views", I simply call it using FacesContext inside other backing beans, but often it's too wired up to JSF navigation/initialization rules to be really usable, and of course the simpler the FacesContext stays, the more useful it becomes. So with only a bit of cross-browser JavaScript (simply a form copy and a read-write on a "proxy" form), I create a sort of proxy form inside the main user page (totally disassociated from JSF navigation rules, but built with the JSF taglibs). Ajax gives me flexibility in the user interaction, but data is always managed by JSF. Practically, I delegate all "fictitious" user actions to a hidden iframe which builds up all the needed forms according to JSF rules; then a script simply clones its form output and puts it into the user view level (CSS for showing/hiding the real command buttons and making it pretty). The user plays around, and when he clicks submit, a script copies all "proxied" form values into the real JSF form inside the iframe, which invokes the real submit of the form; what it returns is obviously up to you. Now JSF is really a pleasure :-p My real interest is to know what your alternative strategies are for using pure Ajax with JSF 1.1 without adopting a middle layer like Ajax4jsf and others, all good choices but more "plugins" than specs.
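
    A rough sketch of that copy-and-submit step, assuming the visible proxy form, the hidden iframe and the real JSF form use the ids and name given here purely for illustration:

        // Copy every named field from the visible proxy form into the real JSF form
        // inside the hidden iframe, then trigger the real submit.
        function submitThroughIframe() {
            var proxy = document.getElementById('proxyForm');
            var iframeDoc = window.frames['hiddenJsfFrame'].document;
            var real = iframeDoc.getElementById('realForm');

            for (var i = 0; i < proxy.elements.length; i++) {
                var field = proxy.elements[i];
                if (!field.name) { continue; }
                var target = real.elements[field.name];
                if (target) { target.value = field.value; } // mirror the user's input
            }
            real.submit(); // JSF navigation happens inside the hidden iframe
        }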

    Read the article

  • How to accommodate for the different screen resolution of iPhone 4?

    - by mystify
    This is a programming question! Read on before you vote to close! According to Apple, iPhone 4 has a new screen resolution: 3.5-inch (diagonal) widescreen Multi-Touch display, 960-by-640-pixel resolution at 326 ppi. This little detail affects our apps in a heavy way. Most of the demo apps on the net have one thing in common: they position views in the belief that the screen has a fixed size of 320 x 480 pixels. So what most, if not all, developers do is: they design everything in such a way that a touchable area is, for example, 50 x 50 pixels big, just enough to tap it. Things have been positioned relative to the upper left to reach a specific position on screen, let's say the center, or somewhere at the bottom. Edit: It seems Apple has integrated a switch that lets you tell whether an app is high-res or not. Nice. When we develop high-resolution apps, they probably won't work on older devices. And if they did, they would suffer a lot from four times the size of any image, having to scale them down in memory. This is community wiki. Just add anything that you think is relevant to this huge problem (a constant screen resolution was one of the main reasons why I didn't go for Android!!).

    Read the article

  • How can I specify processor affinity?

    - by BenAlabaster
    I have an application that's having some trouble handling multi-processor systems. It's not an app that I have a particular affection for modifying, and I would like to avoid it if possible. However, I'm not above modifying the code if I have to. The application is written in VBA (hence my inclination to avoid touching it). We've noticed that the application seems to run pretty smoothly if we set the processor affinity to a single processor using Task Manager, only manifesting instability when processor affinity isn't set. I know that I can specify the processor affinity of a task using .NET, and as such there lies a possibility of me writing a shell application that could be used to run legacy applications with a specified processor affinity. Does anyone have any experience with this, and can you throw out some ideas as to the headaches I'm likely to run into with this approach? The other question is: is it in fact possible to modify the core VBA product to handle its own processor affinity? I've never had to handle this with any of my applications natively, so this (at this point in time) is completely outside my realm of expertise. Thanks in advance.
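
    A minimal sketch of that .NET shell-launcher idea, assuming the legacy application's path arrives as the first command-line argument and that pinning it to CPU 0 is acceptable (both are illustrative assumptions):

        using System;
        using System.Diagnostics;

        class AffinityLauncher
        {
            static void Main(string[] args)
            {
                // Launch the legacy application...
                Process p = Process.Start(args[0]);

                // ...then restrict it to CPU 0. ProcessorAffinity is a bitmask:
                // 0x1 = CPU 0 only, 0x3 = CPUs 0 and 1, and so on.
                p.ProcessorAffinity = (IntPtr)0x1;

                p.WaitForExit();
            }
        }

    One small caveat with this shape: the affinity is applied just after startup, so the first instructions of the legacy app still run wherever the scheduler happens to place them.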

    Read the article

  • Web-Application development startup advice

    - by rpr
    Dear programmers of Stack Overflow, recently two of my friends from college and I started a software company. We are developing web applications using GWT, Spring, Hibernate and other helper frameworks. In a couple of months we managed to set up a stable back end and produced some demo programs for CRM solutions. Our area of interest is CRM, where we can combine the flexibility of our back end with slick-looking GWT-based GUIs. Unfortunately we live in a third-world country (well, kind of two and a half :) and no one here gives our work enough credit, or really cares about the advantages of web apps. We are stuck at the moment because our current clients do not want to pay that much money for just "putting their local app on the web". Since we cannot find satisfying work here, we have decided to work online/internationally. I was wondering if you know of good freelancing sites where we can throw ourselves into the market. Also, a question from frustration, to those who are in the field or knowledgeable about/interested in web-based CRM: how much would you ask/pay for, let's say, a web app that keeps track of all the patients of a multi-branch clinic while also allowing the patients to view their own records? Including tiered authorization, logging, etc. Many thanks in advance for your responses!

    Read the article

  • Java: fastest way to do random reads on huge disk file(s)

    - by cocotwo
    I've got a moderately big set of data, about 800 MB or so, that is basically a big precomputed table that I need in order to speed up some computation by several orders of magnitude (creating that file took several multicore computers days to produce, using an optimized and multi-threaded algorithm; I really do need that file). Now that it has been computed once, that 800 MB of data is read-only. I cannot hold it in memory. As of now it is one huge 800 MB file, but splitting it into smaller files isn't a problem if that can help. I need to read about 32 bits of data here and there in that file, many times. I don't know beforehand where I'll need to read the data: the reads are uniformly distributed. What would be the fastest way in Java to do my random reads in such a file or files? Ideally I should be doing these reads from several unrelated threads (but I could queue the reads in a single thread if needed). Is Java NIO the way to go? I'm not familiar with 'memory mapped file': I think I don't want to map the 800 MB in memory. All I want is the fastest random reads I can get to access these 800 MB of disk-based data. By the way, in case people wonder, this is not at all the same as the question I asked not long ago: http://stackoverflow.com/questions/2346722/java-fast-disk-based-hash-set
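
    A minimal sketch of thread-safe positioned reads with java.nio, under the assumption that the table is a single file of 4-byte entries (the class name, path handling and record size are illustrative):

        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.nio.ByteBuffer;
        import java.nio.channels.FileChannel;

        public class TableReader {
            private final FileChannel channel;

            public TableReader(String path) throws IOException {
                channel = new RandomAccessFile(path, "r").getChannel();
            }

            // FileChannel.read(ByteBuffer, long) takes an absolute position and does not
            // move the channel's file pointer, so several threads can call this concurrently.
            public int readIntAt(long offset) throws IOException {
                ByteBuffer buf = ByteBuffer.allocate(4);
                while (buf.hasRemaining()) {
                    if (channel.read(buf, offset + buf.position()) < 0) {
                        throw new IOException("offset past end of file: " + offset);
                    }
                }
                buf.flip();
                return buf.getInt();
            }
        }

    Mapping the file with MappedByteBuffer is the other obvious route; on a 64-bit JVM the mapping reserves address space rather than physical RAM, so the operating system is still free to keep only the hot pages resident.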

    Read the article

  • When is a try catch not a try catch?

    - by Dearmash
    I have a fun issue where, during application shutdown, try/catch blocks are seemingly being ignored in the stack. I don't have a working test project (yet, due to a deadline, otherwise I'd totally try to repro this), but consider the following code snippet:

        public static string RunAndPossiblyThrow(int index, bool doThrow)
        {
            try
            {
                return Run(index);
            }
            catch (ApplicationException e)
            {
                if (doThrow)
                    throw;
            }
            return "";
        }

        public static string Run(int index)
        {
            if (_store.Contains(index))
                return _store[index];
            throw new ApplicationException("index not found");
        }

        public static string RunAndIgnoreThrow(int index)
        {
            try
            {
                return Run(index);
            }
            catch (ApplicationException e)
            {
            }
            return "";
        }

    During runtime this pattern works famously. We get legacy support for code that relies on exceptions for program control (bad) and we get to move forward and slowly remove exceptions used for program control. However, when shutting down our UI, we see an exception thrown from "Run" even though "doThrow" is false for ALL current uses of "RunAndPossiblyThrow". I've even gone so far as to verify this by modifying the code to look like "RunAndIgnoreThrow", and I still get a crash post UI shutdown. Mr. Eric Lippert, I read your blog daily; I'd sure love to hear that it's some known bug and I'm not going crazy. EDIT: This is multi-threaded, and I've verified all objects are not modified while being accessed.

    Read the article

  • PDO closeCursor Error

    - by Metropolis
    Hey everyone, I currently have a database layer that I wrote myself, and I have been using it now for over a year without any problems. The database class uses PDO, and there are two different databases that I regularly connect to (MySQL and MS SQL). The MS SQL database is used for Accpac accounting storage, and the MySQL database is used for everything else. In one of the MySQL databases I have all of the DSNs listed, which I use to create the string I need to connect to the MS SQL databases. I have a new program I am trying to write in which I take employee data from one of the MySQL databases and use the employee ID to get the employee's information from the MS SQL database. For some reason, whenever I run the program it gets through about 1200 records (out of 11k) and then crashes with an error like the following:

        Fatal error: Call to a member function closeCursor() on a non-object

    I have tried moving the loops around in many different ways, and I have tried manually closing the connections by setting the database handle to null. Nothing I do seems to work. Thanks for any help! Metropolis
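
    That error means closeCursor() is being called on something that is not a PDOStatement, which usually happens when prepare() or query() has returned false. A hedged sketch of the kind of guard that makes the real failure visible ($dbh, the SQL and $employeeId are placeholders, not from the original code):

        <?php
        // Make failed prepare()/query() calls throw instead of returning false,
        // so the underlying driver error surfaces instead of a "non-object" call later.
        $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        $stmt = $dbh->prepare('SELECT last_name FROM employees WHERE employee_id = ?');
        if ($stmt === false) {
            // Without ERRMODE_EXCEPTION this branch is the one that would otherwise
            // lead to the fatal closeCursor() error further down.
            die(print_r($dbh->errorInfo(), true));
        }

        $stmt->execute(array($employeeId));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        $stmt->closeCursor();   // safe: $stmt is known to be a PDOStatement here
        ?>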

    Read the article

  • Optimization in Common Declaration

    - by Pratik
    It's a 3-tier ASP.NET Website Project. In the data layer there is a class "CommonDeclaration" in which a lot of common things are defined, something this way:

        public class CommonDeclartion
        {
            #region Common Messages
            public const string RECORD_INSERT_MSG = "Record Inserted Successfully ";
            public const string RECORD_UPDATE_MSG = "Record Updated Successfully";
            public const string RECORD_DELETE_MSG = "Record Deleted Successfully";
            public const string ERROR_MSG = "Error Ocuured while Perfoming This Action.";
            public const string UserID_Incorrect = "Please Enter The Correct User ID.";
            public const string RECORD_ALREADY_EXIT = "Record Already Exit";
            public const string NO_RECORD = "No Record found.";
            #endregion
        }

    Can this be made more optimal in terms of: 1. performance, 2. security (if any), 3. code readability or reusability? I thought of using an enum but can't figure that out:

        enum CommonMessages
        {
            RECORD_INSERT_MSG "Record Inserted Successfully.",
            RECORD_UPDATE_MSG "Record Updated Successfully.",
            RECORD_DELETE_MSG "Record Deleted Successfully.",
            ERROR_MSG "Error Ocuured while Perfoming This Action.",
            UserID_Incorrect "Please Enter The Correct User ID.",
            RECORD_ALREADY_EXIT "Record Already Exit.",
            NO_RECORD "No Record found.",
        }

    Or should I keep them in some collection like a Dictionary/NameValueCollection, or keep them in XML as key/value pairs and retrieve them from there? What would be the better way, keeping in mind: 1. performance, 2. security (if any), 3. code readability or reusability?
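
    One of the alternatives mentioned above, sketched out: a read-only dictionary keyed by an enum, which keeps compile-time safety while letting all the messages live in a single collection (the type and member names are illustrative, not from the original project):

        using System.Collections.Generic;

        public enum MessageKey
        {
            RecordInserted,
            RecordUpdated,
            RecordDeleted,
            Error,
            NoRecord
        }

        public static class CommonMessages
        {
            // Built once per app domain; lookups are O(1) and the table is never mutated.
            private static readonly IDictionary<MessageKey, string> Messages =
                new Dictionary<MessageKey, string>
                {
                    { MessageKey.RecordInserted, "Record inserted successfully." },
                    { MessageKey.RecordUpdated,  "Record updated successfully." },
                    { MessageKey.RecordDeleted,  "Record deleted successfully." },
                    { MessageKey.Error,          "Error occurred while performing this action." },
                    { MessageKey.NoRecord,       "No record found." }
                };

            public static string For(MessageKey key)
            {
                return Messages[key];
            }
        }

    If the text ever needs to be localized, the same shape maps naturally onto .resx resources instead of a hand-built dictionary.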

    Read the article

  • Hard-coded content in SVG Editor is not appearing?

    - by marknt15
    Hi, I'm using this: http://code.google.com/p/svg-edit/ I put hard-coded markup inside the svgcanvas div:

        <div id="svgcanvas">
          <svg xmlns:xlink="http://www.w3.org/1999/xlink" xmlns="http://www.w3.org/2000/svg" id="svgroot" height="480" width="640">
            <svg xmlns:xlink="http://www.w3.org/1999/xlink" xmlns="http://www.w3.org/2000/svg" viewbox="0 0 640 480" id="svgcontent">
              <g style="pointer-events: all;">
                <title style="pointer-events: inherit;">Layer 1</title>
                <ellipse ry="69" rx="90" style="pointer-events: inherit;" stroke-width="5" stroke="#000000" fill="#FF0000" id="svg_1" cy="156.5" cx="286"></ellipse>
              </g>
            </svg>
            <g id="selectorParentGroup">
              <rect style="pointer-events: none;" display="none" stroke-width="0.5" stroke="#22C" fill-opacity="0.15" fill="#22C" id="selectorRubberBand"></rect>
            </g>
          </svg>
        </div>

    My expected output is that it should draw the example SVG in there, but it didn't. Even if I reload the page it still does not appear. How can I make it appear? Thanks

    Read the article

  • How can I keep the logic that translates a ViewModel's values into a where clause for a LINQ query out of my controller?

    - by Mr. Manager
    This same problem keeps cropping up. I have a ViewModel that doesn't have any persistent backing; it is just a ViewModel used to generate a search input form. I want to build a large where clause from the values the user entered. If the action accepts a SearchViewModel as a parameter, how do I do this without passing my ViewModel to my service layer? Service shouldn't know about ViewModels, right? Oh, and if I serialize it, then it would be a big string and the key/values would be strongly typed. SearchViewModel (this is just a snippet):

        [Display(Name = "Address")]
        public string AddressKeywords { get; set; }

        /// <summary>
        /// Gets or sets the census.
        /// </summary>
        public string Census { get; set; }

        /// <summary>
        /// Gets or sets the lot block sub.
        /// </summary>
        public string LotBlockSub { get; set; }

        /// <summary>
        /// Gets or sets the owner keywords.
        /// </summary>
        [Display(Name = "Owner")]
        public string OwnerKeywords { get; set; }

    In my controller action I was thinking of something like this, but I would think all this logic doesn't belong in my controller:

        ActionResult GetSearchResults(SearchViewModel model)
        {
            var query = service.GetAllParcels();

            if (model.Census != null)
            {
                query = query.Where(x => x.Census == model.Census);
            }
            if (model.OwnerKeywords != null)
            {
                query = query.Where(x => x.Owners == model.OwnerKeywords);
            }

            return View(query.ToList());
        }
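
    One common way to keep that chain of if blocks out of the controller is a small conditional-filter extension on IQueryable, sketched here under the assumption that the service keeps exposing an IQueryable of parcels (the extension and its name are illustrative; the property names come from the snippet above):

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        public static class QueryableExtensions
        {
            // Applies the predicate only when the condition holds, so optional search
            // fields simply fall away instead of piling up if statements in the controller.
            public static IQueryable<T> WhereIf<T>(
                this IQueryable<T> source,
                bool condition,
                Expression<Func<T, bool>> predicate)
            {
                return condition ? source.Where(predicate) : source;
            }
        }

    The controller (or, better, a query object in the service layer fed plain values rather than the ViewModel itself) can then compose the search like this:

        var results = service.GetAllParcels()
            .WhereIf(!string.IsNullOrEmpty(model.Census), x => x.Census == model.Census)
            .WhereIf(!string.IsNullOrEmpty(model.OwnerKeywords), x => x.Owners == model.OwnerKeywords)
            .ToList();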

    Read the article

  • (HTML) PNG on top of another PNG - possible to eliminate full transparency?

    - by MHTri
    I'd like to put a logo PNG on top of another coloured PNG. They both have transparent backgrounds. When I try this, the images blend together. Curiously, in Photoshop the logo retains its opaqueness: I put colours on the layers underneath it, another image, etc., and the logo is still opaque. I'd like to do it this way so I can rotate the background images. How do I fix this? [edit] I've cooked up an example image: http://i.imgur.com/XtoGn.png The left is what I want to happen; the right is what happens in all browsers (I know the background isn't transparent, but bear with me: they're both transparent PNGs, with the background having a gradient layer mask). I've put the images like this:

        <div>
          <img id="backgroundImg" style="position: absolute; top: 0;" src="/Images/background.png" />
          <img id="logoImg" src="/Images/logo.png" />
        </div>

    I'm not entirely sure what blending mode I'm using in PS.

    Read the article

  • Multiple "ObjectChangeTracker" instances getting created; can it be avoided?

    - by user555937
    Hi, we are working on a POC where we have the following architecture (MVVM): WPF (client) + WCF + Model (data access) + ADO.NET Entity Framework 4.0 (with SQL Server 2008 R2 as the DB). All are different projects. In the data access layer we have created different entity models (edmx) based on the functionality: the tables under a particular flow are grouped into their own entity model. We are using self-tracking entities to communicate back and forth with the WPF client through the WCF service. With a single model everything works fine, but when we created multiple models a few issues started coming up. The multiple models contain a few duplicate tables/entities. The two problems are:

    1) When we try to access entities from different models, multiple "ObjectChangeTracker" objects get created, e.g.:

        CompanyModel (edmx) - Company (Entity)  - ObjectChangeTracker,  ObjectState
        ProductModel (edmx) - Customer (Entity) - ObjectChangeTracker1, ObjectState1
        OrderModel (edmx)   - Order (Entity)    - ObjectChangeTracker2, ObjectState2

    Is there any way to avoid this?

    2) There are a few tables which are shared across the models; e.g. Company (Entity) is used in all the models above. At compile time this does not throw any error, but at run time it fails with "Schema specified is not valid. Errors: The mapping of CLR type to EDM type is ambiguous because multiple CLR types match the EDM type "Company"". To resolve this, we renamed the entities with a prefix to make them unique. Is there any other way we can resolve this without changing the names of the entities in the same assembly?

    Thanks in advance, and I'd appreciate it if anyone has an approach for these issues. Thanks, Kiran

    Read the article

  • C/C++ I18N mbstowcs question

    - by bogertron
    I am working on internationalizing the input for a C/C++ application. I have currently hit an issue with converting from a multi-byte string to a wide-character string. The code needs to be cross-platform compatible, so I am using mbstowcs and wcstombs as much as possible. I am currently working on a Win32 machine and I have set the locale to a non-English locale (Japanese). When I attempt to convert a multibyte character string, I seem to be having some conversion issues. Here is an example of the code:

        #include <windows.h>
        #include <locale.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char** argv)
        {
            wchar_t *wcsVal = NULL;
            char *mbsVal = NULL;

            /* Get the current code page, in my case 932; runs only on Windows */
            TCHAR szCodePage[10];
            int cch = GetLocaleInfo(GetSystemDefaultLCID(),
                                    LOCALE_IDEFAULTANSICODEPAGE,
                                    szCodePage, sizeof(szCodePage));

            /* verify locale is set */
            if (setlocale(LC_CTYPE, "") == 0) {
                fprintf(stderr, "Failed to set locale\n");
                return 1;
            }

            mbsVal = argv[1];

            /* validate multibyte string and convert to wide character */
            int size = mbstowcs(NULL, mbsVal, 0);
            if (size == -1) {
                printf("Invalid multibyte\n");
                return 1;
            }

            wcsVal = (wchar_t*) malloc(sizeof(wchar_t) * (size + 1));
            if (wcsVal == NULL) {
                printf("memory issue\n");
                return 1;
            }

            mbstowcs(wcsVal, mbsVal, size + 1);
            wprintf(L"%ls \n", wcsVal);
            return 0;
        }

    At the end of execution, the wide-character string does not contain the converted data. I believe that there is an issue with the code page settings, because when I use MultiByteToWideChar with the current code page passed in, e.g.

        MultiByteToWideChar(CP_ACP, 0, mbsVal, -1, wcsVal, size + 1);

    in place of the mbstowcs calls, the conversion succeeds. My question is: how do I use the generic mbstowcs call instead of the MultiByteToWideChar call?

    Read the article

  • How to set up a load/stress test for a web site?

    - by Ryan
    I've been tasked with stress/load testing our company web site out of the blue, and know nothing about doing so. Every search I make on Google for "how to load test a web site" just comes back with various companies and software that will physically do the load testing. For now I'm more interested in how to actually go about setting up a load test: what I should take into account prior to load testing, which pages within my site I should be testing load against, and what things I'm going to want to monitor when doing the test. Our web site is on a multi-tier system complete with a separate database server (IIS 7 web server, SQL Server 2000 DB). I imagine I'd want to monitor both the web server and the database server during testing; however, when setting up scenarios to load test the web server, I'd have to use pages that query the database to see any load on the database server at the same time. Are web servers and database servers generally tested simultaneously, or are they done as separate tests? As you can see, I'm pretty clueless as to the whole operation, so any insight as to how to go about this would be very helpful. FYI, I have been tinkering with Pylot and was able to create and run a scenario against our site, but I'm not sure what I should be looking for in the results or whether the scenario I created is even worth measuring for our site. Thanks in advance.

    Read the article

  • Cross-Origin Resource Sharing (CORS) - am I missing something here?

    - by David Semeria
    I was reading about CORS (https://developer.mozilla.org/en/HTTP_access_control) and I think the implementation is both simple and effective. However, unless I'm missing something, I think there's a big part missing from the spec. As I understand it, it's the foreign site that decides, based on the origin of the request (and optionally including credentials), whether to allow access to its resources. This is fine. But what if malicious code on the page wants to POST a user's sensitive information to a foreign site? The foreign site is obviously going to authenticate the request. Hence, again if I'm not missing something, CORS actually makes it easier to steal sensitive information. I think it would have made much more sense if the original site could also supply an immutable list of servers its page is allowed to access. So the expanded sequence would be:

    1) Supply a page with a list of acceptable CORS servers (abc.com, xyz.com, etc.)
    2) The page wants to make an XHR request to abc.com; the browser allows this because it's in the allowed list, and authentication proceeds as normal.
    3) The page wants to make an XHR request to malicious.com; the request is rejected locally (i.e. by the browser) because the server is not in the list.

    I know that malicious code could still use JSONP to do its dirty work, but I would have thought that a complete implementation of CORS would imply closing the script-tag multi-site loophole. I also checked out the official CORS spec (http://www.w3.org/TR/cors) and could not find any mention of this issue.
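
    For reference, the decision the foreign site makes travels in ordinary headers; a minimal sketch of the exchange described above (host names are placeholders):

        # The browser adds the Origin header to the cross-site XHR automatically:
        GET /data HTTP/1.1
        Host: abc.com
        Origin: http://example.org

        # abc.com then either allows that origin in its response...
        HTTP/1.1 200 OK
        Access-Control-Allow-Origin: http://example.org

        # ...or omits the header, in which case the browser withholds the response
        # from the page. Note that for simple requests the request itself may still
        # have reached the server, which is exactly the asymmetry discussed above.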

    Read the article

  • Application that depends heavily on stored procedures

    - by PieterG
    We currently have an application that depends largely on stored procedures, with heavy use of temp tables. It's an extremely large application. Facing this situation, I would like to use Entity Framework or LINQ to SQL for a rewrite. I might consider using Fluent NHibernate or SubSonic, as I've used them quite extensively in the past. I've had problems with LINQ to SQL generating the return types for the stored procedures because of the usage of the temp tables, and I think it's cumbersome to go and change all the stored procedures from temp tables to in-memory tables. Considering the two choices I want to decide between, which one is the better route to go, and why? If my choices are extremely idiotic, please provide alternatives. Edit: The reason for the question and the change is that the data access layer is non-existent and was built 10 years ago. We currently still run into a lot of issues with it. I don't want to divulge too much, but if you saw it, your eyes would start bleeding :)

    Read the article

  • How to optimize paging for large in memory database

    - by snakefoot
    I have an application where the entire database is implemented in memory using an STL map for each table in the database. Each item in the STL map is a complex object with references to other items in the other STL maps. The application works with a large amount of data, so it uses more than 500 MB of RAM. Clients are able to contact the application and get a filtered version of the entire database. This is done by running through the entire database and finding items relevant to the client. When the application has been running for an hour or so, Windows 2003 SP2 starts to page out parts of the application's RAM (even though there is 16 GB of RAM on the machine). After the application has been partly paged out, a client logon takes a long time (10 minutes) because it now generates a page fault for each pointer lookup in the STL maps. I can see it is possible to tell Windows to lock memory in RAM, but this is generally only recommended for device drivers, and only for "small" amounts of memory. I guess a poor man's solution could be to loop through the entire memory database, and thus tell Windows we are still interested in keeping the data model in RAM. I guess another poor man's solution could be to disable the page file completely on Windows. I guess the expensive solution would be a SQL database, and then a rewrite of the entire application to use a database layer; hopefully the database system will have implemented means for fast access. Are there other, more elegant solutions?

    Read the article

  • Where are possible locations of queueing/buffering delays in Linux multicast?

    - by Matt
    We make heavy use of multicast messaging across many Linux servers on a LAN. We are seeing a lot of delays. We basically send an enormous number of small packets. We are more concerned with latency than throughput. The machines are all modern, multi-core (at least four cores, generally eight, 16 if you count hyperthreading) machines, always with a load of 2.0 or less, usually with a load less than 1.0. The networking hardware is also under 50% capacity. The delays we see look like queueing delays: the packets will quickly start increasing in latency until it looks like they jam up, then return back to normal. The messaging structure is basically this: in the "sending thread", pull messages from a queue, add a timestamp (using gettimeofday()), then call send(). The receiving program receives the message, timestamps the receive time, and pushes it into a queue. In a separate thread, the queue is processed, analyzing the difference between the sending and receiving timestamps. (Note that our internal queues are not part of the problem, since the timestamps are added outside of our internal queuing.) We don't really know where to start looking for an answer to this problem. We're not familiar with Linux internals. Our suspicion is that the kernel is queuing or buffering the packets, either on the send side or the receive side (or both), but we don't know how to track this down and trace it. For what it's worth, we're using CentOS 4.x (RHEL kernel 2.6.9).
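
    One concrete place to start is the per-socket send and receive buffers, since that is where the kernel queues datagrams on both sides; a hedged sketch of inspecting and enlarging them on an existing UDP/multicast socket (the 4 MB figure is arbitrary, and the kernel clamps requests to net.core.rmem_max / net.core.wmem_max):

        #include <stdio.h>
        #include <sys/socket.h>

        /* Print the current buffer size and try to raise it; pass SO_RCVBUF on the
           receiving sockets and SO_SNDBUF on the sending ones. */
        static void tune_socket_buffer(int fd, int which)
        {
            int size = 0;
            socklen_t len = sizeof(size);

            getsockopt(fd, SOL_SOCKET, which, &size, &len);
            printf("current buffer: %d bytes\n", size);

            size = 4 * 1024 * 1024;  /* request 4 MB; silently clamped by rmem_max/wmem_max */
            setsockopt(fd, SOL_SOCKET, which, &size, sizeof(size));

            len = sizeof(size);
            getsockopt(fd, SOL_SOCKET, which, &size, &len);
            printf("granted buffer: %d bytes\n", size);
        }

    On the receive side, a steadily growing "packet receive errors" counter in the output of netstat -su is one sign that this particular queue is the one filling up.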

    Read the article

  • What /else/ causes this?

    - by Mordachai
        MFC Toolbox Library.lib(SimpleFileIO.obj) : error LNK2005: _wcsnlen already defined in libcmtd.lib(wcslen_s.obj)
        fatal error LNK1169: one or more multiply defined symbols found

    This is driving me nuts. Normally, one would get this if the various projects that are part of the solution do not agree on which CRT to use (single-threaded, multi-threaded, release or debug). However, I have been over this thing about 500 times now, and they all agree. Background: this is a VS 2010 project just converted from VS 2008. MFC Toolbox Library.lib is set to compile as a static library, using /MTd, as is the target .exe I am trying to compile in this solution. Further, the solution this is being converted from (VS 2008) already compiles and links properly!!! So it's not that there is a disagreement between the two .vcprojs, or at least there wasn't before the conversion. Furthermore, the MFC Toolbox Library is used by about 25 other projects in another solution, and in that solution (Master Build English) it compiles and links against those other projects without complaint in both debug and release targets. I have just spent the last hour going over every single project property for this target project (Cimex Header Viewer) vs. several different target .exe projects in the Master Build English solution, and I cannot find a difference. They appear to be identical, except that they have different names. I've tried doing a clean and build all. I'm simply out of ideas. Does anyone have a thought on what else I might investigate??? I think I'm ready to start chewing glass. :(

    Read the article

  • What are the drawbacks of this Classing format?

    - by Keysle
    This is a 3-layer example of my classing format:

        function __(_) { return _.constructor; }

        // class
        var _ = ( CLASS = function () {
            this.variable = 0;
            this.sub = new CLASS.SUBCLASS();
        }).prototype;
        _.func = function () {
            alert('lvl' + this.variable);
            this.sub.func();
        };
        _.divePeak = function () {
            alert('lvl' + this.variable);
            this.sub.variable += 5;
        };

        // sub class
        _ = ( __(_).SUBCLASS = function () {
            this.variable = 1;
            this.sub = new CLASS.SUBCLASS.DEEPCLASS();
        }).prototype;
        _.func = function () {
            alert('lvl' + this.variable);
            this.sub.func();
        };

        // deep class
        _ = ( __(_).DEEPCLASS = function () {
            this.variable = 2;
        }).prototype;
        _.func = function () {
            alert('lvl' + this.variable);
        };

    Before you blow a gasket, let me explain myself. The purpose behind the underscores is to speed up the time needed to specify functions for a class and also to specify subclasses of a class. To me it's easier to read. I KNOW this does interfere with underscore.js if you intend to use it in your classes. I'm sure _.js can easily be switched over to another $ymbol though... oh wait. But I digress. Why have classes within a class? Because solar.system() and social.system() mean two totally different things, but it's convenient to use the same name. Why use underscores to manage the definition of the class? Because "Solar.System.prototype" took me about 2 seconds to type out and 2 typos to correct. It also keeps all function names for all classes in the same column of text, which is nice for legibility. All I'm doing is presenting my reasoning behind this method and why I came up with it. I'm 3 days into learning OO JS and I am very willing to accept that I might have messed up.

    Read the article

  • How do you make use of the provider independence in the Entity Framework?

    - by Anders Svensson
    I'm trying to learn more about data access and the Entity Framework. My goal is to have a "provider independent" data access layer (to be able to switch easily, e.g., from SQL Server to MySQL or vice versa), and since EF is supposed to be provider independent it seems like a good way to go. But how do you use this provider independence? I mean, I was expecting to be able to sort of "program to an interface", and then just be able to switch database provider. But as far as I can tell I'm only getting a concrete type to program against, an "Entities" class, which in my case is:

        UserDBEntities _context = new UserDBEntities();

    What I would have expected, to be able to switch providers easily, was an interface, something like:

        IEntities _context = new UserDBEntities();

    Sort of like I can do with datasets... But maybe that isn't how it works at all with EF? Or do you just switch the provider in the connection string, and the model stays the same?? Please remember that I'm a complete newbie at this EF, and rather inexperienced with databases in general, so I would really appreciate it if you could be as clear as possible :-)

    Read the article

  • java web templates across multiple WAR files

    - by Casey
    I have a multi-WAR web application that was designed badly. There is a single WAR that is responsible for handling some authorization against a database and defines a standard web page using a JSP taglib. The main WAR basically checks the privileges of the user and then, based on that, displays links to the context paths of the other deployed WARs. Each of the other deployed WARs includes this custom taglib. I am working on redesigning this application, and one of the nice things that I want to retain is that we have other project teams that have developed WAR modules that "plug into" our current system to take advantage of the other things we have to offer. I am not entirely sure how to handle the page templates, though. I need a templating system that would be easy enough to use across multiple WARs (I was thinking of JSP fragments??). I really only need to define a consistent header and main navigation section; whatever else is displayed on the page is up to the individual web project. Any suggestions? I hope that this is clear; if not, I can elaborate more.

    Read the article

  • Why do I need to close fds when reading and writing to the pipe?

    - by valentimsousa
    However, what if one of my processes needs to continuously write to the pipe while the other needs to read? This example seems to work only for one write and one read; I need multiple reads and writes.

        void executarComandoURJTAG(int newSock) {
            int input[2], output[2], estado, d;
            pid_t pid;
            char buffer[256];
            char linha[1024];

            pipe(input);
            pipe(output);

            pid = fork();
            if (pid == 0) { // child
                close(0);
                close(1);
                close(2);
                dup2(input[0], 0);
                dup2(output[1], 1);
                dup2(output[1], 2);
                close(input[1]);
                close(output[0]);
                execlp("jtag", "jtag", NULL);
            } else { // parent
                close(input[0]);
                close(output[1]);
                do {
                    read(newSock, linha, 1024);

                    /* Write the buffer to the pipe */
                    write(input[1], linha, strlen(linha));
                    close(input[1]);

                    while ((d = read(output[0], buffer, 255))) {
                        //buffer[d] = '\0';
                        write(newSock, buffer, strlen(buffer));
                        puts(buffer);
                    }
                    write(newSock, "END", 4);
                } while (strcmp(linha, "quit") != 0);
            }
        }

    Read the article

  • How to avoid concurrent execution of a time-consuming task without blocking?

    - by Diego V
    I want to efficiently avoid concurrent execution of a time-consuming task in a heavily multi-threaded environment, without making threads wait for a lock when another thread is already running the task. Instead, in that scenario, I want them to gracefully fail (i.e. skip their attempt to execute the task) as fast as possible. To illustrate the idea, consider this unsafe (it has a race condition!) code:

        private static boolean running = false;

        public void launchExpensiveTask() {
            if (running) return; // Do nothing
            running = true;
            try {
                runExpensiveTask();
            } finally {
                running = false;
            }
        }

    I thought about using a variation of double-checked locking (consider that running is a primitive 32-bit field, hence atomic; it could work fine even for Java below 5 without the need for volatile). It could look like this:

        private static boolean running = false;

        public void launchExpensiveTask() {
            if (running) return; // Do nothing
            synchronized (ThisClass.class) {
                if (running) return;
                running = true;
                try {
                    runExpensiveTask();
                } finally {
                    running = false;
                }
            }
        }

    Maybe I should also use a local copy of the field as well (not sure now, please tell me). But then I realized that either way I will end up with an inner synchronization block that could still hold a thread with the right timing at monitor entrance until the original executor leaves the critical section (I know the odds are usually minimal, but in this case we are thinking of several threads competing for this long-running resource). So, can you think of a better approach?
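
    For comparison, a minimal sketch of a lock-free variation on the snippet above, where a single compareAndSet decides which thread runs the task and every other caller fails fast without ever touching a monitor (class and method names are illustrative):

        import java.util.concurrent.atomic.AtomicBoolean;

        public class ExpensiveTaskGate {
            private static final AtomicBoolean running = new AtomicBoolean(false);

            public void launchExpensiveTask() {
                // Only the thread that flips false -> true proceeds; everyone else
                // returns immediately instead of parking on a lock.
                if (!running.compareAndSet(false, true)) {
                    return; // another thread is already running the task
                }
                try {
                    runExpensiveTask();
                } finally {
                    running.set(false); // allow the next attempt once we are done
                }
            }

            private void runExpensiveTask() {
                // placeholder for the long-running work
            }
        }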

    Read the article
