Search Results

Search found 8049 results on 322 pages for 'concerned parent'.


  • Prototypal inheritance should save memory, right?

    - by Techpriester
    Hi Folks, I've been wondering: using prototypes in JavaScript should be more memory-efficient than attaching every member of an object directly to it, for the following reasons: the prototype is just one single object, and the instances hold only references to their prototype. Versus: every instance holds a copy of all the members and methods that are defined by the constructor. I started a little experiment with this:

        var TestObjectFat = function() {
            this.number = 42;
            this.text = randomString(1000);
        }

        var TestObjectThin = function() {
            this.number = 42;
        }
        TestObjectThin.prototype.text = randomString(1000);

    randomString(x) just produces a, well, random string of length x. I then instantiated the objects in large quantities like this:

        var arr = new Array();
        for (var i = 0; i < 1000; i++) {
            arr.push(new TestObjectFat()); // or new TestObjectThin()
        }

    ...and checked the memory usage of the browser process (Google Chrome). I know, that's not very exact... However, in both cases the memory usage went up significantly as expected (about 30MB for TestObjectFat), but the prototype variant did not use much less memory (about 26MB for TestObjectThin). I also checked: the TestObjectThin instances contain the same string in their "text" property, so they really are using the property of the prototype. Now, I'm not so sure what to think about this. Prototyping doesn't seem to be the big memory saver at all. I know that prototyping is a great idea for many other reasons, but I'm specifically concerned with memory usage here. Any explanations why the prototype variant uses almost the same amount of memory? Am I missing something?
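
    One way to get a steadier number than watching the browser's task manager is to sample the heap directly. A minimal sketch, assuming Node.js is available (process.memoryUsage() and the --expose-gc flag are Node features; randomString is re-implemented here as a stand-in for the question's helper):

        // Measure heap growth for one of the two constructors.
        function randomString(length) {
            var s = '';
            while (s.length < length) s += Math.random().toString(36).slice(2);
            return s.slice(0, length);
        }

        function TestObjectThin() { this.number = 42; }
        TestObjectThin.prototype.text = randomString(1000);

        if (global.gc) global.gc(); // run as: node --expose-gc thin.js
        var before = process.memoryUsage().heapUsed;

        var arr = [];
        for (var i = 0; i < 100000; i++) arr.push(new TestObjectThin());

        var after = process.memoryUsage().heapUsed;
        console.log(((after - before) / 1024 / 1024).toFixed(1) + ' MB for 100k objects');

    Swapping in the fat constructor and diffing the two readings isolates the per-instance cost of the string, without the noise of the whole browser process.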

  • Returning Database Blobs in TurboGears 2.x / FCGI / Lighttpd extremely slow

    - by Tom
    Hey everyone, I am running a TG2 app on lighttpd via flup/FastCGI. We are reading images (~30KB each) from blob fields in a MySQL database and returning those images with a custom MIME type via a controller method. Caching these images on the hard disk makes no sense because they change with every request; the only reason we cache them in the DB is that creating these images is quite expensive, and the data used to create them is also present in plain text on the website. Now to the problem itself: when returning such an image, things get extremely slow. The code runs totally fine on Paster itself with no visible delay, but as soon as it's running via FastCGI/lighttpd the described phenomenon happens. I profiled the controller method that returns my blob: the entire method runs in a few milliseconds, but when "return" executes, the entire app hangs for roughly 10 seconds. We could not reproduce the same error with PHP on FastCGI; this only seems to happen with TurboGears or Pylons. Here, for your consideration, is the piece of source code concerned:

        @expose(content_type=CUSTOM_CONTENT_TYPE)
        def return_img(self, img_id):
            """Return a DB-persisted image when requested."""
            img = model.Images.by_id(img_id)  # get image from DB
            response.headers['content-type'] = 'image/png'
            return img.data  # this causes the app to hang for 10 seconds
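
    Since the delay only appears behind flup/lighttpd, one variable worth ruling out is response framing: without an explicit Content-Length, some server/gateway combinations cannot tell where the body ends and hold the connection open until a timeout. A hedged sketch of the same action with explicit framing (names as in the question; the str() call is there in case the driver returns a buffer object rather than a plain byte string):

        @expose(content_type=CUSTOM_CONTENT_TYPE)
        def return_img(self, img_id):
            """Return a DB-persisted image with an explicit length."""
            img = model.Images.by_id(img_id)
            data = str(img.data)  # normalize buffer -> byte string
            response.headers['content-type'] = 'image/png'
            response.headers['Content-Length'] = str(len(data))
            return data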

  • How much detail should be in a project plan or spec?

    - by DeanMc
    I have an issue that I feel many programmers can relate to... I have worked on many small-scale projects. After my initial paper brainstorm I tend to start coding. What I come up with is usually a rough working model of the actual application. I design in a disconnected fashion, so I am talking about underlying code libraries; user interfaces come last, as the library usually dictates what is needed in the UI. As my projects get bigger, I worry that my "spec" or design document should grow with them. The above paragraph, from my investigations, is echoed all across the internet in one fashion or another. Where a UI is concerned there is a bit more information, but it is UI-specific and does not relate to code libraries. What I am beginning to realise is that maybe code is code is code. It seems from my extensive research that there is no 1:1 mapping between a design document and the code. When I need to research a topic I dump information into OneNote, and from there I prioritise features into versions and then into related chunks so that development runs in a fairly linear fashion. My tasks tend to look like so:

        Implement Binary File Reader
        Implement Binary File Writer
        Create Object to encapsulate Data for expression to the caller

    Now any programmer worth his salt is aware that between those three to-do items could be a potential wall of code that expands out to multiple files. I have tried to map the complete code process for each task, but I simply don't think it can be done effectively. By the time one mangles pseudo code it is essentially code anyway, so the time investment is negated. So my question is this: am I right in assuming that the best documentation is the code itself? We are all in agreement that a high-level overview is needed. How high should this be? Do you design to statement, class or concept level? What works for you?

  • Pushing a local mercurial repository to a remote server or cloning at server from local

    - by Samaursa
    I have a local repository that I have now decided to push to a remote server (for example, a host that allows Mercurial repositories; I am also trying to push to Bitbucket). The repository has a lot of files and is a little more than 200MB. Locally, I am able to clone the repository without problems. Now I have a lot of changes in this repository, and I have wasted a couple of days trying to figure out how to get the remote server to clone my repository. I cannot get hg serve to work outside of the LAN; I have tried everything. So instead, I created a new, empty repository at the remote servers (both at the host and Bitbucket), and I am now pushing the complete local repository to these remote locations. So far it has been unsuccessful: the push operation gets stuck on "searching for changes" and does not give me any other useful output. I have let it go for about an hour with no change. Now my question is, what am I doing wrong as far as hg serve is concerned? I can access it locally but not remotely (through DynDns; I have configured it properly and the router forwards the ports correctly), so that I could have the server clone the repository the first time, after which I would push to it. My second question is, assuming the clone at the server does not work (for example, if I were to push my current repository to Bitbucket), is creating an empty repository at the server and then pushing a local repository to the new remote repository OK? Is that the source of the "searching for changes" problem? Any help in this regard would be greatly appreciated.
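
    For what it's worth, seeding an empty remote repository by pushing a full local history into it is a normal, supported workflow in Mercurial; a minimal sketch with hypothetical paths (ssh or https transports avoid the need for hg serve entirely):

        # on the server: create the empty repository once
        hg init /srv/hg/myproject

        # locally: push the complete history into it
        hg push ssh://user@myserver//srv/hg/myproject

        # or, for the Bitbucket copy
        hg push https://bitbucket.org/user/myproject

    If a push stalls in "searching for changes" over one transport, retrying the same push over ssh is a cheap way to tell whether the problem is the transport or the repository itself.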

  • Ruby Design Problem for SQL Bulk Inserter

    - by crunchyt
    This is a Ruby design problem. How can I make a reusable flat-file parser that can perform different data-scrubbing operations per call, return the emitted results from each scrubbing operation to the caller, and perform bulk SQL insertions? Now, before anyone gets narky/concerned: I have written this code already, in a very unDRY fashion, which is why I am asking any Ruby rockstars out there for some assistance. Basically, every time I want to perform this logic, I create two nested loops with custom processing in between, buffer each processed line to an array, and output to the DB as a bulk insert when the buffer size limit is reached. Although I have written lots of helpers, the main pattern is being copy-pasted every time. Not very DRY! Here is a Ruby/pseudocode example of what I am repeating:

        lines_from_file.each do |line|
          line.match(/some regex/).each do |sub_str|
            # Process substring into useful format
            # EG1: Simple gsub() call
            # EG2: Custom function call to do complex scrubbing
            #      and matching, emitting results to array
            # EG3: Loop to match opening/closing/nested brackets
            #      or other delimiters and emit results to array
          end

          # Add processed lines to a buffer as SQL insert statement
          @buffer << PREPARED_INSERT_STATEMENT

          # Flush buffer when "buffer size limit reached" or "end of file"
          if sql_buffer_full || last_line_reached
            @dbc.insert(SQL_INSERTS_FROM_BUFFER)
            @buffer = nil
          end
        end

    I am familiar with Proc/lambda functions. However, because I want to pass two separate procs to the one function, I am not sure how to proceed. I have some idea about how to solve this, but I would really like to see what the real Rubyists suggest? Over to you. Thanks in advance :D
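
    Passing two procs is straightforward if they arrive as plain arguments. A minimal sketch under that assumption (all names here are hypothetical): one reusable class owns the loop-and-flush plumbing, while each call site supplies a scrubber and a row builder.

        class BulkParser
          def initialize(dbc, buffer_limit = 500)
            @dbc = dbc
            @buffer_limit = buffer_limit
          end

          # scrubber:    proc taking a matched substring, returning an array of results
          # row_builder: proc taking one result, returning an SQL insert statement
          def parse(lines, scrubber, row_builder)
            buffer = []
            lines.each do |line|
              line.scan(/some regex/).each do |sub_str|
                scrubber.call(sub_str).each do |result|
                  buffer << row_builder.call(result)
                end
              end
              flush(buffer) if buffer.size >= @buffer_limit
            end
            flush(buffer) unless buffer.empty?
          end

          private

          def flush(buffer)
            @dbc.insert(buffer.join(";\n"))  # one bulk insert per flush
            buffer.clear
          end
        end

    Called as, e.g., parser.parse(lines, lambda { |s| [s.gsub(/\s+/, ' ')] }, lambda { |r| "INSERT INTO t VALUES ('#{r}')" }), so the custom processing varies per call while the buffering logic lives in exactly one place.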

  • Does a CS PhD Help for Software Engineering Career?

    - by SiLent SoNG
    I would like to seek advice on whether or not to take a PhD offer from a good university. My only concern is that the PhD will take at least 4 years' commitment, during which I won't have a good monetary income. I am also concerned whether the PhD will help my future career development; my career goal is software engineering only. Some of the PhD info: the PhD is CS-related. The research area is Information Retrieval, Machine Learning, and Natural Language Processing; more specifically, the research topic is Deep Web search. Some background: I worked at Oracle for 3 years in database development after obtaining a CS degree from a good university. Last year I received an email describing an interesting project from a professor, and thereafter I was lured into his research team. The team consists of 4 PhD students; those students have little or no industry experience, and their coding skills are really, really bad. By "bad" I mean they do not know some common patterns, and they do not know the pitfalls of their programming languages or the idioms for doing things right. I figure an at-least-4-year commitment is worth serious consideration. I am 27 at this moment; if I take the offer, I will be 31+ upon graduation. Wah... becoming, what to say, no longer young. Hence, here I am seeking advice: is it good or not to take the PhD offer, and is a CS PhD good for my future career growth as a software engineer? I do not intend to go into academia.

  • Java try finally variations

    - by Petr Gladkikh
    This question has nagged me for a while, but I have not found a complete answer to it yet (e.g. this one is for C#: http://stackoverflow.com/questions/463029/initializing-disposable-resources-outside-or-inside-try-finally). Consider the two following Java code fragments:

        Closeable in = new FileInputStream("data.txt");
        try {
            doSomething(in);
        } finally {
            in.close();
        }

    and the second variation:

        Closeable in = null;
        try {
            in = new FileInputStream("data.txt");
            doSomething(in);
        } finally {
            if (null != in) in.close();
        }

    The part that worries me is that the thread might be interrupted between the moment the resource is acquired (e.g. the file is opened) and the moment the resulting value is assigned to the respective local variable. Are there any scenarios, other than the following, in which the thread might be interrupted at that point:

        InterruptedException (e.g. via Thread#interrupt()) or OutOfMemoryError is thrown
        JVM exits (e.g. via kill, System.exit())
        Hardware fail (or bug in JVM, for a complete list :)

    I have read that the second approach is somewhat more "idiomatic", but IMO in the scenario above there's no difference, and in all other scenarios they are equal. So the question: what are the differences between the two? Which should I prefer if I am concerned about freeing resources (especially in heavily multi-threaded applications)? Why? I would appreciate it if anyone could point me to the parts of the Java/JVM specs that support the answers.
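
    Worth noting for later readers: since Java 7 the language itself settles most of this with try-with-resources, which generates the close-in-finally plumbing (null check included) and also handles exceptions thrown by close() via suppression. A small sketch:

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStream;

        public class ReadExample {
            static void doSomething(InputStream in) throws IOException {
                // placeholder for the question's doSomething()
            }

            public static void main(String[] args) throws IOException {
                // The compiler generates the finally block and the close()
                // call; exceptions from close() are attached as suppressed
                // exceptions rather than masking the primary one.
                try (InputStream in = new FileInputStream("data.txt")) {
                    doSomething(in);
                }
            }
        }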

  • Using Javascript to generate and save a file

    - by Toji
    I've been fiddling with WebGL lately and have gotten a Collada reader working. Problem is, it's pretty slow (Collada is a very verbose format), so I'm going to start converting files to an easier-to-use format (probably JSON). Thing is, I already have the code to parse the file in JavaScript, so I may as well use it as my exporter too! The problem is saving. Now, I know that I can parse the file, send the result to the server, and have the browser request the file back from the server as a download. But in reality the server has nothing to do with this particular process, so why get it involved? I already have the contents of the desired file in memory. Is there any way I could present the user with a download using pure JavaScript? (I doubt it, but I might as well ask...) And to be clear: I am not trying to access the filesystem without the user's knowledge! The user will provide a file (probably via drag and drop), the script will transform the file in memory, and the user will be prompted to download the result. All of which should be "safe" activities as far as the browser is concerned.
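
    In current browsers this is possible with no server round-trip, via the Blob API and a programmatic anchor click; a minimal sketch (these APIs postdate older browsers, so feature-test before relying on them):

        // Offer an in-memory string to the user as a file download.
        function offerDownload(text, filename) {
            var blob = new Blob([text], { type: 'application/json' });
            var url = URL.createObjectURL(blob);
            var a = document.createElement('a');
            a.href = url;
            a.download = filename;       // hints the filename to the browser
            document.body.appendChild(a);
            a.click();
            document.body.removeChild(a);
            URL.revokeObjectURL(url);    // release the object URL
        }

        // usage: offerDownload(JSON.stringify(model), 'model.json');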

  • What are the fastest-performing options for a read-only, unordered collection of unique strings?

    - by Dan Tao
    Disclaimer: I realize the totally obvious answer to this question is HashSet<string>. It is absurdly fast, it is unordered, and its values are unique. But I'm just wondering: HashSet<T> is a mutable class, with Add, Remove, etc., so I am not sure whether the underlying data structure that makes those operations possible sacrifices some read performance; in particular, I'm concerned with Contains. Basically, I'm wondering what the absolute fastest-performing data structures in existence are that can supply a Contains method for objects of type string, within or outside of the .NET framework itself. I'm interested in all kinds of answers, regardless of their limitations. For example, I can imagine that some structure might be restricted to strings of a certain length, or may be optimized depending on the problem domain (e.g., range of possible input values), etc. If it exists, I want to hear about it. One last thing: I'm not restricting this to read-only data structures. Obviously any read-write data structure could be embedded inside a read-only wrapper. The only reason I even mentioned the word "read-only" is that I don't have any requirement for a data structure to allow adding, removing, etc. If it has those functions, though, I won't complain.
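
    As a baseline for "absolute fastest", it helps to pin down what HashSet<string>.Contains already costs on your own data; a rough micro-benchmark sketch (results are machine- and distribution-dependent, and the sorted array is included purely as a contrast, not a recommendation):

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;

        class ContainsBench
        {
            static void Main()
            {
                // Build a million pseudo-random 8-char hex strings.
                var words = new string[1000000];
                var rng = new Random(42);
                for (int i = 0; i < words.Length; i++)
                    words[i] = rng.Next().ToString("x8");

                var set = new HashSet<string>(words);
                var sorted = (string[])words.Clone();
                Array.Sort(sorted, StringComparer.Ordinal);

                var sw = Stopwatch.StartNew();
                int hits = 0;
                foreach (string w in words)
                    if (set.Contains(w)) hits++;
                Console.WriteLine("HashSet:      " + sw.ElapsedMilliseconds + " ms (" + hits + " hits)");

                sw.Restart();
                hits = 0;
                foreach (string w in words)
                    if (Array.BinarySearch(sorted, w, StringComparer.Ordinal) >= 0) hits++;
                Console.WriteLine("Sorted array: " + sw.ElapsedMilliseconds + " ms (" + hits + " hits)");
            }
        }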

  • What's the most efficient query?

    - by Aaron Carlino
    I have a table named Projects that has the following relationships: it has many Contributions and has many Payments. In my result set, I need the following aggregate values: the number of unique contributors (DonorID on the Contribution table), the total contributed (SUM of Amount on the Contribution table), and the total paid (SUM of PaymentAmount on the Payment table). Because there are so many aggregate functions and multiple joins, it gets messy to use standard aggregate functions in the GROUP BY clause. I also need the ability to sort and filter these fields. So I've come up with two options. Using subqueries:

        SELECT
            Project.ID AS PROJECT_ID,
            (SELECT SUM(PaymentAmount) FROM Payment
              WHERE ProjectID = PROJECT_ID) AS TotalPaidBack,
            (SELECT COUNT(DISTINCT DonorID) FROM Contribution
              WHERE RecipientID = PROJECT_ID) AS ContributorCount,
            (SELECT SUM(Amount) FROM Contribution
              WHERE RecipientID = PROJECT_ID) AS TotalReceived
        FROM Project;

    Using a temporary table:

        DROP TABLE IF EXISTS Project_Temp;

        CREATE TEMPORARY TABLE Project_Temp (
            project_id     INT NOT NULL,
            total_payments INT,
            total_donors   INT,
            total_received INT,
            PRIMARY KEY (project_id)
        ) ENGINE=MEMORY;

        INSERT INTO Project_Temp (project_id, total_payments)
        SELECT `Project`.ID, IFNULL(SUM(PaymentAmount), 0)
        FROM `Project`
        LEFT JOIN `Payment` ON ProjectID = `Project`.ID
        GROUP BY 1;

        INSERT INTO Project_Temp (project_id, total_donors, total_received)
        SELECT `Project`.ID, IFNULL(COUNT(DISTINCT DonorID), 0), IFNULL(SUM(Amount), 0)
        FROM `Project`
        LEFT JOIN `Contribution` ON RecipientID = `Project`.ID
        GROUP BY 1
        ON DUPLICATE KEY UPDATE
            total_donors   = VALUES(total_donors),
            total_received = VALUES(total_received);

        SELECT * FROM Project_Temp;

    Tests for both are pretty comparable, in the 0.7 to 0.8 second range with 1,000 rows. But I'm really concerned about scalability, and I don't want to have to re-engineer everything as my tables grow. What's the best approach?
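
    A third shape worth benchmarking alongside the two above (a sketch using the question's column names): aggregate each child table once in a derived table and join, so no correlated subquery runs per project row and no temp table needs maintaining.

        SELECT p.ID,
               COALESCE(pay.total_payments, 0) AS TotalPaidBack,
               COALESCE(c.total_donors, 0)     AS ContributorCount,
               COALESCE(c.total_received, 0)   AS TotalReceived
        FROM Project p
        LEFT JOIN (SELECT ProjectID, SUM(PaymentAmount) AS total_payments
                   FROM Payment GROUP BY ProjectID) AS pay
               ON pay.ProjectID = p.ID
        LEFT JOIN (SELECT RecipientID,
                          COUNT(DISTINCT DonorID) AS total_donors,
                          SUM(Amount)             AS total_received
                   FROM Contribution GROUP BY RecipientID) AS c
               ON c.RecipientID = p.ID;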

  • Testing ActionFilterAttributes with MSpec

    - by Tomas Lycken
    I'm currently trying to grasp MSpec, mainly to learn new ways of (T/B)DD so I can make an educated decision on which technology to use. Previously I've mostly (read: only) used the built-in MSTest framework with Moq, so BDD is quite new to me. I'm writing an ASP.NET MVC app, and I want to implement PRG (Post-Redirect-Get). Last time I did this, I used action filters to export and import ModelState via TempData, so that I could return a RedirectResult and the validation errors would still be there when the user got the view. I tested that scenario by verifying two things: a) that the ExportModelStateAttribute I had written was applied (among the tests for my controller), and b) that the attribute worked (among the tests for action filter attributes). However, in BDD I've understood I should be even more concerned with behavior, and even less with implementation. This means I should probably just verify that the model state is in TempData when the action has finished executing, not necessarily that it's done via an attribute. To further complicate things, attributes are not run when calling the action directly in the test, so I can't just call the action and see if the job's been done. How should I spec/test this in MSpec?
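
    One pragmatic middle ground is to spec the filter itself as the unit of behavior: invoke it by hand with a context you build, then assert on TempData. A hedged MSpec sketch; ExportModelStateAttribute is the question's own class, BuildFilterContext is a hypothetical helper you would write around ControllerContext, and the TempData key is illustrative:

        [Subject(typeof(ExportModelStateAttribute))]
        public class when_an_action_with_model_errors_redirects
        {
            static ExportModelStateAttribute filter;
            static ResultExecutingContext context;

            Establish context_setup = () =>
            {
                filter = new ExportModelStateAttribute();
                context = BuildFilterContext(            // hypothetical helper
                    new RedirectResult("/target"),
                    modelError: "firstName is required");
            };

            Because of = () => filter.OnResultExecuting(context);

            It should_copy_model_state_into_temp_data = () =>
                context.Controller.TempData
                       .ContainsKey("__ModelState")      // key is illustrative
                       .ShouldBeTrue();
        }

    This keeps the spec about the observable behavior (ModelState survives into TempData around a redirect), while "the attribute is applied to the action" can live in a separate, much cheaper spec that just reflects over the controller method.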

  • ASP MVC - Routing Required?

    - by evo_9
    I've been reading up on MVC 2, which came with VS 2010, and it sounds pretty interesting. I'm actually in the middle of a large multi-tenant application project and have just started coding the UI, so I'm considering changing to MVC as I'm not that far along at this point. I have some questions about the routing capabilities: namely, is routing required to use MVC, or can I more or less ignore it? Or do I have to set up a default routing record that will make things work like standard ASPX (as far as routing alone is concerned)? The reason I don't want to use routing is that I've already defined a custom URL "rewrite" mechanism of my own (which fires on Session_Start). In addition, I'm using jQuery and open standards for the entire UI, and MVC's freedom from ASPX overhead seems like a better fit for how I've already started to build the application (I am not using viewstate at all, for example). I guess my big concern is whether the routing can be ignored, or whether I will have to re-implement my custom URL rewriting to work with MVC, and if that's the case, how would I do that? As a new routing routine, or stick with Session_Start (if that's even possible)? Lastly, I don't want to use anything even remotely "intelligent/readable" for the URLs: for a site like Stack Overflow, the readability of the URL is a positive, but the opposite is true if it's not a public website like this one. In fact, it seems to me that the friendlier MVC routing URLs (which indirectly expose method names) could pose a security risk on a private, non-public website app like the one I'm developing. For all these reasons I would love to use the lightweight aspects of MVC but skip the routing entirely. Is this possible?
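
    For reference, MVC does need routing to map a request onto a controller and action, but a single generic entry is enough to cover an entire application; this is essentially the default registration the MVC 2 project template generates in Global.asax:

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            routes.MapRoute(
                "Default",                            // route name
                "{controller}/{action}/{id}",         // URL pattern
                new { controller = "Home",
                      action = "Index",
                      id = UrlParameter.Optional }    // parameter defaults
            );
        }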

  • codegen:nullValue vs msprop:nullValue

    - by Ken
    OK, I have datasets that I created way back in the 1.1 framework, in which we used codegen:nullValue within the XSD to handle null values. However, if I open one of these datasets with VS 2005 (i.e. the 2.0 framework) and add a column, it removes the codegen setting from the entire XSD but adds msprop:nullValue instead. However, unlike in previous years, I noticed this time that the generated property code was NOT overridden away from returning the null value specified via codegen, as it had been in the past; meaning the msprop annotation appears to be creating the proper code behind the scenes (see example). Does anyone know of any other differences? Should I be concerned about deploying a new XSD without the codegen settings but with the msprop XML instead? Example: both the original (codegen) and the new (msprop) versions generate the same property:

        Public Property ParentID() As Integer
            Get
                If Me.IsParentIDNull Then
                    Return -1
                Else
                    Return CType(Me(Me.tableCompany.ParentIDColumn), Integer)
                End If
            End Get
            Set
                Me(Me.tableCompany.ParentIDColumn) = value
            End Set
        End Property

    BUT is there anything else that might be occurring that I am NOT seeing, thus MAKING me re-enter all the codegen settings? THANKS!
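
    A hedged observation that may explain the identical output: in typed-dataset XSDs, the codegen and msprop prefixes are conventionally both bound to the same namespace URI, in which case the attribute is the same annotation under two names. Worth verifying against the xmlns declarations at the top of your own schema; a sketch of what to look for:

        <!-- Sketch only: check your schema's actual declarations. -->
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   xmlns:codegen="urn:schemas-microsoft-com:xml-msprop"
                   xmlns:msprop="urn:schemas-microsoft-com:xml-msprop">
          <!-- old 1.1-era style -->
          <xs:element name="ParentID" type="xs:int" codegen:nullValue="-1" />
          <!-- style written by the VS 2005 designer -->
          <xs:element name="ParentID" type="xs:int" msprop:nullValue="-1" />
        </xs:schema>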

  • Push TFS 2008 code to remote VSS over VPN?

    - by drovani
    We have a local Team Foundation Server 2008 where we keep our code under version control. However, we also have a paranoid client with their own Visual SourceSafe installation who wants us to keep a running copy of the code on their server as well. As such, I'm hoping there is a way to do a nightly push from our TFS repository to their VSS repository. I'm not concerned about mirroring each TFS changeset as a separate changeset in VSS; just a once-nightly push that creates a new changeset on the VSS side containing the latest from TFS. I guess the first question is whether it is even possible for TFS to push an update to VSS. I've noticed that most replies to this question have been something to the tune of "don't do it", but I can't find anything that specifically states that it cannot be done. The second part would then be automating the process by having the TFS server connect to the client's VPN and push the code changes. I have full control over the TFS server, and I can customize the VSS install if there are settings that need changing, but I'm limited in what I can do about firewall settings or server-specific settings on the client's VSS server.

  • Dealing with expired authentication for a partially filled form?

    - by aaronls
    I have a large web form, and I would like to prompt the user to log in if their session expires, or have them log in when they submit the form. Having them log in at submit time creates a lot of challenges, because they get redirected to the login page and the postback data from the original form submission is lost. So I'm thinking about how to prompt them to log in asynchronously when the session expires: they stay on the original form page, a panel appears telling them the session has expired and they need to log in, the login is submitted asynchronously, the login panel disappears, and the user is still on the original, partially filled form and can submit it. Is this easily doable using the existing ASP.NET Membership controls? When they submit the form, will I need to worry about the session key? I mean, I am wondering whether the session key the form submits will be the original one from before the session expired, which won't match the new one generated after logging in again asynchronously (I still do not understand the details of how ASP.NET tracks authentication/session IDs). Edit: Yes, I am actually concerned about authentication expiration. The user must be authenticated for the submitted data to be considered valid.
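
    The client side of that asynchronous prompt can be as small as a periodic probe. A rough sketch, where /AuthPing.ashx is a hypothetical handler you would add that returns 401 once the forms-authentication ticket has expired (the panel id is also illustrative):

        setInterval(function () {
            var xhr = new XMLHttpRequest();
            xhr.open('GET', '/AuthPing.ashx', true);
            xhr.onload = function () {
                if (xhr.status === 401) {
                    // auth gone: reveal the login panel in place,
                    // leaving the half-filled form untouched
                    document.getElementById('loginPanel').style.display = 'block';
                }
            };
            xhr.send();
        }, 60000); // probe once a minute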

  • SQL Query Math Gymnastics

    - by keruilin
    I have two tables of concern here: users and race_weeks. A User has many race_weeks, and a race_week belongs to a User; therefore user_id is a FK in the race_weeks table. I need to perform some challenging math on fields in the race_weeks table in order to return the users with the most all-time points. Here are the fields we need to manipulate in the race_weeks table:

        races_won (int)
        races_lost (int)
        races_tied (int)
        points_won (int, pos or neg)
        recordable_type (varchar; robots can race, but we're only concerned about type 'User')

    Just so you fully understand the business logic at work here: over the course of a week a user can participate in many races, and the race_week record represents the summary results of the user's races for that week. A user is considered active for the week if races_won, races_lost, or races_tied is greater than 0; otherwise the user is inactive. So here's what we need to do in our query in order to return the users with the most points won (actually net_points_won): calculate each user's net_points_won (not a field in the DB). To calculate net_points, you take (1000 * count_of_active_weeks) - sum(points_won). (Why 1000? Just imagine that every week the user is spotted 1000 points to compete and enter races. We want to factor out what we spot the user, because the user could enter only one race for the week for 100 points and be sitting on 900, which would skew who actually EARNED the most points.) This one is a little convoluted, so let me know if I can clarify further.
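
    Taking the description literally, a single grouped query can produce the ranking; a sketch using the field names above, with the active-week test expressed as the races_won/lost/tied check and the net formula exactly as stated:

        SELECT u.id,
               (1000 * SUM(CASE WHEN rw.races_won + rw.races_lost + rw.races_tied > 0
                                THEN 1 ELSE 0 END))
                 - SUM(rw.points_won) AS net_points_won
        FROM users u
        JOIN race_weeks rw
          ON rw.user_id = u.id
         AND rw.recordable_type = 'User'
        GROUP BY u.id
        ORDER BY net_points_won DESC;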

  • How should I secure my webapp written using Wicket, Spring, and JPA?

    - by Martin
    So, I have a web-based application that is using the Wicket 1.4 framework, with Spring beans, the Java Persistence API (JPA), and the OpenSessionInView pattern. I'm hoping to find a security model that is declarative but doesn't require gobs of XML configuration; I'd prefer annotations. Here are the options so far:

        Spring Security (guide) - looks complete, but every guide I find that combines it with Wicket still calls it Acegi Security, which makes me think it must be old.
        Wicket-Auth-Roles (guide 1 and guide 2) - most guides recommend mixing this with Spring Security, and I love the declarative style of @Authorize("ROLE1", "ROLE2", etc.). I'm concerned about having to extend AuthenticatedWebApplication, since I'm already extending org.apache.wicket.protocol.http.WebApplication, and Spring is already proxying that behind org.apache.wicket.spring.SpringWebApplicationFactory.
        SWARM / WASP (guide) - this looks the newest (though the main contributor passed away years ago), but I hate all of the JAAS-styled text files that declare permissions for principals. I also don't like the idea of making an Action class for every single thing a user might want to do. Secure models also aren't immediately obvious to me. Plus, there isn't an authentication example.

    Additionally, it looks like lots of folks recommend mixing the first and second options. I can't tell what the best practice is at all, though.
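
    For a feel of the declarative style in the second option, wicket-auth-roles in the Wicket 1.4 line applies at the page class; a small sketch (the annotation is the library's, while the page itself is a hypothetical example):

        import org.apache.wicket.authorization.strategies.role.annotations.AuthorizeInstantiation;
        import org.apache.wicket.markup.html.WebPage;
        import org.apache.wicket.markup.html.basic.Label;

        // Only users carrying the ADMIN role may even instantiate this page.
        @AuthorizeInstantiation("ADMIN")
        public class AdminPage extends WebPage {
            public AdminPage() {
                add(new Label("title", "Admins only"));
            }
        }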

  • Does running IIS7 in classic mode affect MVC output caching?

    - by Bob
    I have a need to run an application in classic mode for backwards compatibility with a specific application, and I am trying to understand what kind of impact that will have on the performance of an MVC application running on the site. If we put a few static file maps (for .js, .css, .png, etc.) above the ASP.NET wildcard map to reduce the amount of processing by the ASP.NET handler, will we be approaching integrated mode in terms of performance? The thing I'm primarily concerned with is any effect this might have on output caching. I understand that integrated mode might (?) allow the output cache to handle non-ASP.NET content, but that isn't really a concern; we're more interested in ensuring that the MVC application has full use of the output cache. Empirically, I've found that the two configurations perform on par when things go well, but if the page references resources that are not available, integrated mode tends to fail much more quickly than classic mode (e.g. 500 ms vs. 10 seconds), reducing 'hang time' on the page load. Thanks for any feedback.
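
    For concreteness, this is the controller-level caching in question - a standard MVC sketch, handled inside the ASP.NET pipeline in either mode (duration and parameters are arbitrary, and LoadModel is a hypothetical helper):

        // Cache this action's output for 60 seconds, varied by 'id'.
        [OutputCache(Duration = 60, VaryByParam = "id")]
        public ActionResult Detail(int id)
        {
            return View(LoadModel(id));
        }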

  • Mercurial repository usage with binary files for building setup files

    - by Ryan
    I have an existing Mercurial repository for a C++ application in a small corporate environment. I asked a co-worker to add the setup script to the repository, and he added all of the dependency binaries, PDFs, and the executable to the repository under an Install directory. I dislike having the binaries and dependencies in the same repository, but I'd like recommendations on best practices. Here are the options I am considering:

        Create a separate repository for the installer and related files
        Create a subrepository for the installer and related files
        Use a (yet to be identified) build dependency manager

    I am wary of using a subrepository with Mercurial, based on what I've read so far and the (apparently) incomplete implementation. I would like to get a project dependency system, e.g. Ivy, but I don't know all of the options and haven't had time yet to try any of them. I thought I'd use TortoiseHg as a model: it does not keep the TortoiseHg binaries in the repository, although it does have some binaries such as kdiff3.exe; instead it uses setup.py to clone multiple repositories and build the apps. This seems reasonable for OSS, but not so much for corporate environments. Recommendations?
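
    If the subrepository route wins out despite the caveats, the wiring is a single file at the parent repository's root; a minimal sketch with a hypothetical URL:

        # .hgsub -- maps a working-directory path to the installer repo,
        # keeping the binaries out of the main repository's history.
        Install = https://hg.example.com/myapp-installer

    After committing .hgsub (with a clone of the installer repo sitting at the Install path), the parent repository records which installer revision each of its own changesets pairs with.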

  • How to setup relationships in contact Database

    - by Walt
    I am currently embarking on a new venture to learn PHP and MySQL. I have done some simple databases in the past using Access, but this one is to be a web-centric database for tracking a myriad of data, including contacts and project information. I will need to link the various tables with various relationships, and I am not sure of the best way to do that. Since I am just starting out with PHP/MySQL, I am researching online sources to learn as much as possible; if anyone has recommendations for books or websites, I would appreciate it. In setting up my tables, one major area I am concerned with is contacts. I will have a variety of contacts, including employees, clients, vendors, subcontractors, etc., and a single contact can be multiple types, with each type having various additional fields that pertain to it. My thought was to have one contacts table that links to other tables for the various contact types. I'm not sure which field types or table setup would be best. Thoughts? This scenario will likely play out in other areas of the database as well, for projects and products. Any pointers/direction would be appreciated. WES
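
    The one-contacts-table idea is a common supertype/subtype shape; a minimal MySQL sketch (all names hypothetical): a core contacts table holds the shared fields, and each role gets its own table linked 1:1 by contact_id, so one contact can hold several roles at once.

        CREATE TABLE contacts (
            contact_id INT AUTO_INCREMENT PRIMARY KEY,
            name       VARCHAR(100) NOT NULL,
            email      VARCHAR(100)
        );

        CREATE TABLE employees (
            contact_id INT PRIMARY KEY,          -- same id as in contacts
            hire_date  DATE,
            FOREIGN KEY (contact_id) REFERENCES contacts(contact_id)
        );

        CREATE TABLE vendors (
            contact_id INT PRIMARY KEY,
            account_no VARCHAR(20),
            FOREIGN KEY (contact_id) REFERENCES contacts(contact_id)
        );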

  • Force sending a user to custom QuerySet.

    - by Jack M.
    I'm trying to secure an application so that users can only see objects which are assigned to them. I've got a custom QuerySet which works for this, but I'm trying to find a way to force the use of this additional functionality. Here is my Model:

        class Inquiry(models.Model):
            ts = models.DateTimeField(auto_now_add=True)
            assigned_to_user = models.ForeignKey(User, blank=True, null=True,
                                                 related_name="assigned_inquiries")

            objects = CustomQuerySetManager()

            class QuerySet(QuerySet):
                def for_user(self, user):
                    return self.filter(assigned_to_user=user)

    (The CustomQuerySetManager is documented over here, if it is important.) I'm trying to force everything to use this filtering, so that other methods will raise an exception. For example:

        Inquiry.objects.all()                                 ## Should raise an exception.
        Inquiry.objects.filter(pk=69)                         ## Should raise an exception.
        Inquiry.objects.for_user(request.user).filter(pk=69)  ## Should work.
        inqs = Inquiry.objects.for_user(request.user)         ## Should work.
        inqs.filter(pk=69)                                    ## Should work.

    It seems to me that there should be a way to force the security of these objects by allowing only certain users to access them. I am not concerned with how this might impact the admin interface.
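
    One blunt way to get exactly those semantics is a manager whose default queryset refuses to run, while for_user() builds its queryset through the parent class. A rough sketch against Django 1.x-era APIs (get_query_set was later renamed get_queryset; ScopedManager is a hypothetical name, and as noted this also breaks anything else relying on the default manager, including the admin):

        from django.core.exceptions import PermissionDenied
        from django.db import models


        class ScopedManager(models.Manager):
            def for_user(self, user):
                # Build the queryset via the parent, bypassing our guard.
                return super(ScopedManager, self).get_query_set().filter(
                    assigned_to_user=user)

            def get_query_set(self):
                # all(), filter(), etc. funnel through here and get refused.
                raise PermissionDenied(
                    "Use Inquiry.objects.for_user(user) instead")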

  • The perfect web UI framework (with Microsoft stack?) - architecture question?

    - by Igorek
    I'm looking for suggestions on the following issue, and I realize there is really not going to be a perfect answer to my question. I have a UI built in WinForms.NET (v4.0 framework) with a WCF back-end and EF4 model objects, which I am looking to port to the web. The UI is not huge and is not super complex, and it is structured well; but it is not a super simple system either. I am looking to pick a technology stack for the web front-end that will target desktop and (partially) mobile platforms, provide a good development platform to build on, and facilitate code reuse across the UI and back-end tiers. I would rather avoid:

        custom coding of UI-centric scripts, because they are hard to debug, non-compiled, usually a maintenance nightmare, almost always start to contain business logic, and duplicate some of the logic that the back-end tiers have (especially validation)
        custom-coding desktop web and mobile web UIs separately (although I realize that the mobile web UI will likely contain fewer data-entry screens and more reporting screens)
        non-.NET technology stacks

    I would love to:

        target the reporting capabilities of the system toward mobile web browsers
        not have to write a single line of script (JavaScript, jQuery, etc.)
        utilize a good collection of controls that produces an elegant UI
        use .NET for everything

    The way I see it right now, I need to re-write this app in Silverlight, utilize a third-party UI framework like Telerik, and re-do the reporting UI again for mobile platforms separately. However, I'm rather concerned about the shelf life of Silverlight and the need to deploy a different architecture to deal with mobile platforms. Is there an ASP.NET/MVC/Ajax architecture/framework/library that would allow me to get at the power of .NET without painful (IMHO) client-side scripting, while providing a decent user experience? Thank you

  • Is there a standard pattern for scanning a job table executing some actions?

    - by Howiecamp
    (I realize that my title is poor. If, after reading the question, you have an improvement in mind, please either edit it or tell me and I'll change it.) I have the relatively common scenario of a job table which has one row for each thing that needs to be done; for example, it could be a list of emails to be sent. The table looks something like this:

        ID    Completed    TimeCompleted    anything else...
        ----  ---------    -------------    ----------------
        1     No                            blabla
        2     No                            blabla
        3     Yes          01:04:22         ...

    I'm looking for a standard practice/pattern (or code - C#/SQL Server preferred) for periodically "scanning" (I use the term very loosely) this table, finding the not-completed items, doing the action, and then marking them completed once done successfully. In addition to the basic process for accomplishing the above, I'm considering the following requirements:

        I'd like some means of "scaling linearly", e.g. running multiple worker processes simultaneously, or threading, or whatever. (Just a specific technical thought: I'm assuming that as a result of this requirement, I need some method of marking an item as "in progress" to avoid attempting the action multiple times.)
        Each item in the table should only be executed once.

    Some other thoughts: I'm not particularly concerned with whether the implementation is done in the database (e.g. in T-SQL or PL/SQL code) or in some external program code (e.g. a standalone executable or some action triggered by a web page) executed against the database. Whether the "doing the action" part is done synchronously or asynchronously is not something I'm considering as part of this question.
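
    On SQL Server, the usual answer to the "mark in progress" sub-problem is to claim a row in the same statement that reads it, so two workers can never grab the same job. A hedged sketch (the table name and the InProgress flag are illustrative; the question's table does not have the flag yet):

        -- Each worker claims one unclaimed, uncompleted row atomically.
        -- READPAST lets concurrent workers skip rows another worker has locked.
        UPDATE TOP (1) dbo.JobTable WITH (ROWLOCK, READPAST)
        SET InProgress = 1
        OUTPUT INSERTED.ID
        WHERE Completed = 'No' AND InProgress = 0;

        -- ...the worker performs the action for the returned ID, then:
        UPDATE dbo.JobTable
        SET Completed = 'Yes', TimeCompleted = GETDATE(), InProgress = 0
        WHERE ID = @claimedId;

    Adding more workers then scales out naturally, since each claim is a single atomic statement rather than a read followed by a separate write.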

  • Checking if any of a list of values falls within a table of ranges

    - by Conspicuous Compiler
    I'm looking to check whether any of a list of integers fall in a list of ranges. The ranges are defined in a table, something like:

        Field    Type     Null  Key  Default
        -------  -------  ----  ---  -------
        rangeid  int(11)  NO    PRI  0
        max      int(11)  NO    MUL  0
        min      int(11)  NO    MUL  0

    Using MySQL 5.1 and Perl 5.10. I can check whether a single value, say 7, is in any of the ranges with a statement like:

        SELECT 1 FROM range WHERE 7 BETWEEN min AND max

    If 7 is in any of those ranges, I get a single row back. If it isn't, no rows are returned. Now I have a list of, say, 50 of these values, not stored in a table at present. I assemble them using map:

        my $value_list = '(' . ( join ', ', map { int $_ } @values ) . ')';

    I want to see if any of the items in the list fall inside of any of the ranges, but am not particularly concerned with which number nor which range. I'd like to use a syntax such as:

        SELECT 1 FROM range
        WHERE (1, 2, 3, 4, 5, 6, 7, 42, 309, 10000) BETWEEN min AND max

    MySQL kindly chastises me for such syntax:

        Operand should contain 1 column(s)

    I pinged #mysql, who were quite helpful. However, having already written this up by the time they responded, and thinking it'd be helpful to fix the answer in a more permanent medium, I figured I'd post the question anyhow. Maybe SO will provide a different solution?
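
    One shape that works on MySQL 5.1 is to turn the literal list into rows with UNION ALL and join once against the range table; a sketch (values hard-coded here, but the Perl could interpolate them into the derived table the same way it builds $value_list):

        SELECT 1
        FROM (SELECT 1 AS v UNION ALL SELECT 2 UNION ALL SELECT 3
              UNION ALL SELECT 42 UNION ALL SELECT 10000) AS vals
        JOIN `range` ON vals.v BETWEEN `range`.min AND `range`.max
        LIMIT 1;

    LIMIT 1 matches the "any of them" requirement: the query can stop at the first value that lands inside any range.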

  • Insert code into a method - Java

    - by DutrowLLC
    Is there a way to automatically insert code into a method? I have the following, and I would like to insert the indicated lines:

        public class Person {
            Set<String> updatedFields = new LinkedHashSet<String>();

            String firstName;

            public String getFirstName() {
                return firstName;
            }

            boolean isFirstNameChanged = false;          // Insert

            public void setFirstName(String firstName) {
                if (!isFirstNameChanged) {               // Insert
                    isFirstNameChanged = true;           // Insert
                    updatedFields.add("firstName");      // Insert
                }                                        // Insert
                this.firstName = firstName;
            }
        }

    I'm also not sure whether I can get the relevant part of the method name as a string from inside the method itself, as on the line where I add the field name to the set of updated fields: updatedFields.add("firstName");. And I'm not sure how to insert fields into a class, such as the boolean field that tracks whether the field has been modified before (for efficiency, to avoid manipulating the Set repeatedly): boolean isFirstNameChanged = false;. The most obvious answer to this would seem to be code templates inside Eclipse, but I'm concerned about having to go back and change the code later.
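
    If generated boilerplate is the worry, one hand-rolled alternative is to route every setter through a single tracking helper, so nothing per-field needs inserting beyond the one-line setter body. A sketch, not the only approach (bytecode tools such as AspectJ can weave this in instead); note it records a field only when the value actually changes, a slight variation on the original:

        import java.util.LinkedHashSet;
        import java.util.Set;

        public class Person {
            private final Set<String> updatedFields = new LinkedHashSet<String>();
            private String firstName;

            // Record the field name once, only when the value really changes.
            private <T> T track(String field, T oldValue, T newValue) {
                if (oldValue == null ? newValue != null : !oldValue.equals(newValue)) {
                    updatedFields.add(field);
                }
                return newValue;
            }

            public void setFirstName(String firstName) {
                this.firstName = track("firstName", this.firstName, firstName);
            }

            public Set<String> getUpdatedFields() {
                return updatedFields;
            }
        }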
