Search Results

Search found 28672 results on 1147 pages for 'best practise'.

Page 199/1147 | < Previous Page | 195 196 197 198 199 200 201 202 203 204 205 206  | Next Page >

  • Best approach to JSF form validation

    - by gurupriyan.e
    If I have many input controls in a form (there are separate validators for each of these input controls, like required, length, and so on), and a command button which submits the form and calls an action method, the requirement is: even though the input control values are individually okay, the combination of these values must also be okay in order to process them together after the form submission. Where do I place the code to validate them together? 1) Can I add a custom validator for the command button and validate the combination there, via validate(FacesContext arg0, UIComponent arg1, Object value)? But even then I will not have the values of the other input controls, only the command button's/component's value, right? 2) Can I do the validation of the combination in the action method and add validation messages using FacesMessage? Or do you suggest another approach? Thanks for your time.
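    A minimal sketch of approach 2: validating the combination in the action method and reporting failures with FacesMessage. The bean name, the startDate/endDate properties, and the isValidCombination() check are hypothetical placeholders for the real inputs:

        import javax.faces.application.FacesMessage;
        import javax.faces.context.FacesContext;

        public class OrderBean {
            private String startDate, endDate; // bound to the form inputs

            public String submit() {
                if (!isValidCombination(startDate, endDate)) {
                    // Queue a global message; returning null re-renders the
                    // current view so the message is displayed.
                    FacesContext.getCurrentInstance().addMessage(null,
                        new FacesMessage(FacesMessage.SEVERITY_ERROR,
                            "These values cannot be processed together.", null));
                    return null;
                }
                return "success"; // proceed with processing
            }

            private boolean isValidCombination(String a, String b) {
                return true; // placeholder for the real cross-field rule
            }
        }

    One trade-off worth noting: by the time the action method runs, all per-component validators have already passed, so it only ever sees values that are individually valid.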

    Read the article

  • What are the best practices to prevent SQL injection?

    - by s2xi
    Hi, I have done some research and I am still confused; this is the outcome of that research. Can someone please comment and advise how I can make these better, or point to a rock-solid implementation already out there that I can use?

    Method 1:

        $_GET = array_map('trim', $_GET);
        $_GET = array_map('stripslashes', $_GET);
        $_GET = array_map('mysql_real_escape_string', $_GET);

    Method 2:

        function filter($data) {
            $data = trim(htmlentities(strip_tags($data)));
            if (get_magic_quotes_gpc())
                $data = stripslashes($data);
            $data = mysql_real_escape_string($data);
            return $data;
        }

        foreach ($_GET as $key => $value) {
            $data[$key] = filter($value);
        }
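    For what it's worth, the usual recommendation is to avoid escaping-based filters entirely and use parameterized queries, so user input is never spliced into the SQL string. A minimal PDO sketch (the DSN, credentials, and the users/email names are placeholders):

        // Prepared statement via PDO: the driver keeps data separate from SQL.
        $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        $stmt = $pdo->prepare('SELECT * FROM users WHERE email = :email');
        $stmt->execute(array(':email' => $_GET['email']));
        $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

    Escaping with mysql_real_escape_string() only protects values inside quoted SQL strings; it does nothing for numeric contexts, identifiers, or ORDER BY clauses, which is why parameterization is the safer default.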

    Read the article

  • Best protocol for client/server communication, from PHP/Perl to C++/Qt4

    - by Kyle
    I'm the author of an open-source kiosk management system, Libki. The current version, though functional, was very much a learning experience for me. I'm working on a complete rewrite and am having a hard time deciding what protocol to use. The server will be written in PHP or Perl, most likely PHP, because I need to support some uncommon protocols that library software uses (SIP and NCIP), and so far I've only found a SIP2 library in PHP. The client is written in C++/Qt4. I'm looking at RPC and REST for client/server communication. I've found RPC client libraries for Qt4, and the HTTP support REST needs is already part of the Qt4 libraries. Is there an alternative I've missed? So far, REST seems to be the winner.
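    If REST does win, the client side in Qt4 is mostly QNetworkAccessManager from the QtNetwork module. A minimal sketch; the KioskClient class (a QObject subclass declared elsewhere with Q_OBJECT), the slot name, and the URL are all placeholders:

        #include <QNetworkAccessManager>
        #include <QNetworkRequest>
        #include <QNetworkReply>
        #include <QUrl>

        void KioskClient::requestStatus()
        {
            QNetworkAccessManager *manager = new QNetworkAccessManager(this);
            connect(manager, SIGNAL(finished(QNetworkReply*)),
                    this, SLOT(onStatusReply(QNetworkReply*)));
            // Asynchronous GET; the reply arrives in the connected slot.
            manager->get(QNetworkRequest(QUrl("http://server/api/clients/42")));
        }

        void KioskClient::onStatusReply(QNetworkReply *reply)
        {
            QByteArray body = reply->readAll(); // parse JSON/XML as appropriate
            reply->deleteLater();
        }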

    Read the article

  • What's best Drupal deployment strategy?

    - by Horace Ho
    I am working on my first Drupal project, on XAMPP on my MacBook. It's a prototype and has received positive feedback from my client. I am going to deploy the project to a Linux VPS in two weeks. Is there a better way than re-doing everything on the server from scratch: install Drupal, download modules (CCK, Views, Date, Calendar), create the contents ... Thanks

    Read the article

  • Best way to perform DELETE that uses ids from a SELECT statement in MySQL

    - by Aglystas
    I'm working on a stored procedure that needs to delete specific rows based on a timestamp. Here's what I was going to use, until I found out you can't include a SELECT clause in the DELETE statement if they both work on the same table:

        DELETE FROM product
        WHERE merchant_id = 2
          AND product_id IN (SELECT product_id
                             FROM product
                             WHERE merchant_id = 1
                               AND timestamp_updated > 1275062558);

    Is there a good way to handle this within a stored procedure? Normally I would just throw the logic to build the product_id list into PHP, but I'm trying to have all the processing done on the database server.
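    The standard workaround for this MySQL restriction (error 1093, "You can't specify target table for update in FROM clause") is to wrap the subquery in a derived table, which materializes the id list before the DELETE runs; it works fine inside a stored procedure:

        DELETE FROM product
        WHERE merchant_id = 2
          AND product_id IN (
              SELECT product_id FROM (
                  SELECT product_id
                  FROM product
                  WHERE merchant_id = 1
                    AND timestamp_updated > 1275062558
              ) AS ids
          );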

    Read the article

  • Best tools to parse reports

    - by Andy Schaefer
    I have a report that I need to parse/scrape for loading into an alternate or queryable data store. The report looks something akin to this. My gut is that Perl would do a decent job, but I have several different permutations of the report and I don't really want to build a script around each form. This is a pretty stock type of report, and I have seen that Monarch Pro can parse these types of reports, but I have had a difficult time finding alternatives, since I'm looking to do this primarily in a Linux environment. Any suggestions?

    Read the article

  • Best way to implement a List(Of) with a maximum number of items

    - by Ben
    I'm trying to figure out a good way of implementing a List(Of) that holds a maximum number of records. E.g. I have a List(Of Int32); it's being populated every 2 seconds with a new Int32 item, and I want to store only the most recent 2000 items. How can I make the list hold a maximum of 2000 items, so that when the 2,001st item is added, the list first drops its oldest item (bringing the count to 1,999) and then adds the new one? The thing is, I need to make sure I'm dropping only the oldest item when adding a new item to the list. Ben
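    A minimal sketch of one way to do this in VB.NET, backed by Queue(Of T): a queue preserves insertion order, so Dequeue always removes the oldest item. The class and member names are illustrative:

        Imports System.Collections.Generic

        Public Class BoundedList(Of T)
            Private ReadOnly _items As New Queue(Of T)()
            Private ReadOnly _maxSize As Integer

            Public Sub New(ByVal maxSize As Integer)
                _maxSize = maxSize
            End Sub

            Public Sub Add(ByVal item As T)
                ' Drop the oldest item first when at capacity.
                If _items.Count >= _maxSize Then
                    _items.Dequeue()
                End If
                _items.Enqueue(item)
            End Sub

            Public ReadOnly Property Count() As Integer
                Get
                    Return _items.Count
                End Get
            End Property
        End Class

    Usage would be Dim recent As New BoundedList(Of Int32)(2000), calling recent.Add(value) every 2 seconds.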

    Read the article

  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes is especially critical and they're all using RAID. To date, I've therefore been doing backups of the boxes by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that:

    - the tarballs are sufficient to rebuild from, but it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade of one of the boxes); long-term, the process isn't sustainable. Each of the boxes is currently producing a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day
    - partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5GB file is inflating the size of the tarball and kill it)
    - again due to the size issue, I'm leaving stuff out which it would be nice to include - the contents of users' home directories, for example. There's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway
    - there must be a better way

    So, my question is, how should I be doing this properly? The requirements are:

    - needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever)
    - should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job)
    - should continue to scale with a couple more boxes, slightly more data, etc.
    - preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing)
    - an option to produce some kind of DVD/Blu-Ray/whatever backup from time to time wouldn't be bad

    My first thought was that this kind of incremental backup was what tar was created for: create a tar file once each month, add to it incrementally, and rsync the results to the remote box. But others probably have better suggestions.
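    One widely used pattern for the offsite half is rsync with --link-dest: each day's run produces what looks like a full snapshot, but unchanged files are hard links to yesterday's copy, so they cost no bandwidth and almost no space. A sketch, with host names and paths as placeholders:

        #!/bin/sh
        # Hard-linked daily snapshots; unchanged files are links to 'latest'.
        TODAY=$(date +%Y-%m-%d)
        rsync -az --delete \
            --link-dest=/backups/box1/latest \
            /etc /var/backups/mysql /home \
            backup@offsite:/backups/box1/$TODAY
        ssh backup@offsite "ln -sfn /backups/box1/$TODAY /backups/box1/latest"

    Tools such as rsnapshot, rdiff-backup and duplicity package variations of this idea, with rotation and (in duplicity's case) encryption handled for you.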

    Read the article

  • Using a singleton database class in functions and multiple scripts (PHP) - best use methods

    - by dscher
    I have a singleton DB connection which I get with:

        $dbConnect = myDatabase::getInstance();

    which is easy enough. My question is: what is the least repetitive and most legitimate way of using this connection in functions and classes? It seems silly to have to declare the variable global, pass it into every single function, and/or recreate this variable within every function. Is there another answer for this? Obviously I'm a noob and I can work my way around this problem 10 different ways, none of which is really attractive to me. It would be a lot easier if I could have that $dbConnect variable accessible in any function without needing to declare it global or pass it in. I do know I can add the variable to the $_SERVER array - is there something wrong with doing that? It seems somewhat inappropriate to me. Another quick question: is it bad practice to call

        $result = myDatabase::getInstance()->query($query);

    directly from within a function?
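    A sketch of the two usual answers: call getInstance() where you need it (cheap after the first call, since the instance is cached), or inject the connection once via a constructor. The function and class names here are illustrative:

        // 1) Fetch the singleton inside the function; no global needed.
        function getUserCount()
        {
            return myDatabase::getInstance()->query('SELECT COUNT(*) FROM users');
        }

        // 2) Dependency injection: the class receives the connection once,
        //    which also makes it easy to swap in a test double later.
        class UserRepository
        {
            private $db;

            public function __construct($db)
            {
                $this->db = $db;
            }

            public function find($id)
            {
                return $this->db->query('SELECT * FROM users WHERE id = ' . (int) $id);
            }
        }

    On the quick question: calling myDatabase::getInstance()->query($query) directly inside a function is common and generally considered fine; the main cost is that the function's database dependency is hidden from its signature.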

    Read the article

  • What is the best way to find a process's memory allocations in terms of C# objects

    - by Shantaram
    I have written various C# console-based applications, some of them long-running, some not, which can over time have a large memory footprint. When looking at the Windows performance monitor via the task manager, the same question keeps cropping up in my mind: how do I get a breakdown of the number of objects, by type, that are contributing to this footprint, and which of those are f-reachable and which aren't and hence can be collected? On numerous occasions I've performed a code inspection to ensure that I am not unnecessarily holding on to objects longer than required, and that I am disposing of objects with the using construct. I have also recently looked at employing the GC.Collect method when I have released a large number of objects (for example, held in a collection which has just been cleared). However, I am not so sure that this made much difference, so I threw that code away. I am guessing that there are tools in the Sysinternals suite that can help to resolve these memory questions, but I am not sure which, or how to use them. The alternative would be to pay for a third-party profiling tool such as JetBrains dotTrace; but I need to make sure that I've explored the free options first before going cap in hand to my manager.
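    For the per-type breakdown, WinDbg with the SOS extension is the usual free route: attaching to the process and running !dumpheap -stat lists managed heap objects grouped by type, with counts and total sizes. For a quick in-process sanity check of how much a release actually frees, GC.GetTotalMemory can help, though it gives totals only, not a per-type view. A sketch:

        using System;
        using System.Collections.Generic;

        class MemoryCheck
        {
            static void Main()
            {
                var cache = new List<byte[]>();
                for (int i = 0; i < 1000; i++) cache.Add(new byte[10000]);

                // true => force a full collection before measuring, so the
                // delta approximates live (reachable) bytes.
                long before = GC.GetTotalMemory(true);
                cache.Clear();
                long after = GC.GetTotalMemory(true);
                Console.WriteLine("Reclaimed ~{0:N0} bytes", before - after);
            }
        }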

    Read the article

  • Best articles about organizing code files in C

    - by kliketa
    Can you recommend what I should read/learn in order to write well-organized code in C? One of the things I want to learn is the principles of splitting a project into .h and .c files - what goes where and why - along with variable naming, when to use global variables, and so on. I am interested in books and articles that deal with this specific problem.
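    The core convention those books and articles all build on: the .h file carries the public interface (types and prototypes) behind an include guard, the .c file carries the definitions, and anything file-private is marked static so it never becomes a global. A minimal sketch with hypothetical names:

        /* counter.h - public interface only */
        #ifndef COUNTER_H
        #define COUNTER_H

        void counter_increment(void);
        int  counter_value(void);

        #endif /* COUNTER_H */

        /* counter.c - implementation */
        #include "counter.h"

        static int count = 0;  /* private to this translation unit */

        void counter_increment(void) { count++; }
        int  counter_value(void)     { return count; }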

    Read the article

  • Best Practice: Protecting Personally Identifiable Data in a ASP.NET / SQL Server 2008 Environment

    - by William
    Thanks to a SQL injection vulnerability found last week, some of my recommendations are being investigated at work. We recently re-did an application which stores personally identifiable information whose disclosure could lead to identity theft. While we read some of the data on a regular basis, the restricted data we only need a couple of times a year, and then only two employees need it. I've read up on SQL Server 2008's encryption functions, but I'm not convinced that's the route I want to go. My problem ultimately boils down to the fact that we're either using symmetric keys or asymmetric keys encrypted by a symmetric key; thus it seems like a SQL injection attack could lead to a data leak. I realize permissions should prevent that, but permissions should also have prevented the leak in the first place. It seems to me the better method would be to asymmetrically encrypt the data in the web application, then store the private key offline and have a fat client that the two employees can run the few times a year they need to access the restricted data, so the data is decrypted on the client. This way, if the server gets compromised, we don't leak old data, although depending on what the attackers do we may leak future data. I think the big disadvantage is that this would require re-writing the web application and creating a new fat application (to pull the restricted data). Due to the recent problem, I can probably get the time allocated, so now would be the proper time to make the recommendation. Do you have a better suggestion? Which method would you recommend? More importantly, why?
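    As a sketch of the direction described - the web app holding only the public key, the offline fat client holding the private key - .NET's RSACryptoServiceProvider covers the primitive. In practice the web app would encrypt a random AES key with RSA and the data itself with AES (hybrid encryption), since RSA alone caps the payload at well under the key size. Names here are placeholders:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        class PiiCrypto
        {
            // Called in the web app: it only ever sees the public key.
            static byte[] Encrypt(string plaintext, string publicKeyXml)
            {
                using (var rsa = new RSACryptoServiceProvider(2048))
                {
                    rsa.FromXmlString(publicKeyXml);
                    // OAEP padding; for real data, encrypt an AES key here
                    // instead and encrypt the data itself with AES.
                    return rsa.Encrypt(Encoding.UTF8.GetBytes(plaintext), true);
                }
            }
        }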

    Read the article

  • Dynamic searchable fields: best practice?

    - by boblu
    I have a Lexicon model, and I want users to be able to add dynamic features to every lexicon. I also have a complicated search interface that lets users search on every single feature (including the dynamic ones) belonging to the Lexicon model. I could have used a serialized text field to save all the dynamic information if it weren't for searching; since I want to let users search on all fields, I have created a DynamicField model to hold all dynamically created features. But imagine I have 1,000,000,000 lexicons: if someone creates a dynamic feature for every lexicon, this will result in 1,000,000,000 rows in the DynamicField model, and the SQL search function will become quite inefficient once a lot of dynamic features have been created. Is there a better solution for this situation? Which way should I take: search for a better DB design for dynamic fields, or try tuning MySQL (add cache fields, add indexes, ...) with the current DB design?
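    Whichever way you go, the standard mitigation for this kind of entity-attribute-value table is a composite index covering the search columns, so feature lookups are index seeks rather than scans. A sketch with assumed table and column names:

        -- (name, value) lets "find lexicons where feature X = Y" use the index;
        -- lexicon_id supports the reverse lookup of one lexicon's features.
        CREATE TABLE dynamic_fields (
            id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            lexicon_id  BIGINT UNSIGNED NOT NULL,
            name        VARCHAR(64)  NOT NULL,
            value       VARCHAR(255) NOT NULL,
            KEY idx_lexicon (lexicon_id),
            KEY idx_name_value (name, value)
        ) ENGINE=InnoDB;

        SELECT lexicon_id
        FROM dynamic_fields
        WHERE name = 'dialect' AND value = 'swabian';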

    Read the article

  • Best Tools for Software Maintenance Engineering

    - by Pev
    Yes, the dreaded 'M' word. You've got a workstation, source control and half a million lines of source code that you didn't write. The documentation was out of date the moment it was approved and published. The original developers are LTAO at the next project/startup/loony bin and not answering email. What are you going to do? {favourite editor} and grep will get you started on your spelunking through the gnarled guts of the code base, but what other tools should be in the maintenance engineer's toolbox? To start the ball rolling: I don't think I could live without Source Insight for C/C++ spelunking. (DISCLAIMER: I don't work for 'em.)

    Read the article

  • Best approach to a customer portal in ASP.NET MVC

    - by DoodleWalker
    Hi all, the problem: a client needs a website to serve 10+ customers; each customer has 5-10 people they wish to grant access, using a login and user name; once logged in, a user can download files specific to their company. The files will be uploaded to a directory under the customer's name and displayed as a list. I am currently using membership for all of the users; it's just the "by customer" segmentation I'm wondering about. The question being: under ASP.NET MVC, what is the cleanest or simplest approach to solving the customer segmentation? I am trying to avoid a custom membership provider, so I was going to use roles to assign the customer group. Thoughts appreciated.
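    A sketch of the roles approach on the controller side, assuming each user carries exactly one role naming their customer group, and that the group name doubles as the folder name under ~/Files (both of which are assumptions for illustration):

        using System.Linq;
        using System.Web.Mvc;
        using System.Web.Security;

        public class DownloadsController : Controller
        {
            [Authorize] // membership handles authentication
            public ActionResult Index()
            {
                // Map the user's customer-group role to their file directory.
                string customer = Roles.GetRolesForUser(User.Identity.Name).First();
                string dir = Server.MapPath("~/Files/" + customer);
                var files = System.IO.Directory.GetFiles(dir);
                return View(files);
            }
        }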

    Read the article

  • What is the best way of giving feedback to the user?

    - by Nubkadiya
    I'm using speech recognition, triggered by pressing a button in my application. I want to show users that when they click the button they should speak. I was thinking about using a progress bar, but I don't think it's a good idea. Then I thought about putting up a label saying what's going on. Can someone suggest any more options, please?

    Read the article

  • jQuery: best way to place a DOM element in the center of the viewport

    - by Anthony Koval'
    Hello! I'm looking for a proper way of placing a popup div element in the center of the current view area. For example: we have some div element with {display:none; position:absolute} and a few buttons, one at the top of the document, a second in the center, and a last one somewhere near the bottom. On clicking any of these buttons, the div should appear in the center of the current viewing area:

        $(".btnClass").click(function () {
            // some actions for positioning here
            $(div_id).show();
        });
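    A sketch of the usual calculation: offset by the window's scroll position so the div lands in the middle of the current viewport rather than the document. "#popup" stands in for the hidden div's selector:

        $(".btnClass").click(function () {
            var $win = $(window);
            var $popup = $("#popup");
            $popup.css({
                top:  $win.scrollTop()  + ($win.height() - $popup.outerHeight()) / 2,
                left: $win.scrollLeft() + ($win.width()  - $popup.outerWidth())  / 2
            }).show();
        });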

    Read the article

  • What is the best HTML editor for Eclipse?

    - by Farinha
    I was amazed to find out that apparently Eclipse doesn't come with a decent HTML editor by default (it opened my .html file in some kind of browser view and apparently tried to render it). And the basic text editor is not good enough (I need at least some syntax highlighting and automatic indenting). Any suggestions?

    Read the article

  • Best approach for using Scanner Objects in Java?

    - by devjeetroy
    Although I'm more of a C++/ASM guy, I have to work with Java as part of my undergrad course at college. Our teacher taught us input using Scanner(System.in), and told us that if multiple functions were taking user input, it would be advisable for a single Scanner object to be passed around, so as to reduce the chances of the input stream getting screwed up. Now using this approach has gotten me into a situation where I'm trying to use Scanner.nextLine(), and this statement does not wait for user input; it just moves on to the following statement. I figured there might be some residual CR/LF or other characters in the Scanner that had not been retrieved and were causing the problem. Here is the code:

        while (lineScanner.hasNext()) {
            if (isPlaceHolder(temp = lineScanner.next())) {
                temp = temp.replace("<", "");
                temp = temp.replace(">", "");
                System.out.print("Enter " + aOrA(temp.charAt(0)) + " " + temp + " : ");
                temp = consoleInput.nextLine();
            }
            outputFileStream.print(temp + " ");
        }

    All of this code is inside a function which receives a Scanner object, consoleInput. What happens when I run it is that the first time the program enters the if block, it carries out the System.out.print, does not wait for user input, and moves on to the second pass through the if block. This time, it takes the input, and the rest of the program operates normally. What is even more surprising is that when I check the output file created by the program, it is perfect, just as I want it to be, almost as if the first input using the Scanner were correct. I have solved this problem by creating a new System.in Scanner in the function itself, instead of receiving the Scanner object as a parameter, but I am still very curious to know what is happening and why it couldn't be solved using a simple Scanner.reset(). Would it be better to just create a Scanner object for each function? Thanks, Devjeet. PS: Although I know how to take input using FileInputStreams and the like, we are not supposed to use them for this homework.
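    The usual cause of this symptom: a token-based read (next(), nextInt(), ...) consumes only the token and leaves the trailing line terminator in System.in's buffer, so the following nextLine() returns that empty remainder immediately instead of waiting. Sharing one Scanner is still the right approach; just consume the leftover terminator first. A minimal reproduction:

        import java.util.Scanner;

        public class ScannerDemo {
            public static void main(String[] args) {
                Scanner in = new Scanner(System.in);

                System.out.print("Enter a number: ");
                int n = in.nextInt();  // reads the token, leaves "\n" behind
                in.nextLine();         // consume the leftover line terminator

                System.out.print("Enter a name: ");
                String name = in.nextLine(); // now actually waits for input

                System.out.println(n + " / " + name);
            }
        }

    Creating a second Scanner on System.in appears to work because it starts with an empty buffer, but two Scanners on one stream can each read ahead and silently steal each other's input, which is exactly what the teacher's advice was guarding against.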

    Read the article
