Search Results

Search found 10556 results on 423 pages for 'practical approach'.

Page 10/423

  • Best approach to store username & password

    - by Zerotoinfinite
    Hi all, I have created a site in ASP.NET 3.5, and my question is this: only 2 or 3 login IDs (users) will ever need to log in to my website, so what would be the best way to store their login details? The options I know of are listed below; please let me know which is the correct approach, or whether there is a nicer way to get this done: 1: Save the username and password in the web.config file. 2: Create a text file in a directory and modify it. 3: Use forms authentication. Please let me know which approach I should use. Thanks in advance. P.S.: I only have to create 2-3 users, or maybe 1-2 more in the future, that's it.

    Read the article

  • A tiered approach to cloning Linux partitions

    - by Djurdjura
    I'm looking at a strategy for cloning Linux (root) partitions without having to use a Live CD. The literature rightly suggests that the source and target partitions must be unmounted to get a clean clone, which normally implies booting from a LiveCD. I was wondering whether, instead of requiring a LiveCD, a third partition that emulates the LiveCD functionality could achieve the same result. In other words, at a high level, a system with 3 partitions (all bootable): Rescue partition (LiveCD emulation), Running partition (source), Backup partition (destination). All 3 partitions are LVMs. When it's time to clone the source partition to the backup (destination) partition, we would boot into the rescue partition, unmount the other 2 partitions (is that required?), run a disk check on the source, copy it to the destination (dd, or a simple file copy to avoid replicating the fragmentation from the source), run a disk check on the destination partition, update the GRUB menu list to allow booting from either partition, and reboot into that partition. My question: is this an approach you'd recommend? What about the MBR in all this? Any gotchas or extra checks required? Thanks, D. PS. On the recommendation of members, posting here instead of stackoverflow.com.

    Read the article

  • Practical value for concurrent-request-timeout parameter

    - by Andrei
    In the Seam Reference Guide, one can find this paragraph: We can set a sensible default for the concurrent request timeout (in ms) in components.xml: <core:manager concurrent-request-timeout="500" /> However, we found that 500 ms is not nearly enough time for most of the cases we had to deal with, especially with the severe restriction Seam places on conversation access. In our application we have a combination of page-scoped AJAX requests (triggered by various user actions), some globally scoped polling notification logic (part of the header, so included in every page) and regular links that invoke actions and/or navigate to other pages. Therefore, we get the dreaded concurrent-access-to-conversation exception way too often, even without any significant load on the site. After researching the options for quite a bit, we ended up bumping this value to several seconds (we're debating whether to bump it up to 10s), as none of the recommended solutions seemed able to solve our issue completely (even forcing a global queue for all the AJAX requests would still leave us exposed to a user deciding to click a link right when one of our polling calls was in progress). And we'd much rather have the users wait for a second or two instead of getting an error page just because they clicked a link at the wrong moment. And now to the question: is there something obvious we're missing (like a way to allow concurrent access to conversations and take care of the needed locking ourselves, for instance :)? How do people solve this problem (AJAX requests mixed with user-driven interaction) in Seam? Disabling all the links on the page while AJAX requests are in progress (as suggested by one blog page) is really not a viable option. Any other suggestions? TIA, Andrei

    Read the article

  • Practical size limitations for RDBMS

    - by grenade
    I am working on a project that must store very large datasets and associated reference data. I have never come across a project that required tables quite this large. I have proved that at least one development environment cannot cope at the database tier with the processing required by the complex queries against views that the application layer generates (views with multiple inner and outer joins, grouping, summing and averaging against tables with 90 million rows). The RDBMS that I have tested against is DB2 on AIX. The dev environment that failed was loaded with 1/20th of the volume that will be processed in production. I am assured that the production hardware is superior to the dev and staging hardware but I just don't believe that it will cope with the sheer volume of data and complexity of queries. Before the dev environment failed, it was taking in excess of 5 minutes to return a small dataset (several hundred rows) that was produced by a complex query (many joins, lots of grouping, summing and averaging) against the large tables. My gut feeling is that the db architecture must change so that the aggregations currently provided by the views are performed as part of an off-peak batch process. Now for my question. I am assured by people who claim to have experience of this sort of thing (which I do not) that my fears are unfounded. Are they? Can a modern RDBMS (SQL Server 2008, Oracle, DB2) cope with the volume and complexity I have described (given an appropriate amount of hardware) or are we in the realm of technologies like Google's BigTable? I'm hoping for answers from folks who have actually had to work with this sort of volume at a non-theoretical level.
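
    As a minimal sketch of the off-peak batch idea above, assuming a JVM-based batch job, a hypothetical TRADE_DAILY_SUMMARY table, and made-up column names (none of this comes from the poster's actual schema), the pre-aggregation could look like this in plain JDBC:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        // Off-peak batch job: materialise the expensive aggregation once, so that the
        // application reads a small summary table instead of running multi-join,
        // group-by views over 90-million-row tables at request time.
        public class NightlySummaryJob {
            public static void main(String[] args) throws Exception {
                // Connection URL, credentials, and table/column names are placeholders.
                try (Connection con = DriverManager.getConnection(
                         "jdbc:db2://dbhost:50000/FINDB", "batchuser", "secret");
                     Statement st = con.createStatement()) {
                    st.executeUpdate("DELETE FROM TRADE_DAILY_SUMMARY");
                    st.executeUpdate(
                        "INSERT INTO TRADE_DAILY_SUMMARY (TRADE_DATE, DESK, TOTAL_AMOUNT, AVG_AMOUNT) "
                        + "SELECT TRADE_DATE, DESK, SUM(AMOUNT), AVG(AMOUNT) "
                        + "FROM TRADES GROUP BY TRADE_DATE, DESK");
                }
            }
        }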

    Read the article

  • System architecture: simple approach for setting up background tasks behind a web application -- wil

    - by Tim Molendijk
    I have a Django web application and I have some tasks that should operate (or actually: be initiated) in the background. The application is deployed as follows: apache2-mpm-worker; mod_wsgi in daemon mode (1 process, 15 threads). The background tasks have the following characteristics: they need to run at a regular interval (every 5 minutes or so); they require the application context (i.e. the application packages need to be available in memory); they do not need any input other than database access, in order to perform some not-so-heavy tasks such as sending out e-mail and updating the state of the database. Now I was thinking that the simplest approach to this problem would be simply to piggyback on the existing application process (as spawned by mod_wsgi). By implementing the task as part of the application and providing an HTTP interface for it, I would avoid the overhead of another process that holds all of the application in memory. A simple cronjob can be set up that sends a request to this HTTP interface every 5 minutes and that would be it. Since the application process provides 15 threads and the tasks are quite lightweight and only run every 5 minutes, I figure they would not hinder the performance of the web application's user-facing operations. Yet... I have done some online research and I have seen nobody advocating this approach. Many articles suggest a significantly more complex approach based on a full-blown messaging component (such as Celery, which uses RabbitMQ). Although that's sexy, it sounds like overkill to me. Some articles suggest setting up a cronjob that executes a script which performs the tasks. But that doesn't feel very attractive either, as it results in creating a new process that loads the entire application into memory, performs some tiny task, and destroys the process again. And this is repeated every 5 minutes. Does not sound like an elegant solution. So, I'm looking for some feedback on my suggested approach as described two paragraphs above. Is my reasoning correct? Am I overlooking (potential) problems? What about my assumption that the application's performance will not be impeded?

    Read the article

  • Personal Project - Next practical language/tech to learn

    - by Paul Nathan
    I'm working on a personal project doing some finance analysis. It's a totally new field for me, and I'm really having fun with it so far; plus, working in the high-level language arena is a great break from my embedded-systems daytime work. I have a MySQL backend on a non-local server with a pile of stock data. My task now is to do some analysis of the stocks and produce something approximating a useful result. There are a couple of technical difficulties. (1) I have a lot of records. To be precise, I believe I'm near 100K records right now, and this number grows by 6.1K each weekday. I need a way to rummage through these fields and do data analysis - based on a given computation, go look at this other set. Fine and dandy, nothing too outre. But this means I could really use a straightforward API for talking to MySQL. (2) Ideally, it runs on OS X 10.4.11. No Windows/Linux machine at home. (3) I can use PHP, C++, Perl, etc. I even have an R installation. I'm pretty flexible with stuff, so long as it runs on OS X. (Lots of options here: pick water, H2O, or dihydrogen monoxide ;-) ) (4) Lack of hassle. While I like clever and fun ways of doing things, I'm trying to get some analysis done, not spend ten hours doing installation work and scratching my head over a theoretical syntax question just to spout out "hello world". What's the question? I'd like to dig into something different from my usual PHP/C++/C toolset. I'm looking for recommendations for languages/technologies that will assist me and meet the above requirements. In particular, I've heard a lot of buzz about F# and Python on SO. I've used CLISP for small problems before, and kinda liked it. I'm seeking opinions about those in particular. Edit: since I rent the DB server and have a limited amount of CPU time online, I'm trying to do the analysis on a local machine.

    Read the article

  • Is it practical to program with your feet?

    - by bmm
    Has anyone tried using foot pedals in addition to the traditional keyboard and mouse combo to improve your effectiveness in the editor? Any actual experiences out there? Does it work, or is it just for carpal tunnel relief? I found one blog entry from a programmer who actually tried it: So now I can type using my feet for most of the modifier keys. I am using the pedals as I type this. I am still getting used to them, but the burning in my left wrist has definitely reduced. I think I can also type a little faster, but I am too lazy to do the speed tests with and without the pedals to verify this. On the negative side: Working out where to put your feet when you aren’t typing can be a little awkward. The pedals tend to move around the carpet, despite being metal and quite heavy. Some small spikes might have helped. Although the travel on the pedals is small, they are surprisingly stiff. Another programmer's experience: Anybody with hand pain must get foot pedals, since they can remove a tremendous load from your hands. I have two foot pedals, and use one for the SHIFT key, and the other for the CONTROL key. (I still type META by hand.) I have found that in the process of using the Emacs text editor to compose computer programs, I tend to use the SHIFT, CONTROL and META keys constantly, and it is easy to remove most of this load from one's hands. Some foot switch products: Savant Elite Triple Foot Switch FragPedal Bilbo Step On It!

    Read the article

  • Practical Scheme Programming

    - by Ixmatus
    It's been a few months since I've touched Scheme, and I decided to implement a command-line income partitioner in Scheme. My initial implementation used plain recursion rather than a continuation, but I figured a continuation would be more appropriate to this type of program. I would appreciate it if anyone (more skilled with Scheme than I) could take a look at this and suggest improvements. I'm thinking that the multiple (display ...) lines are an ideal opportunity to use a macro as well (I just haven't gotten to macros yet).

        (define (ab-income)
          (call/cc
           (lambda (cc)
             (let ((out (display "Income: "))
                   (income (string->number (read-line))))
               (cond ((<= income 600)
                      (display (format "Please enter an amount greater than $600.00~n~n"))
                      (cc (ab-income)))
                     (else
                      (let ((bills (* (/ 30 100) income))
                            (taxes (* (/ 20 100) income))
                            (savings (* (/ 10 100) income))
                            (checking (* (/ 40 100) income)))
                        (display (format "~nDeduct for bills:---------------------- $~a~n" (real->decimal-string bills 2)))
                        (display (format "Deduct for taxes:---------------------- $~a~n" (real->decimal-string taxes 2)))
                        (display (format "Deduct for savings:-------------------- $~a~n" (real->decimal-string savings 2)))
                        (display (format "Remainder for checking:---------------- $~a~n" (real->decimal-string checking 2))))))))))

    Invoking (ab-income) asks for input, and if anything below 600 is provided it (from my understanding) returns (ab-income) at the current continuation. My first implementation (as I said earlier) used plain-jane recursion. It wasn't bad at all either, but I figured every return call to (ab-income) when the value was below 600 kept expanding the function. (Please correct me if that apprehension is incorrect!)

    Read the article

  • Practical non-Turing-complete languages?

    - by Kyle Cronin
    Nearly all programming languages in use are Turing complete, and while this gives a language the power to represent any computable algorithm, it also comes with its own set of problems. Seeing as all the algorithms I write are intended to halt, I would like to be able to represent them in a language that guarantees they will halt. Regular expressions are used for matching strings, and finite state machines are used when lexing, but I'm wondering if there's a more general, broadly applicable language that's not Turing complete? Edit: I should clarify: by 'general purpose' I don't necessarily mean being able to write all halting algorithms in the language (I don't think such a language exists), but I suspect that there are common threads in halting proofs that can be generalized to produce a language in which all algorithms are guaranteed to halt. There's also another way to tackle this problem - eliminate the need for theoretically infinite memory. Once you limit the amount of memory the machine is allowed, the number of states the machine can be in is finite and countable, and therefore you can determine whether the algorithm will halt (by not allowing the machine to move into a state it's been in before).
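
    The last point can be illustrated with a toy example (entirely made up, not from the question): if a machine's whole state fits in one bounded integer, halting becomes decidable by simulating and watching for a repeated state.

        import java.util.HashSet;
        import java.util.Set;

        // Toy halting decider for a machine whose entire state is an int in [0, 1_000_000).
        // With finitely many states, the machine either reaches the halt state or revisits
        // a state, so simulation plus cycle detection always gives a definite answer.
        public class BoundedHaltingDecider {
            static int step(int state) {                 // hypothetical transition function
                return (3 * state + 7) % 1_000_000;
            }

            static boolean halts(int start) {
                Set<Integer> seen = new HashSet<>();
                int state = start;
                while (state != 0) {                     // state 0 is the halting state
                    if (!seen.add(state)) {
                        return false;                    // repeated state: it loops forever
                    }
                    state = step(state);
                }
                return true;                             // reached the halting state
            }

            public static void main(String[] args) {
                System.out.println(halts(42));
            }
        }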

    Read the article

  • eZ Publish for practical e-commerce?

    - by Olaseni
    For a while now I have been using eZ Publish as a framework and CMS when my web projects are based on PHP, and I must say I have grown accustomed to it because of its flexibility in most scenarios. However, I've had to build e-commerce sites now and then, and eZ Publish includes a webshop that caters for the e-commerce needs of your installation, of course with all the tools you need to extend it should you need to. Is it worthwhile and optimal to use the built-in webshop for an e-commerce solution, or should I rather go with an all-out e-commerce solution like Magento, which has made a significant impact in that sector?

    Read the article

  • Reducing the pain of writing integration and system tests

    - by mdma
    I would like to write integration tests and system tests for my applications, but producing good integration and system tests has often required so much effort that I have not bothered. The few times I tried, I wrote custom, application-specific test harnesses, which felt like re-inventing the wheel each time. I wonder if this is the wrong approach. Is there a "standard" approach to integration and full system testing? EDIT: To clarify, these are automated tests, for desktop and web applications. Ideally a complete test suite that exercises the full functionality of the application.
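
    There may not be a single standard, but as one concrete, hedged starting point, an automated system test can be as small as a JUnit test that drives the deployed application over HTTP (the URL and the expected page content below are placeholders):

        import static org.junit.Assert.assertEquals;
        import static org.junit.Assert.assertTrue;

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;

        import org.junit.Test;

        // System-level smoke test: drives the running application from the outside,
        // the way a user or another system would, rather than testing classes in isolation.
        public class LoginPageSystemTest {

            @Test
            public void loginPageIsReachableAndRendersAForm() throws Exception {
                URL url = new URL("http://localhost:8080/myapp/login");   // placeholder URL
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("GET");

                assertEquals(200, conn.getResponseCode());

                StringBuilder body = new StringBuilder();
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        body.append(line);
                    }
                }
                assertTrue(body.toString().contains("<form"));            // crude, but catches broken deployments
            }
        }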

    Read the article

  • Java constructor with a long argument list or JavaBean getter/setter approach

    - by deelo55
    Hi, I can't decide which approach is better for creating objects with a large number of fields (10+, all mandatory): the constructor approach or the getter/setter approach. Constructor: at least you enforce that all the fields are set. JavaBeans: it's easier to see which variables are being set, instead of reading a huge argument list. The builder pattern DOES NOT seem suitable here, as all the fields are mandatory and the builder requires you to put all mandatory parameters in the builder constructor. Thanks, D
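
    For illustration only (a hypothetical Account class, with just three of the ten-plus mandatory fields shown), the two options being weighed look roughly like this:

        // Option 1: the constructor enforces at compile time that every mandatory
        // field is supplied, but call sites become a long positional argument list.
        public class Account {
            private final String owner;
            private final String currency;
            private final long openingBalance;
            // ... remaining mandatory fields ...

            public Account(String owner, String currency, long openingBalance /*, ... */) {
                this.owner = owner;
                this.currency = currency;
                this.openingBalance = openingBalance;
            }
        }

        // Option 2: JavaBean style reads clearly at the call site
        // (setOwner, setCurrency, ...), but nothing stops a caller from forgetting one.
        class AccountBean {
            private String owner;
            private String currency;
            private long openingBalance;

            public void setOwner(String owner) { this.owner = owner; }
            public void setCurrency(String currency) { this.currency = currency; }
            public void setOpeningBalance(long openingBalance) { this.openingBalance = openingBalance; }
        }

    A middle ground sometimes used is the bean style plus an explicit validation step before the object is handed to the rest of the code, so a forgotten setter fails fast instead of surfacing later.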

    Read the article

  • What is the best approach for connection pooling?

    - by Bhushan
    I am implementing connection pooling in a project. Performance-wise, which is the better approach: Hibernate-managed pooling (using C3P0 or DBCP), or configuring a JDBC data source in the application server? Application-server portability is not an important factor for me. Please suggest an approach.
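
    For reference, the second option usually means the pool is defined and sized in the application server, and application code only performs a JNDI lookup; a minimal sketch (the JNDI name is a placeholder and the exact name varies by server):

        import java.sql.Connection;

        import javax.naming.InitialContext;
        import javax.sql.DataSource;

        // Consumer side of a container-managed pool: the application server owns pool
        // configuration and sizing; the application just borrows and returns connections.
        public class PooledConnectionExample {
            public static void main(String[] args) throws Exception {
                // Works inside a container that provides the JNDI environment.
                InitialContext ctx = new InitialContext();
                DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/MyAppDS"); // placeholder JNDI name

                try (Connection con = ds.getConnection()) {    // borrowed from the pool
                    System.out.println("autocommit=" + con.getAutoCommit());
                }                                              // close() returns it to the pool
            }
        }

    Hibernate can also be pointed at such a container-managed pool via its data-source configuration, so the two options are not strictly either/or.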

    Read the article

  • Practical Experience using Stripes?

    - by Andrew Whitehouse
    I am coming from an Enterprise Java background which involves a fairly heavyweight software stack, and have recently discovered the Stripes framework; my initial impression is that it seems to do a good job of minimising the unpleasant parts of building a web application in Java. Has anyone used Stripes for a project that has gone live? Can you share your experiences from the project? Also, did you consider any other technologies and (if so) why did you choose Stripes?

    Read the article

  • Object serialization practical uses?

    - by nash
    How many of the software projects you have worked on used object serialization? I personally have never come across a scenario where object serialization was used. One use case I can think of is server software storing objects to disk to save memory. Are there other types of software where object serialization is essential or preferred over a database?
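
    To make the use case above concrete, here is a small, self-contained Java sketch of writing an object to disk and reading it back (the class and file names are invented for illustration):

        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.ObjectInputStream;
        import java.io.ObjectOutputStream;
        import java.io.Serializable;

        // A session-like object written to disk and read back later,
        // e.g. to evict idle state from memory without losing it.
        public class SerializationDemo {
            static class UserSession implements Serializable {
                private static final long serialVersionUID = 1L;
                String userId;
                long lastActive;
            }

            public static void main(String[] args) throws Exception {
                UserSession s = new UserSession();
                s.userId = "u42";
                s.lastActive = System.currentTimeMillis();

                try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("session.ser"))) {
                    out.writeObject(s);                                      // swap out to disk
                }
                try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("session.ser"))) {
                    UserSession restored = (UserSession) in.readObject();    // swap back in
                    System.out.println(restored.userId);
                }
            }
        }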

    Read the article

  • Practical programming test in interview

    - by redspike
    I have been invited to a second interview with a company recruiting for a software engineer. This interview will consist of a 45-minute programming test on a laptop, followed by a whiteboard presentation of the solution. The position is Java/J2EE-based, so I'm assuming (hoping) the test will be implemented in Java. Have you ever done anything like this? What was the nature of the problem you had to solve? What is a good way to prepare for this type of interview?

    Read the article

  • Practical refactoring

    - by ahb
    I've read about refactoring, and I probably did it before I even knew the term; however, I don't really know much about how it is actually done or what it means in practice. What, from your point of view, is refactoring? How and when do you do it?
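
    As a tiny, made-up before/after of one common refactoring (Extract Method): the behaviour is unchanged, but the duplicated formatting logic gets a single, named home.

        // Before: the same formatting expression is repeated inline.
        class InvoicePrinterBefore {
            void print(double net, double tax) {
                System.out.println("Net:   $" + String.format("%.2f", net));
                System.out.println("Tax:   $" + String.format("%.2f", tax));
                System.out.println("Total: $" + String.format("%.2f", net + tax));
            }
        }

        // After: Extract Method gives the duplication a single, named home.
        class InvoicePrinterAfter {
            void print(double net, double tax) {
                printLine("Net:  ", net);
                printLine("Tax:  ", tax);
                printLine("Total:", net + tax);
            }

            private void printLine(String label, double amount) {
                System.out.println(label + " $" + String.format("%.2f", amount));
            }
        }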

    Read the article

  • WordPress Business Directory - best approach

    - by NTulip
    I want to implement a business directory in WordPress and I am looking for feedback on the best approach. I have a categories table and a businesses table. Do I create a page for every business together with its category relationship, or do I create a single page and assign it a template? What are the ups and downs of each approach? I'm looking for answers from people who might have already done this and can speak from experience.

    Read the article

  • Is String.concat slower than the Array approach to joining strings?

    - by Rajat
    Strings in JavaScript are immutable. Across the web, and here on Stack Overflow as well, I have come across the Array approach to concatenating strings: var a = []; a.push(arg1, arg2, ...); console.log(a.join('')); I know that this approach is better than the simple console.log(arg1 + arg2 + ...); because it skips creating intermediate string objects, but how does it fare against arg1.concat(arg2, arg3, ...);?

    Read the article

  • Unread email notifier, most practical approach

    - by Michael Pasqualone
    I'm in the process of writing a small php-cli script that will loop over my personal inbox and then send me an SMS via a gateway. The question I have is: as the script will be launched via cron every 10 minutes, if there is an email sitting in my inbox that is still unread at the next script launch, I will receive 2 SMS messages. Does anyone have an idea (pseudocode will do) of the best practice in PHP 5 to ensure only 1 SMS is sent? What I am currently leaning towards is storing the message ID in an SQLite DB and flagging a field to record whether an SMS has been sent or not - but I'm wondering if there is an easier way?
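
    The poster asks about PHP, but since pseudocode is acceptable, here is the message-ID bookkeeping idea sketched in Java against SQLite (the JDBC driver, table name, and helper methods are assumptions made for the sketch):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.Statement;
        import java.util.List;

        // Idea only: remember which message IDs have already triggered an SMS, so an
        // email that stays unread across several cron runs is reported exactly once.
        public class NotifyOnce {
            public static void main(String[] args) throws Exception {
                // Assumes an SQLite JDBC driver (e.g. Xerial sqlite-jdbc) on the classpath.
                try (Connection db = DriverManager.getConnection("jdbc:sqlite:notified.db")) {
                    try (Statement st = db.createStatement()) {
                        st.executeUpdate("CREATE TABLE IF NOT EXISTS notified (message_id TEXT PRIMARY KEY)");
                    }
                    for (String messageId : fetchUnreadMessageIds()) {    // stand-in for the IMAP/POP3 check
                        if (markIfNew(db, messageId)) {
                            sendSms("New unread mail: " + messageId);     // stand-in for the SMS gateway call
                        }
                    }
                }
            }

            static boolean markIfNew(Connection db, String messageId) throws Exception {
                // INSERT OR IGNORE succeeds silently when the ID has already been recorded.
                try (PreparedStatement ps =
                         db.prepareStatement("INSERT OR IGNORE INTO notified (message_id) VALUES (?)")) {
                    ps.setString(1, messageId);
                    return ps.executeUpdate() == 1;    // one row inserted = first time we have seen this ID
                }
            }

            static List<String> fetchUnreadMessageIds() {
                return List.of("<abc123@example.com>");                   // placeholder data
            }

            static void sendSms(String text) {
                System.out.println("SMS: " + text);                       // placeholder for the gateway
            }
        }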

    Read the article

  • jQuery.get() - Practical uses?

    - by RedWolves
    I am trying to understand why you would use jQuery.get() and jQuery.get(index). The docs say it is for converting the jQuery selection to the raw DOM object(s), instead of working with the selection as a jQuery object and the methods available to it. So a quick example: $("div").get(0).innerHTML; is the same as: $("div").html(); Obviously this is a bad example, but I am struggling to figure out when you would use .get(). Can you help me understand when I would use this method in my code?

    Read the article

  • Choosing an approach for an IM client-server app

    - by John
    Update: totally rewrote this to be more succinct. I'm looking at a new application, one part of which will be very similar to standard IM clients, i.e. text chat, the ability to send attachments, and maybe some real-time interaction like a multi-user whiteboard. It will be client-server, i.e. all traffic goes through my central server. That means that if I want to support cross-communication with other IM systems, I am still free to pick any protocol for my own client<->server communication - my server can use XMPP or whatever to talk to other systems. Clients are expected to include desktop apps, but probably also browser-based ones, either through Flex/Silverlight or HTML/AJAX. I see 3 options for my own client-server communication layer: XMPP. The benefits are that clients already exist, as do open-source servers. However, it requires the most up-front research/learning and also appears as if it might raise legal issues due to the GPL. Custom sockets. A server app makes connections with the clients, allowing any text/binary data to be sent very fast. However, this approach requires building said server from scratch, and also makes a JS client tricky. Servlets (or a similar web server). Using a tried and tested Java web stack, clients send HTTP requests similar to AJAX-based websites. The benefit is that the server is easy to write using well-established technologies, and easy to talk to. But what restrictions would this bring? Is it appropriate technology for real-time communication? Advice and suggestions are welcome, especially on the pros and cons of a web-server approach compared to a socket-based approach.
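
    To make the servlet option a bit more concrete, here is a deliberately minimal sketch of an endpoint that clients could poll with GET and post messages to (the request parameters, path, and single shared queue are invented; a real design needs per-user mailboxes, authentication, and a long-poll or push strategy):

        import java.io.IOException;
        import java.util.Queue;
        import java.util.concurrent.ConcurrentLinkedQueue;

        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Option 3 in miniature: clients POST outgoing messages and GET pending ones.
        // A single in-memory queue stands in for real per-user mailboxes and persistence.
        public class ChatServlet extends HttpServlet {
            private final Queue<String> pending = new ConcurrentLinkedQueue<>();

            @Override
            protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                String from = req.getParameter("from");   // hypothetical request parameters
                String text = req.getParameter("text");
                pending.add(from + ": " + text);
                resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
            }

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                resp.setContentType("text/plain");
                String msg;
                while ((msg = pending.poll()) != null) {  // drain whatever has arrived since the last poll
                    resp.getWriter().println(msg);
                }
            }
        }

    The trade-off this exposes: with plain request/response, "real-time" is only as good as the polling interval, which is exactly where the comparison with a persistent-socket design bites.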

    Read the article
