Search Results

Search found 37647 results on 1506 pages for 'sql performance'.

Page 629/1506

  • Will these security functions be enough? (PHP)

    - by ggfan
    I am trying to secure my site against SQL injection and XSS. Here's my code.

    Here's the form; for brevity, I just show a field for users to put in a first name:

        <form>
          <label for="first_name" class="styled">First Name:</label>
          <input type="text" id="first_name" name="first_name"
                 value="<?php if (!empty($first_name)) echo $first_name; ?>" /><br />
          <!-- submit button, etc. -->
        </form>

    And the handler:

        if (isset($_POST['submit'])) {
            // gets rid of extra whitespace and escapes
            $first_name = mysqli_real_escape_string($dbc, trim($_POST['first_name']));
            // check if $first_name is a string
            if (!is_string($first_name)) {
                echo "not string";
            }
            // then insert into the database...
        }

    mysqli_real_escape_string: I know that this function escapes certain characters like \n and \r, so when the data gets inserted into the database, will it have a '\' next to every escaped character?

    - Will this script be enough to prevent most SQL injections? Just escaping and checking that the data is a string. For integer values (like users putting in prices), I just use is_numeric().
    - How should I use htmlspecialchars()? Should I use it only when echoing and displaying user data, or should I also use it when inserting data into the database?
    - When should I use strip_tags() versus htmlspecialchars()?

    So, with all these functions:

        if (isset($_POST['submit'])) {
            // gets rid of extra whitespace and escapes
            $first_name = mysqli_real_escape_string($dbc, trim($_POST['first_name']));
            // check if $first_name is a string
            if (!is_string($first_name)) {
                echo "not string";
            }
            // gets rid of any <, >, &
            htmlspecialchars($first_name);
            // strips any tags from the first name
            strip_tags($first_name);
            // then insert into the database...
        }

    Which functions should I use against SQL injection, and which ones against XSS?
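    In case it helps a future reader: the standard recommendation for the injection half of this question is parameterized queries, which keep user input out of the SQL text entirely instead of escaping it. The question is PHP, but as a language-neutral sketch of the idea, here it is in Java with JDBC; the connection details and the users table are invented for illustration:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class SafeInsert {
            public static void main(String[] args) throws Exception {
                // Hypothetical connection details, for illustration only.
                Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost/appdb", "user", "password");

                String firstName = "O'Brien"; // raw user input, no manual escaping
                // The placeholder keeps the value out of the SQL string, so
                // quote characters in the input cannot change the statement.
                try (PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO users (first_name) VALUES (?)")) {
                    ps.setString(1, firstName.trim());
                    ps.executeUpdate();
                }
            }
        }

    Escaping for HTML (htmlspecialchars and friends) is then a separate, output-time concern: apply it when rendering user data into a page, not when storing it.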

    Read the article

  • Rails: How can I log all requests which take more than 4s to execute?

    - by Fedyashev Nikita
    I have a web app hosted in a cloud environment which can be expanded to multiple web nodes to serve higher load. What I need is to catch the situation where we get more and more HTTP requests (assets are stored remotely). How can I do that? The problem, as I see it, is that if we have more requests than the Mongrel cluster can handle, the queue will grow, and in our Rails app we can only start counting after Mongrel receives the request from the balancer. Any recommendations?

    Read the article

  • Running Boggle Solver takes over an hour to run. What is wrong with my code?

    - by user1872912
    So I am running a Boggle solver in Java in the NetBeans IDE. When I run it, I have to quit after 10 minutes or so because it would end up taking about two hours to run completely. Is there something wrong with my code, or a way to make it substantially faster?

        public void findWords(String word, int iLoc, int jLoc, ArrayList<JLabel> labelsUsed) {
            if (iLoc < 0 || iLoc >= 4 || jLoc < 0 || jLoc >= 4) {
                return;
            }
            if (labelsUsed.contains(jLabels[iLoc][jLoc])) {
                return;
            }
            word += jLabels[iLoc][jLoc].getText();
            labelsUsed.add(jLabels[iLoc][jLoc]);
            if (word.length() >= 3 && wordsPossible.contains(word)) {
                wordsMade.add(word);
            }
            findWords(word, iLoc - 1, jLoc, labelsUsed);
            findWords(word, iLoc + 1, jLoc, labelsUsed);
            findWords(word, iLoc, jLoc - 1, labelsUsed);
            findWords(word, iLoc, jLoc + 1, labelsUsed);
            findWords(word, iLoc - 1, jLoc + 1, labelsUsed);
            findWords(word, iLoc - 1, jLoc - 1, labelsUsed);
            findWords(word, iLoc + 1, jLoc - 1, labelsUsed);
            findWords(word, iLoc + 1, jLoc + 1, labelsUsed);
            labelsUsed.remove(jLabels[iLoc][jLoc]);
        }

    Here is where I call this method from:

        public void findWords() {
            ArrayList<JLabel> labelsUsed = new ArrayList<JLabel>();
            for (int i = 0; i < jLabels.length; i++) {
                for (int j = 0; j < jLabels[i].length; j++) {
                    findWords(jLabels[i][j].getText(), i, j, labelsUsed);
                    //System.out.println("Done");
                }
            }
        }

    Edit: BTW, I am using a GUI, and the letters on the board are displayed using JLabels.
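    A hedged sketch of the standard fix for this kind of blowup: prune any branch whose current string is not a prefix of a dictionary word, instead of exploring all eight neighbors unconditionally. This assumes the dictionary (wordsPossible in the question) is available as a set; everything else is illustrative:

        import java.util.HashSet;
        import java.util.Set;

        public class PrefixPruning {
            // Build once, before the search: every prefix of every dictionary word.
            public static Set<String> buildPrefixes(Set<String> wordsPossible) {
                Set<String> prefixes = new HashSet<String>();
                for (String w : wordsPossible) {
                    for (int i = 1; i <= w.length(); i++) {
                        prefixes.add(w.substring(0, i));
                    }
                }
                return prefixes;
            }
        }

        // Then, inside findWords, right after appending the current letter:
        //
        //     if (!prefixes.contains(word)) {
        //         labelsUsed.remove(jLabels[iLoc][jLoc]);
        //         return; // no word starts with this string, so stop recursing
        //     }
        //
        // Cutting the 8-way recursion as soon as a path cannot lead to any
        // word is what turns hours into well under a second on a 4x4 board.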

    Read the article

  • Are doubles faster than floats in C#?

    - by Trap
    I'm writing an application which reads large arrays of floats and performs some simple operations with them. I'm using floats because I thought it'd be faster than doubles, but after doing some research I've found that there's some confusion about this topic. Can anyone elaborate on this? Thanks.

    Read the article

  • Why is my regex so much slower compiled than interpreted?

    - by miket2e
    I have a large and complex C# regex that runs OK when interpreted, but is a bit slow. I'm trying to speed this up by setting RegexOptions.Compiled, and this seems to take about 30 seconds the first time and run instantly after that. I'm trying to avoid this by compiling the regex to an assembly first, so my app can be as fast as possible. My problem is when the compiling delay takes place:

        Regex myComplexRegex = new Regex(regexText, RegexOptions.Compiled);
        MatchCollection matches = myComplexRegex.Matches(searchText);
        foreach (Match match in matches) // <--- when the one-time long delay kicks in
        {
        }

    This makes compiling to an assembly basically useless, as I still get the delay on the first foreach call. What I want is for all the compiling delay to happen in advance, when I compile to the assembly, not when the user runs the app. Where am I going wrong? (The code I'm using to compile to an assembly is similar to http://www.dijksterhuis.org/regular-expressions-advanced/ , if that's relevant.)

    Read the article

  • How do I make a "simple" throughput J2EE filter?

    - by Tommy
    I'm looking to create a filter that can give me two things: number of requests per minute, and average response time per minute. I already have the individual readings; I'm just not sure how to add them up. My filter captures every request and records the time each request takes:

        public void doFilter(ServletRequest request, ...) {
            long start = System.currentTimeMillis();
            chain.doFilter(request, response);
            long stop = System.currentTimeMillis();
            String time = Util.getTimeDifferenceInSec(start, stop);
        }

    This information will be used to create some pretty Google Chart charts. I don't want to store the data in any database; I just want a way to get the current numbers out when requested. As this is a high-volume application, low overhead is essential. I'm assuming my application server doesn't provide this information.
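    A minimal sketch of one way to do the adding-up, assuming it is acceptable for the one-minute window to be approximate: keep two atomic counters on the hot path, and have whatever serves the chart data snapshot and reset them once a minute.

        import java.util.concurrent.atomic.AtomicLong;

        public class ThroughputStats {
            private final AtomicLong requestCount = new AtomicLong();
            private final AtomicLong totalMillis = new AtomicLong();

            // Called from doFilter() after the chain completes; lock-free
            // and allocation-free, so overhead on the request path is tiny.
            public void record(long elapsedMillis) {
                requestCount.incrementAndGet();
                totalMillis.addAndGet(elapsedMillis);
            }

            // Called once a minute by the chart endpoint: returns
            // {requests in the window, average response time in ms}.
            public long[] snapshotAndReset() {
                long count = requestCount.getAndSet(0);
                long millis = totalMillis.getAndSet(0);
                long avg = (count == 0) ? 0 : millis / count;
                return new long[] { count, avg };
            }
        }

    There is a small race between the two getAndSet calls, so a request can land in one window's count and the next window's total; for monitoring numbers that is usually a tolerable trade for zero locking.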

    Read the article

  • Calculating change (over a period) for a dated field

    - by morpheous
    I have two tables with the following schema:

        CREATE TABLE sales_data (
            sales_time date NOT NULL,
            product_id integer NOT NULL,
            sales_amt double NOT NULL
        );

        CREATE TABLE date_dimension (
            id integer NOT NULL,
            datestamp date NOT NULL,
            day_part integer NOT NULL,
            week_part integer NOT NULL,
            month_part integer NOT NULL,
            qtr_part integer NOT NULL,
            year_part integer NOT NULL
        );

    I want to write two types of queries that will allow me to calculate:

    - period-on-period change (e.g. week-on-week change)
    - change in period-on-period change (e.g. change in week-on-week change)

    I would prefer to write this in ANSI SQL, since I don't want to be tied to any particular db. [Edit] In light of some of the comments: if I have to be tied to a single database (in terms of SQL dialect), it will have to be PostgreSQL.

    The queries I want to write are of the form (pseudo-SQL, of course):

        -- Query type 1 (period-on-period change)
        a) select product_id,
                  ((sd2.sales_amt - sd1.sales_amt) / sd1.sales_amt) as week_on_week_change
           from sales_data sd1, sales_data sd2, date_dimension dd
           where {SOME CRITERIA}

        b) select product_id,
                  ((sd2.sales_amt - sd1.sales_amt) / sd1.sales_amt) as month_on_month_change
           from sales_data sd1, sales_data sd2, date_dimension dd
           where {SOME CRITERIA}

        -- Query type 2 (change in period-on-period change)
        a) select product_id,
                  ((a2.week_on_week_change - a1.week_on_week_change) / a1.week_on_week_change)
                      as change_on_week_on_week_change
           from (select product_id,
                        ((sd2.sales_amt - sd1.sales_amt) / sd1.sales_amt) as week_on_week_change
                 from sales_data sd1, sales_data sd2, date_dimension dd
                 where {SOME CRITERIA}) as a1,
                (select product_id,
                        ((sd2.sales_amt - sd1.sales_amt) / sd1.sales_amt) as week_on_week_change
                 from sales_data sd1, sales_data sd2, date_dimension dd
                 where {SOME CRITERIA}) as a2
           where {SOME OTHER CRITERIA}
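    A hedged sketch of how this often ends up looking on PostgreSQL (per the edit): aggregate per period, then use the lag() window function to pull the previous period's value, which avoids the self-join entirely. Driven from Java/JDBC here for illustration; the connection details are invented, and week_part alone is assumed to order the periods you care about (in practice you would include year_part as well):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class WeekOnWeek {
            public static void main(String[] args) throws Exception {
                // Hypothetical connection details, for illustration only.
                Connection conn = DriverManager.getConnection(
                        "jdbc:postgresql://localhost/sales", "user", "password");

                // lag() pulls the previous week's total for the same product;
                // NULLIF avoids division by zero, and the first week per
                // product simply yields NULL.
                String sql =
                    "SELECT product_id, week_part, " +
                    "       (amt - lag(amt) OVER w) / NULLIF(lag(amt) OVER w, 0) " +
                    "           AS week_on_week_change " +
                    "FROM (SELECT sd.product_id, dd.week_part, sum(sd.sales_amt) AS amt " +
                    "      FROM sales_data sd " +
                    "      JOIN date_dimension dd ON dd.datestamp = sd.sales_time " +
                    "      GROUP BY sd.product_id, dd.week_part) t " +
                    "WINDOW w AS (PARTITION BY product_id ORDER BY week_part)";

                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(sql)) {
                    while (rs.next()) {
                        System.out.printf("%d week %d: %.4f%n",
                                rs.getLong(1), rs.getInt(2), rs.getDouble(3));
                    }
                }
            }
        }

    The second query type (change in the change) is then the same trick applied once more: wrap this result and take lag() of week_on_week_change.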

    Read the article

  • Will an IO blocked process show 100% CPU utilization in 'top' output?

    - by Alex Stoddard
    I have an analysis that can be parallelized over a different number of processes. It is expected that things will be both IO- and CPU-intensive (very high-throughput short-read DNA alignment, if anyone is curious). The system running this is a 48-core Linux server. The question is how to determine the optimum number of processes such that total throughput is maximized. At some point the processes will presumably become IO-bound, such that adding more processes will be of no benefit and possibly detrimental. Can I tell from standard system monitoring tools when that point has been reached? Would the output of top (or maybe a different tool) enable me to distinguish between an IO-bound and a CPU-bound process? I am suspicious that a process blocked on IO might still show 100% CPU utilization.

    Read the article

  • XNA 2D/3D Drawing method?

    - by Adir
    What would be the better practice: writing the drawing method inside the GameObject class, or in the Game class?

        GameObject obj = new GameObject();
        obj.Draw();

    Or:

        GameObject obj = new GameObject();
        DrawGameObject(obj);
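    The trade-off is between objects that know how to draw themselves and a renderer that knows how to draw objects; the second keeps device and batching state in one place, which matters once draw calls need to be sorted or batched. A small language-neutral sketch of the renderer-owned style, written in Java with all names invented:

        // Renderer-owned drawing: game objects stay plain data, and all
        // drawing state (device, sprite batch) lives in one class.
        class GameObject {
            String textureId;
            float x, y;
        }

        class GameRenderer {
            void drawGameObject(GameObject obj) {
                drawSprite(obj.textureId, obj.x, obj.y);
            }

            void drawSprite(String textureId, float x, float y) {
                // submit to the sprite batch here
            }
        }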

    Read the article

  • C#: How to implement a smart cache

    - by Svish
    I have some places where implementing some sort of cache might be useful. For example, doing resource lookups based on custom strings, finding names of properties using reflection, or having only one PropertyChangedEventArgs per property name. A simple example of the last one:

        public static class Cache
        {
            private static Dictionary<string, PropertyChangedEventArgs> cache;

            static Cache()
            {
                cache = new Dictionary<string, PropertyChangedEventArgs>();
            }

            public static PropertyChangedEventArgs GetPropertyChangedEventArgs(string propertyName)
            {
                if (cache.ContainsKey(propertyName))
                    return cache[propertyName];
                return cache[propertyName] = new PropertyChangedEventArgs(propertyName);
            }
        }

    But will this work well? For example, if we had a whole load of different propertyNames, we would end up with a huge cache sitting there, never being garbage collected. I'm imagining that if what is cached are larger values, and the application is a long-running one, this might end up being a problem... or what do you think? How should a good cache be implemented? Is this one good enough for most purposes? Any examples of nice cache implementations that are not too hard to understand or way too complex to implement?
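    A hedged sketch of one standard answer to the unbounded-growth worry: cap the cache size and evict the least recently used entry. Shown in Java via LinkedHashMap, as an analogue of the pattern rather than a drop-in C# replacement:

        import java.util.LinkedHashMap;
        import java.util.Map;

        // In access order, LinkedHashMap moves entries to the back on get(),
        // and removeEldestEntry lets us drop the least recently used one
        // once the cap is exceeded, so the cache cannot grow without bound.
        public class LruCache<K, V> extends LinkedHashMap<K, V> {
            private final int maxEntries;

            public LruCache(int maxEntries) {
                super(16, 0.75f, true); // true = access order
                this.maxEntries = maxEntries;
            }

            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        }

        // Usage: new LruCache<String, Object>(10000) caps memory while still
        // giving cache hits for the property names that are actually hot.

    For small, bounded key spaces (like property names of a fixed set of classes), the original unbounded dictionary is usually fine; the cap matters when keys come from user input.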

    Read the article

  • Speed up PostgreSQL createdb?

    - by John
    Is there a way to speed up PostgreSQL's createdb command? Normally I wouldn't care, but doing unit testing in Django creates a database every time, and it takes about 5 seconds. I'm using openSUSE 11.2 64-bit and PostgreSQL 8.4.2.

    Read the article

  • What does CPU Time consist of? [closed]

    - by Sid
    What does CPU time exactly consist of? For instance, is the time taken to access a page from the RAM (at which point, the CPU is most likely idling) part of the CPU time? I'm not talking about fetching the page from the disk here, just fetching it from the RAM. Thanks

    Read the article

  • C# Dictionary Loop Enhancement

    - by Toto
    Hi, I have a dictionary with around 1 million items, and I am constantly looping through it:

        public void DoAllJobs()
        {
            foreach (KeyValuePair<uint, BusinessObject> p in _dictionnary)
            {
                if (p.Value.MustDoJob)
                    p.Value.DoJob();
            }
        }

    The execution is a bit long, around 600 ms, and I would like to decrease it. Here are the constraints:

    1. MustDoJob values mostly stay the same between two calls to DoAllJobs()
    2. 60-70% of the MustDoJob values == false
    3. From time to time, MustDoJob changes for 200,000 pairs
    4. Some p.Value.DoJob() calls cannot be computed at the same time (COM object call)

    Here, I do not need the key part of the _dictionnary object, but I really do need it somewhere else. I wanted to do the following:

    - Parallelize, but I am not sure it is going to be effective due to 4.
    - Sort the dictionary, given 1. and 2. (and stop when I find the first MustDoJob == false), but I am wondering what 3. would result in.

    I have not implemented any of the previous ideas, since it could be a lot of work, and I would like to investigate other options first. So... any ideas?
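    A hedged sketch of one option the constraints suggest: since MustDoJob rarely changes (1.) and is mostly false (2.), maintain a second collection holding only the active jobs, updated at the moment the flag flips, and iterate that instead of scanning the full map. Java used for illustration; all names are hypothetical:

        import java.util.HashMap;
        import java.util.HashSet;
        import java.util.Map;
        import java.util.Set;

        public class JobRegistry {
            private final Map<Integer, BusinessObject> byId = new HashMap<Integer, BusinessObject>();
            private final Set<BusinessObject> mustRun = new HashSet<BusinessObject>();

            // Call this whenever MustDoJob flips, instead of rescanning later.
            // Even 200,000 flips are cheap set operations spread over time.
            public void setMustDoJob(int id, boolean mustDo) {
                BusinessObject obj = byId.get(id);
                if (mustDo) { mustRun.add(obj); } else { mustRun.remove(obj); }
            }

            // The hot loop now touches only the ~30-40% of objects needing work.
            public void doAllJobs() {
                for (BusinessObject obj : mustRun) {
                    obj.doJob();
                }
            }
        }

        // Assumed shape of the business object, just so the sketch compiles.
        class BusinessObject {
            void doJob() { /* COM call in the real system */ }
        }

    This keeps the keyed map intact for the other use the question mentions, at the cost of one extra set and the discipline of routing flag changes through one method.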

    Read the article

  • JavaScript: Is there a better way to retain your array but efficiently concat or replace items?

    - by Michael Mikowski
    I am looking for the best way to replace or add to elements of an array without deleting the original reference. Here is the setup:

        var a = [], b = [], i, obj = {};

        for (i = 0; i < 100000; i++) {
            a[i] = i;
            b[i] = 10000 - i;
        }
        obj.data_list = a;

    Now we want to concatenate b INTO a without changing the reference to a, since it is used in obj.data_list. Here is one method:

        for (i = 0; i < b.length; i++) {
            a.push(b[i]);
        }

    This seems to be a somewhat terser and 8x (on V8) faster method:

        a.splice.apply(a, [a.length, 0].concat(b));

    I have found this useful when iterating over an "in-place" array when I don't want to touch the elements as I go (a good practice). I start a new array (let's call it keep_list) with the initial arguments and then add the elements I wish to retain. Finally I use this apply method to quickly replace the truncated array:

        var keep_list = [0, 0];

        for (i = 0; i < a.length; i++) {
            if (some_condition) {
                keep_list.push(a[i]);
            }
        }

        // truncate array
        a.length = 0;

        // and replace contents
        a.splice.apply(a, keep_list);

    There are a few problems with this solution:

    - there is a max call stack size limit of around 50k on V8; I have not tested other JS engines yet
    - this solution is a bit cryptic

    Has anyone found a better way?

    Read the article

  • How to explain to a developer that adding extra if - else if conditions is not a good way to "improve readability"?

    - by Lilit
    Recently I've bumped into the following C++ code:

        if (a) {
            f();
        } else if (b) {
            f();
        } else if (c) {
            f();
        }

    where a, b and c are all different conditions, and they are not very short. I tried to change the code to:

        if (a || b || c) {
            f();
        }

    But the author opposed the change, saying it would decrease the readability of the code. I had two arguments:

    1) You should not increase readability by replacing one branching statement with three (though I really doubt that it's possible to make code more readable by using else if instead of ||).

    2) It's not the fastest code, and no compiler will optimize this.

    But my arguments did not convince him. What would you tell a programmer writing such code? Do you think a complex condition is an excuse for using else if instead of OR?
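    One hedged middle ground that often settles this kind of argument: keep the single branch, but give each long condition a name, so the || chain reads as domain language rather than a wall of boolean algebra. A small Java illustration with invented names:

        public class ConditionNaming {
            // Stand-ins for the three real (long) conditions.
            static boolean expiredByDate()  { return false; }
            static boolean expiredByUsage() { return true; }
            static boolean revokedByAdmin() { return false; }

            static void f() { System.out.println("handled"); }

            public static void main(String[] args) {
                // Naming each sub-condition keeps one branch while still
                // telling the reader why f() runs.
                boolean expired = expiredByDate() || expiredByUsage();
                boolean revoked = revokedByAdmin();

                if (expired || revoked) {
                    f();
                }
            }
        }

    This gives the original author the per-case readability they wanted without duplicating the call to f() three times.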

    Read the article

  • Hibernate Relationship Mapping/Speed up batch inserts

    - by manyxcxi
    I have 5 MySQL InnoDB tables: Test, InputInvoice, InputLine, OutputInvoice, OutputLine, and each is mapped and functioning in Hibernate. I have played with using StatelessSession/Session and with the JDBC batch size. I have removed any generator classes to let MySQL handle the id generation, but it is still performing quite slowly.

    Each of those tables is represented by a Java class and mapped in Hibernate accordingly. Currently, when it comes time to write the data out, I loop through the objects and do a session.save(Object), or a session.insert(Object) if I'm using StatelessSession. I also do a flush and clear (when using Session) when my line count reaches the max JDBC batch size (50).

    Would it be faster if I had these in a 'parent' class that held the objects and did a session.save(master) instead of saving each one? If I had them in a master/container class, how would I map that in Hibernate to reflect the relationship? The container class wouldn't actually be a table of its own, but a relationship based on two indexes, run_id (int) and line (int).

    Another direction would be: how do I get Hibernate to do a multi-row insert?
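    For reference, a hedged sketch of the usual batched-insert shape with a StatelessSession, using the InputLine entity from the question; the session plumbing is illustrative, and it assumes hibernate.jdbc.batch_size is set in the configuration so the driver actually batches:

        import java.util.List;
        import org.hibernate.SessionFactory;
        import org.hibernate.StatelessSession;
        import org.hibernate.Transaction;

        public class BatchInserter {
            // StatelessSession bypasses the first-level cache and dirty
            // checking, so nothing accumulates in memory between inserts
            // and no flush()/clear() bookkeeping is needed.
            public void insertAll(SessionFactory factory, List<InputLine> lines) {
                StatelessSession session = factory.openStatelessSession();
                Transaction tx = session.beginTransaction();
                try {
                    for (InputLine line : lines) {
                        session.insert(line);
                    }
                    tx.commit(); // one transaction for the whole batch
                } catch (RuntimeException e) {
                    tx.rollback();
                    throw e;
                } finally {
                    session.close();
                }
            }
        }

    Committing once per run (rather than per row) and letting MySQL assign ids, as the question already does, are typically the two biggest wins; a wrapper "master" object would not change the number of INSERT statements issued.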

    Read the article

  • Optimizing landing pages

    - by Oleg Shaldybin
    In my current project (Rails 2.3) we have a collection of 1.2 million keywords, and each of them is associated with a landing page, which is effectively a search results page for a given keyword. Each of those pages is pretty complicated, so it can take a long time to generate (up to 2 seconds with a moderate load, even longer during traffic spikes, with current hardware). The problem is that 99.9% of visits to those pages are new visits (via search engines), so it doesn't help a lot to cache on the first visit: it will still be slow for that visit, and the next visit could be several weeks later.

    I'd really like to make those pages faster, but I don't have too many ideas on how to do it. A couple of things that come to mind:

    - build a cache for all keywords beforehand (with a very long TTL, a month or so). However, building and maintaining this cache can be a real pain, and the search results on the page might be outdated or even no longer accessible;
    - given the volatile nature of this data, don't try to cache anything at all, and just try to scale out to keep up with traffic.

    I'd really appreciate any feedback on this problem.

    Read the article

  • Where should I place a function that I want to run before the cached page is served (Drupal)

    - by kidbrax
    We have an intranet site that runs on Drupal. If an employee hits the site from outside our network, they are required to log in first. If they are already on our network, they can browse around freely. So we have a function that checks where they are coming from and redirects them to a login page if they are from outside. If we enable caching, they are not redirected, because the cached page is rendered without running our function. The code currently lives inside the theme_preprocess function. Where can I put it so that it always runs before cached pages are served?

    Read the article

  • WPF chart optimization: more than a million points

    - by Evgeny
    I have a custom control, a chart sized at, for example, 300x300 pixels, with more than one million points (maybe fewer) in it, and it's clear that right now it works very slowly. I am searching for an algorithm that will show only a few points with minimal visual difference. I have a link to a component whose functionality is exactly what I need (2 million points demo): http://www.mindscape.co.nz/demo/SilverlightElements/demopage.html#/ChartOverviewPage I will be grateful for any materials, links, or thoughts on how to implement such functionality.
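    A hedged sketch of the usual decimation approach for line charts: bucket the points per horizontal pixel and keep only each bucket's minimum and maximum, so a 300-pixel-wide chart never draws more than about 600 points regardless of input size, and the visible vertical extent of each column is preserved exactly. Plain Java, with all names illustrative:

        import java.util.ArrayList;
        import java.util.List;

        public class MinMaxDecimator {
            // Reduce n samples to at most 2 points per pixel column. For a
            // polyline, only the vertical extent per column is visible, so
            // the result is visually indistinguishable from the full series.
            public static List<double[]> decimate(double[] xs, double[] ys, int pixelWidth) {
                List<double[]> out = new ArrayList<double[]>();
                int n = xs.length;
                int bucketSize = Math.max(1, n / pixelWidth);
                for (int start = 0; start < n; start += bucketSize) {
                    int end = Math.min(n, start + bucketSize);
                    int minI = start, maxI = start;
                    for (int i = start + 1; i < end; i++) {
                        if (ys[i] < ys[minI]) minI = i;
                        if (ys[i] > ys[maxI]) maxI = i;
                    }
                    // Emit in index order so the polyline stays monotonic in x.
                    int first = Math.min(minI, maxI);
                    int second = Math.max(minI, maxI);
                    out.add(new double[] { xs[first], ys[first] });
                    if (first != second) {
                        out.add(new double[] { xs[second], ys[second] });
                    }
                }
                return out;
            }
        }

    Recompute the decimation on zoom or resize; since it is a single linear pass, even two million points reduce in a few milliseconds.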

    Read the article

  • Will using FILE_FLAG_NO_BUFFERING return a noticeable speed gain?

    - by 9dan
    I recently noticed the detailed description of the FILE_FLAG_NO_BUFFERING flag in MSDN, and read several Google search results about unbuffered I/O in Windows: http://msdn.microsoft.com/en-us/library/aa363858(v=vs.85).aspx I'm wondering now: is it really important to consider the unbuffered option in file I/O programming? Because many programs use plain old C stream I/O or C++ iostreams, I never gave any attention to the FILE_FLAG_NO_BUFFERING flag before. Let's say we are developing a photo explorer program like Picasa. If we implement unbuffered I/O, would thumbnail display speed show a noticeable difference for ordinary users?

    Read the article

  • Maven + Tomcat acceleration

    - by Bar
    I am writing a web application with Maven in the Eclipse IDE, and I use the Tomcat servlet container. So I run Maven like this: mvn clean compile. It is reasonable that after this operation I must re-run Tomcat so it can reinitialize the context (Sysdeo Tomcat Launcher helps a lot). The problem is that the Maven execution and the subsequent Tomcat re-run take a noticeable amount of time (10+ seconds for Maven and 20+ seconds for Tomcat, because of logging, Hibernate mappings, etc.) every time I do this. Is there any more automated and faster solution for these two operations? As I see it, a much better approach would be moving only the recompiled classes to the target dir.

    Read the article

  • MySQL Insert Query Randomly Takes a Long Time

    - by ShimmerTroll
    I am using MySQL to manage session data for my PHP application. When testing the app, it is usually very quick and responsive. However, seemingly at random, the response will stall before finally completing after a few seconds. I have narrowed the problem down to the session write query, which looks something like this:

        INSERT INTO Session VALUES ('lvg0p9peb1vd55tue9nvh460a7', '1275704013', '')
        ON DUPLICATE KEY UPDATE sessAccess = '1275704013', sessData = '';

    The slow query log has this information:

        Query_time: 0.524446  Lock_time: 0.000046  Rows_sent: 0  Rows_examined: 0

    This happens about 1 out of every 10 times; the query usually takes only ~0.0044 sec. The table is InnoDB with about 60 rows, and sessId is the primary key with a BTREE index. Since this is accessed on every page view, this is clearly not an acceptable execution time. Why is this happening?

    Update: the table schema is sessId varchar(32), sessAccess int(10), sessData text.
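    A hedged guess worth ruling out: by default InnoDB flushes its log to disk on every commit, so an intermittently busy disk produces exactly this pattern of occasional half-second stalls on an otherwise instant write. A small JDBC sketch to inspect (and, if the durability trade-off is acceptable, relax) the setting; the statements are standard MySQL, the connection details are invented:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class FlushCheck {
            public static void main(String[] args) throws Exception {
                Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost/appdb", "user", "password"); // hypothetical
                try (Statement st = conn.createStatement()) {
                    // 1 = fsync on every commit (the default: durable, stall-prone)
                    ResultSet rs = st.executeQuery(
                            "SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit'");
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + " = " + rs.getString(2));
                    }
                    // Setting it to 2 flushes about once per second instead; up to
                    // ~1s of session writes can be lost on a crash, which is often
                    // acceptable for session data. Uncomment only if that is true:
                    // st.executeUpdate("SET GLOBAL innodb_flush_log_at_trx_commit = 2");
                }
            }
        }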

    Read the article

  • Django: IE doesn't load localhost, or loads it very SLOWLY

    - by reedvoid
    I'm just starting to learn Django, building a project on my computer, running Windows 7 64-bit, Python 2.7, Django 1.3. Basically, whatever I write loads in Chrome and Firefox instantly, but IE (version 9) just stalls there and does nothing. I can load up "http://127.0.0.1:8000" in IE, leave the computer on for hours, and it doesn't load. Sometimes, when I refresh a couple of times or restart IE, it works. If I change something in the code, again, Chrome and Firefox reflect the change instantly, whereas IE doesn't - if it loads the page at all. What is going on? I'm losing my mind here....

    Read the article

  • Slowing process creation under Java?

    - by oconnor0
    I have a single, large-heap (up to 240GB, though in the 20-40GB range for most of this phase of execution) JVM [1] running under Linux [2] on a server with 24 cores. We have tens of thousands of objects that have to be processed by an external executable, after which we load the data created by those executables back into the JVM. Each executable produces about half a megabyte of data (on disk), which, when read right in after the process finishes, is of course larger.

    Our first implementation had each executable handle only a single object. This involved spawning twice as many processes as we had objects (since we called a shell script that called the executable). Our CPU utilization would start off high, but not necessarily at 100%, and slowly worsen. As we began measuring to see what was happening, we noticed that the process creation time [3] continually slows: while starting at sub-second times, it would eventually grow to take a minute or more. The actual processing done by the executable usually takes less than 10 seconds.

    Next we changed the executable to take a list of objects to process, in an attempt to reduce the number of processes created. With batch sizes of a few hundred (~1% of our current sample size), the process creation times start out around 2 seconds and grow to around 5-6 seconds.

    Basically, why is it taking so long to create these processes as execution continues?

    [1] Oracle JDK 1.6.0_22
    [2] Red Hat Enterprise Linux Advanced Platform 5.3, Linux kernel 2.6.18-194.26.1.el5 #1 SMP
    [3] Creation of the ProcessBuilder object, redirecting the error stream, and starting it.
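    A hedged sketch of one way to sidestep per-object process creation entirely (whatever its root cause turns out to be): start a small pool of long-lived worker processes once and stream jobs to them over stdin, so fork/exec cost is paid a couple of dozen times instead of tens of thousands. It assumes the external tool can be adapted to read jobs line by line; "aligner" and its flag are stand-ins for the real executable:

        import java.io.BufferedReader;
        import java.io.BufferedWriter;
        import java.io.InputStreamReader;
        import java.io.OutputStreamWriter;
        import java.util.Arrays;
        import java.util.List;

        public class WorkerPool {
            public static void main(String[] args) throws Exception {
                // Start the external tool once (in practice, once per core),
                // instead of once per object. Names here are hypothetical.
                Process worker = new ProcessBuilder("aligner", "--stdin-jobs")
                        .redirectErrorStream(true)
                        .start();

                BufferedWriter jobs = new BufferedWriter(
                        new OutputStreamWriter(worker.getOutputStream()));
                BufferedReader results = new BufferedReader(
                        new InputStreamReader(worker.getInputStream()));

                List<String> objects = Arrays.asList("obj-1", "obj-2", "obj-3");
                for (String object : objects) {
                    jobs.write(object);
                    jobs.newLine();
                    jobs.flush();
                    System.out.println(results.readLine()); // one result line per job
                }

                jobs.close();  // end-of-input signals the worker to exit
                worker.waitFor();
            }
        }

    This also removes the shell-script hop, which doubles the number of processes created per object in the current setup.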

    Read the article
