Search Results

Search found 37647 results on 1506 pages for 'sql performance'.

  • mysql_real_escape_string is giving me errors when I try to add security to my website

    - by Mike
    I tried doing this:

        @ $db = new myConnectDB();
        $beerName = mysql_real_escape_string($beerName);
        $beerID = mysql_real_escape_string($beerID);
        $brewery = mysql_real_escape_string($brewery);
        $style = mysql_real_escape_string($style);
        $userID = mysql_real_escape_string($userID);
        $abv = mysql_real_escape_string($abv);
        $ibu = mysql_real_escape_string($ibu);
        $breweryID = mysql_real_escape_string($breweryID);
        $icon = mysql_real_escape_string($icon);

    I get this error: Warning: mysql_real_escape_string() [function.mysql-real-escape-string]: Access denied for user
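
    For what it's worth, this particular warning usually means mysql_real_escape_string() could not find an open MySQL connection: with no link available, PHP attempts an implicit mysql_connect() with default credentials, which fails with "Access denied". A minimal sketch of the fix, assuming the custom myConnectDB class does not itself open a classic mysql_* link (the credentials and database name here are placeholders):

        <?php
        // Open a plain mysql_* connection before escaping anything.
        $link = mysql_connect('localhost', 'db_user', 'db_pass');
        if (!$link) {
            die('Connect failed: ' . mysql_error());
        }
        mysql_select_db('beer_db', $link);

        // Pass the link explicitly so no implicit connect is attempted.
        $beerName = mysql_real_escape_string($beerName, $link);
        $brewery  = mysql_real_escape_string($brewery, $link);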

    Read the article

  • Apache2 BufferedLogs On - anybody using it?

    - by Qiqi
    Greetings. I am wondering whether anybody is using BufferedLogs On with Apache2 and has found any issues? The feature is marked as experimental, but has been for many years now, so I guess it's actually pretty stable. I am running some servers with constrained disk IO capacity at the moment, so I turned it on, hoping that even a small benefit could help in the long run. I see anywhere from several to several hundred requests per second, so to my mind there is really no need to write to the log after each request, because honestly I don't think my filesystem is the best handler for many unnecessary writes (it's OCFS2, shared among several DomUs under Xen).
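
    For reference, a minimal sketch of the directive in a server-wide config (Apache 2.2, mod_log_config; the log path is illustrative):

        # buffer log entries in memory and write them in batches,
        # instead of hitting the disk after every request
        BufferedLogs On
        CustomLog /var/log/apache2/access.log combined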

    Read the article

  • Is it reasonable that a random disk seek & read costs ~16ms?

    - by fzhang
    I am frustrated by the latency of random reads from a non-SSD disk. Based on the results of the following test program, it takes ~16 ms for a random read of just 512 bytes, without help from the OS cache. I tried changing 512 to larger values, such as 25k, and the latency did not increase much; I guess that is because the disk seek dominates the time. I understand that random reading is inherently slow, but I just want to be sure that ~16 ms is reasonable, even for a non-SSD disk.

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/time.h>
        #include <unistd.h>

        int main(int argc, char** argv) {
            int fd = open(argv[1], O_RDONLY);
            if (fd < 0) {
                fprintf(stderr, "Failed open %s\n", argv[1]);
                return -1;
            }
            /* an enum constant, not "const size_t": a const-qualified variable
               would make buffer a VLA, which cannot take an initializer */
            enum { count = 512 };
            const off_t offset = 25990611 / 2;
            char buffer[count] = { '\0' };

            struct timeval start_time;
            gettimeofday(&start_time, NULL);

            if (lseek(fd, offset, SEEK_SET) != offset) {
                perror("lseek error");
                close(fd);
                return -1;
            }
            ssize_t nread = read(fd, buffer, count);
            if (nread != count) {
                fprintf(stderr, "Failed reading all: %ld\n", (long)nread);
                close(fd);
                return -1;
            }

            struct timeval end_time;
            gettimeofday(&end_time, NULL);
            /* combine the two fields into one elapsed value; printing the tv_sec
               and tv_usec deltas separately is misleading when tv_usec wraps */
            long elapsed_us = (end_time.tv_sec - start_time.tv_sec) * 1000000L
                            + (end_time.tv_usec - start_time.tv_usec);
            printf("elapsed: %ld us\n", elapsed_us);
            close(fd);
            return 0;
        }

    Read the article

  • Which would be the better way to load data via AJAX?

    - by Mike
    I am using Google Maps and returning html/lat/long from my MySQL database. Currently:

    - a user picks a business category, e.g. "Video Production"
    - an AJAX call is sent to a CodeIgniter controller
    - the controller then queries the db and returns the following data via JSON: the lat/long of the marker, and the HTML for the popup window (this is approximately 34 rows in the database, across two tables, per business)
    - the AJAX call receives this data and then plots the marker, along with the HTML, onto the map

    The data returned from the controller is one big JSON object, and this is done for all businesses that exist in the "Video Production" category (currently approx 40 businesses). As you can see, pulling this data for multiple categories (100s of businesses) can get very taxing on the server. My question is: would it be more beneficial to modify the process flow as such:

    - a user picks a business category, e.g. "Video Production"
    - an AJAX call is sent to a CodeIgniter controller
    - the controller then queries the database for only the location-based information: lat/long and level (used to change the marker icon color); this would be a single row per business with several columns
    - the AJAX call receives this data and then plots the marker on the map
    - when the user clicks a marker, an AJAX call is sent to a CodeIgniter controller
    - the controller queries the database for the HTML and additional data, based on business_id

    And if not, what are some better suggestions for this problem? In summary, this means rather than including the HTML and additional data for each business up front, only submitting minimal location information and then re-querying for the full information when each business marker is clicked (a sketch of this flow follows below). Potential downsides:

    - longer load times when a user clicks a marker icon
    - more code??
    - more queries to the database
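
    A minimal sketch of the second flow, assuming hypothetical CodeIgniter controller, table, column and view names:

        <?php
        class Map extends CI_Controller {

            // Lightweight endpoint: one small row per business, enough to plot markers.
            public function markers($category_id)
            {
                $markers = $this->db->select('id, lat, lng, level')
                                    ->get_where('businesses', array('category_id' => (int) $category_id))
                                    ->result();
                echo json_encode($markers);
            }

            // Heavy endpoint: popup HTML built only when a marker is clicked.
            public function detail($business_id)
            {
                $business = $this->db->get_where('businesses', array('id' => (int) $business_id))->row();
                echo json_encode(array('html' => $this->load->view('popup', array('business' => $business), TRUE)));
            }
        }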

    Read the article

  • Does this prove a network bandwidth bottleneck?

    - by Yuji Tomita
    I've incorrectly assumed that my internal AB testing means my server can handle 1k concurrency @ 3k hits per second. My theory at the moment is that the network is the bottleneck: the server can't send enough data fast enough.

    External testing from blitz.io at 1k concurrency shows my hits/s capping off at 180, with pages taking longer and longer to respond as the server is only able to return 180 per second. I've served a blank file from nginx and benched it: it scales 1:1 with concurrency.

    Now, to rule out IO / memcached bottlenecks (nginx normally pulls from memcached), I serve up a static version of the cached page from the filesystem. The results are very similar to my original test; I'm capped at around 180 RPS. Splitting the HTML page in half gives me double the RPS, so it's definitely limited by the size of the page.

    If I internally ApacheBench from the local server, I get consistent results of around 4k RPS on both the full page and the half page, at high transfer rates: Transfer rate: 62586.14 [Kbytes/sec] received. If I AB from an external server, I get around 180 RPS, same as the blitz.io results.

    How do I know it's not intentional throttling? If I benchmark from multiple external servers, all results become poor, which leads me to believe the problem is in MY server's outbound traffic, not a download speed issue with my benchmarking servers / blitz.io. So I'm back to my conclusion that my server can't send data fast enough. Am I right? Are there other ways to interpret this data? Is the solution/optimization to set up multiple servers plus load balancing, each serving 180 hits per second? I'm quite new to server optimization, so I'd appreciate any confirmation in interpreting this data.

    Outbound traffic: here's more information about the outbound bandwidth. The network graph shows a maximum output of 16 Mb/s (16 megabits per second), which doesn't sound like much at all. Due to a suggestion about throttling, I looked into this and found that Linode has a 50 Mbit/s cap (which I'm not even close to hitting, apparently). I had it raised to 100 Mbit/s. Since Linode caps my traffic, and I'm not even hitting it, does this mean that my server should indeed be capable of outputting up to 100 Mbit/s, but is limited by some other internal bottleneck? I just don't understand how networks at this large a scale work: can they literally send data as fast as they can read it from the HDD? Is the network pipe that big?

    In conclusion:

    1. Based on the above, I'm thinking I can definitely raise my 180 RPS by adding an nginx load balancer on top of a multi-nginx-server setup, at exactly 180 RPS per server behind the LB.
    2. If Linode has a 50/100 Mbit limit that I'm not hitting at all, there must be something I can do to hit that limit with my single-server setup. If I can read / transmit data fast enough locally, and Linode even bothers to have a 50/100 Mbit cap, there must be an internal bottleneck that's not allowing me to hit those caps, which I'm not sure how to detect. Correct?

    I realize the question is huge and vague now, but I'm not sure how to condense it. Any input is appreciated on any conclusion I've made.
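
    One arithmetic cross-check on the numbers above: the internal benchmark moved 62586.14 KB/s at roughly 4k RPS, i.e. about 15-16 KB per page; at 180 RPS that works out to about 2.8 MB/s, or roughly 22 Mb/s, which is the same order of magnitude as the 16 Mb/s peak on the network graph. So the external results are at least consistent with an outbound-bandwidth ceiling.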

    Read the article

  • Pros/cons to turning off cable modem

    - by Jay
    A little off the wall perhaps, but ... I have a cable modem and a router for a wireless home network. Is it a good or a bad idea to turn them off at night and during the day when we're all at work or school, or should I leave them on 24/7? I was thinking that leaving them on constantly makes me more vulnerable to hackers, not to mention wasting electricity. (Though I'd guess the amount of electricity used by a cable modem and a router is pretty trivial. Still, every little bit helps.) When I have turned them off and on again, it takes several minutes for the modem to go through its little dialog with the cable company and get me connected to the Internet again, which is annoying but not a big deal. Does anyone know any good reasons one way or the other?

    Read the article

  • Mac has become insanely slow: SystemUIServer, UserEventAgent and loginwindow are using a lot of memory

    - by SatheeshJM
    I have been using my Mac for many months without any problem, but recently, all of a sudden, the Mac became insanely slow. I opened Activity Monitor to see what was happening. For three processes (SystemUIServer, UserEventAgent and loginwindow) the memory gradually increases, reaching up to 2 GB per process, which completely hangs my Mac. I tried the following:

    1. Restarting the Mac
    2. Restarting the Mac in safe mode
    3. Manually killing the processes
    4. Removing Date and Time from the menu bar (this was supposed to be the cause of SystemUIServer's memory use, according to many users)
    5. Removing the externally connected keyboard and mouse (some had suggested this for UserEventAgent's memory)

    No luck with any of those. The moment I log in, the memory spikes up. Any idea what the hell is happening? Please help.

    Read the article

  • Programs minimized for a long time take a long time to "wake up"

    - by bart
    I work a lot in Photoshop CS6 and multiple browsers. I'm not using them all at once, so sometimes some applications sit minimized in the taskbar for hours or days. The problem is, when I try to maximize them from the taskbar, it sometimes takes longer than starting them fresh! Photoshop especially feels really weird for many seconds after finally showing up: it's slow, unresponsive, and sometimes even freezes completely for a minute or two. It's not a hardware problem, as it has been like that on all my PCs. Would I still notice it after upgrading my HDD to an SSD and adding RAM (my main PC currently holds 4 GB)? Could people with powerful PCs / Macs tell me whether it also happens to them? I guess OSes somehow "focus" on active software and move resources away from programs that are running but not being used. Is it possible to somehow set RAM / CPU / HDD priorities for, let's say, Photoshop, so it won't slow down after a long period of inactivity?

    Read the article

  • PHP5 without PHP4 compatibility

    - by Serty Oan
    This is the opposite question to this one. My premise is that when you write only PHP5 code, there is no need for PHP4 compatibility. It is not helpful at all, and due to the differences between the object models of those two versions, it must carry a performance penalty in the interpreter (or am I wrong?). So my questions are: is it possible to recompile PHP5 without backwards compatibility with PHP4, or to disable it in any other way, to gain performance? If it is not possible, is there a project somewhere which aims at that goal?

    Read the article

  • ApacheTop and Plesk

    - by Tomccaul
    Hi, I am trying to find a way to monitor my Apache server so I can see which domain is causing slowdowns on my server when they occur. I was hoping I would be able to do it with ApacheTop, but I have to list out each log file separately, as Plesk splits each domain's Apache logs into individual files. Is there a way I can do this with ApacheTop, or should I be using another tool? Thanks

    Read the article

  • Debian tuning for increasing read/write buffering

    - by Claudiu
    Is there a way to modify Debian settings so that more memory can be used for disk read/write caching? I am already using RAID 0, but that's not enough for multiple users, and the disk is almost constantly struggling. Torrents use the disk very heavily, and rTorrent doesn't have cache settings.
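
    For reference, a sketch of the VM knobs usually tuned for this, assuming a stock Debian /etc/sysctl.conf (the values are illustrative, not recommendations):

        # let more dirty data accumulate in RAM before writeback kicks in
        vm.dirty_background_ratio = 10   # % of RAM dirty before background writeback starts
        vm.dirty_ratio = 40              # % of RAM dirty before writers are forced to flush
        vm.vfs_cache_pressure = 50       # below 100 keeps dentry/inode caches in memory longer

        # apply with: sysctl -p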

    Read the article

  • adding custom SSIS transformation to visual studio toolbox fails

    - by ryangaraygay
    Just recently I encountered an issue deploying a custom SSIS component assembly, which turned out to be a relative "no-brainer" error, if only the clues had been more straightforward. Basically, after deploying the assembly I could not find my component listed in the "SSIS Data Flow Items" tab list. It turned out that the assembly containing the component had missing or incorrectly referenced assemblies. I have outlined the steps that pointed me in the right direction in this blog post of mine: adding custom SSIS transformation to visual studio toolbox fails

    Read the article

  • use of tcp_delack_min on Red Hat Linux (kernel 2.6.18)

    - by user41466
    Hello, we're moving from Solaris to Red Hat Linux and trying to duplicate our low-latency setup, which on Solaris includes the ndd settings related to TCP_NODELAY and the Nagle algorithm. I got the impression that those parameters are not all configurable system-wide, but I still found some info. We have configured our applications to run with the Nagle algorithm disabled, but that is not sufficient. We found an interesting RH article presenting the tcp_delack_min parameter; however, when browsing /proc/sys/net/ipv4/, I can't find it there. Would it be safe to assume that simply "adding" the parameter as described in the doc would be enough, or rather that the option is not supported by this kernel version (which would be strange, as RH specifies that it "can be performed on a standard Red Hat Enterprise Linux installation")? Any other ideas / recommendations to improve latency further? Thanks

    Read the article

  • Very low throughput on 10GbE network

    - by aix
    I have two Linux machines, each equipped with a Solarflare SFN5122F 10GbE NIC. The two NICs are connected together with an SFP+ direct attach cable. I am using netperf to measure TCP throughput between the two machines. On one box, I run:

        netserver

    and on the other:

        netperf -t TCP_STREAM -H 192.168.x.x -- -m 32768

    I get:

        MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.x.x (192.168.x.x) port 0 AF_INET
        Recv   Send    Send
        Socket Socket  Message  Elapsed
        Size   Size    Size     Time     Throughput
        bytes  bytes   bytes    secs.    10^6bits/sec

        87380  16384   32768    10.02    1321.34

    The measured throughput is 1.3 Gb/s. This is 7.5x below the theoretical maximum, and only 30% faster than 1GbE. What steps can I take to troubleshoot this?

    Read the article

  • storing data for maps database

    - by Timigen
    I am working on an application that displays choropleth maps. These maps are of all different types: some display a state by county, a country by state/province, or the world by country. How should I handle storing the map information in the database? My thoughts:

    - I won't need to do queries to find POIs inside a region, so I don't think there is a need to use spatial datatypes.
    - I am considering storing each map as a geoJSON object (I am using a JS mapping library that accepts geoJSON). The only issue: what if I want a map of the US Northeast? Then I would have geoJSON for the US and a separate copy for the US Northeast, which would be redundant.
    - Would it make sense to have a shape database where I store each state, so that when I need a map of the US I can query for every state, and when I need a map of the US Northeast I can again query for just what I need? (A sketch of such a table follows below.)

    Note: I am not concerned with storing the data for each region, just the region itself. I will query for the data on the fly for the specific region.
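
    A minimal sketch of that shape table, with hypothetical names and a plain-text geoJSON column (no spatial types), might look like:

        CREATE TABLE region (
          id        INT PRIMARY KEY AUTO_INCREMENT,
          level     VARCHAR(20)  NOT NULL,  -- 'country', 'state' or 'county'
          parent_id INT NULL,               -- e.g. a county's state
          name      VARCHAR(100) NOT NULL,
          geojson   MEDIUMTEXT   NOT NULL   -- the region's shape as a geoJSON geometry
        );

        -- a US Northeast map then becomes a filtered query over state shapes:
        SELECT name, geojson
        FROM region
        WHERE level = 'state'
          AND name IN ('Maine', 'Vermont', 'New Hampshire', 'Massachusetts');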

    Read the article

  • SQL SERVER - Size of Index Table for Each Index - Solution 3 - Powershell

    Laerte Junior: If you are a Powershell user, Laerte Junior is not a new name. He is a man with exceptional knowledge of Powershell; he is not only very knowledgeable, but also very kind and eager to help those in need. I have been attempting to set up Powershell for many days, [...]

    Read the article

  • What is TrustLeap G-WAN for?

    - by Rowan
    Hi, some people say that TrustLeap G-WAN ( http://trustleap.ch/ ) is very fast, but after reading the manual I don't see a guide to using G-WAN with PHP or Python. So, what is this application server for? Does it work only with the C language? Thanks in advance

    Read the article

  • how does a web application cope with thousands of requests?

    - by netrox
    I went to a few websites and noticed that they all use AJAX for many tasks such as chat, messages, and so forth; they obviously make a lot of HTTP requests. My question is: if you build a simple website using AJAX and expect only a few people per hour, and then you start to have something like 1,000 members logged in per hour, can a single web application handle the extra requests if you just upgrade to faster, bigger servers, or do you have to rewrite the code? Exactly how do you "scale" a web application?

    Read the article

  • Best way to monitor a grid of computers?

    - by marc.riera
    Hello, I've installed Sun Grid on 10 nodes, plus one virtual master host. Now I have to monitor all the resources prior to launching it into production, but I don't know the best way to do so. I've tried using xml-qstat, but it seems unstable. Any tips or suggestions? Has anyone got experience with this? Thanks.

    Read the article

  • SQL Monitor and "The Cloud"

    - by Richard Mitchell
    So, how can we demo this thing? In the beginning there was a product, and it was a good product for the testers had decreed it so, and nobody argues with a tester. But then comes the inevitable question of how can somebody test it out without risk. Red Gate prides itself on the tools being easy for people to trial before they buy, and no cut down trial for you sir, oh no, for you sir only the best will do - a fully functional trial - suits you sir. The problem The problem comes when you get a...(read more)

    Read the article

  • Convenience of MySQL over XML

    - by Bonechilla
    Currently I use XML to store specific information needed to correctly load a few things, such as lists of specified characters, scenes and music. I also use JAXB in combination with standard compression/decompression (ZIP) functionality to store a list of extraneous data. This data is called on to add functionality to the characters, somewhat like skills in an RPG. Each skill is separated into its own XML file, with a grand list which contains the names of each file (extensions omitted), zipped in a folder that gets encrypted. At first, using XML was working fine; however, as the skill list grows, I worry about its stability. I was wondering if I should begin storing the data in MySQL instead. Originally I planned to simply convert everything from XML to JSON, but I think MySQL would possibly be a better move. Can anyone inform me of the key differences, and the pros and cons of each? I guess I'm looking for the best way to store the data most conveniently, in a form that is easy to operate on. The data is mostly primitives and strings, and the only ArrayList of values I have can just be concatenated into a single field and parsed later. Edit: If I am going in the right direction with XML, would it make sense to convert it to JSON and use maybe Kryo or EclipseLink JAXB (MOXy)?

    Read the article

  • Can anyone explain these differences between two similar i7 processors? [closed]

    - by Brian Frost
    I have two systems I've just built. They both have i7 processors and Asus P8Z77 motherboards. When I run a simple processor loop benchmark that I wrote in Delphi some time back, one machine shows as nearly twice as fast as the other. I then used CPU-Z to dump the details of the hardware, and I see that the fast machine shows:

        Processor 1 ID = 0
        Number of cores 4 (max 8)
        Number of threads 8 (max 16)
        Name Intel Core i7 2700K
        Codename Sandy Bridge
        Specification Intel(R) Core(TM) i7-2700K CPU @ 3.50GHz
        Package (platform ID) Socket 1155 LGA (0x1)
        CPUID 6.A.7
        Extended CPUID 6.2A
        Core Stepping D2
        Technology 32 nm
        TDP Limit 95 Watts
        Core Speed 3610.7 MHz
        Multiplier x FSB 36.0 x 100.3 MHz
        Stock frequency 3500 MHz

    The slow machine shows:

        Processor 1 ID = 0
        Number of cores 4 (max 8)
        Number of threads 8 (max 16)
        Name Intel Core i7 2600K
        Codename Sandy Bridge
        Specification Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
        Package (platform ID) Socket 1155 LGA (0x1)
        CPUID 6.A.7
        Extended CPUID 6.2A
        Core Stepping D2
        Technology 32 nm
        TDP Limit 95 Watts
        Core Speed 1648.2 MHz
        Multiplier x FSB 16.0 x 103.0 MHz
        Stock frequency 3400 MHz

    i.e. the slow machine has a 2600K and the fast machine a 2700K. The very different "Multiplier x FSB" must be significant, but I don't understand how two processors with such similar specifications can be so different. To get the machines the same, must I swap the processors, or is there some clever setting that I can change? Thanks for any help. Brian.

    Read the article
