Search Results

Search found 15380 results on 616 pages for 'man with python'.


  • Android Broadcast Address

    - by Eef
    Hey, I am making a client-server application for my Android phone. I have created a UDP server in Python which sits and listens for connections. If I put the server's IP address in directly, like 192.168.0.100, it sends data fine. I can also put in 192.168.0.255 and it finds the server on 192.168.0.100. Is it possible to get the broadcast address of the network my Android phone is connected to? I am only ever going to use this application on my own or other Wi-Fi networks. Cheers
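
    The broadcast address can be derived from an interface's IP address and netmask by setting all host bits to 1. Below is a minimal Python sketch of that arithmetic with illustrative values; on the Android side the same calculation would be fed from the Wi-Fi DHCP info rather than hard-coded strings:

        import socket
        import struct

        def broadcast_address(ip, netmask):
            # Keep the network bits, set every host bit to 1.
            ip_n = struct.unpack('!I', socket.inet_aton(ip))[0]
            mask_n = struct.unpack('!I', socket.inet_aton(netmask))[0]
            return socket.inet_ntoa(struct.pack('!I', ip_n | (~mask_n & 0xFFFFFFFF)))

        print(broadcast_address('192.168.0.100', '255.255.255.0'))  # 192.168.0.255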

    Read the article

  • Linux filesystem with inodes close together on the disk

    - by pts
    I'd like to make ls -laR /media/myfs on Linux as fast as possible. I'll have 1 million files on the filesystem, 2TB of total file size, and some directories containing as many as 10000 files. Which filesystem should I use and how should I configure it? As far as I understand, the reason ls -laR is slow is that it has to stat(2) each inode (i.e. 1 million stat(2)s), and since inodes are distributed randomly on the disk, each stat(2) needs one disk seek. Here are some solutions I had in mind, none of which I am satisfied with:

    - Create the filesystem on an SSD, because seek operations on SSDs are fast. This wouldn't work, because a 2TB SSD doesn't exist, or it's prohibitively expensive.

    - Create a filesystem which spans two block devices: an SSD and a disk; the disk contains file data, and the SSD contains all the metadata (including directory entries, inodes and POSIX extended attributes). Is there a filesystem which supports this? Would it survive a system crash (power outage)?

    - Use find /media/myfs on ext2, ext3 or ext4, instead of ls -laR /media/myfs, because the former can take advantage of the d_type field (see the getdents(2) man page), so it doesn't have to stat (illustrated by the sketch after this list). Unfortunately, this doesn't meet my requirements, because I need all file sizes as well, which find /media/myfs doesn't print.

    - Use a filesystem, such as VFAT, which stores inodes in the directory entries. I'd love this one, but VFAT is not reliable and flexible enough for me, and I don't know of any other filesystem which does that. Do you? Of course, storing inodes in the directory entries wouldn't work for files with a link count of more than 1, but that's not a problem, since I have only a few dozen such files in my use case.

    - Adjust some settings in /proc or sysctl so that inodes are locked into system memory forever. This would not speed up the first ls -laR /media/myfs, but it would make all subsequent invocations amazingly fast. How can I do this? I don't like this idea, because it doesn't speed up the first invocation, which currently takes 30 minutes. Also, I'd like to lock the POSIX extended attributes in memory as well. What do I have to do for that?

    - Use a filesystem which has an online defragmentation tool that can be instructed to relocate inodes to the beginning of the block device. Once the relocation is done, I can run dd if=/dev/sdb of=/dev/null bs=1M count=256 to get the beginning of the block device fetched into the kernel's in-memory cache without seeking, and then the stat(2) operations will be fast, because they read from the cache. Is there a way to lock those inodes and/or blocks into memory once they have been read? Which filesystem has such a defragmentation tool?
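
    The d_type point in the third item above is illustrated by this sketch: a traversal that answers "is this a directory?" from the directory entry itself, so only the file sizes cost a stat(2) each. It assumes os.scandir, which is Python 3.5+ (the scandir backport offers the same API on older Pythons):

        import os

        def listing(root):
            for entry in os.scandir(root):
                if entry.is_dir(follow_symlinks=False):
                    # Usually answered from d_type in the directory entry, no stat(2).
                    for item in listing(entry.path):
                        yield item
                else:
                    # The size still costs one stat(2) per file.
                    yield entry.path, entry.stat(follow_symlinks=False).st_size

        for path, size in listing('/media/myfs'):
            print(path, size)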

    Read the article

  • creating PHP C/C++ extension modules using SWIG

    - by morpheous
    I have written some C/C++ extension modules for PHP the 'old-fashioned way', i.e. manually, as described by Sara Golemon in her book. This is too fiddly for me, and since I am lazy, I would like to automate as much as possible. I have also used SWIG to generate extensions for Python, and I am getting to like it quite a lot, so I am thinking of using SWIG to generate my future PHP extensions. I am using PHP v5.2 (and above) on my production servers. My questions are:

    1. Is the SWIG PHP interface stable yet (i.e. ready for production)?
    2. If you answered yes to question 1 - are YOU using it on YOUR production site?
    3. Are there any 'gotchas' I need to be aware of when creating PHP extension modules using SWIG?

    Read the article

  • Are we in a functional programming fad?

    - by TraumaPony
    I use both functional and imperative languages daily, and it's rather amusing to see the surge of adoption of functional languages from both sides of the fence. It strikes me, however, that it looks rather like a fad. Do you think that it's a fad? I know the reasons for using functional languages at times and imperative languages in others, but do you really think that this trend will continue due to the clichéd "many-core" revolution that has been only "18 months from now" since 2004 (sort of like communism's Radiant Future), or do you think that it's only temporary: a fascination of the mainstream developer that will be quickly replaced by the next shiny idea, like Web 3.0 or GPGPU? Note that I'm not trying to start a flamewar or anything (sorry if it sounds bitter), I'm just curious whether people think functional or functional/imperative languages will become mainstream. Edit: By mainstream, I mean an equal number of programmers to, say, Python, Java, C#, etc.

    Read the article

  • extend base.html problem

    - by momo
    I'm getting the following error:

        Template error

        In template /home/mo/python/django/templates/yoga/index.html, error at line 1
        Caught TemplateDoesNotExist while rendering: base.html

        1 {% extends "base.html" %}
        2
        3 {% block main %}
        4 <p>{{ page.title }}</p>
        5 <p>{{ page.info }}</p>
        6 <a href="method/">Method</a>
        7 {% endblock %}
        8

    This is my base.html file, which is located in the same place as index.html:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
        <div style="width:50%; margin-left:25%;">
        {% block main %}{% endblock %}
        </div>

    What exactly is going on here? Should the base.html file be located somewhere else?
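
    The "TemplateDoesNotExist: base.html" means the template loaders can't see base.html: in the Django versions of that era, {% extends %} resolves its argument through TEMPLATE_DIRS, not relative to index.html's own location. A minimal settings.py sketch, assuming the directory layout from the error message:

        TEMPLATE_DIRS = (
            '/home/mo/python/django/templates/yoga',  # lets {% extends "base.html" %} resolve
            # alternatively, list '/home/mo/python/django/templates' here and
            # write {% extends "yoga/base.html" %} in index.html instead
        )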

    Read the article

  • How to choose the right web application framework?

    - by thenextwebguy
    http://en.wikipedia.org/wiki/Comparison_of_web_application_frameworks

    Since we are ambitiously aiming to be big, scalability is important, and so are globalization features. Since we are starting out without funding, price/performance and the cost of licences/hardware are important. We definitely want AJAX to be well represented in the web interface. But apart from these, there are no further criteria I can come up with. I'm most experienced with C#/ASP.NET, PHP and Java, in that order, but wouldn't turn down other languages (Ruby, Python, Scala, etc.). How can we determine, from the jungle of frameworks, the one that best suits our goal? What other questions should we be asking ourselves? Reference material: articles, book recommendations, websites, etc.?

    Read the article

  • load-causing processes disappearing from "top"; ps -o pcpu shows bogus numbers

    - by Alec Matusis
    I administer a large number of servers, and I have this problem only with Ubuntu 10.04 LTS: I run a server under normal load (say load average 3.0 on an 8-core server). The "top" command shows the processes taking a certain % of CPU that cause this load average, say:

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        11008 mysql     20   0 25.9g  22g 5496 S   67 76.0 643539:38 mysqld

        # ps -o pcpu,pid -p11008
        %CPU   PID
        53.1 11008

    Everything is consistent. Then, all of a sudden, the process causing the load average disappears from "top", but the process continues to run normally (albeit with a slight performance decrease), and the system load average becomes somewhat higher. The output of ps -o pcpu becomes bogus:

        # ps -o pcpu,pid -p11008
        %CPU   PID
        317910278 1587

    This has happened on at least 5 different servers (different brand-new IBM System x hardware), each running different software: one httpd 2.2, one mysqld 5.1, and one Twisted Python TCP server. Each time the kernel was between 2.6.32-32-server and 2.6.32-40-server. I have updated some machines to 2.6.32-41-server, and it has not happened on those yet, but the bug is rare (once every 60 days or so). This is from an affected machine:

        top - 10:39:06 up 73 days, 17:57,  3 users,  load average: 6.62, 5.60, 5.34
        Tasks: 207 total,   2 running, 205 sleeping,   0 stopped,   0 zombie
        Cpu(s): 11.4%us, 18.0%sy,  0.0%ni, 66.3%id,  4.3%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:  74341464k total, 71985004k used,  2356460k free,   236456k buffers
        Swap:  3906552k total,      328k used,  3906224k free, 24838212k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
          805 root      20   0     0    0    0 S    3  0.0 1493:09 fct0-worker
          982 root      20   0     0    0    0 S    1  0.0 111:35.05 fioa-data-groom
          914 root      20   0     0    0    0 S    0  0.0 884:42.71 fct1-worker
         1068 root      20   0 19364 1496 1060 R    0  0.0   0:00.02 top

    Nothing causing high load shows up in top, but I have two highly loaded mysqld instances on this machine, and one of them suddenly shows a crazy %CPU:

        # ps -o pcpu,pid,cmd -p1587
        %CPU   PID CMD
        317713124 1587 /nail/encap/mysql-5.1.60/libexec/mysqld

        # ps -o pcpu,pid,cmd -p1624
        %CPU   PID CMD
        2802  1624 /nail/encap/mysql-5.1.60/libexec/mysqld

    Here are the numbers from cat /proc/1587/stat:

        1587 (mysqld) S 1212 1088 1088 0 -1 4202752 14307313 0 162 0 85773299069 4611685932654088833 0 0 20 0 52 0 3549 27255418880 5483524 18446744073709551615 4194304 11111617 140733749236976 140733749235984 8858659 0 552967 4102 26345 18446744073709551615 0 0 17 5 0 0 0 0 0

    According to man proc, the 14th and 15th fields are supposed to be:

        utime %lu  Amount of time that this process has been scheduled in user mode,
                   measured in clock ticks (divide by sysconf(_SC_CLK_TCK)). This
                   includes guest time, guest_time (time spent running a virtual CPU,
                   see below), so that applications that are not aware of the guest
                   time field do not lose that time from their calculations.
        stime %lu  Amount of time that this process has been scheduled in kernel mode,
                   measured in clock ticks (divide by sysconf(_SC_CLK_TCK)).

    On a normal server these numbers advance every time I check /proc/PID/stat. On a buggy server, these numbers are stuck at a ridiculously high value like 4611685932654088833, and they do not change. Has anyone encountered this bug?
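
    A minimal sketch of that utime/stime check in Python: sample the 14th and 15th fields of /proc/PID/stat twice, convert via the clock-tick rate, and compare with what ps claims. The PID is illustrative, and the naive split assumes no spaces in the command name:

        import os
        import time

        def cpu_ticks(pid):
            with open('/proc/%d/stat' % pid) as f:
                fields = f.read().split()
            utime, stime = int(fields[13]), int(fields[14])  # 14th and 15th fields
            return utime + stime

        def percent_cpu(pid, interval=1.0):
            hz = os.sysconf('SC_CLK_TCK')  # ticks per second
            before = cpu_ticks(pid)
            time.sleep(interval)
            after = cpu_ticks(pid)
            return 100.0 * (after - before) / hz / interval

        print(percent_cpu(11008))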

    Read the article

  • immutable strings vs std::string

    - by Caspin
    I've recently been reading about immutable strings, here and here, as well as some stuff about why D chose immutable strings. There seem to be many advantages:

    - trivially thread-safe
    - more secure
    - more memory-efficient in most use cases
    - cheap substrings (tokenizing and slicing)

    Not to mention that most new languages have immutable strings: D2.0, Java, C#, Python, Ruby, etc. Would C++ benefit from immutable strings? Is it possible to implement an immutable string class in C++ (or C++0x) that would have all of these advantages?

    Read the article

  • Resolving patch conflicts manually

    - by Antony Hatchkins
    I've downloaded a patch from some site and am trying to apply it (Twisted, the Python web framework). Several hunks failed. How do I automate the manual patching process using vim? Details:

    - I'm trying to automate the process of applying failed hunks.
    - Many tiny changes, each about adding/removing 1-2 chars. Difficult to see.
    - I have to create two new temporary files and :diffthis them manually to see the difference.
    - Yes, outside a VCS. I can imagine a neat way to deal with it using git, but I would prefer to avoid creating a git repo for that.

    Read the article

  • CUPS Web Admin Error 500 Unknown

    - by Floyd Resler
    I keep getting a 500 Unknown error whenever I navigate off the home page of my CUPS web admin. I'm sure I have something misconfigured, but I'm not sure what. Here's my configuration:

        # "$Id: cupsd.conf.in 8805 2009-08-31 16:34:06Z mike $"
        #
        # Sample configuration file for the CUPS scheduler. See "man cupsd.conf" for a
        # complete description of this file.

        # Log general information in error_log - change "warn" to "debug"
        # for troubleshooting...
        LogLevel warn

        # Administrator user group...
        SystemGroup lpadmin sys root

        # Only listen for connections from the local machine.
        Listen 192.168.6.101:631
        Listen /var/run/cups/cups.sock
        ServerName 192.168.6.101

        # Show shared printers on the local network.
        Browsing On
        BrowseOrder allow,deny
        BrowseAllow all
        BrowseLocalProtocols CUPS
        BrowseAddress 192.168.6.255

        # Default authentication type, when authentication is required...
        DefaultAuthType Basic

        # Restrict access to the server...
        Order allow,deny
        Allow From All
        Allow From 127.0.0.1

        # Restrict access to the admin pages...
        AuthType Default
        Require user @SYSTEM
        Order allow,deny
        Allow From All
        Allow From 127.0.0.1

        # Restrict access to configuration files...
        AuthType Default
        Require user @SYSTEM
        Order allow,deny
        Allow From All
        Allow From 127.0.0.1

        # Set the default printer/job policies...

        # Job-related operations must be done by the owner or an administrator...
        Require user @OWNER @SYSTEM
        Order deny,allow

        # All administration operations require an administrator to authenticate...
        AuthType Default
        Require user @SYSTEM
        Order deny,allow

        # All printer operations require a printer operator to authenticate...
        AuthType Default
        Require user @SYSTEM
        Order deny,allow

        # Only the owner or an administrator can cancel or authenticate a job...
        Require user @OWNER @SYSTEM
        Order deny,allow

        Order deny,allow

        # Set the authenticated printer/job policies...

        # Job-related operations must be done by the owner or an administrator...
        AuthType Default
        Order deny,allow

        AuthType Default
        Require user @OWNER @SYSTEM
        Order deny,allow

        # All administration operations require an administrator to authenticate...
        AuthType Default
        Require user @SYSTEM
        Order deny,allow

        # All printer operations require a printer operator to authenticate...
        AuthType Default
        Require user @SYSTEM
        Order deny,allow

        # Only the owner or an administrator can cancel or authenticate a job...
        AuthType Default
        Require user @OWNER @SYSTEM
        Order deny,allow

        Order deny,allow

        # End of "$Id: cupsd.conf.in 8805 2009-08-31 16:34:06Z mike $".

    Read the article

  • What are the things I use every day programmed with?

    - by sub
    It isn't so interesting to find out what this text editor here or that IRC client there was programmed with; it isn't really hard to find out, and the answers aren't very surprising. "Wow, so it was programmed in Python, I didn't expect that." What I'm asking is: what are the things that we see, use or generally need every day programmed with? To name a few (really only a few of those out there):

    - My alarm clock. It has many features, so it would probably be hard to program with assembler or whatever; did they use a programming language? If yes, which?
    - My electric toothbrush.
    - The (stupid) board computer of my car (6 years old, has few features, but a red LED display showing me how cold/warm it is outside and how much gas I'm using per hour at the moment).
    - Those (old) plastic mini-mini computers with the LCD(?) displays that only had one game available on them: Pac-Man, Tetris or so.
    - I'm not directly thinking of these, but they may be similar: other, probably more interesting, things I didn't mention.

    Read the article

  • How to generate GIR files from the Vala compiler?

    - by celil
    I am trying to create Python bindings to a Vala library using PyGI with GObject introspection. However, I am having trouble generating the GIR files (which I plan to compile to typelib files subsequently). According to the documentation, valac should support generating GIR files. Compiling the following helloworld.vala

        public struct Point {
            public double x;
            public double y;
        }

        public class Person {
            public int age = 32;

            public Person(int age) {
                this.age = age;
            }
        }

        public int main() {
            var p = Point() { x=0.0, y=0.1 };
            stdout.printf("%f %f\n", p.x, p.y);
            var per = new Person(22);
            stdout.printf("%d\n", per.age);
            return 0;
        }

    with the command

        valac helloworld.vala --gir=Hello-1.0.gir

    doesn't create the Hello-1.0.gir file as one would expect. How can I generate the GIR file?
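
    For reference, the intended end state would look something like the sketch below. This assumes the GIR is compiled with g-ir-compiler into Hello-1.0.typelib on GI_TYPELIB_PATH, and that Person is introspectable (in practice it would likely need to derive from GLib.Object); the Hello namespace comes from the --gir flag above:

        from gi.repository import Hello  # namespace from --gir=Hello-1.0.gir

        person = Hello.Person.new(22)
        print(person.age)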

    Read the article

  • Getting Bad file descriptor when running Tornado AsyncHTTPTestCase

    - by Will
    When running a test using the Tornado AsyncHTTPTestCase I'm getting a stack trace that isn't related to the test. The test is passing, so this is probably happening in the test clean-up? I'm using Python 2.7.2 and Tornado 2.2. The test code is:

        class AllServersHandlerTest(AsyncHTTPTestCase):
            endpoint = AllServersHandler.endpoint  # '/rest/test/'

            def test_server_status_with_advertiser(self):
                on_new_host(None, '127.0.0.1')
                response = self.fetch(self.endpoint, method='GET')
                result = json.loads(response.body, 'utf8').get('data')
                self.assertEquals(['127.0.0.1'], result)

    The test passes OK, but I get the following stack trace from the Tornado server:

        OSError: [Errno 9] Bad file descriptor
        INFO:root:200 POST /rest/serverStatuses (127.0.0.1) 0.00ms
        DEBUG:root:error closing fd 688
        Traceback (most recent call last):
          File "C:\Python27\Lib\site-packages\tornado-2.2-py2.7.egg\tornado\ioloop.py", line 173, in close
            os.close(fd)
        OSError: [Errno 9] Bad file descriptor

    Any ideas how to cleanly shut down the test case?

    Read the article

  • Which are your favorite programming language gadgets?

    - by FerranB
    There are some gadgets/features of programming languages that I like a lot, because they save a lot of coding or simply because they are magical or nice. Some of my favorites are:

    - C++ increment/decrement operator: my_array[++c];
    - C++ assign-and-sum or -subtract (...): a += b
    - C# yield return: yield return 1;
    - C# foreach: foreach (MyClass x in MyCollection)
    - PL/SQL for loop: for c in (select col1, col2 from mytable)
    - PL/SQL pipe row: for i in 1..x loop pipe row(i); end loop;
    - Python slicing: a[:1]
    - PL/SQL ref cursors

    Which are yours?

    Read the article

  • Entity groups in transactions

    - by Joel
    In the context of the "Keys and Entity Groups" article by Google (http://code.google.com/appengine/docs/python/datastore/transactions.html):

    1. "Only use entity groups when they are needed for transactions."
    2. "Every entity belongs to an entity group, a set of one or more entities that can be manipulated in a single transaction."

    It seems like entity groups exist only for the use of transactions, i.e. making one transaction possible between all entities in a group. My question is then: why are there parent-child relations between entities, and not just a simple declaration of entities being in a single group (that is, defining A, B, C to be in the same group, as opposed to defining relations between them: "A (parent of) B, B (parent of) C")? What is the benefit of using the parent-child relation model when the only purpose is for entities to be in the same group to make a transaction possible? Thanks, Joel
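
    One practical consequence of the parent-child model is that a group is identified by the root of the ancestor chain, so any entity names its group simply by carrying a parent key. A minimal sketch with the old google.appengine.ext.db API (the model names are hypothetical):

        from google.appengine.ext import db

        class Account(db.Model):
            balance = db.IntegerProperty(default=0)

        class LedgerEntry(db.Model):
            amount = db.IntegerProperty()

        def deposit(account_key, amount):
            # The LedgerEntry is created with the Account as parent, so both
            # live in one entity group and this read-modify-write is atomic.
            account = db.get(account_key)
            account.balance += amount
            db.put([account, LedgerEntry(parent=account, amount=amount)])

        account = Account()
        account.put()
        db.run_in_transaction(deposit, account.key(), 100)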

    Read the article

  • "Invalid signature": oAuth provider with Django-piston

    - by Martin Eve
    Hi, I'm working with django-piston to attempt to create an API that supports OAuth. I started out using the tutorial at http://blog.carduner.net/2010/01/26/django-piston-and-oauth/ and added a consumer in piston's admin interface with key and secret both set to "abcd" for test purposes. The URLs are successfully wired up and the OAuth provider is called. However, running my get-request-token test from tripit (python get_request_token.py "http://127.0.0.1:8000/api" abcd abcd), I receive the following error:

        Invalid signature. Expected signature base string:
        GET&http%3A%2F%2F127.0.0.1%3A8000%2Fapi%2Foauth%2Frequest_token%2F&oauth_consumer_key%3Dabcd%26oauth_nonce%3D0c0bdded5b1afb8eddf94f7ccc672658%26oauth_signature_method%3DHMAC-SHA1%26oauth_timestamp%3D1275135410%26oauth_version%3D1.0

    The problem seems to lie inside the _check_signature method of piston's oauth.py, where

        valid_sig = signature_method.check_signature(oauth_request, consumer, token, signature)

    is returning False. I can't, however, work out how to get the signature validated. Any ideas?

    Update: if I remove the test consumer from piston's backend, the response correctly changes to "Invalid consumer", so this lookup appears to be working.
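
    For debugging, it can help to recompute the HMAC-SHA1 signature locally from the base string the provider prints and compare it with what the client sends. A minimal Python 2 sketch of OAuth 1.0 signing using only the standard library (the parameter values are taken from the error above; the token secret is empty for a request token):

        import binascii
        import hashlib
        import hmac
        import urllib

        def escape(s):
            return urllib.quote(s, safe='~')

        def sign(method, url, params, consumer_secret, token_secret=''):
            # Normalize parameters sorted by key, then build the base string.
            normalized = '&'.join('%s=%s' % (escape(k), escape(params[k]))
                                  for k in sorted(params))
            base_string = '&'.join([method.upper(), escape(url), escape(normalized)])
            key = '%s&%s' % (escape(consumer_secret), escape(token_secret))
            digest = hmac.new(key, base_string, hashlib.sha1).digest()
            return binascii.b2a_base64(digest)[:-1]

        params = {
            'oauth_consumer_key': 'abcd',
            'oauth_nonce': '0c0bdded5b1afb8eddf94f7ccc672658',
            'oauth_signature_method': 'HMAC-SHA1',
            'oauth_timestamp': '1275135410',
            'oauth_version': '1.0',
        }
        print(sign('GET', 'http://127.0.0.1:8000/api/oauth/request_token/',
                   params, 'abcd'))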

    Read the article

  • Where can I find the gtk-builder-convert script?

    - by Marty
    I've built a small GUI app for work that uses some .glade files for pop-up windows. Recently the ground beneath me shifted: my environment was upgraded. Newer PyGTK versions require GtkBuilder and .xml files instead of libglade and .glade files, and now my poor app is broken, so I need to convert the .glade files to the newer .xml format. The problem is that Glade-3 is not on our system, and I can't find gtk-builder-convert on the web. I've looked at the GNOME Git browser, but I don't know where to start looking or how to search it. Would anyone be kind enough to point me to the gtk-builder-convert Python script?

    Read the article

  • Handle MySQL restarts in SQLAlchemy

    - by wRAR
    My Pylons app uses a local MySQL server via SQLAlchemy and python-MySQLdb. When the server is restarted, open pooled connections are apparently closed, but the application doesn't know about this, and when it tries to use such a connection it receives "MySQL server has gone away":

        File '/usr/lib/pymodules/python2.6/sqlalchemy/engine/default.py', line 277 in do_execute
          cursor.execute(statement, parameters)
        File '/usr/lib/pymodules/python2.6/MySQLdb/cursors.py', line 166 in execute
          self.errorhandler(self, exc, value)
        File '/usr/lib/pymodules/python2.6/MySQLdb/connections.py', line 35 in defaulterrorhandler
          raise errorclass, errorvalue
        OperationalError: (OperationalError) (2006, 'MySQL server has gone away')

    This exception is not caught anywhere, so it bubbles up to the user. If I should handle this exception somewhere in my code, please show the place for such code in a Pylons WSGI app. Or maybe there is a solution in SA itself?
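
    One option in SQLAlchemy itself is pool_recycle, which discards pooled connections older than a given age so a stale handle is never reused. A minimal sketch (the URL is illustrative); it won't rescue a connection that breaks mid-request, but it prevents the stale-pool case after a restart or a wait_timeout:

        from sqlalchemy import create_engine

        engine = create_engine(
            'mysql://user:password@localhost/mydb',
            pool_recycle=3600,  # replace pooled connections older than an hour
        )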

    Read the article

  • What programs should I write to truly experience this fancy new language?

    - by privatehuff
    I tried Scheme at one point, and just built up half of a "math" and "string" library before getting bored... a similar experience with Java, though I stopped early because I was appalled at the lack of operator overloading. When you try out a new language, is there a program/game/function/exercise/problem that you use to get into the hot meaty center and really EXPERIENCE the language? I've been wanting to try Python, Ruby, some Lisps, etc., but can't seem to find any meaningful work to do with them, or any reason to use them for anything over languages I already know. Sorry this is a discussion, but you are EXACTLY the people I want to get input from on this.

    Read the article

  • Variadic functions and argument assignment in C/C++

    - by Rizo
    I was wondering whether in C/C++ it is possible to pass arguments to a function in key-value form. For example, in Python you can do:

        def some_function(arg1, arg0="default_value"):
            pass  # (...)

        value1 = "passed_value"
        some_function(arg1=value1)

    So the alternative code in C could look like this:

        void some_function(char *arg0 = "default_value", char *arg1)
        {
            ;
        }

        int main()
        {
            char *value1 = "passed_value";
            some_function(arg1 = value1);
            return 0;
        }

    So the arguments used in some_function would be:

        arg0 = "default_value"
        arg1 = "passed_value"

    Any ideas?

    Read the article

  • How many layers are between my program and the hardware?

    - by sub
    I somehow have the feeling that modern systems, including runtime libraries, this exception handler and that built-in debugger, build up more and more layers between my (C++) programs and the CPU/rest of the hardware. I'm thinking of something like this:

        1 + 2
        OS top layer
        Runtime library/helper/error handler
        a hell of a lot of DLL modules
        OS kernel layer
        "Do you really want to run 1 + 2?" Windows popup (don't take this seriously)
        OS kernel layer
        Hardware abstraction
        Hardware
        Go through at least 100 miles of circuits
        Eventually arrive at the CPU
        ADD 1, 2
        Go all the way back to my program

    Nearly all the technical details are probably wrong, and in some random order, but you get my point, right? How much longer/shorter is this chain when I run a C++ program that calculates 1 + 2 at runtime on Windows? How about when I do this in an interpreter (Python|Ruby|PHP)? Is this chain really as dramatic in reality? Does Windows really try "not to stand in the way"? E.g., is there a direct connection: my binary <-> hardware?

    Read the article

  • IIS httpTracing setting has no effect

    - by digahill
    I'm trying to troubleshoot some performance issues we are having on a specific ASP.NET page with Microsoft's Perfecto tool on IIS 7.5. Perfecto uses the ETW hooks built into IIS to report on specific HTTP requests, and it is working quite well. However, I only want IIS to emit traces for one specific page, say "Default.aspx" in my TestApp web application. Following the instructions on the httpTracing documentation page, I should be able to add the traceUrls element to the root web.config file for TestApp. This doesn't seem to affect tracing whatsoever. For example, I've used the following settings in the web.config file (in the system.webServer section), and every request that hits the IIS server sends tracing messages that are in turn picked up by Perfecto:

        <httpTracing>
          <traceUrls>
            <add value="/Default.aspx" />
          </traceUrls>
        </httpTracing>

    I then found that the applicationHost.config file on the server had an empty <httpTracing /> element. I tried removing this element, as well as the httpTracing element in the web.config. After a machine reboot, I was still getting tracing messages! My understanding is that the presence of the httpTracing element is what controls whether ETW tracing is on or not. I ensured there was no reference to httpTracing in the machine.config, too. At a loss, I decided to remove the IIS tracing feature with Server Manager. After a reboot, I no longer got ETW tracing. I then reinstalled the IIS tracing feature with Server Manager. As expected, the httpTracing element reappeared in the applicationHost.config file, and tracing messages began to be sent again for all sites and pages. I then tried to use the traceUrls element at the applicationHost.config level. This also didn't filter out any traces. I must be misunderstanding something key about how httpTracing works, and there aren't many resources on the web to help me. Can anyone tell me if what I'm trying should work? Has anyone else had success filtering tracing messages per page with traceUrls? I should note that I also tried changing the following setting in applicationHost.config to "Allow"; it didn't seem to help:

        <section name="httpTracing" overrideModeDefault="Allow" />

    Read the article

  • How do I claim a low-numbered port as non-root the "right way"?

    - by qbxk
    I have a script that I want to run as a daemon listening on a low-numbered port (< 1024). The script is in Python, though answers in Perl are also acceptable. The script is being daemonized using start-stop-daemon in a startup script, which may complicate the answer. What I really don't want (I think) is to type ps -few and see this process running with "root" on its line. How do I do it? From my less-than-fully-educated-about-system-calls perspective, I can see three avenues (the first is sketched below):

    1. Run the script as root (no --user/--group/--chuid to start-stop-daemon), and have it drop its privileges after it claims the port.
    2. Setuid root on the script (chmod u+s), run the script as the running user (via --user/--group/--chuid to start-stop-daemon; the startup script still has to be called as root), and in the script acquire root privileges, claim the port, and then revert to the normal user.
    3. Something else I'm unaware of.
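
    A minimal sketch of the first avenue, assuming a plain TCP listener; the target user name ('daemon') is illustrative:

        import os
        import pwd
        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(('', 80))   # binding below 1024 needs the root we still have
        sock.listen(5)

        unpriv = pwd.getpwnam('daemon')
        os.setgid(unpriv.pw_gid)   # drop the group first, while still root
        os.setuid(unpriv.pw_uid)   # point of no return: root is gone

        # ps now shows the process owned by 'daemon', yet it holds port 80.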

    Read the article

  • Limiting the maximum number of concurrent requests django/apache

    - by Johan
    Hi, I have a Django site that demonstrates the usage of a tool. One of my views takes a file as input, runs some fairly heavy computation through an external Python script, and returns some output to the user. The tool runs fast enough to return the output in the same request, though. I would, however, want to limit how many concurrent requests to this URL/view are allowed, to keep the server from getting congested. Any tips on how I would go about doing this? The page itself is very simple and the usage will be low.
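
    A minimal sketch, assuming the cap can live in the view's own process: guard the view with a non-blocking semaphore and refuse the overflow with a 503. The view name and the heavy_computation helper are hypothetical; note that with multiple Apache worker processes each process gets its own counter, so a server-wide cap would need the limit in Apache/mod_wsgi instead:

        import threading

        from django.http import HttpResponse

        MAX_CONCURRENT = 4
        _slots = threading.BoundedSemaphore(MAX_CONCURRENT)

        def heavy_computation(uploaded):
            # Hypothetical stand-in for the external script invocation.
            return uploaded.read()

        def run_tool(request):
            if not _slots.acquire(False):  # non-blocking: refuse instead of queueing
                return HttpResponse('Busy, please retry shortly.', status=503)
            try:
                return HttpResponse(heavy_computation(request.FILES['input']))
            finally:
                _slots.release()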

    Read the article

  • How much did it cost our competitor to DDoS us at 50 Gbps for two weeks?

    - by MiniQuark
    I know that this question may sound like an invalid Server Fault question, but I believe that it's quite valid: the amount of time and effort that a sysadmin should spend on DDoS protection is a direct function of typical DDoS prices. Let me rephrase this: protecting a web site against small attacks is one thing, but resisting 50 Gbps of UDP flood is another, and requires time & money. Deciding whether or not to spend that time & money depends on whether such an attack is likely, and this in turn depends on how cheap and simple such an attack is for the attacker. So here's the full story: our company was the victim of a massive DDoS attack (over 50 Gbps of UDP traffic, around the clock for two weeks). We are pretty sure that it was one of our competitors, and we actually know which one, because we were the only two remaining competitors on a very big request for proposal, and the DDoS attack magically stopped the day we won (double hurray, by the way)! These people have proved in the past that they are very dishonest, but we know that they are not technical at all, so we believe that they simply paid for some botnet DDoS service. I would like to know how much these services typically cost for such a large-scale attack. Please do not give any links to such services; I would really hate to give these people any publicity. I understand that a hacker could very well do this for free, but what's a typical price for such an attack if our competitors paid for it through some kind of botnet service? It is really starting to scare me (if we're talking thousands of dollars here, then I am really going to freak out: who knows, they might just hire a hit-man one day?). Of course we filed a complaint, but the police say that they cannot do much about it (DDoS attacks are virtually untraceable, so they say), and our suspicions are not enough to justify raiding our competitor's offices to search for proof. For your information, we have now changed our infrastructure to be able to sustain such attacks: we now use a major CDN service so that our servers are not directly affected by DDoS attacks. Requests for dynamic pages do get proxied to our servers, but for low-level attacks (UDP floods or SYN floods, for example) we only receive legitimate traffic, so we're fine. If they decide to launch higher-level attacks (HTTP floods or slowloris attacks, for example), most of the load should be handled by the CDN... at least I hope so! Thank you very much for your help.

    Read the article
