Search Results

Search found 23098 results on 924 pages for 'multiple processes'.

  • Mercurial Remote Subrepos

    - by Travis G
    I'm trying to set up my Mercurial repository system to work with multiple subrepos. I've basically followed these instructions to set up the client repo with Mercurial client v1.5, and I'm using HgWebDir to host my multiple projects. I have an HgWebDir with the following structure:

        http://myserver/hg
            fooproj
            mylib

    where mylib is some collection of common template libraries to be consumed by fooproj. The structure of fooproj looks like this:

        fooproj
            doc/
            src/
            .hgignore
            .hgsub
            .hgsubstate

    And .hgsub looks like:

        src/mylib = http://myserver/hg/mylib

    This should work, per my interpretation of the documentation: the first "nested" is the path in our working dir, and the second is a URL or path to pull from. So, let's say I pull down fooproj to my home folder with:

        ~$ hg clone http://myserver/hg/fooproj foo

    which pulls down the directory structure properly and adds the folder ~/foo/src/mylib, a local Mercurial repository. This is where the problems begin: the mylib folder is empty aside from the items in .hg. With 2 seconds of investigation, one can see that src/mylib/.hg/hgrc is:

        [paths]
        default = http://myserver/hg/fooproj/src/mylib

    which is completely wrong (attempting a pull of that repo will give a 404 because, well, that URL doesn't make any sense). Logically, the default value should be what I specified in .hgsub, or it would get the files from the repository in some way. None of the Mercurial commands return error codes (aside from a pull from within src/mylib), so it clearly believes that it is behaving properly (and just might be), although this does not seem logical at all. What am I doing wrong?

  • MySQL install and remove issues

    - by Matt
    I installed MySQL on Ubuntu Server and I don't know what went wrong... it didn't install a MySQL root user, so I tried to uninstall and start over, and now I can't uninstall. I tried this:

        apt-get remove php5-mysql
        apt-get remove mysql-server mysql-client
        apt-get autoremove

    But when I do ps aux | grep mysql I still see:

        root      6066  0.0  0.0   1772   540 pts/1 S  03:21 0:00 /bin/sh /usr/bin/mysqld_safe
        mysql     7065  0.0  0.6  58936 11900 pts/1 Sl 03:33 0:00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
        root      7066  0.0  0.0   2956   688 pts/1 S  03:33 0:00 logger -t mysqld -p daemon.error
        root     22804  0.0  0.0   3056   780 pts/1 R+ 04:14 0:00 grep mysql

    So I killed the processes and then tried to reinstall like this:

        apt-get -f install
        sudo apt-get install mysql-server mysql-client
        sudo mysqladmin -u root -h localhost password 'root'

    But I get this:

        mysqladmin: connect to server at 'localhost' failed
        error: 'Access denied for user 'root'@'localhost' (using password: NO)'

    I'm confused. I keep installing and uninstalling MySQL and get the same result. Any ideas?

  • How can I poll different AWS SQS queues in the same process?

    - by Luccas
    What is the right way to poll from different AWS SQS queues in the same process? Suppose I have a Ruby script, listen_queues.rb, and run it. Should I create threads to wrap each SQS poll, or start sub-processes?

        t1 = Thread.new do
          queue1.poll do |msg|
            ....
          end
        end

        t2 = Thread.new do
          queue2.poll do |msg|
            ....
          end
        end

        t2.join

    I tried this code, but the poll is not receiving any of the messages available. When I run only one of them (t1 or t2), it works. But I need the 2 running. What is going on? Thanks!!

  • simple Stata program

    - by Cyrus S
    I am trying to write a simple program to combine coefficient and standard error estimates from a set of regression fits. I run, say, 5 regressions, and store the coefficient(s) and standard error(s) of interest into vectors (Stata matrix objects, actually). Then, I need to do the following:

    1. Find the mean value of the coefficient estimates.
    2. Combine the standard error estimates according to the formula suggested for combining results from "multiple imputation". The formula is the square root of the formula for "T" on page 6 of the following document: http://bit.ly/b05WX3

    I have written Stata code that does this once, but I want to write this as a function (or "program", in Stata speak) that takes as arguments the vector (or matrix, if possible, to combine multiple estimates at once) of regression coefficient estimates and the vector (or matrix) of corresponding standard error estimates, and then generates 1 and 2 above. Here is the code that I wrote (breg is a 1x5 vector of the regression coefficient estimates, and sereg is a 1x5 vector of the associated standard error estimates):

        mat ones = (1,1,1,1,1)
        mat bregmean = (1/5)*(ones*breg')
        scalar bregmean_s = bregmean[1,1]
        mat seregmean = (1/5)*(ones*sereg')
        mat seregbtv = (1/4)*(breg - bregmean#ones)*(breg - bregmean#ones)'
        mat varregmi = (1/5)*(sereg*sereg') + (1+(1/5))*seregbtv
        scalar varregmi_s = varregmi[1,1]
        scalar seregmi = sqrt(varregmi_s)
        disp bregmean_s
        disp seregmi

    This gives the right answer for a single instance. Any pointers would be great!

    UPDATE: I completed the code for combining estimates in a kXm matrix of coefficients/parameters (k is the number of parameters, m the number of imputations). Code can be found here: http://bit.ly/cXJRw1 Thanks to Tristan and Gabi for the pointers.
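
    (The pooling rule in step 2 is the standard "Rubin's rules" formula for multiple imputation: total variance T = mean within-imputation variance + (1 + 1/m) * between-imputation variance. As a cross-check, here is a minimal numpy sketch of the same arithmetic; the sample values are made up for illustration and are not from the original Stata code.)

        # Rubin's rules, as in the Stata code above: pooled estimate and
        # pooled standard error from m coefficient/SE pairs.
        import numpy as np

        breg = np.array([0.52, 0.48, 0.55, 0.50, 0.47])   # hypothetical coefficients
        sereg = np.array([0.10, 0.11, 0.09, 0.10, 0.12])  # hypothetical std. errors

        m = len(breg)
        b_mean = breg.mean()          # step 1: mean of the coefficient estimates
        w = np.mean(sereg ** 2)       # within-imputation variance
        b = breg.var(ddof=1)          # between-imputation variance, 1/(m-1) normalizer
        t = w + (1 + 1.0 / m) * b     # the "T" on page 6 of the linked document
        print(b_mean, np.sqrt(t))     # pooled coefficient and pooled standard error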

  • Need script to redirect STDIN & STDOUT to named pipes

    - by user54903
    I have an app that launches an authentication helper (my script) and uses STDIN/STDOUT to communicate. I want to redirect STDIN and STDOUT from this script to two named pipes for interaction with another program, e.g.:

        SCRIPT_STDIN > pipe1
        SCRIPT_STDOUT < pipe2

    Here is the flow I'm trying to accomplish:

        [Application] - Launches the helper script, writes to the helper's STDIN, reads from the helper's STDOUT (example: STDIN: username,password; STDOUT: LOGIN_OK)
        [Helper Script] - Reads STDIN (data from the app) and forwards it to PIPE1; reads from PIPE2 and writes that back to the app on STDOUT
        [Other Process] - Reads from PIPE1, processes the input, and returns results to PIPE2

    The cat command can almost do what I want. If there were an option to copy STDIN to STDERR, I could make cat do this with a command like the following (assuming a fictitious option -e that echoes input to STDERR rather than STDOUT):

        cat -e PIPE2 2>PIPE1

    (read from PIPE2 and write it to STDOUT; the copied input, which would normally go to STDERR, goes to PIPE1)
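
    (A minimal Python sketch of such a helper, for illustration; it assumes the two FIFOs already exist at the hypothetical paths /tmp/pipe1 and /tmp/pipe2, and uses a second thread so neither direction blocks the other.)

        # Copy our STDIN to PIPE1 and copy PIPE2 back to our STDOUT.
        import sys
        import threading

        PIPE1 = "/tmp/pipe1"  # assumed path: requests to the other process
        PIPE2 = "/tmp/pipe2"  # assumed path: results from the other process

        def stdin_to_pipe1():
            with open(PIPE1, "w") as p1:
                for line in sys.stdin:        # data from the application
                    p1.write(line)
                    p1.flush()

        threading.Thread(target=stdin_to_pipe1, daemon=True).start()

        with open(PIPE2) as p2:
            for line in p2:                   # results from the other process
                sys.stdout.write(line)
                sys.stdout.flush()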

  • Secure PHP environments with PHP-FPM and SFTP

    - by pdd
    I'd like to set up secure environments for a small number of untrusted PHP websites on a Debian server. Right now everything runs on the same Apache2 with mod_php5 and vsftpd for administrative file access, so there is room for improvement. The idea is to use nginx instead of apache, SFTP through OpenSSH instead of vsftpd and chrooted (in sshd_config), individual users for each website with their own pool of PHP processes. All these users and nginx are part of the same group. Now in theory I can set 700 permissions on all PHP scripts and 750 on static files that nginx has to serve up. Theoretically, if a website is compromised all the other users' data is safe, right? Are there better solutions that require less setup time and memory per website? Cheers

  • Apache hanging when MaxClients is reached

    - by Ash White
    My Apache 2.2 (prefork MPM) is hanging when MaxClients is reached, rather than queueing up requests and serving them when child processes become free. When this happens, the web server is totally unresponsive until it is manually restarted. The server stack is Ubuntu 8, MySQL 5, PHP 5. Hardware is dual Xeons (2.8) with 2GB of RAM. It serves 30,000 - 50,000 pageviews per day. Static images, CSS, and JS are offloaded to a separate server, and PHP is cached using eAccelerator. The HTML output of many pages is cached to the filesystem. Relevant Apache directives:

        KeepAlive On
        MaxKeepAliveRequests 50
        KeepAliveTimeout 2
        StartServers 2
        MaxClients 150
        MinSpareThreads 25
        MaxSpareThreads 75
        ThreadsPerChild 25
        MaxRequestsPerChild 2000

  • MySQL 5.1.49 freezing every two days

    - by maximus
    Hi all, our MySQL system is "freezing" every two days. By "freezing" I mean the following:

    * it doesn't respond to ping
    * we can't log in with SSH
    * we don't get any answer from MySQL
    * there is no entry in the error logs! Neither from Linux nor from MySQL.

    We have already changed to completely new hardware and we have the same problem, so it's definitely not a hardware problem. We do not have any other software installed except a firewall (iptables rules). We can restart the server from another server using rsyslog (www.rsyslog.com) (software reset). Could someone help me by giving me some pointers as to what I could do to figure out the problem? I have included every detail about our settings. Thank you in advance for your help. Max.

    Our system parameters and settings:

        System memory: 12GB
        Processor: Intel i7-920 quad-core
        Operating system: Debian 5 (lenny), 64-bit
        MySQL 5.1.49
        Databases: (a) a small phpBB forum; (b) a 6GB database, 3 tables with about 15 million rows

    my.cnf:

        #
        # The MySQL database server configuration file.
        #
        # You can copy this to one of:
        # - "/etc/mysql/my.cnf" to set global options,
        # - "~/.my.cnf" to set user-specific options.
        #
        # One can use all long options that the program supports.
        # Run program with --help to get a list of available options and with
        # --print-defaults to see which it would actually understand and use.
        #
        # For explanations see
        # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

        # This will be passed to all mysql clients
        # It has been reported that passwords should be enclosed with ticks/quotes
        # especially if they contain "#" chars...
        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        # Here is entries for some specific programs
        # The following values assume you have at least 32M ram

        # This was formally known as [safe_mysqld]. Both versions are currently parsed.
        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        #
        # * Basic Settings
        #
        user = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        language = /usr/share/mysql/english
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        bind-address = our-ip-address
        #
        # * Fine Tuning
        #
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 256K
        thread_cache_size = 32
        max_connections = 300
        table_cache = 2048
        #thread_concurrency = 4

        # Used for InnoDB tables, recommended to 50%-80% of available memory
        innodb_buffer_pool_size = 6G
        # 20MB, sometimes larger
        innodb_additional_mem_pool_size = 20M
        # 8M-16M is good for most situations
        innodb_log_buffer_size = 8M
        # Disable XA support because we do not use it
        innodb-support-xa = 0
        # 1 is default which is 100% secure but 2 offers better performance
        innodb_flush_log_at_trx_commit = 1
        innodb_flush_method = O_DIRECT
        #innodb_thread_concurency = 8
        # Recommended 64M - 512M depending on server size
        innodb_log_file_size = 512M
        # One file per table
        innodb_file_per_table
        #
        # * Query Cache Configuration
        #
        query_cache_limit = 1M
        query_cache_size = 16M
        #query_cache_type = 1
        #query_cache_min_res_unit = 2K
        #join_buffer_size = 1M
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        # As of 5.1 you can enable the log at runtime!
        #general_log_file = /var/log/mysql/mysql.log
        #general_log = 1
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time = 2
        log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        #server-id = 1
        log_bin = /var/log/mysql/mysql-bin.log
        # WARNING: Using expire_logs_days without bin_log crashes the server! See README.Debian!
        expire_logs_days = 10
        max_binlog_size = 100M
        #binlog_do_db = include_database_name
        #binlog_ignore_db = include_database_name
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        # * InnoDB plugin
        # As of MySQL 5.1.38, the InnoDB plugin from Oracle is included in the MySQL source code.
        # It has many improvements and better performances than the built-in InnoDB storage engine.
        # Please read http://www.innodb.com/products/innodb_plugin/ for more information.
        # Uncommenting the two following lines to use the InnoDB plugin.
        ignore_builtin_innodb
        plugin-load=innodb=ha_innodb_plugin.so
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completion

        [isamchk]
        key_buffer = 16M

        #
        # * NDB Cluster
        #
        # See /usr/share/doc/mysql-server-*/README.Debian for more information.
        #
        # The following configuration is read by the NDB Data Nodes (ndbd processes)
        # not from the NDB Management Nodes (ndb_mgmd processes).
        #
        # [MYSQL_CLUSTER]
        # ndb-connectstring=127.0.0.1
        #
        # * IMPORTANT: Additional settings that can override those from this file!
        #
        !includedir /etc/mysql/conf.d/

    UPDATE: After installing sysstat and configuring it to collect data every minute, I have the following data. I used sar to generate the output. The log file is too big so I couldn't include it here, but I uploaded it to box.net. The link is http://www.box.net/shared/xc6rh7qqob

    SECOND UPDATE: We started a ping command in the background, and that solved the problem. The server has now been working for more than a week. We still don't know what the problem is.

  • Linux's best filesystem to work with 10000s of files without overloading the system I/O

    - by mhambra
    Hi all. It is known that certain AMD64 Linuxes are subject to becoming unresponsive under heavy disk I/O (see the Gentoo forums: "AMD64 system slow/unresponsive during disk access (Part 2)"); unfortunately, I have such a system. I want to put the /var/tmp/portage and /usr/portage trees on a separate partition, but what FS should I choose for it? Requirements: * for journaling, performance is preferred over safe data read/write operations * optimized for reading/writing 10000s of small files. Candidates: * ext2 without any journaling * BtrFS. In Phoronix tests, BtrFS demonstrated good random access performance (far better than XFS, thereby it may be less CPU-aggressive). However, the unpacking operation seems to be faster with XFS there, but in my tests unpacking a kernel tree to XFS made my system react 51% slower, regardless of any renice'd processes and/or schedulers. Why no ReiserFS? Googled this (query: reiserfs ext2 cpu):

        1 Apr 2006 ... Surprisingly, the ReiserFS and the XFS used significantly more CPU to remove file tree (86% and 65%) when other FS used about 15% (Ext3 and ...

    Is it the same now?

  • Is it possible to show " New Task" of Task Manager from tasks of Task Manager only?

    - by WebMAOhist
    I work on a remote Windows Server 2008 machine over RDP and frequently need to revive broken copy&paste over RDP. That means killing the rdpclip process in Task Manager (tab "Processes") and launching it again by switching to the Applications tab -- pressing the "New Task..." button -- typing 'rdpclip' -- clicking OK. Well, I would not need to type 'rdpclip' if it were the last used command. The problem is that in the "Create New Task" dialog of Windows Task Manager I only ever launch rdpclip, but it shows the last program run from the Windows command prompt (Win+R), which is usually "notepad" for me. Is it possible to bind the last used command/task to Task Manager only? And how?

  • REST API Best practice: How to accept as input a list of parameter values

    - by whatupwilly
    Hi All, we are launching a new REST API and I wanted some community input on best practices around how we should format input parameters. Right now, our API is very JSON-centric (it only returns JSON). The debate of whether we want/need to return XML is a separate issue. As our API output is JSON-centric, we have been going down a path where our inputs are a bit JSON-centric too, and I've been thinking that may be convenient for some but weird in general. For example, to get a few product details where multiple products can be pulled at once, we currently have:

        http://our.api.com/Product?id=["101404","7267261"]

    Should we simplify this as:

        http://our.api.com/Product?id=101404,7267261

    Or is having JSON input handy? More of a pain? We may want to accept both styles, but does that flexibility actually cause more confusion and headaches (maintainability, documentation, etc.)? A more complex case is when we want to offer more complex inputs. For example, if we want to allow multiple filters on search:

        http://our.api.com/Search?term=pumas&filters={"productType":["Clothing","Bags"],"color":["Black","Red"]}

    We don't necessarily want to put the filter types (e.g. productType and color) as request parameter names, like this:

        http://our.api.com/Search?term=pumas&productType=["Clothing","Bags"]&color=["Black","Red"]

    because we wanted to group all filter input together. In the end, does this really matter? It may be likely that there are so many JSON utils out there that the input type just doesn't matter that much. I know our JavaScript clients making AJAX calls to the API may appreciate the JSON inputs to make their life easier. Thanks, Will
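
    (For what it's worth, accepting both styles costs only a few lines server-side. A framework-agnostic Python sketch; the function and parameter names are made up for illustration.)

        # Accept either a JSON array or a comma-separated list for an "id"
        # query parameter, mirroring the two URL styles above.
        import json

        def parse_id_param(raw):
            """Return a list of product ids from either input style."""
            raw = raw.strip()
            if raw.startswith("["):                  # JSON style: ["101404","7267261"]
                return [str(x) for x in json.loads(raw)]
            return [x for x in raw.split(",") if x]  # plain style: 101404,7267261

        print(parse_id_param('["101404","7267261"]'))  # -> ['101404', '7267261']
        print(parse_id_param('101404,7267261'))        # -> ['101404', '7267261']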

  • Monitor RAM usage on CentOS and restart Apache at a certain usage

    - by Chris
    Hi, I'm running a CentOS 5.3 server with a basic LAMP stack. I've optimized LAMP and my code to run as efficiently as possible, but Apache has a memory leak somewhere that kills my server every hour or so. What is the best way to write a script that will monitor the memory usage and, if it peaks over, say, 450MB, kill all the Apache processes and restart Apache? I know C++/PHP and basic Linux server administration, but I'm not familiar with Perl or bash scripting. I'd be open to learning any solution, though, as a temporary fix while I find the issue.
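
    (One possible shape for such a watchdog, sketched in Python rather than Perl or bash. It assumes the third-party psutil package is installed and that "service httpd restart" is how Apache is restarted on this CentOS box; treat it as a stopgap sketch, not production code.)

        # Restart Apache when the combined resident memory of all httpd
        # processes exceeds a threshold.
        import subprocess
        import time

        import psutil

        THRESHOLD = 450 * 1024 * 1024  # 450MB, per the question

        def apache_rss():
            total = 0
            for p in psutil.process_iter(["name", "memory_info"]):
                if p.info["name"] == "httpd":
                    total += p.info["memory_info"].rss
            return total

        while True:
            if apache_rss() > THRESHOLD:
                subprocess.call(["service", "httpd", "restart"])
            time.sleep(30)  # check every 30 seconds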

  • Change Management Software

    - by Andrew
    I manage an 80,000-user CIS application written in Uniface. Every form in the application, and many of its processes, are represented by .frm files. We have hundreds of these files and 5 instances of the application. Instances include multiple production installations which must be kept in sync. We do not get MD5 checksums from our vendor for files that are released to us as patches. We have been using a spreadsheet to track changes, but this is far from ideal. Is there a commercial application that can be purchased that will allow us to track changes to the instances? Thank you all!

    EDIT: Patches are released as zip files with either FRM files in them, or SQL files, or a mix of both. The SQL files contain statements that need to be run in Oracle. Patches are also assigned unique patch numbers.
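
    (Even without a commercial tool, a checksum manifest can stand in for the missing vendor MD5s and make two instances diffable. A rough Python sketch; the instance paths are hypothetical.)

        # Build a manifest mapping each .frm file (relative path) to its MD5,
        # then report files that differ between two instances.
        import hashlib
        from pathlib import Path

        def manifest(root):
            result = {}
            for path in Path(root).rglob("*.frm"):
                result[str(path.relative_to(root))] = hashlib.md5(path.read_bytes()).hexdigest()
            return result

        prod = manifest("/apps/cis/prod1")   # hypothetical instance paths
        test = manifest("/apps/cis/test")
        for name in sorted(set(prod) | set(test)):
            if prod.get(name) != test.get(name):
                print("differs:", name)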

  • Starting out Silverlight 4 design

    - by Fermin
    I come mainly from a web development background (ASP.NET, ASP.NET MVC, XHTML, CSS, etc.) but have been tasked with creating/designing a Silverlight application. The application utilises the Bing Maps control for Silverlight; this will be contained in a user control and will be the 'main' screen in the system. There will be numerous other user controls on the form that will be used to choose/filter/sort/order the data on the map. I think of it like Visual Studio: the Bing Maps control will be like the code editor window and the other controls will be like Solution Explorer, Find Results, etc. (although a lot fewer of them!) I have read up and I'm comfortable with the data side (RIA Services) of the application. I've (kinda) got my head around databinding and using a view model to present data and keep the code-behind file lite. What I do need some help on is UI design/navigation framework, specifically 2 aspects:

    1. How do I best implement a fluid design so that the various user controls which filter the map data can be resized/pinned/unpinned (for example, like the Solution Explorer in VS)? I made a test using a Grid with a GridSplitter control; is this the best way? Would it be best to create a Grid/GridSplitter with navigation Frames inside the grid to load the content?
    2. Since I have multiple user controls that basically use the same set of data, should I set the DataContext at the highest possible level (e.g. if using a grid with multiple frames, at the Grid level)?

    Any help, tips, links, etc. will be very much appreciated!

  • Shut Down took way too long because of "Background Programs"

    - by Christopher Chipps
    I tried shutting down my desktop PC (with Windows 7), but after several attempts (like 4 or 5) at Start -- Shut Down, the GUI was still there and it was not shutting down. I didn't think there were any programs running when I pressed Shut Down, so I went into the Task Manager (Ctrl+Alt+Del) to check the processes. Once I did that, a screen appeared with a message stating that there were "background programs" still running, and it gave me an option to "Force Shut Down", which I pressed, and it shut down normally. Does anyone know why this would happen?

  • Drop-in solution for logging to DB

    - by Jake
    I'm considering setting up our servers to log to a Mongo Database rather than log files. Logs will then be all on one server, queryable, and overall easier to manage. I'd love to find a solution that will allow all the different processes I have running to write to DB rather than files (or perhaps something to read the files, pass the logs on and truncate the files). I don't want to have to find a different solution for every process if I can avoid it. So, does anyone know of an existing solution to this problem?
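
    (One low-friction pattern is a custom handler for the standard logging module, so each process keeps calling logging as before and only the handler changes. A sketch, assuming the pymongo driver and a mongod reachable on localhost; the names are illustrative.)

        # A logging.Handler that writes records to a MongoDB collection.
        import logging

        from pymongo import MongoClient

        class MongoHandler(logging.Handler):
            def __init__(self, uri="mongodb://localhost:27017", db="logs"):
                super().__init__()
                self.collection = MongoClient(uri)[db]["entries"]

            def emit(self, record):
                self.collection.insert_one({
                    "level": record.levelname,
                    "logger": record.name,
                    "message": record.getMessage(),
                    "created": record.created,  # unix timestamp
                })

        log = logging.getLogger("myapp")
        log.addHandler(MongoHandler())
        log.warning("disk almost full")  # now queryable in Mongo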

  • What do the suffixes 'w' and 'd' mean in 'TIME+' in top?

    - by ssapkota
    Here's a chunk of top output from my server:

        PID   USER     PR NI VIRT  RES  SHR  S %CPU %MEM TIME+     COMMAND
        18878 www-data 20 0  200m  13m  4704 S 0    0.2  0:00.07   apache2
        12374 root     20 0  197m  9460 4480 S 0    0.1  21212906w apache2
        9136  root     20 0  79100 3488 2716 S 0    0.0  54518724d sshd

    I know that TIME+ means the total CPU time the task has used since it started. But in the above output, I simply couldn't understand what 21212906w and 54518724d mean. A considerable number of processes are showing TIME+ with w and d suffixed. What does this mean? Is the server in trouble? Just to let you know, the server uptime is 4 days.

    EDIT:
    - I can guess these refer to weeks and days. If so, why are they so large considering the uptime?
    - The server has 8 cores.

  • Limiting choices from an intermediary ManyToMany junction table in Django

    - by Matthew Rankin
    Background: I've created three Django models—Inventory, SalesOrder, and Invoice—to model items in inventory, sales orders for those items, and invoices for a particular sales order. Each sales order can have multiple items, so I've used an intermediary junction table—SalesOrderItem—via the through argument of the ManyToManyField. Also, partial billing of a sales order is allowed, so I've created a ForeignKey in the Invoice model related to the SalesOrder model, so that a particular sales order can have multiple invoices. Here's where I deviate from what I've normally seen. Instead of relating the Invoice model to the Item model via a ManyToManyField, I've related the Invoice model to the SalesOrderItem intermediary junction table through the intermediary junction table InvoiceItem. I've done this because it better models reality—our invoices are tied to sales orders and can only include items that are tied to that sales order, as opposed to any item in inventory. I will admit that it does seem strange having the intermediary junction table of a ManyToManyField related to the intermediary junction table of another ManyToManyField.

    Question: How can I limit the choices available for the invoice_items in the Invoice model to just the sales_order_items of the SalesOrder model for that particular Invoice? (I tried using limit_choices_to={'sales_order': self.invoice.sales_order} as part of item = models.ForeignKey(SalesOrderItem) in the InvoiceItem model, but that didn't work.) Am I correct in thinking that limiting the choices for the invoice_items should be handled in the model instead of in a form?

    Code:

        class Item(models.Model):
            item_num = models.SlugField(unique=True)
            default_price = models.DecimalField(max_digits=10, decimal_places=2, blank=True, null=True)

        class SalesOrderItem(models.Model):
            item = models.ForeignKey(Item)
            sales_order = models.ForeignKey('SalesOrder')
            unit_price = models.DecimalField(max_digits=10, decimal_places=2)
            quantity = models.DecimalField(max_digits=10, decimal_places=4)

        class SalesOrder(models.Model):
            customer = models.ForeignKey(Party)
            so_num = models.SlugField(max_length=40, unique=True)
            sales_order_items = models.ManyToManyField(Item, through=SalesOrderItem)

        class InvoiceItem(models.Model):
            item = models.ForeignKey(SalesOrderItem)
            invoice = models.ForeignKey('Invoice')
            unit_price = models.DecimalField(max_digits=10, decimal_places=2)
            quantity = models.DecimalField(max_digits=10, decimal_places=4)

        class Invoice(models.Model):
            invoice_num = models.SlugField(max_length=25)
            sales_order = models.ForeignKey(SalesOrder)
            invoice_items = models.ManyToManyField(SalesOrderItem, through='InvoiceItem')
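
    (A note on the usual approach: limit_choices_to is evaluated statically, so it cannot refer to a particular invoice instance; the common workaround is to narrow the queryset in a form. A sketch using the model names above; passing the parent invoice in as a keyword argument is an assumption of this example.)

        # Narrow InvoiceItem's item choices to the invoice's sales order.
        from django import forms

        class InvoiceItemForm(forms.ModelForm):
            class Meta:
                model = InvoiceItem
                fields = ['item', 'unit_price', 'quantity']

            def __init__(self, *args, **kwargs):
                invoice = kwargs.pop('invoice')  # supplied by the calling view
                super().__init__(*args, **kwargs)
                self.fields['item'].queryset = SalesOrderItem.objects.filter(
                    sales_order=invoice.sales_order)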

  • In SSIS Convert European Currency Format to United States Currency Format

    - by Rob
    I have an interesting problem. I have an SSIS package that processes account data. We are now processing files from Europe. These files are in a CSV format using text qualifiers. An example of the problem: in the United States the currency format is 123456.99 (we purposely leave the thousands separator out). The files sent from Europe are coming in with two formats: one is 123456,99 and the other is 123.456,00. SSIS is attempting to parse the text file and place the value into a NUMERIC(20,2) field. This causes a parsing error in SSIS, even with the text qualifiers. If I change the field to CURRENCY it raises a conversion error. I would like for SSIS to deal with this directly without requiring the data to be in the United States format. Has anyone had this problem? Any help will be greatly appreciated. Rob
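
    (In SSIS this conversion would typically live in a Script Component or a derived column; the logic itself is small and is sketched below in Python purely for illustration, using the sample values from the question. It assumes ',' is always the decimal mark and '.' only ever appears as a thousands separator.)

        # Normalize European-formatted amounts to a decimal value.
        from decimal import Decimal

        def parse_eu_amount(text):
            return Decimal(text.replace(".", "").replace(",", "."))

        print(parse_eu_amount("123456,99"))   # -> 123456.99
        print(parse_eu_amount("123.456,00"))  # -> 123456.00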

  • Is there an unintrusive antivirus program that I can ask to scan objects on demand only? [closed]

    - by Faken
    Possible Duplicate: Recommended offline on-demand virus scanners

    I'm looking for an unintrusive antivirus program that I can use to run scans on suspicious objects on demand, and only on demand. Most antivirus programs install many layers of protection, leave things running in the background, and perform regular updates and system scans at inconvenient times. I want an antivirus program where I can simply right-click an object and select "scan for viruses", and nothing more. Is there a reliable antivirus program out there that offers this and only this, without the automatic updates, background processes, and intrusive automatic system scans? Note: this is for Windows.

  • Unable to logoff, disconnect, or reset terminal server user in production environment

    - by l0c0b0x
    I'm looking for some ideas on how to disconnect, log off, or reset a user's session on a Windows Server 2008 terminal server (we are unable to log in as the user either, as the session is completely locked up). This is a production environment, so rebooting the server or doing something system-wide is out of the question for now. Any PowerShell tricks to help us with this? We've tried killing the session's processes too, directly from the same terminal server (from Task Manager, Terminal Services Manager, and the Resource Monitor) with no results. Help!

  • Using Squid on Debian, Cannot Connect Error

    - by Zed Said
    I am trying to set up Squid on Debian and am getting a connection refused error:

        squidclient http://www.apple.com/ > test
        client: ERROR: Cannot connect to 127.0.0.1:3128: Connection refused

    Here is my config:

        visible_hostname none
        cache_effective_user proxy
        cache_effective_group proxy
        cache_dir ufs /var/spool/squid 2048 16 256
        cache_mem 512 MB
        cache_access_log /var/log/squid/access.log
        emulate_httpd_log on
        strip_query_terms off
        read_ahead_gap 128 Kb
        collapsed_forwarding on
        refresh_stale_hit 30 seconds
        retry_on_error on
        maximum_object_size_in_memory 1 MB
        acl all src 0.0.0.0/0.0.0.0
        acl purgehosts src 127.0.0.1/255.255.255.255
        # Caching static objects in __data is important.
        # Without that, apache processes sit around spooling static objects.
        acl QUERY urlpath_regex /cgi-bin/ /_edit /_admin /_login /_nocache /_recache /__lib /__fudge
        acl PURGE method PURGE
        acl POST method POST
        cache deny QUERY
        cache deny POST
        http_access allow PURGE purgehosts
        http_access deny PURGE
        http_access allow all
        http_port 127.0.0.1:80
        http_port 50.56.206.139:80
        cache_peer 127.0.0.1 parent 80 0 originserver no-query no-digest default
        redirect_rewrites_host_header off
        read_ahead_gap 128 Kb
        shutdown_lifetime 5 seconds

    Any ideas why this is happening? What have I missed?

  • MaxClients in Apache: how do I know the size of my processes?

    - by Larry
    From http://httpd.apache.org/docs/2.2/misc/perf-tuning.html:

        "The single biggest hardware issue affecting webserver performance is RAM. A webserver should never ever have to swap, as swapping increases the latency of each request beyond a point that users consider "fast enough". This causes users to hit stop and reload, further increasing the load. You can, and should, control the MaxClients setting so that your server does not spawn so many children it starts swapping. This procedure for doing this is simple: determine the size of your average Apache process, by looking at your process list via a tool such as top, and divide this into your total available memory, leaving some room for other processes."

    The main issue is that I can't understand how to determine the size, because, well, the size I see for httpd is no more than 3888. But if that's the figure to divide by, with 4GB of RAM I get 972, so should I use something like 900 as MaxClients?
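
    (As a worked example of the division the docs describe, taking 3888 as the average per-child resident size in KB from top; the amount reserved for other processes is an assumption, which is why the result lands near, but not exactly on, the 972 in the question.)

        # MaxClients ~= (total RAM - room for other processes) / avg child size
        total_ram_kb = 4 * 1024 * 1024   # 4GB, in KB
        reserved_kb = 512 * 1024         # assumed headroom for MySQL, OS, etc.
        per_child_kb = 3888              # average httpd RES from top, in KB

        print((total_ram_kb - reserved_kb) // per_child_kb)  # -> 943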

  • Receiving POST data in ASP.NET

    - by grast
    Hi, I want to use ASP for code generation in a C# desktop application. To achieve this, I set up a simple host (derived from System.MarshalByRefObject) that processes a System.Web.Hosting.SimpleWorkerRequest via HttpRuntime.ProcessRequest. This processes the ASPX script specified by the incoming request (using System.Net.HttpListener to wait for requests). The client part is represented by a System.ComponentModel.BackgroundWorker that builds the System.Net.HttpWebRequest and receives the response from the server. A simplified version of my client-part code looks like this:

        private void SendRequest(object sender, DoWorkEventArgs e)
        {
            // create request with GET parameter
            var uri = "http://localhost:9876/test.aspx?getTest=321";
            var request = (HttpWebRequest)WebRequest.Create(uri);

            // append POST parameter
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";
            var postData = Encoding.Default.GetBytes("postTest=654");
            var postDataStream = request.GetRequestStream();
            postDataStream.Write(postData, 0, postData.Length);

            // send request, wait for response and store/print content
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                using (var reader = new StreamReader(response.GetResponseStream(), Encoding.UTF8))
                {
                    _processsedContent = reader.ReadToEnd();
                    Debug.Print(_processsedContent);
                }
            }
        }

    My server-part code looks like this (without exception handling etc.):

        public void ProcessRequests()
        {
            // HttpListener at http://localhost:9876/
            var listener = SetupListener();

            // SimpleHost created by ApplicationHost.CreateApplicationHost
            var host = SetupHost();

            while (_running)
            {
                var context = listener.GetContext();
                using (var writer = new StreamWriter(context.Response.OutputStream))
                {
                    // process ASP script and send response back to client
                    host.ProcessRequest(GetPage(context), GetQuery(context), writer);
                }
                context.Response.Close();
            }
        }

    So far all this works fine as long as I just use GET parameters. But when it comes to receiving POST data in my ASPX script, I run into trouble. For testing I use the following script:

        // GET parameters are working:
        var getTest = Request.QueryString["getTest"];
        Response.Write("getTest: " + getTest);       // prints "getTest: 321"

        // don't know how to access POST parameters:
        var postTest1 = Request.Form["postTest"];    // Request.Form is empty?!
        Response.Write("postTest1: " + postTest1);   // so this prints "postTest1: "

        var postTest2 = Request.Params["postTest"];  // Request.Params is empty?!
        Response.Write("postTest2: " + postTest2);   // so this prints "postTest2: "

    It seems that the System.Web.HttpRequest object I'm dealing with in ASP does not contain any information about my POST parameter "postTest". I inspected it in debug mode and none of the members contained either the parameter name "postTest" or the parameter value "654". I also tried the BinaryRead method of Request, but unfortunately it is empty. This corresponds to Request.InputStream==null and Request.ContentLength==0. And to make things really confusing, the Request.HttpMethod member is set to "GET"?!

    To isolate the problem I tested the code using a PHP script instead of the ASPX script. It is very simple:

        print_r($_GET);  // prints all GET variables
        print_r($_POST); // prints all POST variables

    And the result is:

        Array ( [getTest] => 321 )
        Array ( [postTest] => 654 )

    So with the PHP script it works; I can access the POST data. Why doesn't the ASPX script work? What am I doing wrong? Is there a special accessor or method in the Request object? Can anyone give a hint or even know how to solve this? Thanks in advance.

  • Errors reported by "powercfg -energy"

    - by Tim
    Running "powercfg -energy" under Windows 7 command line, I received a report with following three errors: System Availability Requests:Away Mode Request The program has made a request to enable Away Mode. Requesting Process \Device\HarddiskVolume2\Program Files\Windows Media Player\wmpnetwk.exe CPU Utilization:Processor utilization is high The average processor utilization during the trace was high. The system will consume less power when the average processor utilization is very low. Review processor utilization for individual processes to determine which applications and services contribute the most to total processor utilization. Average Utilization (%) 49.25 Platform Power Management Capabilities:PCI Express Active-State Power Management (ASPM) Disabled PCI Express Active-State Power Management (ASPM) has been disabled due to a known incompatibility with the hardware in this computer. I was wondering for the first error, what does "enable away mode" mean? for the second, what utilization percentage of CPU is reasonable? for the third, what is "PCI Express Active-State Power Management (ASPM)"? How I can correct the three errors? Thanks and regards!
