Search Results

Search found 7175 results on 287 pages for 'asynchronous processing'.


  • Rails unknown action suddenly everywhere

    - by Joe
    The weird thing is that my app was working perfectly on Sat, and when I check it out on Monday (after doing nothing to it) I kept getting this problem: This behaviour is only happening on my production server. When I try to login or create a new user or do something that interacts with a form I am getting an unknown action error. A simple retrieval of rows does not throw this error however. I don't have all CRUD operations in most of my controllers because it's not necessary - but Rails always looks for the one that doesn't exist - it seams so anyway. If I make a mistake in the form that would normally throw a validation message to the user it will throw this error too, does that mean it has something to do with the model too (I'm not too Rails experienced and didn't know if that would be the case or not)? This is a general error I am getting - I have super_exception_notifier gem installed, so that's what all the extra params are. Processing SessionsController#new (for OMITTED at 2010-04-12 09:11:12) [GET] Rendering template within layouts/application Rendering sessions/new Completed in 3ms (View: 2, DB: 0) | 200 OK [http://OMITTED.com/session/new] Processing SessionsController#show (for OMITTED at 2010-04-12 09:11:14) [GET] ActionController::UnknownAction (No action responded to show. Actions: create, destroy, error_class_status_codes, error_class_status_codes=, error_layout, error_layout=, exception_notifiable_notification_level, exception_notifiable_notification_level=, exception_notifiable_silent_exceptions, exception_notifiable_silent_exceptions=, exception_notifiable_verbose, exception_notifiable_verbose=, http_status_codes, http_status_codes=, and new): dragonfly (0.5.3) lib/dragonfly/middleware.rb:13:in `call' passenger (2.2.9) lib/phusion_passenger/rack/request_handler.rb:92:in `process_request' passenger (2.2.9) lib/phusion_passenger/abstract_request_handler.rb:207:in `main_loop' passenger (2.2.9) lib/phusion_passenger/railz/application_spawner.rb:400:in `start_request_handler' passenger (2.2.9) lib/phusion_passenger/railz/application_spawner.rb:351:in `handle_spawn_application' passenger (2.2.9) lib/phusion_passenger/utils.rb:184:in `safe_fork' passenger (2.2.9) lib/phusion_passenger/railz/application_spawner.rb:349:in `handle_spawn_application' passenger (2.2.9) lib/phusion_passenger/abstract_server.rb:352:in `__send__' passenger (2.2.9) lib/phusion_passenger/abstract_server.rb:352:in `main_loop' passenger (2.2.9) lib/phusion_passenger/abstract_server.rb:196:in `start_synchronously' passenger (2.2.9) lib/phusion_passenger/abstract_server.rb:163:in `start' passenger (2.2.9) lib/phusion_passenger/railz/application_spawner.rb:209:in `start' passenger (2.2.9) lib/phusion_passenger/spawn_manager.rb:262:in `spawn_rails_application' passenger (2.2.9) lib/phusion_passenger/abstract_server_collection.rb:126:in `lookup_or_add' passenger (2.2.9) lib/phusion_passenger/spawn_manager.rb:256:in `spawn_rails_application' passenger (2.2.9) lib/phusion_passenger/abstract_server_collection.rb:80:in `synchronize' passenger (2.2.9) lib/phusion_passenger/abstract_server_collection.rb:79:in `synchronize' passenger (2.2.9) lib/phusion_passenger/spawn_manager.rb:255:in `spawn_rails_application' passenger (2.2.9) lib/phusion_passenger/spawn_manager.rb:154:in `spawn_application' passenger (2.2.9) lib/phusion_passenger/spawn_manager.rb:287:in `handle_spawn_application' passenger (2.2.9) lib/phusion_passenger/abstract_server.rb:352:in `__send__' passenger (2.2.9) 
lib/phusion_passenger/abstract_server.rb:352:in `main_loop' passenger (2.2.9) lib/phusion_passenger/abstract_server.rb:196:in `start_synchronously' This is what one of my forms looks like (nothing special): <% form_tag session_path do -%> <p><%= label_tag 'Username' %><br /> <%= text_field_tag 'login', @login %></p> <p><%= label_tag 'password' %><br/> <%= password_field_tag 'password', nil %></p> <p><%= label_tag 'remember_me', 'Remember me' %> <%= check_box_tag 'remember_me', '1', @remember_me %></p> <p><%= submit_tag 'Log in' %></p> <% end -%> It looks like dragonfly is the culprit, doesn't it? Here's the section from the gem files it says is being naughty: module Dragonfly class Middleware def initialize(app, dragonfly_app_name) @app = app @dragonfly_app_name = dragonfly_app_name end def call(env) response = endpoint.call(env) if response[0] == 404 13 -->> @app.call(env) else response end end I don't know what goes on behind the scenes here, so I probably haven't been looking in the right place to fix this issue. Like I said, it only throws this in a production environment, which I guess is what the 'env' variable is referencing. Thank you for your time! I've spent nearly my whole day trying to figure this out! :(
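    For illustration only (this is not Dragonfly's or the poster's code): the quoted middleware is a standard Rack-style fall-through, which is easier to see laid out on its own. A minimal sketch of the same pattern in Python/WSGI terms, with hypothetical names: the endpoint gets first refusal, and the wrapped application only sees the request when the endpoint answers 404, which is why a miss in the media endpoint ends up at the Rails router.

      # Illustrative sketch, not Dragonfly: try `endpoint` first, and pass the
      # request to the wrapped `app` only when the endpoint answers 404, the
      # same shape as `@app.call(env)` in the Ruby quoted above.
      class FallThroughMiddleware:
          def __init__(self, app, endpoint):
              self.app = app            # downstream application (the main site)
              self.endpoint = endpoint  # asset endpoint that gets first refusal

          def __call__(self, environ, start_response):
              captured = {}

              def capture(status, headers, exc_info=None):
                  captured["status"], captured["headers"] = status, headers

              body = self.endpoint(environ, capture)
              if captured.get("status", "").startswith("404"):
                  return self.app(environ, start_response)  # fall through
              start_response(captured["status"], captured["headers"])
              return body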

    Read the article

  • Unlock file on Windows Server 2003 by remote desktop without rebooting

    - by BalusC
    We have several Windows Server 2003 machines running, each with its own purpose. There are scheduled jobs which synchronize some files over SFTP using WinSCP. Every so often a newly copied file is left locked in the "inbox" folder without any apparent reason. The machine's own background task (programmed in Java) can't move it to the "processed" folder anymore after processing it. Manually moving it only yields the well-known error message Cannot move [filename]: it is being used by another person or program. The only resort is to reboot the machine, but we would of course like to avoid that. Any suggestions? I tried Unlocker, which works fine locally on WinXP, but doesn't work on those Win2K3 machines over remote desktop (the unlock option doesn't show up in the right-click context menu). I tried Process Explorer as well, as described in this blog article, but it caused the server to crash (not sure if that's because it was executed through remote desktop).

    Read the article

  • Postfix: how to trigger my script when outgoing email status is "sent"?

    - by Laszlo Malina
    I want to run a program when postfix has successfully sent out a mail (local or remote). I would like to pass the headers to program and if possible also the destination ip or address (exclude spam filter delivery). I just have an idea: Delivery Status Notification processing via uniqe transport program, but I'd prefer the above. My goal is to be recorded lifetime (events) of email: it came, it went out (from, to, subject, datetime, message id, message status: bounce, sent). I would only need the state of the outgoing mail, because incoming and bounce program is working. It is possible to trigger a program (similar to a transport pipe/spawn) or DSN "cheat" stay? Thanks in advance for any reply!
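    For reference, the handler such a transport would invoke is small: Postfix's pipe transport hands the complete message to the program on standard input, so recording the listed fields is mostly header parsing. A minimal sketch in Python (the master.cf/transport_maps wiring that would call it is assumed rather than shown); note that delivering through a pipe transport replaces normal delivery rather than following it, which is part of the trade-off here.

      #!/usr/bin/env python3
      # Hypothetical pipe-transport handler: read the outgoing message from
      # stdin and record the fields listed above. The Postfix wiring that
      # invokes it (master.cf entry, transport_maps) is assumed, not shown.
      import sys
      from datetime import datetime, timezone
      from email.parser import HeaderParser

      def main():
          headers = HeaderParser().parse(sys.stdin)
          record = {
              "from": headers.get("From", ""),
              "to": headers.get("To", ""),
              "subject": headers.get("Subject", ""),
              "message_id": headers.get("Message-ID", ""),
              "datetime": datetime.now(timezone.utc).isoformat(),
              "status": "sent",  # this handler only ever sees outgoing deliveries
          }
          # Append to the same store the incoming/bounce programs already use.
          with open("/var/log/mail-lifetime.log", "a") as log:
              log.write(repr(record) + "\n")
          return 0

      if __name__ == "__main__":
          sys.exit(main())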

    Read the article

  • Should I split my website into different servers

    - by Nyxynyx
    I have a website where a user uploads photos; the photos get resized and thumbnailed, and stored on the server. At the same time, there are some INSERTs into a MySQL table regarding the uploaded photo (like description, user id etc). The site currently runs off a managed VPS, and I love the support it provides. However it is expensive to store the many small photos, and the resizing and thumbnailing processes do cause spikes in app performance. (Amazon S3 is pretty expensive, especially considering the costs for uploading many small files.) Question: Would it be a good idea to move the image processing operations and image storage to another server, an unmanaged dedicated server with a much lower cost/GB, and keep the current VPS for its 24/7 support and hosting the webapp? Or should I move the entire site to the dedicated server? VPS specs: 16 cores 2.4GHz (E5620), 1GB memory, 60GB storage, 3.5TB transfer, $43/mth, Managed (24/7). Dedicated specs: i3 2130, 2 cores 3.4+ GHz, 16 GB DDR3, 2 x 1TB SATA2 storage, 15 TB transfer, $79/mth, Unmanaged (weekdays support). Software used: Apache, PHP, MySQL, Solr, PostgreSQL, ImageMagick

    Read the article

  • Windows Form hangs when running threads

    - by Benjamin Ortuzar
    I have written a .NET C# Windows Form app in Visual Studio 2008 that uses a Semaphore to run multiple jobs as threads when the Start button is pressed. It's experiencing an issue where the Form goes into a coma after being run for 40 minutes or more. The log files indicate that the current jobs complete, it picks a new job from the list, and there it hangs. I have noticed that the Windows Form becomes unresponsive when this happens. The form is running in its own thread. This is a sample of the code I am using: protected void ProcessJobsWithStatus (Status status) { int maxJobThreads = Convert.ToInt32(ConfigurationManager.AppSettings["MaxJobThreads"]); Semaphore semaphore = new Semaphore(maxJobThreads, maxJobThreads); // Available=3; Capacity=3 int threadTimeOut = Convert.ToInt32(ConfigurationManager.AppSettings["ThreadSemaphoreWait"]);//in Seconds //gets a list of jobs from a DB Query. List<Job> jobList = jobQueue.GetJobsWithStatus(status); //we need to create a list of threads to check if they all have stopped. List<Thread> threadList = new List<Thread>(); if (jobList.Count > 0) { foreach (Job job in jobList) { logger.DebugFormat("Waiting green light for JobId: [{0}]", job.JobId.ToString()); if (!semaphore.WaitOne(threadTimeOut * 1000)) { logger.ErrorFormat("Semaphore Timeout. A thread did NOT complete in time[{0} seconds]. JobId: [{1}] will start", threadTimeOut, job.JobId.ToString()); } logger.DebugFormat("Acquired green light for JobId: [{0}]", job.JobId.ToString()); // Only N threads can get here at once job.semaphore = semaphore; ThreadStart threadStart = new ThreadStart(job.Process); Thread thread = new Thread(threadStart); thread.Name = job.JobId.ToString(); threadList.Add(thread); thread.Start(); } logger.Info("Waiting for all threads to complete"); //check that all threads have completed. foreach (Thread thread in threadList) { logger.DebugFormat("About to join thread(jobId): {0}", thread.Name); if (!thread.Join(threadTimeOut * 1000)) { logger.ErrorFormat("Thread did NOT complete in time[{0} seconds]. JobId: [{1}]", threadTimeOut, thread.Name); } else { logger.DebugFormat("Thread did complete in time. JobId: [{0}]", thread.Name); } } } logger.InfoFormat("Finished Processing Jobs in Queue with status [{0}]...", status); } //form methods private void button1_Click(object sender, EventArgs e) { buttonStop.Enabled = true; buttonStart.Enabled = false; ThreadStart threadStart = new ThreadStart(DoWork); workerThread = new Thread(threadStart); serviceStarted = true; workerThread.Start(); } private void DoWork() { EmailAlert emailAlert = new EmailAlert (); // start an endless loop; loop will abort only when "serviceStarted" flag = false while (serviceStarted) { emailAlert.ProcessJobsWithStatus(0); // yield if (serviceStarted) { Thread.Sleep(new TimeSpan(0, 0, 1)); } } // time to end the thread Thread.CurrentThread.Abort(); } //job.process() public void Process() { try { //sets the status, DateTimeStarted, and the processId this.UpdateStatus(Status.InProgress); //do something logger.Debug("Updating Status to [Completed]"); //hits, status,DateFinished this.UpdateStatus(Status.Completed); } catch (Exception e) { logger.Error("Exception: " + e.Message); this.UpdateStatus(Status.Error); } finally { logger.Debug("Releasing semaphore"); semaphore.Release(); } I have tried to log what I can into a file to detect where the problem is happening, but so far I haven't been able to identify where this happens.
Losing control of the Windows Form makes me think that this has nothing to do with processing the jobs. Any ideas?
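    The structure being described (a semaphore capped at MaxJobThreads, one thread per job, then a join-with-timeout pass) is independent of the WinForms plumbing, and it can help to see it in isolation. A compact sketch of the same pattern in Python; job contents and timeouts are placeholders, not the poster's jobs:

      # Sketch of the pattern above: a semaphore limits concurrency, each job
      # runs on its own thread and releases the semaphore in a finally block,
      # and the launcher joins every thread with a timeout.
      import threading
      import time

      MAX_JOB_THREADS = 3
      THREAD_TIMEOUT = 10  # seconds

      semaphore = threading.Semaphore(MAX_JOB_THREADS)

      def process(job_id):
          try:
              time.sleep(1)  # stand-in for the real work
              print(f"job {job_id} completed")
          finally:
              semaphore.release()  # mirrors semaphore.Release() in the C# finally

      def run_jobs(job_ids):
          threads = []
          for job_id in job_ids:
              if not semaphore.acquire(timeout=THREAD_TIMEOUT):
                  print(f"semaphore timeout; starting job {job_id} anyway")
              t = threading.Thread(target=process, args=(job_id,), name=str(job_id))
              threads.append(t)
              t.start()
          for t in threads:
              t.join(THREAD_TIMEOUT)
              if t.is_alive():
                  print(f"job {t.name} did not finish in time")

      if __name__ == "__main__":
          run_jobs(range(10))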

    Read the article

  • Devise / Rails 4 Windows mobile authentication failure

    - by Nic Willemse
    Im using devise with a rails 4 application. Authentication works fine on most devices, including some old feature phones. I am however running into problems with the Nokia Lumia. Please see log snippet below. By the looks of things this appears to be a rails issue rather than a devise problem. Please Help! 014-05-30T09:47:38.668478+00:00 app[web.1]: Started POST "/users/sign_in" for 197.111.223.249 at 2014-05-30 09:47:38 +0000 2014-05-30T09:47:38.668505+00:00 app[web.1]: Started POST "/users/sign_in" for 197.111.223.249 at 2014-05-30 09:47:38 +0000 2014-05-30T09:47:38.672961+00:00 app[web.1]: Processing by Devise::SessionsController#create as HTML 2014-05-30T09:47:38.672968+00:00 app[web.1]: Processing by Devise::SessionsController#create as HTML 2014-05-30T09:47:38.674163+00:00 app[web.1]: Can't verify CSRF token authenticity 2014-05-30T09:47:38.673021+00:00 app[web.1]: Parameters: {"utf8"="?", "authenticity_token"="Ckyw9vAfxbgksugLMainfWoG2jRdq7GB5xBBGxqYhCs=", "user"={"email"="", "password"="[FILTERED]", "remember_me"="0"}, "commit"="Sign in"} 2014-05-30T09:47:38.673027+00:00 app[web.1]: Parameters: {"utf8"="?", "authenticity_token"="Ckyw9vAfxbgksugLMainfWoG2jRdq7GB5xBBGxqYhCs=", "user"={"email"="", "password"="[FILTERED]", "remember_me"="0"}, "commit"="Sign in"} 2014-05-30T09:47:38.674170+00:00 app[web.1]: Can't verify CSRF token authenticity 2014-05-30T09:47:38.677792+00:00 app[web.1]: Completed 422 Unprocessable Entity in 5ms 2014-05-30T09:47:38.677799+00:00 app[web.1]: Completed 422 Unprocessable Entity in 5ms 2014-05-30T09:47:38.683294+00:00 app[web.1]: ActionController::InvalidAuthenticityToken (ActionController::InvalidAuthenticityToken): 2014-05-30T09:47:38.683299+00:00 app[web.1]: vendor/bundle/ruby/2.0.0/gems/actionpack-4.0.1/lib/action_controller/metal/request_forgery_protection.rb:170:in handle_unverified_request' 2014-05-30T09:47:38.683289+00:00 app[web.1]: 2014-05-30T09:47:38.683298+00:00 app[web.1]: vendor/bundle/ruby/2.0.0/gems/actionpack-4.0.1/lib/action_controller/metal/request_forgery_protection.rb:163:inhandle_unverified_request' 2014-05-30T09:47:38.683303+00:00 app[web.1]: vendor/bundle/ruby/2.0.0/gems/actionpack-4.0.1/lib/action_controller/metal/request_forgery_protection.rb:177:in verify_authenticity_token' 2014-05-30T09:47:38.683305+00:00 app[web.1]: vendor/bundle/ruby/2.0.0/gems/activesupport-4.0.1/lib/active_support/callbacks.rb:417:in_run__3672081613755604432__process_action__callbacks' Form : <%= form_for(resource, :as => resource_name, :url => session_path(resource_name), :html => {:class => "form-signin"}) do |f| %> <h2 class="form-signin-heading">Sign in</h2> <%= devise_error_messages! %> <div><%= f.label :email %><br /> <%= f.email_field :email, :autofocus => true, :class=> "form-control" %></div> <div><%= f.label :password %><br /> <%= f.password_field :password , :class=> "form-control"%></div> <% if devise_mapping.rememberable? -%> <div><%= f.check_box :remember_me, :class=> "form-control"%> <%= f.label :remember_me %></div> <% end -%> <div><%= f.submit "Sign in" ,:class => "btn btn-lg btn-primary btn-block"%></div> <input name="authenticity_token" type="hidden" value="<%= form_authenticity_token %>"/> <%= render "devise/shared/links" %> <% end %>

    Read the article

  • OpenSWAN KLIPS not working

    - by bonzi
    I am trying to set up IPsec between 2 VMs launched by OpenNebula. I'm using OpenSWAN for that. This is the ipsec.conf file: config setup oe=off interfaces=%defaultroute protostack=klips conn host-to-host left=10.141.0.135 # Local IP address connaddrfamily=ipv4 leftrsasigkey=key right=10.141.0.132 # Remote IP address rightrsasigkey=key ike=aes128 # IKE algorithms (AES cipher) esp=aes128 # ESP algorithms (AES cipher) auto=add pfs=yes forceencaps=yes type=tunnel I'm able to establish the connection with netkey, but klips doesn't work. ipsec barf shows #71: ERROR: asynchronous network error report on eth0 (sport=500) for message to 10.141.0.132 port 500, complainant 10.141.0.135: No route to host [errno 113, origin ICMP type 3 code 1 (not authenticated)] Tcpdump shows 22:50:20.592685 IP 10.141.0.132.isakmp > 10.141.0.135.isakmp: isakmp: phase 1 I ident 22:50:25.602182 ARP, Request who-has 10.141.0.135 tell 10.141.0.132, length 46 22:50:26.602082 ARP, Request who-has 10.141.0.135 tell 10.141.0.132, length 46 22:50:27.601985 ARP, Request who-has 10.141.0.135 tell 10.141.0.132, length 46 ipsec eroute shows 0 10.141.0.135/32 -> 10.141.0.132/32 => %trap What could be the problem?

    Read the article

  • How can I create multiple identical AWS EC2 server instances with large amounts of persistent data?

    - by mojones
    I have a CPU-intensive data-processing application that I want to run across many (~100,000) input files. The application needs a large (~20GB) data file in order to run. What I would like to do is: (1) create an EC2 machine image that has my application and associated data files installed, (2) boot up a large number (e.g. 100) of instances of this image, and (3) split my input files up into 100 batches and send one batch to be processed on each instance. I am having trouble figuring out the best way to ensure that each instance has access to the large data file. The data file is too big to fit on the root filesystem of an AMI. I could use Block Storage, but a given Block Storage volume can only be attached to a single instance, so I would need 100 clones. Is there some way to create a custom image that has more space on the root filesystem so that I can include my large data file? Or is there a better way to tackle this problem?
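    The batching step in that plan is the only part that is pure code, and it is small; a sketch (file names and batch count are placeholders) of splitting the input files into 100 roughly equal manifests, one to hand to each instance:

      # Sketch of the "split my input files into 100 batches" step described
      # above. The file names and batch count are placeholders.
      def split_into_batches(items, batch_count):
          """Distribute items across batch_count lists of near-equal size."""
          batches = [[] for _ in range(batch_count)]
          for i, item in enumerate(items):
              batches[i % batch_count].append(item)
          return batches

      if __name__ == "__main__":
          input_files = [f"input_{n:06d}.dat" for n in range(100_000)]  # hypothetical
          for idx, batch in enumerate(split_into_batches(input_files, 100)):
              # Each manifest would be shipped to (or fetched by) one instance.
              with open(f"batch_{idx:03d}.manifest", "w") as manifest:
                  manifest.write("\n".join(batch) + "\n")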

    Read the article

  • getting a 404/403 error for payment gateway

    - by Obay Ouano
    We are setting up an online payment facility using a payment gateway. After the payment gateway finishes processing the credit card details for a payment, the user is redirected to a "403 Forbidden" page. The logs show: [MY_IP_ADDRESS_HERE] - - [SOME_DATE_HERE] "GET /POSTBACK_URL.php?txnid=1338434567&result=failure&reason=The+remote+server+returned+an+error%3a+(404)+Not+Found.&digest=7a115270c56df5945c43ad86e56b2e930a3cfd50 HTTP/1.1" 404 - "PAYMENT_GATEWAY_URL_HERE" "BROWSER_DETAILS_HERE" It means that when the PAYMENT_GATEWAY_URL attempts to open our POSTBACK_URL, it gets a 404 error, is that correct? But why does the page say "403 Forbidden"? Anyway, we tried to copy-paste that same URL into the browser window, and the page is opened successfully, with our programmed error notification message. So, why couldn't it be opened when the payment gateway tried to redirect to it, but we could? Is this some sort of permissions issue? If so, the postback URL's file permissions are already 755. What am I missing?

    Read the article

  • Cannot connect via SSH to a remote PostgreSQL database

    - by tartox
    I am trying to connect via the pgAdmin3 GUI to a PostgreSQL database on a remote server myHost on port 5432. Server side: I have a Unix user myUser that matches a PostgreSQL role. pg_hba.conf is: local all all trust host all all 127.0.0.1/32 trust Client side: I open an ssh tunnel: ssh -L 3333:myHost:5432 myUser@myHost I connect to the server via pgAdmin3 (or via psql -h localhost -p 3333). I get the following error message: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. I have tried to access a specific database with the superuser role using psql -h localhost -p 3333 --dbname=myDB --user=mySuperUser with no more success. What did I forget in the setup? Thank you

    Read the article

  • S#arp architecture mapping many-to-many and ADO.NET Data Services: A single resource was expected f

    - by Leg10n
    Hi, I'm developing an application that reads data from a SQL server database (migrated from a legacy DB) with nHibernate and s#arp architecture through ADO.NET Data services. I'm trying to map a many-to-many relationship. I have a Error class: public class Error { public virtual int ERROR_ID { get; set; } public virtual string ERROR_CODE { get; set; } public virtual string DESCRIPTION { get; set; } public virtual IList<ErrorGroup> GROUPS { get; protected set; } } And then I have the error group class: public class ErrorGroup { public virtual int ERROR_GROUP_ID {get; set;} public virtual string ERROR_GROUP_NAME { get; set; } public virtual string DESCRIPTION { get; set; } public virtual IList<Error> ERRORS { get; protected set; } } And the overrides: public class ErrorGroupOverride : IAutoMappingOverride<ErrorGroup> { public void Override(AutoMapping<ErrorGroup> mapping) { mapping.Table("ERROR_GROUP"); mapping.Id(x => x.ERROR_GROUP_ID, "ERROR_GROUP_ID"); mapping.IgnoreProperty(x => x.Id); mapping.HasManyToMany<Error>(x => x.Error) .Table("ERROR_GROUP_LINK") .ParentKeyColumn("ERROR_GROUP_ID") .ChildKeyColumn("ERROR_ID").Inverse().AsBag(); } } public class ErrorOverride : IAutoMappingOverride<Error> { public void Override(AutoMapping<Error> mapping) { mapping.Table("ERROR"); mapping.Id(x => x.ERROR_ID, "ERROR_ID"); mapping.IgnoreProperty(x => x.Id); mapping.HasManyToMany<ErrorGroup>(x => x.GROUPS) .Table("ERROR_GROUP_LINK") .ParentKeyColumn("ERROR_ID") .ChildKeyColumn("ERROR_GROUP_ID").AsBag(); } } When I view the Data service in the browser like: http://localhost:1905/DataService.svc/Errors it shows the list of errors with no problems, and using it like http://localhost:1905/DataService.svc/Errors(123) works too. The Problem When I want to see the Errors in a group or the groups form an error, like: "http://localhost:1905/DataService.svc/Errors(123)?$expand=GROUPS" I get the XML Document, but the browser says: The XML page cannot be displayed Cannot view XML input using XSL style sheet. Please correct the error and then click the Refresh button, or try again later. -------------------------------------------------------------------------------- Only one top level element is allowed in an XML document. Error processing resource 'http://localhost:1905/DataServic... <error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"> -^ I view the sourcecode, and I get the data. However it comes with an exception: <error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"> <code></code> <message xml:lang="en-US">An error occurred while processing this request.</message> <innererror xmlns="xmlns"> <message>A single resource was expected for the result, but multiple resources were found.</message> <type>System.InvalidOperationException</type> <stacktrace> at System.Data.Services.Serializers.Serializer.WriteRequest(IEnumerator queryResults, Boolean hasMoved)&#xD; at System.Data.Services.ResponseBodyWriter.Write(Stream stream)</stacktrace> </innererror> </error> A I missing something??? Where does this error come from?

    Read the article

  • Service haproxy error

    - by user128296
    I want to configure HAProxy for outgoing mail load balancing. My configuration file /etc/haproxy.cfg is: global maxconn 4096 # Total Max Connections. This is dependent on ulimit daemon nbproc 4 # Number of processing cores. Dual Dual-core Opteron is 4 cores for example. defaults mode tcp listen smtp_proxy 199.83.95.71:25 mode tcp option tcplog balance roundrobin # Load Balancing algorithm ## Define your servers to balance server r23.lbsmtp.org 74.117.x.x:25 weight 1 maxconn 512 check server r15.lbsmtp.org 199.71.x.x:25 weight 1 maxconn 512 check And when I start service haproxy I get this error: Starting HAproxy: [ALERT] 244/172148 (7354) : cannot bind socket for proxy smtp_proxy. Aborting. Please tell me where I am making a mistake. Help will be appreciated.

    Read the article

  • 32 vs 64-bit software for same machine?

    - by GorillaSandwich
    What is the difference between 32 and 64-bit software? My understanding is that 64-bit can use more RAM, if it's available, because it has a larger address space for it. Is this correct? And, specifically: If I have a 64-bit operating system with lots of RAM, and I install, say, the 32-bit version of MySQL instead of the 64-bit version, will it be unable to use all the available RAM and therefore run slower than the 64-bit version might on the same machine (assuming RAM becomes the bottleneck before processing speed or disk access speed or whatever)? If I have a 32-bit operating system and I install a 64-bit piece of software on it, will it (probably) fail to run?
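    The larger virtual address space is the crux of the question, and it is easy to check which flavour a given process actually is; a small sketch in Python that reports the pointer width of the running interpreter:

      # A native pointer is 4 bytes in a 32-bit process and 8 bytes in a
      # 64-bit one, which bounds the address space the process can use no
      # matter how much RAM is installed.
      import struct
      import sys

      pointer_bits = struct.calcsize("P") * 8
      print(f"This process is {pointer_bits}-bit")
      print(f"sys.maxsize = {sys.maxsize:,}")  # 2**31 - 1 vs 2**63 - 1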

    Read the article

  • How to organize deployment process in Chef-controlled environment?

    - by Alex
    I have a web Linux-based infrastructure which consists of 15 virtual machines and over 50 various services. It is fully controlled by Chef. Most of the services are developed internally. Basically the current deployment process is triggered by a shell script. A build system (a mix of Python and shell scripts) packages the services as .deb files and puts these packages into a repo. It runs apt-get update on all 15 nodes then because the standard Chef apt cookbook only runs apt-get once per day and we definitely do not want to run apt-get update unconditionally on each chef-client wake. The build system restarts chef-client daemons on all 15 nodes finally (we need this step because of pull Chef nature). The current process has a number of drawbacks we want to address. First off, it is asynchronous because the deployment script does not check chef-client logs after restart so we don't even know if the deployment was successful. It does not even wait for Chef clients to complete the cycle. Second, we definitely do not want to force chef-client restarts on all nodes because we usually deploy only a small number of packages. And third, I am not quite sure using chef-client for deployment is legitimate, probably we are just doing it wrong from the start. Please share your thoughts/experience.
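    For context, the trigger half of that process is simple to sketch; a hypothetical Python version of the "apt-get update and restart chef-client on all 15 nodes" step that also keeps each node's exit status, the synchronous feedback the current script lacks (host names, sudo rights and passwordless ssh are assumptions):

      # Hypothetical sketch of the deployment trigger described above. It runs
      # the update/restart over ssh on every node and records exit codes so the
      # deploy is no longer fire-and-forget. Hosts and commands are placeholders.
      import subprocess

      NODES = [f"node{i:02d}.example.internal" for i in range(1, 16)]
      COMMAND = "sudo apt-get update -qq && sudo service chef-client restart"

      def run_on(host, command):
          result = subprocess.run(["ssh", host, command],
                                  capture_output=True, text=True, timeout=600)
          return result.returncode, result.stderr

      def deploy():
          failures = []
          for host in NODES:
              code, err = run_on(host, COMMAND)
              if code != 0:
                  failures.append((host, code, err.strip()))
          return failures

      if __name__ == "__main__":
          failed = deploy()
          for host, code, err in failed:
              print(f"{host}: exit {code}: {err}")
          raise SystemExit(1 if failed else 0)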

    Read the article

  • Simple vLAN setup

    - by Logan Bissonnette
    I have a basic lab environment set up to try and get 2 vLANs working in hyper-v. I have the following equipment 1 hyper-v server 1 Desktop PC 1 Managed Switch (d-link DES-3052P) 1 cheap router (DI-604) My end goal is to have 1 VM and the desktop on one vLAN with internet, and 1 VM on a separate vLAN with internet access. I am having troubles getting an internet connection to both vLANs. The switch does not have the ability to have asynchronous vLANs. This is my switch configuration Port 1 - Trunk Port - Connected to router Port 2 - Trunk Port - Connected to hyper-v Server Port 3 - Access Port- Connected to Desktop Within hyper-v I have 1 switch and 2 VMs. When the VMs are set up to use vlan ID 1, everything works fine. As soon as a VM is set up to use vlan ID 2, they lose all network connection and cannot communicate with the router anymore. I believe this is because the router is not vLAN aware. Can anyone help me with what settings need to be set up on my switch? I believe I want an egress rule so traffic leaving towards the router is untagged, is that right? If not, any ideas or hints as to what needs to be set up?

    Read the article

  • Filtering Client IP from Access Log for Urchin

    - by Ram Prasad
    I have some Apache logs to process, and since the webserver is behind two levels of reverse proxies, I am getting two IPs in the X-Forwarded-For header, for example: 208.34.234.55, 127.0.0.1 - - [29/Oct/2009:21:38:13 -0500] "GET /monkey.html HTTP/1.0" 200 20845 0 0 "http://www.monkey.com/" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.15) Gecko/2009101601 Firefox/3.0.15 (.NET CLR 3.5.30729)" Now, how do I filter this in Urchin (or remove this in the Apache logging) so that 127.0.0.1 is removed from processing? Currently Urchin is not able to recognize the multiple IP addresses, so it does not log the remote IP.
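    If post-processing before Urchin sees the log turns out to be the easiest route, the transformation is small: keep only the first (client) address of the comma-separated pair. A rough sketch in Python, used as a filter; the two-address layout is taken from the example line above:

      # Rough sketch: rewrite each log line so only the first IP of the
      # "client, proxy" pair remains at the front, e.g.
      #   "208.34.234.55, 127.0.0.1 - - [...]"  ->  "208.34.234.55 - - [...]"
      # Usage: python strip_proxy_ip.py < access.log > cleaned.log
      import re
      import sys

      PAIR = re.compile(r"^(\S+),\s*\S+")

      for raw in sys.stdin:
          sys.stdout.write(PAIR.sub(r"\1", raw, count=1))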

    Read the article

  • Unable to set NTFS permissions for ApplicationPoolIdentity on Windows 2008 SP2

    - by Kev
    On Windows 2008 R2 I am able to set NTFS permissions for an application pool's synthesised ApplicationPoolIdentity account thus: ICACLS d:\websites\site1\www /grant "IIS AppPool\site1":(CI)(OI)(M) The website's application pool is named site1 and is configured to run as ApplicationPoolIdentity. The site's authentication is also configured to authenticate as ApplicationPoolIdentity. I've done this a thousand times on Windows 2008 Standard Edition R2 with never a hitch. However if I try to do the same in Windows 2008 Standard Edition SP2 I get the error: IIS AppPool\site1: No mapping between account names and security IDs was done. Successfully processed 0 files; Failed processing 1 files I also notice that this fails if I try to set permissions for the application pool identity via the security GUI as well. I've seen this before and a reboot has cleared this issue but I'd like to know why this happens periodically. Googling around suggests other folks have hit this problem but there's never a satisfactory explanation. Why would this be?

    Read the article

  • MySQL dump, output each table row on a new line whilst using --extended-insert

    - by soopadoubled
    I'm having an issue, where for ease of use, I'd like to be able to format a command line MySQL dump so that each row of a given table is on a new line when using the --extended-insert option. Usually when using --extended-insert, every row of a given table is outputted on one line, and as far as I am aware there's no way to change this, other than post-processing the dump with perl or such like. The format I'm looking for is: -- -- Dumping data for table `ww_tbCountry` -- INSERT INTO `ww_tbCountry` (`iCountryId_PK`, `vCountryName`, `vShortName`, `iSortFlag`, `fTax`, `vCountryCode`, `vSageTaxCode`) VALUES (22, 'Albania', 'AL', 1, 0.00, '8', 'T9'), (33, 'Austria', 'AT', 1, 15.00, '40', 'T9'), (40, 'Belarus', 'BY', 1, 0.00, '112', 'T9'), (41, 'Belgium', 'BE', 1, 15.00, '56', 'T9'), (51, 'Bulgaria', 'BG', 1, 15.00, '100', 'T9' However, when I dump a database using Phpmyadmin, using --extended-insert, each row is dumped on a new line (as shown by the example above). I've gone through Phpmyadmin and can't find any documentation that would explain this. Is anyone able to shed any light on this? Thanks in advance, Ian
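    Since mysqldump itself has no switch for this, the usual answer is exactly the post-processing mentioned above; a cautious sketch in Python that puts each row of an extended INSERT on its own line (note the caveat in the comments):

      # Sketch of post-processing a --extended-insert dump so each row of the
      # VALUES list sits on its own line. Caveat: this is a plain text
      # substitution, so a string column whose data happens to contain "),("
      # would be split as well; dumps with such data need an SQL-aware parser.
      # Usage: mysqldump ... | python reflow_dump.py > dump.sql
      import sys

      for line in sys.stdin:
          if line.startswith("INSERT INTO") and "),(" in line:
              line = line.replace("),(", "),\n(")
          sys.stdout.write(line)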

    Read the article

  • Techniques to Monitor cron tasks?

    - by Tristan Juricek
    Are there good techniques for monitoring cron tasks over a cluster? We're starting to use cron to launch tasks at daily intervals. A few ideas for checking out information: Add special application handling that logs information into some "network aware" place, like a DB Build up a logfile system that transfers the cron log periodically to a central point for processing/querying (along with other possible log files) I'm wondering if people have had success with doing things separately for cron versus other things, or, if the tasks were integrated into a different approach completely. I'm leaning towards #2, but I'd like to know what more experienced folk might try out.
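    Idea #1 amounts to wrapping every cron entry in a small reporter; a sketch of such a wrapper in Python (the reporting target, a shared log file here, is a placeholder for whatever network-aware store is chosen):

      # Sketch of the wrapper idea above: cron invokes this script with the
      # real job as arguments, and the host, duration and exit code are
      # appended to one central place. The destination is a placeholder.
      import socket
      import subprocess
      import sys
      import time

      CENTRAL_LOG = "/var/log/cron-report.log"  # e.g. an NFS path, or swap for a DB/HTTP call

      def main(argv):
          started = time.time()
          result = subprocess.run(argv)
          record = "{host} job={job!r} duration={dur:.1f}s exit={code}\n".format(
              host=socket.gethostname(),
              job=" ".join(argv),
              dur=time.time() - started,
              code=result.returncode,
          )
          with open(CENTRAL_LOG, "a") as log:
              log.write(record)
          return result.returncode

      if __name__ == "__main__":
          sys.exit(main(sys.argv[1:]))

    A crontab entry would then call the wrapper with the real command as its arguments, so the job's own exit code is still what cron sees.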

    Read the article

  • Supporting Piping (A Useful Hello World)

    - by blastthisinferno
    I am trying to write a collection of simple C++ programs that follow the basic Unix philosophy by: Make each program do one thing well. Expect the output of every program to become the input to another, as yet unknown, program. I'm having an issue trying to get the output of one to be the input of the other, and getting the output of one be the input of a separate instance of itself. Very briefly, I have a program add which takes arguments and spits out the summation. I want to be able to pipe the output to another add instance. ./add 1 2 | ./add 3 4 That should yield 6 but currently yields 10. I've encountered two problems: The cin waits for user input from the console. I don't want this, and haven't been able to find a simple example showing a the use of standard input stream without querying the user in the console. If someone knows of an example please let me know. I can't figure out how to use standard input while supporting piping. Currently, it appears it does not work. If I issue the command ./add 1 2 | ./add 3 4 it results in 7. The relevant code is below: add.cpp snippet // ... COMMAND LINE PROCESSING ... std::vector<double> numbers = multi.getValue(); // using TCLAP for command line parsing if (numbers.size() > 0) { double sum = numbers[0]; double arg; for (int i=1; i < numbers.size(); i++) { arg = numbers[i]; sum += arg; } std::cout << sum << std::endl; } else { double input; // right now this is test code while I try and get standard input streaming working as expected while (std::cin) { std::cin >> input; std::cout << input << std::endl; } } // ... MORE IRRELEVANT CODE ... So, I guess my question(s) is does anyone see what is incorrect with this code in order to support piping standard input? Are there some well known (or hidden) resources that explain clearly how to implement an example application supporting the basic Unix philosophy? @Chris Lutz I've changed the code to what's below. The problem where cin still waits for user input on the console, and doesn't just take from the standard input passed from the pipe. Am I missing something trivial for handling this? I haven't tried Greg Hewgill's answer yet, but don't see how that would help since the issue is still with cin. // ... COMMAND LINE PROCESSING ... std::vector<double> numbers = multi.getValue(); // using TCLAP for command line parsing double sum = numbers[0]; double arg; for (int i=1; i < numbers.size(); i++) { arg = numbers[i]; sum += arg; } // right now this is test code while I try and get standard input streaming working as expected while (std::cin) { std::cin >> arg; std::cout << arg << std::endl; } std::cout << sum << std::endl; // ... MORE IRRELEVANT CODE ...
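    The missing piece is usually just "read standard input only when something is actually piped in", which is a tty check rather than anything about the stream itself. A sketch of that shape, re-expressed in Python for brevity (sys.stdin.isatty() plays the role an isatty(fileno(stdin)) test would in the C++ version): it sums its command-line arguments plus any numbers arriving on the pipe, and never blocks waiting for the keyboard when run interactively.

      #!/usr/bin/env python3
      # Sketch of a pipe-friendly `add`: sum the command-line arguments, and
      # also consume numbers from stdin, but only when stdin is not a terminal.
      import sys

      def main():
          total = sum(float(a) for a in sys.argv[1:])
          if not sys.stdin.isatty():            # something was piped in
              for token in sys.stdin.read().split():
                  total += float(token)
          print(total)

      if __name__ == "__main__":
          main()

    With that shape, ./add 1 2 | ./add 3 4 prints 10: the 3 produced by the first stage plus the second stage's own 3 and 4.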

    Read the article

  • Doesn't VirtualBox 4.0 support drag-drop file copy yet?

    - by Benjamin
    Version 4.0.0 will be new major release. The following major new features were added: -New settings/disk file layout for VM portability; see the manual for more information. -Open Virtualization Format Archive (OVA) support; see the manual for more information. -VMM: support more than 1.5/2 GB guest RAM on 32-bit hosts -Language bindings: uniform Java bindings for both local (COM/XPCOM) and remote (SOAP) -invocation APIs -Chipset: added support for the Intel ICH9 chipset with 3 PCI buses, PCI express and -Message Signaled Interrupts (MSI) -Audio: Intel HD Audio is now available as guest hardware, for better support with modern -guest operating systems (e.g. 64-bit Windows; bug #2785). -GUI: redesigned user interface with guest window preview -GUI: new display mode with downscaled guest display -Resource control: added support for limiting a VM's CPU time and IO bandwidth. -Storage: support asynchronous I/O for iSCSI, VMDK, VHD and Parallels images -Storage: support for resizing VDI and VHD images -Windows Additions: support for automatically updating the Guest Additions (requires -installed Windows Guest Additions 4.0 or later) -Guest Additions: support for copying files into the guest file system What does the last line mean? I thought this is a drag-drop file copy feature like VMWare. I tried that. But I couldn't copy by drag-drop, ctrl-c ctrl-v either. Edit: I mean VBox 4.0 beta, not 3.x The release note is here. Download link is here.

    Read the article

  • How do I analyze an Apache Bench result?

    - by Alan Hoffmeister
    I need some help with analyzing a log from Apache Bench: Benchmarking texteli.com (be patient) Completed 100 requests Completed 200 requests Completed 300 requests Completed 400 requests Completed 500 requests Completed 600 requests Completed 700 requests Completed 800 requests Completed 900 requests Completed 1000 requests Finished 1000 requests Server Software: Server Hostname: texteli.com Server Port: 80 Document Path: /4f84b59c557eb79321000dfa Document Length: 13400 bytes Concurrency Level: 200 Time taken for tests: 37.030 seconds Complete requests: 1000 Failed requests: 0 Write errors: 0 Total transferred: 13524000 bytes HTML transferred: 13400000 bytes Requests per second: 27.01 [#/sec] (mean) Time per request: 7406.024 [ms] (mean) Time per request: 37.030 [ms] (mean, across all concurrent requests) Transfer rate: 356.66 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 27 37 19.5 34 319 Processing: 80 6273 1673.7 6907 8987 Waiting: 47 3436 2085.2 3345 8856 Total: 115 6310 1675.8 6940 9022 Percentage of the requests served within a certain time (ms) 50% 6940 66% 6968 75% 6988 80% 7007 90% 7025 95% 7078 98% 8410 99% 8876 100% 9022 (longest request) What can these results tell me? Isn't 27 rps too slow?
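    Before judging the 27 rps figure, it helps to see how ab's headline numbers relate to one another; a small sketch recomputing them from the values printed in the report above:

      # Recomputing the relationships between ab's headline figures, using the
      # numbers from the report above.
      requests = 1000
      concurrency = 200
      total_time = 37.030          # seconds, "Time taken for tests"
      transferred = 13_524_000     # bytes, "Total transferred"

      print(f"Requests per second: {requests / total_time:.2f}")                                    # ~27.01
      print(f"Time per request (per client): {concurrency * total_time / requests * 1000:.0f} ms")  # ~7406
      print(f"Time per request (across all): {total_time / requests * 1000:.2f} ms")                # ~37.03
      print(f"Transfer rate: {transferred / total_time / 1024:.2f} KB/s")                           # ~356.66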

    Read the article

  • Improving heavy work in a loop with multithreading

    - by xjaphx
    I have a little problem with my data processing. public void ParseDetails() { for (int i = 0; i < mListAppInfo.Count; ++i) { ParseOneDetail(i); } } For 300 records, it usually takes around 13-15 minutes. I've tried to improve by using Parallel.For() but it always stop at some point. public void ParseDetails() { Parallel.For(0, mListAppInfo.Count, i => ParseOneDetail(i)); } In method ParseOneDetail(int index), I set an output log for tracking the record id which is under processing. Always hang at some point, I don't know why.. ParseOneDetail(): 89 ... ParseOneDetail(): 90 ... ParseOneDetail(): 243 ... ParseOneDetail(): 92 ... ParseOneDetail(): 244 ... ParseOneDetail(): 93 ... ParseOneDetail(): 245 ... ParseOneDetail(): 247 ... ParseOneDetail(): 94 ... ParseOneDetail(): 248 ... ParseOneDetail(): 95 ... ParseOneDetail(): 99 ... ParseOneDetail(): 249 ... ParseOneDetail(): 100 ... _ <hang at this point> Appreciate your help and suggestions to improve this. Thank you! Edit 1: update for method: private void ParseOneDetail(int index) { Console.WriteLine("ParseOneDetail(): " + index + " ... "); ApplicationInfo appInfo = mListAppInfo[index]; var htmlWeb = new HtmlWeb(); var document = htmlWeb.Load(appInfo.AppAnnieURL); // get first one only HtmlNode nodeStoreURL = document.DocumentNode.SelectSingleNode(Constants.XPATH_FIRST); appInfo.StoreURL = nodeStoreURL.Attributes[Constants.HREF].Value; } Edit 2: This is the error output after a while running as Enigmativity suggest, ParseOneDetail(): 234 ... ParseOneDetail(): 87 ... ParseOneDetail(): 235 ... ParseOneDetail(): 236 ... ParseOneDetail(): 88 ... ParseOneDetail(): 238 ... ParseOneDetail(): 89 ... ParseOneDetail(): 90 ... ParseOneDetail(): 239 ... ParseOneDetail(): 92 ... Unhandled Exception: System.AggregateException: One or more errors occurred. 
--- > System.Net.WebException: The operation has timed out at System.Net.HttpWebRequest.GetResponse() at HtmlAgilityPack.HtmlWeb.Get(Uri uri, String method, String path, HtmlDocum ent doc, IWebProxy proxy, ICredentials creds) in D:\Source\htmlagilitypack.new\T runk\HtmlAgilityPack\HtmlWeb.cs:line 1355 at HtmlAgilityPack.HtmlWeb.LoadUrl(Uri uri, String method, WebProxy proxy, Ne tworkCredential creds) in D:\Source\htmlagilitypack.new\Trunk\HtmlAgilityPack\Ht mlWeb.cs:line 1479 at HtmlAgilityPack.HtmlWeb.Load(String url, String method) in D:\Source\htmla gilitypack.new\Trunk\HtmlAgilityPack\HtmlWeb.cs:line 1103 at HtmlAgilityPack.HtmlWeb.Load(String url) in D:\Source\htmlagilitypack.new\ Trunk\HtmlAgilityPack\HtmlWeb.cs:line 1061 at SimpleChartParser.AppAnnieParser.ParseOneDetail(ApplicationInfo appInfo) i n c:\users\nhn60\documents\visual studio 2010\Projects\FunToolPack\SimpleChartPa rser\AppAnnieParser.cs:line 90 at SimpleChartParser.AppAnnieParser.<ParseDetails>b__0(ApplicationInfo ai) in c:\users\nhn60\documents\visual studio 2010\Projects\FunToolPack\SimpleChartPar ser\AppAnnieParser.cs:line 80 at System.Threading.Tasks.Parallel.<>c__DisplayClass21`2.<ForEachWorker>b__17 (Int32 i) at System.Threading.Tasks.Parallel.<>c__DisplayClassf`1.<ForWorker>b__c() at System.Threading.Tasks.Task.InnerInvoke() at System.Threading.Tasks.Task.InnerInvokeWithArg(Task childTask) at System.Threading.Tasks.Task.<>c__DisplayClass7.<ExecuteSelfReplicating>b__ 6(Object ) --- End of inner exception stack trace --- at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceled Exceptions) at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationTo ken cancellationToken) at System.Threading.Tasks.Parallel.ForWorker[TLocal](Int32 fromInclusive, Int 32 toExclusive, ParallelOptions parallelOptions, Action`1 body, Action`2 bodyWit hState, Func`4 bodyWithLocal, Func`1 localInit, Action`1 localFinally) at System.Threading.Tasks.Parallel.ForEachWorker[TSource,TLocal](TSource[] ar ray, ParallelOptions parallelOptions, Action`1 body, Action`2 bodyWithState, Act ion`3 bodyWithStateAndIndex, Func`4 bodyWithStateAndLocal, Func`5 bodyWithEveryt hing, Func`1 localInit, Action`1 localFinally) at System.Threading.Tasks.Parallel.ForEachWorker[TSource,TLocal](IEnumerable` 1 source, ParallelOptions parallelOptions, Action`1 body, Action`2 bodyWithState , Action`3 bodyWithStateAndIndex, Func`4 bodyWithStateAndLocal, Func`5 bodyWithE verything, Func`1 localInit, Action`1 localFinally) at System.Threading.Tasks.Parallel.ForEach[TSource](IEnumerable`1 source, Act ion`1 body) at SimpleChartParser.AppAnnieParser.ParseDetails() in c:\users\nhn60\document s\visual studio 2010\Projects\FunToolPack\SimpleChartParser\AppAnnieParser.cs:li ne 80 at SimpleChartParser.Program.Main(String[] args) in c:\users\nhn60\documents\ visual studio 2010\Projects\FunToolPack\SimpleChartParser\Program.cs:line 15

    Read the article

  • <authentication mode=“Windows”/>

    - by kareemsaad
    I got this error when I browsed a new web site that is a sub-site of a sub-site: http://sharp.elarabygroup.com/ha/deault.aspx (ha is my new web site). Configuration Error. Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately. Parser Error Message: It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. This error can be caused by a virtual directory not being configured as an application in IIS. Source Error: Line 61: ASP.NET to identify an incoming user. Line 62: -- Line 63: Line 64: section enables configuration Line 65: of what to do if/when an unhandled error occurs. I note that other web sites that are subs of subs don't have a web.config; is that related to web.config? The domain and all its subs share one web.config.

    Read the article

  • Is it possible to upgrade PHP to 5.3 on a Centos Kloxo installation?

    - by Malachi Soord
    I have a VPS running Centos with Kloxo on and I was wondering how I would upgrade the PHP to 5.3 - It's currently running 5.2.6. When I try and do a yum update I get the following errors: Resolving Dependencies --> Running transaction check --> Processing Dependency: libpq.so.4 for package: lxphp ---> Package postgresql-libs.i386 0:8.3.7-umask.7 set to be updated --> Finished Dependency Resolution lxphp-5.2.1-400.i386 from installed has depsolving problems --> Missing Dependency: libpq.so.4 is needed by package lxphp-5.2.1-400.i386 (installed) Error: Missing Dependency: libpq.so.4 is needed by package lxphp-5.2.1-400.i386 (installed) You could try using --skip-broken to work around the problem You could try running: package-cleanup --problems package-cleanup --dupes rpm -Va --nofiles --nodigest The program package-cleanup is found in the yum-utils package. Any help would be greatly appreciated.

    Read the article
