Search Results

Search found 22041 results on 882 pages for 'kill process'.


  • upstart config to start sync daemon as non-root user

    - by Rudiger Wolf
    I am planning to use inosync to sync data from a master server to several client servers. I have created a user called rsyncuser on both the master and the slaves, with the right access permissions and passwordless ssh access from the master to the slave servers. Inosync works when I run it from the command line as rsyncuser. Next I want it to start automatically when the server is turned on, and upstart seems to be the way to get this working, but I am unable to find the right upstart configuration. The problem seems to be around running "inosync -d -c /etc/inosync/inosync_rsyncuser.py" as a given user. As you can see from my conf file below, I have tried a number of options!

        description "start inosync to sync data to other CDN Servers as rsyncuser"
        console output
        #start on startup
        #stop on shutdown
        start on (net-device-up and local-filesystems)
        stop on runlevel [016]
        #start on runlevel [2345]
        #stop on runlevel [!2345]
        #kill timeout 30
        env RUN_AS_USER=rsyncuser
        expect fork
        script
            echo "Inosync upstart job seems to have started" > /tmp/upstart.log
            # exec sudo -u rsyncuser -c "ls -la" > /tmp/upstart.log 2>&1
            # LOGFILE=/var/log/logfile.`date +%Y-%m-%d`.log
            # exec su - $RUN_AS_USER -c "inosync -d -c /etc/inosync/inosync_rsyncuser.py" > $LOGFILE 2>&1
            # exec su -c "ls -la" > /tmp/upstart.log 2>&1
            # emit inosync_running
        end script
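
    One approach that may work here is to run the command through su with an explicit shell, so the log redirection also happens as rsyncuser. This is only a sketch: the log path is an assumption, and it assumes inosync stays in the foreground when started without -d (if you keep -d, the fork behaviour has to match an "expect fork" or "expect daemon" stanza, which is a common way for upstart jobs to hang).

        # /etc/init/inosync.conf -- minimal sketch, not tested against this setup
        description "inosync data sync to CDN servers as rsyncuser"

        start on (net-device-up and local-filesystems)
        stop on runlevel [016]
        respawn

        script
            # run in the foreground as rsyncuser; upstart tracks the PID itself
            exec su -s /bin/sh -c 'exec inosync -c /etc/inosync/inosync_rsyncuser.py >> /var/log/inosync.log 2>&1' rsyncuser
        end script

    On newer Upstart releases (1.4+) the "setuid rsyncuser" and "setgid rsyncuser" stanzas avoid the su wrapper entirely; whether they are available depends on the Ubuntu version.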

    Read the article

  • SSH session closing whilst virtualenv session stays open (I think)

    - by ing0
    I've been developing some sites using Flask recently (running on Debian within a virtualenv), and when I am testing I run the app on a port, let's say port 5000. So I start it like so:

        . env/bin/activate    <- go into the virtual environment
        python file.py        <- run the python script

    and I am given this message:

        Running on http://0.0.0.0:5000/

    This all works great and I can access my site on that port fine. However... my rubbish ISP always does this thing where it resets something around 1am every morning. I have no idea what this is; everything runs like normal, but I always get disconnected from any open SSH sessions. That leaves the app running, and all I can do is call:

        lsof -i

    which shows me the process, but if I kill it and then rerun the script, things get weird. The "Running on http://0.0.0.0:5000" message still shows, but I cannot connect to it any more. I've tried changing the port number, and it seems the only thing that works is trying again later on or on another day. I'm assuming that something on my server resets in between these times, and I would like to think it was maybe the virtualenv session timing out, but I cannot find out how to do this manually. Does anyone know?
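
    For what it's worth, a virtualenv is just a set of environment variables in the shell, so it has no session to time out; the more likely culprits are the Flask dev server dying with the SSH session, or a half-dead process still bound to the port. A sketch of one way to keep the dev server independent of the terminal (the paths and file names here are placeholders):

        # start_dev.sh -- sketch: run the Flask dev server detached from the SSH session
        cd /path/to/app                      # placeholder path
        . env/bin/activate
        nohup python file.py >> flask.log 2>&1 &
        echo $! > flask.pid                  # remember the PID

        # later, stop it cleanly before restarting:
        kill "$(cat flask.pid)"

    Running the same commands inside screen or tmux achieves the same thing and lets you reattach after a disconnect. Either way, checking "lsof -i :5000" before each restart makes sure no old process is still holding the port.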

    Read the article

  • Rails: show some examples of code from controllers, models and views

    - by Totty
    Hi, my controller example:

        class FriendsController < ApplicationController
          before_filter :authorize, :except => [:friends]

          ##############
          ##############
          ## REQUESTS ##
          ##############
          ##############

          ##################
          # GET MY FRIENDS #
          ##################
          # Get my friends.
          def friends
            @friends = @my_profile.friends.paginate({:page => params[:page], :per_page => 3})
            @profile = @my_profile
          end

          ###################
          # REMOVED FRIENDS #
          ###################
          # Get my deleted friends.
          def removed_friends
            @removed_friends = @my_profile.friends('removed_friends', params[:page])
          end

          ###################
          # PENDING FRIENDS #
          ###################
          # Friend requests made by other profiles to me.
          def pending_friends
            @pending_friends = @my_profile.friends('pending_friends', params[:page])
          end

          ############################
          # REJECTED PENDING FRIENDS #
          ############################
          # Rejected friend requests made by other profiles to me.
          def rejected_pending_friends
            @rejected_pending_friends = @my_profile.friends('rejected_pending_friends', params[:page])
          end

          #####################
          # REQUESTED FRIENDS #
          #####################
          # The friend requests I've sent to other profiles.
          def requested_friends
            @requested_friends = @my_profile.friends('requested_friends', params[:page])
          end

          #############################
          # DELETED REQUESTED FRIENDS #
          #############################
          # The requests I've sent to other profiles and then canceled.
          def deleted_requested_friends
            @deleted_requested_friends = @my_profile.friends('deleted_requested_friends', params[:page])
          end

          #############
          #############
          ## ACTIONS ##
          #############
          #############

          ##########################
          # ADD FRIENDSHIP REQUEST #
          ##########################
          # Add a friendship request.
          def add_friendship_request
            friendship = @my_profile.add_friendship_request(params[:profile_id])
            render :json => friendship
          end

          #############################
          # REMOVE FRIENDSHIP REQUEST #
          #############################
          # Removes a friendship request I've made.
          def remove_friendship_request
            friendship = @my_profile.remove_friendship_request(params[:profile_id])
            render :json => friendship
          end

          ######################
          # PROCESS FRIENDSHIP #
          ######################
          # Process friendship: accept or reject a friend.
          # This will make a new friend or a new rejected pending friend.
          def process_friendship
            friendship = @my_profile.process_friendship(params[:profile_id].to_i, params[:accepted].to_i)
            render :json => friendship
          end

          ###################
          # REMOVE A FRIEND #
          ###################
          # Remove a friend from my friends by id.
          def remove_friend
            friendship = @my_profile.remove_friend(params[:profile_id])
            render :json => friendship
          end
        end

    Read the article

  • Handling Digital ID/Signature in Outlook Add-in

    - by CoSteve
    I have a C# Outlook add-in (VS2005, Outlook 2003) that reads incoming emails and strips out the attachments and the email text body for future processing. Occasionally I'll get an email that contains a digital signature. The application will fail when I try to access the mailitem.Body property, throwing the following exception:

        System.Runtime.InteropServices.COMException (0xAB404001): The operation failed.
           at Microsoft.Office.Interop.Outlook._MailItem.get_Body()
           at MyLib.MyApp.OutlookAddin.MailProcessor.ProcessMailItem(MailItem mailItem)

    I'm pretty sure it is the digital signature causing the problem, because if I forward the email back to myself, it strips off the original sender's digital signature and the add-in processes the email without any problems. I'm not sure what to do. I need to process the email, so I can't just ignore it. Somehow getting the body of the original email without throwing an exception would be ideal. Or, if I can identify that there is a digital signature associated with the email, I could forward the email to myself, but that seems a little messy. Does anyone have any suggestions or fixes? Thanks for any help.
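
    A hedged starting point for the "identify that it is signed" route: signed or encrypted messages normally carry an S/MIME message class, which can be checked before touching Body. This is a sketch -- verify the MessageClass values against the actual messages you receive:

        // assumes: using Outlook = Microsoft.Office.Interop.Outlook;
        private static bool IsSecureMail(Outlook.MailItem item)
        {
            // clear-signed mail is usually "IPM.Note.SMIME.MultipartSigned",
            // opaque-signed/encrypted mail "IPM.Note.SMIME"
            string cls = item.MessageClass ?? string.Empty;
            return cls.StartsWith("IPM.Note.SMIME", StringComparison.OrdinalIgnoreCase);
        }

    Whether Body itself can then be read for such messages depends on the Outlook security settings and the signature type, so treating them specially (for example, forwarding or copying them as you describe) may still be the pragmatic path.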

    Read the article

  • Synchronizing an ERWin model with a Visual Studio 2008 GDR 2/2010 db project

    - by Grant Back
    I am looking for options to get our vast collection of DB objects, across many databases, into source control (TFS 2010). Once we succeed here, we will work toward generating our alter scripts for a particular DB change via TFS Build. The problem is that our data architecture group is responsible for maintaining the DB objects (excluding SPs), and they work within a model-centric process via ERWin. That is, they maintain the DBs through ERWin models and generate alters from them that are used to release changes. In order to achieve our goal of getting the DB objects (not just the ERWin models) into TFS, I believe the best option is to do this via Visual Studio DB projects. From what I can tell, there is very little urgency for CA to continue supporting an integration between ERWin and Visual Studio, which no longer works as of Visual Studio 2008 DB Ed. GDR. If I have been misled in this regard, please feel free to set me straight. One potential solution is to:

        1. Perform changes in the ERWin model.
        2. Take the alter script generated from ERWin and import it into the appropriate Visual Studio DB project, updating the objects in the DB project.
        3. Check the changed objects in the DB project into TFS.
        4. Have TFS Build generate the alter scripts that will be used to push the changes through our release process.

    My question is: is this solution viable, or are there any other options?

    Read the article

  • jquery Ajax sortable list stops being sortable when Ajax call updates the list html

    - by Trevor
    Hi, I am using jQuery 1.4.2 and trying to achieve the following:

        1. a function call that sends a value to a PHP page to add/remove an item
        2. the PHP page returns an HTML list of the items
        3. the list should still be sortable
        4. save (serialise the list) on click

    My full work in progress is located at http://www.chittak.co.uk/test4/index_nw3.php. I tried to delegate from the level above the ul, but I could not get this to work:

        $("#construnctionstage").delegate('ul li', 'click', function(){

    The initial list is sortable. When you click add/remove, the ajax function returns a new list with a number of items, BUT I am doing something wrong, as the alert message continues to work while the list is no longer sortable.

        $(document).ready(function(){
            $('ul').delegate('li', 'click', function(){
                alert('You clicked on an li element!');
                /*$("#test-list").sortable({
                    handle : '.handle',
                    update : function () {
                        var order = $('#test-list').sortable('serialize');
                        $("#info").load("process-sortable.php?"+order);
                    }
                });*/
            }).sortable({
                handle : '.handle',
                update : function () {
                    var order = $('#test-list').sortable('serialize');
                    $("#info").load("process-sortable.php?"+order);
                }
            });
        });

        <div id="construnctionstage">
            <ul id="test-list">
                <li id="listItem_1">
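
    The delegated click keeps working because it is bound on an ancestor, but sortable is a widget bound to the specific #test-list element, so it disappears when the ajax response replaces that element. One common pattern (a sketch -- initSortable() and items.php are names invented here for illustration) is to wrap the sortable setup in a function and call it again from the ajax callback:

        function initSortable() {
            $("#test-list").sortable({
                handle: '.handle',
                update: function () {
                    var order = $('#test-list').sortable('serialize');
                    $("#info").load("process-sortable.php?" + order);
                }
            });
        }

        $(document).ready(function () {
            initSortable();

            // delegated handler survives the DOM being replaced because it sits
            // on the stable ancestor, not on the li elements themselves
            $("#construnctionstage").delegate('li', 'click', function () {
                alert('You clicked on an li element!');
            });
        });

        // wherever the add/remove ajax call swaps in the new list markup:
        $("#construnctionstage").load("items.php?action=add", function () {
            initSortable();   // re-bind sortable to the freshly inserted #test-list
        });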

    Read the article

  • SerializationException Occurring Only in Release Mode

    - by Calvin Nguyen
    Hi, I am working on an ASP.NET web app using Visual Studio 2008 and a third-party library. Things are fine in my development environment. Things are also good when the web app is deployed in the Debug configuration. However, when it is deployed in Release mode, SerializationExceptions appear intermittently, breaking other functionality. In the Windows event log, the following error can be seen:

        An unhandled exception occurred and the process was terminated.
        Application ID: DefaultDomain
        Process ID: 3972
        Exception: System.Runtime.Serialization.SerializationException
        Message: Unable to find assembly 'MyThirdPartyLibrary, Version=1.234.5.67, Culture=neutral, PublicKeyToken=3d67ed1f87d44c89'.
        StackTrace:
           at System.Runtime.Serialization.Formatters.Binary.BinaryAssemblyInfo.GetAssembly()
           at System.Runtime.Serialization.Formatters.Binary.ObjectReader.GetType(BinaryAssemblyInfo assemblyInfo, String name)
           at System.Runtime.Serialization.Formatters.Binary.ObjectMap..ctor(String objectName, String[] memberNames, BinaryTypeEnum[] binaryTypeEnumA, Object[] typeInformationA, Int32[] memberAssemIds, ObjectReader objectReader, Int32 objectId, BinaryAssemblyInfo assemblyInfo, SizedArray assemIdToAssemblyTable)
           at System.Runtime.Serialization.Formatters.Binary.ObjectMap.Create(String name, String[] memberNames, BinaryTypeEnum[] binaryTypeEnumA, Object[] typeInformationA, Int32[] memberAssemIds, ObjectReader objectReader, Int32 objectId, BinaryAssemblyInfo assemblyInfo, SizedArray assemIdToAssemblyTable)
           at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.ReadObjectWithMapTyped(BinaryObjectWithMapTyped record)
           at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.ReadObjectWithMapTyped(BinaryHeaderEnum binaryHeaderEnum)
           at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.Run()
           at System.Runtime.Serialization.Formatters.Binary.ObjectReader.Deserialize(HeaderHandler handler, __BinaryParser serParser, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
           at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream, HeaderHandler handler, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
           at System.Runtime.Remoting.Channels.CrossAppDomainSerializer.DeserializeObject(MemoryStream stm)
           at System.AppDomain.Deserialize(Byte[] blob)
           at System.AppDomain.UnmarshalObject(Byte[] blob)
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    Using FUSLOGVW.exe (the Assembly Binding Log Viewer), I can see the problem is that IIS attempts to find MyThirdPartyLibrary in the directory C:\windows\system32\inetsrv. It refuses to look in the bin folder of the web app, where the DLL is actually located. Does anyone know what the problem is? Thanks, Calvin
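
    The "Application ID: DefaultDomain" line suggests the deserialization is happening in the default AppDomain (for example, an exception or remoting object crossing an AppDomain boundary), and the default domain only probes its own base directory -- here C:\windows\system32\inetsrv -- not the web app's bin folder. One hedged workaround, assuming MyThirdPartyLibrary is strong-named (it has a PublicKeyToken) and you are allowed to GAC it, is to install it machine-wide so every AppDomain can resolve it:

        rem sketch: install the third-party assembly into the GAC on the server
        rem (the path below is a placeholder)
        gacutil /i "C:\inetpub\wwwroot\MyApp\bin\MyThirdPartyLibrary.dll"

    The longer-term fix is to find what the library serializes across AppDomains only in Release builds and stop it, but the GAC install is the usual way to make the intermittent crash go away first.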

    Read the article

  • Server 2003 crashing intermittently, want to transfer function to other DC

    - by user1305332
    I have a Win2003R2 server that is intermittently crashing after some viruses were introduced. I'm sure all the viruses have been cleaned, thanks to Malwarebytes (we were using McAfee - useless). When it crashes you can't log in (locally or remotely) but can still access files remotely and ping it. After a while even file sharing stops and we have to kill the power to restart it (no BSOD). I need to either fix it (I tried to reinstall SP2, and I tried to reinstall Windows in repair mode, but the repair option was not available when I booted from the installation disks) or move its functionality to another DC (another 2003R2 server). The server that's crashing is old, with SCSI drives, while the new server uses SATA drives and is faster, so it seems like a good idea to just transfer the roles and ditch the old box. Finding replacement SCSI drives looks expensive if they ever fail. What would I need to do to transfer the roles? If I just move the 5 FSMO roles and copy over the file shares, would the new server have enough to run without the old server? I've never done something like this; I just want some tips. Thanks.

    Read the article

  • catching a deadlock in a simple odd-even sending

    - by user562264
    I'm trying to solve a simple problem with MPI; my implementation is MPICH2 and my code is in Fortran. I have used blocking send and receive. The idea is simple, but when I run it, it crashes! I have absolutely no idea what is wrong - can anyone comment on this issue, please? Here is a piece of the code:

        integer,parameter::IM=100,JM=100
        REAL,ALLOCATABLE ::T(:,:),TF(:,:)

        CALL MPI_COMM_RANK(MPI_COMM_WORLD,RNK,IERR)
        CALL MPI_COMM_SIZE(MPI_COMM_WORLD,SIZ,IERR)

        prv = rnk-1
        nxt = rnk+1
        LIM = INT(IM/SIZ)

        IF (rnk==0) THEN
            ALLOCATE(TF(IM,JM))
            prv = MPI_PROC_NULL
        ELSEIF(rnk==siz-1) THEN
            NXT = MPI_PROC_NULL
            LIM = LIM+MOD(IM,SIZ)
        END IF

        IF (MOD(RNK,2)==0) THEN
            CALL MPI_SEND(T(2,:),JM+2,MPI_REAL,PRV,10,MPI_COMM_WORLD,IERR)
            CALL MPI_RECV(T(1,:),JM+2,MPI_REAL,PRV,20,MPI_COMM_WORLD,STAT,IERR)
        ELSE
            CALL MPI_RECV(T(LIM+2,:),JM+2,MPI_REAL,NXT,10,MPI_COMM_WORLD,STAT,IERR)
            CALL MPI_SEND(T(LIM+1,:),JM+2,MPI_REAL,NXT,20,MPI_COMM_WORLD,IERR)
        END IF

    As I understand it, the even processes are not receiving anything, while the odd ones finish sending successfully. In some cases, when I added some prints to observe what is going on, I saw that the variable NXT changes during the sending procedure! For example, all the odd processes were sending messages to process 0, not to their next one!
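
    Two things worth checking, offered as guesses from the symptoms rather than a verified fix. First, "NXT changes during the sending procedure" is the classic signature of STAT being declared as a scalar instead of INTEGER STAT(MPI_STATUS_SIZE): MPI_RECV then writes the status past the variable and corrupts whatever sits next to it in memory (here, apparently NXT). Second, the counts (JM+2) do not obviously match the length of the array sections being sent (JM elements in T(2,:)), and mismatched counts with blocking calls can hang or crash. A sketch of the exchange using MPI_SENDRECV, which also removes the need to hand-order the sends and receives per parity (the array bounds are assumptions about how T is allocated on each rank):

        ! assumes: INTEGER STAT(MPI_STATUS_SIZE) and T allocated as T(1:LIM+2,JM) per rank
        CALL MPI_SENDRECV(T(2,:),     JM, MPI_REAL, PRV, 10, &
                          T(LIM+2,:), JM, MPI_REAL, NXT, 10, &
                          MPI_COMM_WORLD, STAT, IERR)
        CALL MPI_SENDRECV(T(LIM+1,:), JM, MPI_REAL, NXT, 20, &
                          T(1,:),     JM, MPI_REAL, PRV, 20, &
                          MPI_COMM_WORLD, STAT, IERR)

    MPI_SENDRECV also handles the MPI_PROC_NULL neighbours at the ends transparently, so ranks 0 and siz-1 need no special casing for the exchange itself.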

    Read the article

  • Server high memory usage at same time every day

    - by Sam Parmenter
    Right, we moved one of our main sites onto a new AWS box with plenty of grunt, as it would give us more control than we had before and future-proof ourselves. About a month ago we started running into issues with high memory usage at the same time every day. In the morning, an export is run to write data to a file, which is then FTPed to a local machine for processing. The issues were coinciding with the rough time of the export, but when we didn't run the export one day, the server still ran into the same problem. The export has since been run at other times of the day while monitoring memory usage, to see if it spikes. The conclusion is that the export is fine and barely touches memory at all - no noticeable change in memory usage. When the issue happens, its effect is to kill mysql and require us to restart the process. We think it might be a mysql memory issue, but it might just be that mysql is the first to feel it. Looking at the logs, there is no particular query run before the memory usage hits 90%. When it strikes, at about 9:20am, memory usage spikes from a near-constant 25% to 98% and very quickly kills mysql to save itself. It usually takes about 3-4 minutes to die. There are no cron jobs running at that time of day, and we haven't noticed a spike in traffic over the period of the issues. Any help would be massively appreciated! Thanks.

    Read the article

  • Can't get any speedup from parallelizing Quicksort using Pthreads

    - by Murat Ayfer
    I'm using Pthreads to create a new thread for each partition after the list is split into the right and left halves (less than and greater than the pivot). I do this recursively until I reach the maximum number of allowed threads. When I use printfs to follow what goes on in the program, I clearly see that each thread is doing its delegated work in parallel. However, using a single process is always the fastest. As soon as I try to use more threads, the time it takes to finish almost doubles, and it keeps increasing with the number of threads. I am allowed to use up to 16 processors on the server I am running it on. The algorithm goes like this:

        1. Split the array into right and left halves by comparing the elements to the pivot.
        2. Start a new thread for the right and the left halves, and wait until the threads join back.
        3. If there are more available threads, they can create more recursively.
        4. Each thread waits for its children to join.

    Everything makes sense to me, and sorting works perfectly well, but more threads make it slow down immensely. I tried setting a minimum number of elements per partition for a thread to be started (e.g. 50000). I tried an approach where, when a thread is done, it allows another thread to be started, which led to hundreds of threads starting and finishing throughout; I think the overhead was way too much. So I got rid of that, and if a thread was done executing, no new thread was created. I got a little more speedup but it was still a lot slower than a single process. The code I used is here: http://pastebin.com/UaGsjcq2. Does anybody have any clue as to what I could be doing wrong?
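
    One pattern that often hides in this setup: if every level spawns two child threads and then just waits, half of the live threads are always idle parents, and thread-creation cost is paid on every partition. A sketch of the usual shape of a fix - spawn only one child per split, let the current thread sort the other half, and fall back to sequential sorting below a size cutoff and depth limit (partition() and sequential_quicksort() stand in for the poster's own routines):

        #include <pthread.h>

        #define CUTOFF    50000   /* below this, threading overhead dominates */
        #define MAX_DEPTH 4       /* 2^4 = 16 concurrent workers, matching 16 CPUs */

        struct job { int *a; int lo; int hi; int depth; };

        int  partition(int *a, int lo, int hi);            /* assumed to exist */
        void sequential_quicksort(int *a, int lo, int hi); /* assumed to exist */

        static void *qsort_worker(void *arg)
        {
            struct job *j = arg;
            if (j->lo >= j->hi)
                return NULL;

            if (j->hi - j->lo < CUTOFF || j->depth >= MAX_DEPTH) {
                sequential_quicksort(j->a, j->lo, j->hi);  /* no more threads down here */
                return NULL;
            }

            int p = partition(j->a, j->lo, j->hi);
            struct job left  = { j->a, j->lo,  p - 1, j->depth + 1 };
            struct job right = { j->a, p + 1,  j->hi, j->depth + 1 };

            pthread_t t;
            pthread_create(&t, NULL, qsort_worker, &left); /* one new thread for one half... */
            qsort_worker(&right);                          /* ...this thread sorts the other */
            pthread_join(t, NULL);
            return NULL;
        }

    If the slowdown persists even with this shape, the next suspects are memory bandwidth and false sharing between partitions that share cache lines, which no amount of threading will fix for a plain array sort.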

    Read the article

  • start-stop-daemon can't find executable that's right in front of it

    - by Bart van Heukelom
        root@mountain-lion:/opt/smartfox# ls -lha
        total 180K
        drwxr-xr-x 8 root root 4.0K 2012-06-01 14:09 .
        drwxr-xr-x 4 root root 4.0K 2012-06-01 09:41 ..
        drwxr-xr-x 8 root root 4.0K 2009-05-17 21:57 lib
        lrwxrwxrwx 1 root root   22 2012-06-01 09:41 logs -> /var/opt/smartfox/logs
        -rwxr-xr-x 1 root root 1.4K 2012-06-01 14:28 run.sh
        root@mountain-lion:/opt/smartfox# cat run.sh
        #!/bin/bash
        java -cp "./:./sfsExtensions/:lib/activation.jar:lib/commons-beanutils.jar:lib/commons-collections-3.2.jar:lib/commons-dbcp-1.2.1.jar:lib/commons-lang-2.3.jar:lib/commons-logging-1.1.jar:lib/commons-pool-1.2.jar:lib/concurrent.jar:lib/ezmorph-1.0.3.jar:lib/h2.jar:lib/js.jar:lib/json-lib-2.1-jdk15.jar:lib/json.jar:lib/jsr173_1.0_api.jar:lib/jysfs.jar:lib/jython.jar:lib/nanoxml-2.2.1.jar:lib/wrapper.jar:lib/xbean.jar:lib/javamail/imap.jar:lib/javamail/mailapi.jar:lib/javamail/pop3.jar:lib/javamail/smtp.jar:lib/jetty/jetty.jar:lib/jetty/jetty-util.jar:lib/jetty/jstl.jar:lib/jetty/multipartrequest.jar:lib/jetty/servlet-api.jar:lib/jetty/standard.jar:lib/jsp-2.1/commons-el-1.0.jar:lib/jsp-2.1/core-3.1.0.jar:lib/jsp-2.1/jsp-2.1.jar:lib/jsp-2.1/jsp-api-2.1.jar:lib/jsp-2.1/jstl.jar:lib/jsp-2.1/standard.jar:lib/lsc.jar:lib/commons-io-1.4.jar" \
            it.gotoandplay.smartfoxserver.SmartFoxServer > logs/smartfox.out 2>&1 &
        JAVAPID=$!
        echo "Started Smartfox. JVM PID = $JAVAPID"
        trap "echo Stopping Smartfox.; kill $JAVAPID" INT TERM
        wait
        echo "Smartfox stopped."
        root@mountain-lion:/opt/smartfox# start-stop-daemon --start --make-pidfile --pidfile /var/opt/smartfox/smartfox.pid --exec ./run.sh
        start-stop-daemon: unable to start ./run.sh (No such file or directory)

    Why can't start-stop-daemon find the script?
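
    A sketch of the usual fix (option names are from dpkg's start-stop-daemon; double-check them against your version's man page): --exec wants the absolute path of the real executable and compares it against what ends up running, which does not play well with a relative path to a shell script. Using --startas with an absolute path, and letting start-stop-daemon background the script, tends to behave better:

        start-stop-daemon --start --background \
            --make-pidfile --pidfile /var/opt/smartfox/smartfox.pid \
            --chdir /opt/smartfox \
            --startas /opt/smartfox/run.sh

    Note that with --make-pidfile the pidfile then tracks the wrapper shell, not the java child it spawns, so the trap/kill logic inside run.sh still matters when stopping.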

    Read the article

  • How do I debug a crash when I run my garbage-collected app in Rosetta?

    - by Rob Keniger
    I have a Universal app which is targeting 10.5 and which uses garbage collection. I am building for ppc, i386 and x86_64. I don't have access to a physical PowerPC machine, so I am trying to use Rosetta to confirm that the PowerPC portion of the app works correctly. However, as soon as the app is launched in Rosetta it immediately crashes with the following crash log:

        Process:         FooApp [91567]
        Path:            /Users/rob/Development/src/FooApp/build/Release 64-bit/FooApp.app/Contents/MacOS/FooApp
        Identifier:      com.companyX.FooApp
        Version:         0.9 (build d540e05) (2)
        Code Type:       PPC (Translated)
        Parent Process:  launchd [708]
        Date/Time:       2010-04-09 18:32:23.962 +1000
        OS Version:      Mac OS X 10.6.3 (10D573)
        Report Version:  6

        Exception Type:  EXC_CRASH (SIGTRAP)
        Exception Codes: 0x0000000000000000, 0x0000000000000000
        Crashed Thread:  5

        ...snip non-relevant threads...

        Thread 5 Crashed:
        0   libSystem.B.dylib      0x8023656a __pthread_kill + 10
        1   libSystem.B.dylib      0x80235e17 pthread_kill + 95
        2   com.companyX.FooApp    0xb80bfb30 0xb8000000 + 785200
        3   com.companyX.FooApp    0xb80c0037 0xb8000000 + 786487
        4   com.companyX.FooApp    0xb80dd8e8 0xb8000000 + 907496
        5   com.companyX.FooApp    0xb8145397 spin_lock_wrapper + 1791
        6   com.companyX.FooApp    0xb801ceb7 0xb8000000 + 118455

    I have used the Apple docs on debugging translated apps, and the information on this page, to attach gdb to the app when it's running in Rosetta. The app immediately breaks into the debugger upon launch:

        Program received signal SIGTRAP, Trace/breakpoint trap.
        [Switching to thread 15107]
        0x9151fdd4 in auto_fatal ()
        (gdb) bt
        #0  0x9151fdd4 in auto_fatal ()
        #1  0x91536d84 in Auto::Thread::get_register_state ()
        #2  0x915372f8 in Auto::Thread::scan_other_thread ()
        #3  0x91529be4 in Auto::Zone::scan_registered_threads ()
        #4  0x91539114 in Auto::MemoryScanner::scan_thread_ranges ()
        #5  0x9153b000 in Auto::MemoryScanner::scan ()
        #6  0x9153049c in Auto::Zone::collect ()
        #7  0x915198f4 in auto_collect_internal ()
        #8  0x9151a094 in auto_collection_work ()
        #9  0x96687434 in _dispatch_call_block_and_release ()
        #10 0x9668912c in _dispatch_queue_drain ()
        #11 0x96689350 in _dispatch_queue_invoke ()
        #12 0x966895c0 in _dispatch_worker_thread2 ()
        #13 0x966896fc in _dispatch_worker_thread ()
        #14 0x965a97e8 in _pthread_body ()
        (gdb)

    I have no idea where to start with this. It looks like the garbage collector is failing very badly. Are garbage-collected PowerPC apps not supported in Rosetta? I can't see any mention of this limitation in the docs, if so. Does anyone have any ideas?

    Read the article

  • How to write a C program using the fork() system call that generates the Fibonacci sequence in the child process

    - by Ellen
    The problem I am having is that when, say, the user enters 7, the display shows "0 11 2 3 5 8 13 21 Child ends". I cannot seem to figure out how to fix the "11", or why it is displaying that many numbers in the sequence! Can anyone help? The length of the sequence will be provided on the command line. For example, if 5 is provided, the first five numbers in the Fibonacci sequence will be output by the child process. Because the parent and child processes have their own copies of the data, it will be necessary for the child to output the sequence. Have the parent invoke the wait() call to wait for the child process to complete before exiting the program. Perform necessary error checking to ensure that a non-negative number is passed on the command line.

        #include <stdio.h>
        #include <sys/types.h>
        #include <unistd.h>

        int main()
        {
            int a=0, b=1, n=a+b, i, ii;
            pid_t pid;
            printf("Enter the number of a Fibonacci Sequence:\n");
            scanf("%d", &ii);
            if (ii < 0)
                printf("Please enter a non-negative integer!\n");
            else {
                pid = fork();
                if (pid == 0) {
                    printf("Child is producing the Fibonacci Sequence...\n");
                    printf("%d %d", a, b);
                    for (i = 0; i < ii; i++) {
                        n = a + b;
                        printf("%d ", n);
                        a = b;
                        b = n;
                    }
                    printf("Child ends\n");
                } else {
                    printf("Parent is waiting for child to complete...\n");
                    wait(NULL);
                    printf("Parent ends\n");
                }
            }
            return 0;
        }
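
    The glued "11" and the extra numbers come from the printf before the loop: it prints 0 and 1 with no trailing space, and then the loop prints ii more values starting again from 1, so for an input of 7 you get nine numbers with the two 1s run together. A sketch of a child loop that prints exactly ii values (this also fits the assignment's requirement of taking the count from argv rather than scanf, which is a separate change):

        /* child branch: print exactly ii Fibonacci numbers, starting from 0   */
        /* (a, b, n, i, ii as already declared; reset a and b before the loop) */
        a = 0; b = 1;
        for (i = 0; i < ii; i++) {
            printf("%d ", a);   /* print before advancing, so 0 and 1 are not duplicated */
            n = a + b;
            a = b;
            b = n;
        }
        printf("\nChild ends\n");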

    Read the article

  • How do I subscribe to a MSMQ queue but only "peek" the message in .Net?

    - by lemkepf
    We have an MSMQ queue set up that receives messages and is processed by an application. We'd like to have another process subscribe to the queue and just read each message and log its contents. I have this in place already; the problem is that it's constantly peeking the queue. CPU on the server when this is running is around 40%: mqsvc.exe runs at 30% and this app at 10%. I'd rather have something that just waits for a message to come in, gets notified of it, and then logs it, without constantly polling the server.

        Dim lastid As String
        Dim objQueue As MessageQueue
        Dim strQueueName As String

        Public Sub Main()
            objQueue = New MessageQueue(strQueueName, QueueAccessMode.SendAndReceive)
            Dim propertyFilter As New MessagePropertyFilter
            propertyFilter.ArrivedTime = True
            propertyFilter.Body = True
            propertyFilter.Id = True
            propertyFilter.LookupId = True
            objQueue.MessageReadPropertyFilter = propertyFilter
            objQueue.Formatter = New ActiveXMessageFormatter
            AddHandler objQueue.PeekCompleted, AddressOf MessageFound
            objQueue.BeginPeek()
        End Sub

        Public Sub MessageFound(ByVal s As Object, ByVal args As PeekCompletedEventArgs)
            Dim oQueue As MessageQueue
            Dim oMessage As Message

            ' Retrieve the queue from which the message originated
            oQueue = CType(s, MessageQueue)
            oMessage = oQueue.EndPeek(args.AsyncResult)

            If oMessage.LookupId <> lastid Then
                ' Process the message here
                lastid = oMessage.LookupId
                ' let's write it out
                log.write(oMessage)
            End If

            objQueue.BeginPeek()
        End Sub
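
    The busy loop here is likely because peeking does not remove the message: as soon as the callback calls BeginPeek() again, it completes immediately with the same message still at the head of the queue, over and over. One hedged way around it, if the messages must stay in the queue, is to peek with a cursor and PeekAction.Next so each call waits for the following message (the exact overloads are worth verifying against the System.Messaging documentation for your framework version):

        ' Sketch: cursor-based peeking so each peek advances instead of re-reading the head
        Dim cursor As Cursor = objQueue.CreateCursor()

        Public Sub StartListening()
            objQueue.BeginPeek(MessageQueue.InfiniteTimeout, cursor, _
                               PeekAction.Current, Nothing, AddressOf MessagePeeked)
        End Sub

        Public Sub MessagePeeked(ByVal ar As IAsyncResult)
            Dim oMessage As Message = objQueue.EndPeek(ar)
            log.write(oMessage)   ' log is the poster's own logger
            objQueue.BeginPeek(MessageQueue.InfiniteTimeout, cursor, _
                               PeekAction.Next, Nothing, AddressOf MessagePeeked)
        End Sub

    If the logger is allowed to consume its own copy of each message, an even simpler route is a second queue (or MSMQ journaling): let the logger Receive from its own copy rather than peeking the shared one.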

    Read the article

  • Can't connect to SQL Server 2008 - looks like Shared Memory problem

    - by Proposition Joe
    I am unable to connect to my local instance of SQL Server 2008 Express using SQL Server Management Studio. I believe the problem is related to a change I made to the connection protocols. Before the error occurred, I had Shared Memory enabled and Named Pipes and TCP/IP disabled. I then enabled both Named Pipes and TCP/IP, and this is when I started experiencing the problem. When I try to connect to the server with SSMS (with either my SQL Server sysadmin login or with Windows authentication), I get the following error message:

        A connection was successfully established with the server, but then an error occurred during
        the login process. (provider: Named Pipes Provider, error: 0 - No process is on the other end
        of the pipe.) (Microsoft SQL Server, Error: 233)

    Why is it returning a Named Pipes error? Why would it not just use Shared Memory, as this has a higher priority in the list of connection protocols? It seems like it is not listening on Shared Memory for some reason. When I set Named Pipes to enabled and try to connect, I get the same error message. My Windows account does not have administrator privileges on my computer - perhaps this is making a difference in some way (as some of the discussions in this post about a "SuperSocketNetLib\Lpc" registry key seem to suggest).
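
    Two things that may help narrow this down (both are general SQL Server behaviour rather than a diagnosis of this exact box): protocol changes in Configuration Manager only take effect after the SQL Server service is restarted, so an enable/disable that was never followed by a restart can leave the instance listening on an unexpected mix of protocols. And you can force the protocol per connection from the client side, which shows immediately which transport is actually broken:

        rem sketch: instance name assumed to be SQLEXPRESS; -E = Windows authentication
        rem force shared memory:
        sqlcmd -S lpc:.\SQLEXPRESS -E
        rem force named pipes:
        sqlcmd -S np:.\SQLEXPRESS -E
        rem force TCP/IP:
        sqlcmd -S tcp:localhost\SQLEXPRESS -E

    The same lpc:/np:/tcp: prefixes work in the SSMS "Server name" box.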

    Read the article

  • Monit unable to start sidekiq on Opsworks server

    - by webdevtom
    I have used AWS OpsWorks to create some servers. I have Sidekiq running as part of my Rails application, and when I deploy, Sidekiq restarts nicely. I am configuring Monit to watch the pid and start and stop Sidekiq if there are any issues. However, when Monit tries to start Sidekiq, I see that the wrong Ruby appears to be used:

        Oct 17 13:52:43 daitengu sidekiq: /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler/definition.rb:361:in `validate_ruby!': Your Ruby version is 1.8.7, but your Gemfile specified 1.9.3 (Bundler::RubyVersionMismatch)
        Oct 17 13:52:43 daitengu sidekiq: from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler.rb:116:in `setup'
        Oct 17 13:52:43 daitengu sidekiq: from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler/setup.rb:17

    When I run the command from the CLI, Sidekiq launches correctly:

        $> cd /srv/www/myapp/current && RAILS_ENV=production nohup /usr/local/bin/bundle exec sidekiq -C config/sidekiq.yml >> /srv/www/myapp/shared/log/sidekiq.log 2>&1 &
        $> ps -aef | grep sidekiq
        root   1236  1235  8 20:54 pts/0  00:00:50 sidekiq 2.11.0 myapp [0 of 25 busy]

    My sidekiq.monitrc file:

        check process unicorn with pidfile /srv/www/myapp/shared/pids/unicorn.pid
          start program = "/bin/bash -c 'cd /srv/www/myapp/current && /usr/local/bin/bundle exec unicorn_rails --env production --daemonize -c /srv/www/myapp/shared/config/unicorn.conf'"
          stop program = "/bin/bash -c 'kill -QUIT `cat /srv/www/myapp/shared/pids/unicorn.pid`'"
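
    Monit runs its start/stop programs with a very small environment (typically a bare PATH and none of the shell initialisation that selects the 1.9.3 Ruby), which is the usual reason the 1.8.7 system Ruby gets picked up even though the same command works interactively. A sketch of a monitrc entry for Sidekiq that forces the environment explicitly - the paths, the daemon flags and the signal are assumptions to adjust for this setup (sidekiq 2.x accepted -d, -L and -P):

        check process sidekiq with pidfile /srv/www/myapp/shared/pids/sidekiq.pid
          start program = "/bin/bash -lc 'cd /srv/www/myapp/current && RAILS_ENV=production /usr/local/bin/bundle exec sidekiq -C config/sidekiq.yml -d -L /srv/www/myapp/shared/log/sidekiq.log -P /srv/www/myapp/shared/pids/sidekiq.pid'"
          stop program = "/bin/bash -lc 'kill -TERM `cat /srv/www/myapp/shared/pids/sidekiq.pid`'"

    The -l makes bash a login shell so PATH/rbenv/rvm setup from the profile is loaded; alternatively, set PATH inline in the start program instead of relying on a login shell.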

    Read the article

  • Unmanaged DLL in C# Web Service

    - by Telis
    Hi guys, please help me, as I am new to accessing an unmanaged DLL from C#. I have a large unmanaged C++ DLL and I am trying to access the DLL's classes and functions from a C# web service. I have seen many examples of how to use DllImport, but for some reason I am stuck on my very first wrapper method, spending many hours with no luck. What should I do to return an object from my 'marshaled' [DllImport...] function? I would like to do something like this:

        [DllImport("unmanaged.dll")]
        public static extern MyClass MyFunction();

    Here is the definition of my C++ class and the function that I want to access:

        class __declspec(dllexport) TPDate
        {
        public:
            TPDate();
            TPDate(const TPDate& rhs);
            ...
            //today's date.
            static TPDate AsOfDate(void);
            ...
        }

    In my web service I have declared the following StructLayout:

        [StructLayout(LayoutKind.Sequential)]
        public class TPDate
        {
            public TPDate(TPDate d)
            {
                _tpDate = d;
            }
            public TPDate _tpDate;
        }

    And here is where I think I'm not doing something right:

        class WrapperTPDate
        {
            [DllImport("TPTools.dll", ExactSpelling = false, EntryPoint = "?AsOfDate@TPDate@@SA?AV1@XZ", CallingConvention = CallingConvention.StdCall)]
            [return: MarshalAs(UnmanagedType.Struct)]
            public static extern TPDate AsOfDate();   // HERE THERE IS A PROBLEM
        };

    I am calling the wrapper as follows from my WebMethod:

        [WebMethod]
        public void ConstructModel()
        {
            TPDate date1 = WrapperTPDate.AsOfDate();   // Here I get the exception
            TPDate date = new TPDate(date1);
        }

    The exception I am getting is:

        System.Web.Services.Protocols.SoapException: Server was unable to process request. --->
        System.Runtime.InteropServices.MarshalDirectiveException: Cannot marshal 'return value':
        Invalid managed/unmanaged type combination (this type must be paired with LPStruct or Interface).

    If I change it to LPStruct, I am getting another exception:

        System.Web.Services.Protocols.SoapException: Server was unable to process request. --->
        System.AccessViolationException: Attempted to read or write protected memory. This is often
        an indication that other memory is corrupt.

    Could you please tell me where I'm going wrong here? Thanks.
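
    For what it's worth, P/Invoke cannot marshal a C++ class returned by value at all: the mangled-name import can find the function, but the C++ object layout, copy constructor and destructor are invisible to the CLR, which is why both marshaling attempts fail. The usual approaches are either a C++/CLI wrapper assembly, or a small C-style layer added to (or wrapped around) the native DLL that hands out an opaque pointer. A sketch of the second approach; the wrapper function names are invented here:

        // native side (C++): plain C exports around the class
        extern "C" __declspec(dllexport) TPDate* TPDate_AsOfDate()
        {
            return new TPDate(TPDate::AsOfDate());   // heap copy; caller must free it
        }

        extern "C" __declspec(dllexport) void TPDate_Destroy(TPDate* p)
        {
            delete p;
        }

        // managed side (C#): treat the object as an opaque handle
        class WrapperTPDate
        {
            [DllImport("TPTools.dll", CallingConvention = CallingConvention.Cdecl)]
            public static extern IntPtr TPDate_AsOfDate();

            [DllImport("TPTools.dll", CallingConvention = CallingConvention.Cdecl)]
            public static extern void TPDate_Destroy(IntPtr date);
        }

    Any data actually needed on the managed side (the date as a string or as day/month/year integers, say) is then exposed through additional flat functions that take the handle and return blittable types.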

    Read the article

  • jQuery .load() XHTML issue

    - by Urfe
    I am having some strange problems loading content from another XHTML page via jQuery. When the second page I load from is served as XHTML, I get the error below. I don't know if it helps, but both documents validate when I get the error.

        Uncaught Error: NO_MODIFICATION_ALLOWED_ERR: DOM Exception 7

    Currently the header on the second page I load from is:

        <?xml version="1.0" encoding="iso-8859-1" ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
            <meta name="language" content="en" />
            <title>some title</title>
            <!-- CSS & Javascript included here -->
        </head>

    The content type is set as application/xhtml+xml;charset=iso-8859-1. Interestingly, when I remove all the XHTML stuff from the header and stop setting that content type, the error does not occur and everything works great. The load process currently looks like the line below; it works fine when everything is plain HTML.

        $('#overpage').find(".wrap").load(this.getTrigger().attr("href")+" #op").show();

    I'm curious why the process does not work only when the second page I load from is XHTML. I don't want to serve the page as just plain HTML, and I am looking for advice on what I am doing wrong. Both pages validate and I'm really scratching my head here. Many thanks!

    Read the article

  • uninstall string

    - by Sakhawat Ali
    Hi experts, I am developing a desktop application using VB.NET, similar to Add/Remove Programs. Everything was working fine until I started working on the uninstall feature. What I do now is get the uninstall string of the specific application from the registry and use System.Diagnostics.Process to run the UninstallString:

        Dim proc As New Process()
        proc.StartInfo.FileName = UninstallString
        proc.Start()
        proc.WaitForExit()
        proc.Close()

    Later I found that this only works for straight file paths with no command-line arguments, like:

        C:\program files\someApp\uninstall.exe

    I made a list of the UninstallStrings of all the applications installed on my machine and found a few variations: applications installed using MSI, some using RunDll32, and a few with a straight file path plus command-line arguments. For example:

        My Silverlight SDK UninstallString, MSI example:
        MsiExec.exe /X{2012098D-EEE9-4769-8DD3-B038050854D4}

        My JetAudio UninstallString, RunDll32 example:
        RunDll32 C:\PROGRA~1\COMMON~1\INSTAL~1\engine\6\INTEL3~1\Ctor.dll,LaunchSetup "C:\Program Files\InstallShield Installation Information{91F34319-08DE-457A-99C0-0BCDFAC145B9}\Setup.exe" -l0x9

        My Google Chrome UninstallString, straight file path with command argument example:
        "C:\Program Files\Google\Chrome\Application\5.0.375.55\Installer\setup.exe" -uninstall

    The code above does not work for these. I did some string parsing and separated two things from the UninstallString: the FileName and the Arguments - for MSI the filename is MsiExec.exe and the arguments are the rest of the string, and similarly for RunDll32 and for a straight file path with arguments. Now what I'm facing is that every 2 or 3 days I come across a new type of uninstall string that my parsing doesn't handle, maybe something like:

        abc C:\program files\someapp.exe -ddd

    so I have to extend the parser again. Is there a better way of doing this than parsing the string?
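
    One hedged alternative to hand-parsing: the UninstallString is already a full command line as Windows would run it, so you can hand it to the shell and let cmd.exe do the splitting. A sketch (the extra surrounding quotes matter when the string itself starts with a quoted path, because of cmd's quote-stripping rules):

        ' Sketch: run the raw UninstallString via cmd.exe instead of parsing it
        Dim psi As New ProcessStartInfo("cmd.exe", "/c """ & UninstallString & """")
        psi.UseShellExecute = False
        Dim proc As Process = Process.Start(psi)
        proc.WaitForExit()
        proc.Close()

    Another option, if you only care about MSI-installed products, is to skip UninstallString entirely and build "MsiExec.exe /x {ProductCode}" from the registry key name, which has a predictable shape; the cmd.exe route is the more general fallback for the RunDll32 and custom-EXE cases.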

    Read the article

  • saslauthd using too much memory

    - by Brian Armstrong
    Woke up today to see my site slow/unresponsive. Pulled up top, and it looks like a ton of saslauthd processes have spun up using about 64m of RAM each, causing the machine to go into swap. I've never seen this many of them on there.

        top - 16:54:13 up 85 days, 11:48,  1 user,  load average: 0.32, 0.50, 0.38
        Tasks: 143 total,   1 running, 142 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.7%us,  0.3%sy,  0.0%ni, 97.3%id,  0.2%wa,  0.0%hi,  0.0%si,  1.4%st
        Mem:   1048796k total,  1025904k used,    22892k free,    14032k buffers
        Swap:  2097144k total,   332460k used,  1764684k free,   194348k cached

          PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
          848 admin    20   0  263m 115m 4840 S    0 11.3   5:02.91 ruby
          906 admin    20   0  265m 113m 4828 S    0 11.1   5:37.24 ruby
        30484 admin    20   0  248m  91m 4256 S    6  9.0 219:02.30 delayed_job
         4075 root     20   0  160m  65m  952 S    0  6.4   0:24.22 saslauthd
         4080 root     20   0  162m  64m  936 S    0  6.3   0:24.48 saslauthd
         4079 root     20   0  162m  64m  936 S    0  6.3   0:24.70 saslauthd
         4078 root     20   0  164m  63m  936 S    0  6.2   0:24.66 saslauthd
         4077 root     20   0  163m  62m  936 S    0  6.1   0:24.66 saslauthd
         3718 mysql    20   0  312m  52m 3588 S    1  5.1  3499:40 mysqld
          699 root     20   0 72744 7640 2164 S    0  0.7   0:00.50 ruby
        15701 postfix  20   0  106m 5712 4164 S    1  0.5   0:00.50 smtpd
        15702 postfix  20   0 52444 3252 2452 S    1  0.3   0:00.06 cleanup
         4062 postfix  20   0 41884 3104 1788 S    0  0.3 125:26.01 qmgr
        15683 root     20   0 51504 2780 2180 S    0  0.3   0:00.04 sshd
        14595 postfix  20   0 52308 2548 2304 S    1  0.2   0:24.60 proxymap
        15483 postfix  20   0 43380 2544 1992 S    0  0.2   0:00.38 smtp
        15486 postfix  20   0 43380 2544 1992 S    0  0.2   0:00.36 smtp
        15488 postfix  20   0 43380 2540 1992 S    0  0.2   0:00.38 smtp
        15485 postfix  20   0 43380 2532 1984 S    0  0.2   0:00.36 smtp
        15489 postfix  20   0 43380 2532 1984 S    0  0.2   0:00.40 smtp

    I wasn't sure what saslauthd is; Google says it handles plaintext authentication. The machine has been sending a lot of email through Postfix, so this could be related. Anyone know why so many may have spun up? Are they safe to kill? Thanks!

    Read the article

  • AccessControlException when trying to redeploy webapp to Tomcat using Netbeans

    - by Venc
    I'm getting the following error trying to redeploy a webapp on Tomcat from within NetBeans 6.8. It probably has something to do with the new deploy-on-save and hot-swap functionality. Any ideas how to resolve this issue?

        INFO: Error registering wrapper with jmx StandardEngine[Catalina].StandardHost[localhost].StandardContext[/CubeAdSaSim2] Catalina:j2eeType=WebModule,name=//localhost/CubeAdSaSim2,J2EEApplication=none,J2EEServer=none
        java.security.AccessControlException: access denied (javax.management.MBeanTrustPermission register)
        java.security.AccessControlException: access denied (javax.management.MBeanTrustPermission register)
            at java.security.AccessControlContext.checkPermission(AccessControlContext.java:323)
            at java.lang.SecurityManager.checkPermission(SecurityManager.java:568)
            at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.checkMBeanTrustPermission(DefaultMBeanServerInterceptor.java:1824)
            at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:310)
            at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:482)
            at org.apache.tomcat.util.modeler.Registry.registerComponent(Registry.java:805)
            at org.apache.catalina.core.StandardContext.registerJMX(StandardContext.java:5281)
            at org.apache.catalina.core.StandardContext.start(StandardContext.java:4482)
            at org.apache.catalina.manager.ManagerServlet.start(ManagerServlet.java:1249)
            at org.apache.catalina.manager.ManagerServlet.doGet(ManagerServlet.java:377)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.netbeans.modules.web.monitor.server.MonitorFilter.doFilter(MonitorFilter.java:196)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:525)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:849)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:454)
            at java.lang.Thread.run(Thread.java:619)
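
    This looks like Tomcat running under the Java security manager (the -security start option, which some NetBeans/Tomcat setups enable), where registering MBeans needs an explicit grant. A hedged fix is to add the permission to $CATALINA_BASE/conf/catalina.policy and restart Tomcat - shown here as a global grant for simplicity, though it can be scoped to a codeBase:

        // catalina.policy sketch: allow MBean registration during (re)deploys
        grant {
            permission javax.management.MBeanTrustPermission "register";
        };

    The other route, if the security manager isn't actually wanted, is simply to start Tomcat without the -security option.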

    Read the article

  • Google App Engine on Google Apps Domain

    - by Bob Ralian
    I'm having trouble getting my domain pointed to my website hosted with Google App Engine. Here's the background... take care to separate the concepts of "Google Apps" (domain hosting, email, etc.) and "Google App Engine" (website framework).

    I have a domain that's using Google Apps for Your Domain; let's call it company.com. So my login for my Google Apps account is [email protected]. I have a different domain that is aliased back to my Google Apps account; let's call it mycompany.com. It's been successfully aliased and registered with my primary Google Apps account using the CNAME method, and has updated MX records. We have a ton of domains, and I only want to use one Google Apps account to maintain them all.

    Now I have a website I've built using Google App Engine, and the URL is effectively mycompany.appspot.com. I want to get mycompany.com to point to my website that currently resides at mycompany.appspot.com.

    There's a spot in the Google App Engine dashboard under application settings where you can add a domain. So I click there, enter mycompany.com, and I get an error message saying that the domain is not using Google Apps. If I back up to the page I submitted, there's a note saying I need to register the domain with Google Apps. So I click the link to do that, enter mycompany.com, and I get an error message saying the domain has been registered and is in the process of ownership verification. But that process is already finished.

    So... what do I do? Does Google App Engine not support a domain that is only aliased to a primary Google Apps account? Does mycompany.com need to have its own primary Google Apps account?

    Read the article

  • How can I make a server log file available via my ASP.NET MVC website?

    - by Gary McGill
    I have an ASP.NET MVC website that works in tandem with a Windows service that processes file uploads. For easy maintenance of the site, I'd like the log file for the Windows service to be accessible (to me only) via the website, so that I can hit http://myserver/logs/myservice to view the contents of the log file. How can I do that?

    At a guess, I could either have the service write its log file in a "Logs" folder at the top level of the site, or I could leave it where it is and set up a virtual directory to point to it. Which of these is better - or is there another, better way?

    Wherever the file is stored, I can see that there's going to be another problem. I tried out the first option (a Logs folder in my website), but when I try to access the file via HTTP I get an error:

        The process cannot access the file 'foo' because it is being used by another process.

    Now, I know from experience that my service keeps the file locked for writing while it's running, but that I can still open the file in Notepad to view the current contents. (I'm surprised that IIS insists on write access, if that's what's happening.) How can I get around that? Do I really have to write a handler to read the file and serve it to the browser myself? Or can I fix this with configuration or somesuch?

    PS. I'm using IIS7 if that helps.
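
    A controller action is probably the least surprising way to do this, and it sidesteps the sharing violation: the static file handler opens files without generous sharing, whereas your own code can ask for FileShare.ReadWrite, which succeeds as long as the service opened its log with read sharing allowed (most loggers do). A sketch - the path, the authorization filter and the route are all placeholders to adapt:

        // GET /logs/myservice  (mapped to this action via routing)
        [Authorize(Users = "DOMAIN\\me")]                 // assumption: some auth scheme is configured
        public ActionResult ServiceLog()
        {
            var path = @"D:\ServiceLogs\myservice.log";   // placeholder; ideally outside the web root
            var stream = new FileStream(path, FileMode.Open,
                                        FileAccess.Read, FileShare.ReadWrite);
            return File(stream, "text/plain");            // FileStreamResult disposes the stream for you
        }

    Keeping the log outside the site's folder structure also avoids the separate problem of the file being downloadable (or blocked by request filtering) as a static file.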

    Read the article

  • EXC_BAD_ACCESS NSUrlConnection

    - by Lars
    Hi all, I get an EXC_BAD_ACCESS when I perform the last line of the function (the webData line).

        -(void)requestSoap{
            NSString *requestUrl = @"http://www.website.com/webservice.php";
            NSString *soapMessage = @"the soap message";
            // website and soapMessage are valid in the original code.
            NSError **error;
            NSURLResponse *response;

            // Convert parameter string to URL
            NSURL *url = [NSURL URLWithString:requestUrl];
            NSMutableURLRequest *theRequest = [NSMutableURLRequest requestWithURL:url cachePolicy:NSURLRequestReloadIgnoringCacheData timeoutInterval:10];
            NSString *msgLength = [NSString stringWithFormat:@"%d", [soapMessage length]];

            // Create an XML message for the webservice
            [theRequest addValue: @"text/xml; charset=utf-8" forHTTPHeaderField:@"Content-Type"];
            [theRequest addValue: msgLength forHTTPHeaderField:@"Content-Length"];
            [theRequest setHTTPMethod:@"POST"];
            [theRequest setHTTPBody: [soapMessage dataUsingEncoding:NSUTF8StringEncoding]];

            NSData *webData = [NSURLConnection sendSynchronousRequest:theRequest returningResponse:&response error:error];
        }

    I tried not releasing anything, because what I read on the net is that it's almost always a memory thing. When I debug the code (NSZombieEnabled = YES) this is what I get:

        [Session started at 2010-05-31 15:56:13 +0200.]
        GNU gdb 6.3.50-20050815 (Apple version gdb-1461.2) (Fri Mar 5 04:43:10 UTC 2010)
        Copyright 2004 Free Software Foundation, Inc.
        GDB is free software, covered by the GNU General Public License, and you are
        welcome to change it and/or distribute copies of it under certain conditions.
        Type "show copying" to see the conditions.
        There is absolutely no warranty for GDB. Type "show warranty" for details.
        This GDB was configured as "x86_64-apple-darwin".sharedlibrary apply-load-rules all
        Attaching to process 19856.
        test(19856) malloc: recording malloc stacks to disk using standard recorder
        test(19856) malloc: enabling scribbling to detect mods to free blocks
        test(19856) malloc: process 19832 no longer exists, stack logs deleted from /tmp/stack-logs.19832.test.w9Ek4L.index
        test(19856) malloc: stack logs being written into /tmp/stack-logs.19856.test.URRpQF.index
        Program received signal: "EXC_BAD_ACCESS".

    Does anybody have a clue? Thanks a lot! Lars
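
    One likely culprit, offered as a guess from the snippet alone: error is declared as an uninitialized NSError ** and passed straight through, so if the connection fails, NSURLConnection tries to write the NSError into whatever garbage address the pointer holds - a classic EXC_BAD_ACCESS. A sketch of the usual pattern:

        NSError *error = nil;          // a single-level pointer, initialised
        NSURLResponse *response = nil;
        NSData *webData = [NSURLConnection sendSynchronousRequest:theRequest
                                                 returningResponse:&response
                                                             error:&error];
        if (webData == nil) {
            NSLog(@"SOAP request failed: %@", error);   // inspect the failure instead of crashing
        }

    If the crash persists with that change, the next thing to check is the lifetime of whatever consumes webData, but the uninitialized out-parameter is worth ruling out first.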

    Read the article
